Gamecraft

This blog is all about the craft of making games, and in particular, computer games. Gamecraft involves a broad range of topics, including design, development, quality control, packaging, marketing, management, and life experience.

Name: Gregg Seelhoff
Location: East Lansing, Michigan, United States

Friday, June 23, 2006

Friday Funny

Jon Stewart has taken notice of the absurdity of anti-game hearings.

Recently, on Comedy Central's The Daily Show, Jon Stewart had an amusing segment about our government's attempts to vilify video games. In particular, he focused on the hearing held by a House of Representatives subcommittee on June 14. Individual clips of comments by legislators are shown (with appropriate commentary).

The entire segment is available for viewing here. (Thanks to Gamasutra for the link.)

I find it rather telling that in one of the clips where a legislator, the "father of three young boys", expresses concern about video game content, he seems to actually struggle to remember the ages of his own children. Stewart's commentary at this point is dead-on accurate. It would be even funnier if people did not seem to take these idiots seriously.

Seriously, the House of Representatives is filled with insane jackasses.
-- Jon Stewart

Wednesday, June 21, 2006

Three Weeks to SIC 2006

The shareware industry's premier conference is just around the corner.

On July 12, the Shareware Industry Conference gets underway in Denver, Colorado. This will be the second year there, as it traditionally stays at the same venue for two consecutive years. There are rumors that it will be in Denver in 2007 as well, which would be the first time that a city has had it more than a couple times since the original Summer Shareware Seminar left its Indianapolis home. Of course, we will only know that for certain once we check in and read the back page of the conference booklet.

This will be my sixth time attending SIC, now in its 16th year. I mistakenly decided not to go to the very first one (back in 1991) for silly reasons, and I now seriously regret having failed to attend. I recommend this conference to anybody who has an interest in marketing and selling software online, regardless of whether or not he or she uses the term "shareware".

It appears that, for the first time, I may be speaking at the conference. I am tentatively scheduled to give a presentation entitled, Practical Interface Guidelines: Things they did not teach us in programming class. I am not really sure that this will happen until the SIC 2006 Schedule of Events lists my talk (presumably on Friday at 3:30pm). We shall see.

Anyway, for anybody attending SIC, I should be wandering the lobby areas and attending sessions for the duration. I should also be at all the official events, including the ASP luncheon, as well as the unofficial AISIP offsite lunch. I look forward to seeing friends and meeting new people, so be sure to say "Hi!".

Saturday, June 17, 2006

Red Card the Ref!

The "beautiful game" looked pretty ugly today.

After watching the United States vs. Italy game in the FIFA World Cup 2006, I am stunned at the lousy officiating in the game. The referee was probably the worst I have seen, including when I coached soccer in a youth recreational league. There, the refs were high school kids who were still learning, and we coaches sometimes had to help with finer points of the rules, but any of them would have done better.

For those who were not watching the game, the referee gave an Italian player a Red Card (game ejection) for deliberately elbowing an American player in the face (and drawing blood). This was well deserved, and it left the Italian team with only ten men. However, at the end of the half (last minute), the ref gave an undeserved "equalizer" to a US player for a common foul (if that), taking one of our productive players out of the game. Then, only two minutes into the second half, he removed another US player on a questionable call, dropping the team to only nine.

Despite the fact that we were (undeservedly) a man down, and the statistic that no team with only nine men has ever scored a goal in World Cup history, the Americans managed to bury one in the back of the net to take the lead on the scoreboard, 2-1 over the Italians. Unfortunately, the referee decided to make a late offside call against a player not involved in the play, taking our "go ahead" goal away.

When my players used to complain about a poor referee or bad calls, I always advised them to take the refs out of the game by scoring goals. After all, if a game is not close, then a bad call or two cannot make the difference. However, when a referee blows three major calls, taking a team from having an extra man to being down a man and denying a legitimate goal, it really is too much to overcome, especially at this level of play. The United States team played valiantly, but had to settle for a 1-1 tie.

After the match, American television revealed its discovery that this particular referee was not allowed to officiate in the last World Cup because of unspecified "irregularities" that were so egregious that other refs complained about them. So, not only were his calls terrible (and, in my opinion, deliberately biased), but he has a history of such problems.

The big problem with this whole thing is that everybody loses in this situation. The United States was robbed of an excellent chance to win this game, and the Italians, too, were denied a fair fight. The fans were ripped off, as the outcome of the game has more to do with one incompetent referee than anything else. FIFA takes a major credibility hit when they were already on the defensive about the poor officiating (and too many cards, in particular).

I believe that a very exciting game was marred by the malfeasance of one individual, and I am looking for FIFA to take swift and decisive action against the referee. There is nothing that can be done about the game outcome but to move on. The United States played very well this game, and every team in the group still has a chance to advance to the next round.

The situation is this: Both remaining Group E games will be played simultaneously on Thursday, June 22 at 16:00 local time (10:00am Eastern US). The United States must win against Ghana (who scored a major upset today) to have a shot. If that happens and Italy wins over the Czech Republic, then the US advances. If the other game is a tie or a loss, then the US has to completely crush Ghana, by five or six goals. That is highly unlikely.

I have an official Italian team jersey that I would not wear today, but I can assure you that I intend to wear it on Thursday. Go USA! Go Italia!

Friday, June 16, 2006

Quote for the Day

Ya learn a lot more about life from the things you're not supposed to do.
-- Delbert McClinton

Thursday, June 15, 2006

HNT: Respect Your Customers

How Not To: Respect Your Customers

A while ago, Microsoft added a new "high-priority update" for Windows XP to their Windows Update site. Note that, since it is marked as a high priority, this particular update will be automatically downloaded and installed on all Windows XP systems configured to use its Automatic Updates feature, so most users will have no idea that it was installed.

This little update is called, euphemistically, Windows Genuine Advantage Notifications, and it is described in KB905474. It is an application that checks to see if you have a pirated copy of XP and, if so, attempts to sell you a legal one. In Microsoft marketing department language:

"The Windows Genuine Advantage Notification tool notifies you if your copy of Windows is not genuine. If your system is found to be a[sic] non-genuine, the tool will help you obtain a licensed copy of Windows."

According to the web page, if you voluntarily (knowingly or not) download this little application, and it detects (correctly or not) that a copy of XP is not genuine, it 1) notifies you at every logon and tries to sell you Windows, 2) puts another icon in your taskbar ("an icon will be available...") with balloon notifications, and 3) locks your desktop to display a message about software counterfeiting.

Of course, I spend hundreds of dollars each year to have the latest, and fully licensed, Microsoft operating systems, so assuming this works correctly and recognizes my genuineness, what does this do for me, a paying customer? Let's see. It runs another application in the background, hidden from me but stealing processing time. Oh, Goody! It also cannot be uninstalled.

In other words, this "update" treats me as a criminal and puts my machine at risk of the whims and mistakes of Microsoft. It provides me nothing of value whatsoever. This may be good for them, assuming that anybody using a pirated copy of XP gives a damn, but a legitimate customer should not be subject to such treatment. Oh, sure, somebody who bought their machine from a dishonest vendor who gave them an illegal copy may get caught, but that does not justify an uninstallable application being surreptitiously added to a system.

The story could end there, but it does not. Yesterday [June 14, 2006], Microsoft updated the application, so even those of us who told Windows Update to quit notifying us of this, now have to do it again ("... until a new release of the Notification Update is released"). I do not use the word "hide" in this respect, because it does anything but hide the update. Instead, every time I open Windows Update, I am greeted with the following message:

"You've hidden important updates
You've asked us not to show you one or more high-priority updates but your computer might be at risk until they are installed."

This is a lie. This update is only "important" to Microsoft; it is meaningless (at best) to paying customers. Classifying it as "high-priority" is inappropriate, and suggesting that my computer is at risk without it is simply wrong.

Do not get me wrong here. I despise software pirates, and I would like to see more prosecutions and appropriately severe punishments for willful copyright infringement. I also like Microsoft, earn a living using their software, and have even worked on a game for them, so I am not bent on their destruction or anything like that. However, I cannot help but feel insulted, first, and then just angry at such behavior. This is definitely an example of how not to respect paying customers.

Right now, I am going to work on a Macintosh, writing a game for Apple OSX. I am currently feeling more motivated to do so, far more than to work on any of my Windows games at the moment. Seriously.

Wednesday, June 14, 2006

How Not To...

It is quite useful to learn from the mistakes of others.

When my business partners and I decided to devote full time effort to our company, we had seen lots of errors in our combined experience in the game industry, as well as from business in general. We thought that we had witnessed enough pitfalls that we could succeed merely by avoiding the many mistakes that we had watched others make. Alas, there are always new and interesting ways to screw things up, and we stepped in a few ourselves.

I have long believed that the best way to learn something is to make a mistake at it, and the greater the disappointment or embarrassment (or even pain), the better the lesson. A child does not necessarily remember all of the words he got correct in a spelling bee, but he always knows how to spell the one word that eliminated him.

Of course, it is much harder to get people to take a written or spoken, rather than experienced, lesson to heart. Nevertheless, I will face this challenge and attempt to illuminate some of the pitfalls one may want to avoid to make excellent games, run a successful business, or simply have an enjoyable and respectable life.

Every once in a while, I will post a true "How Not To..." story showing a mistake made by someone, whether an individual or an organization. These lessons will be prefixed with "HNT:" in the title and will describe the incorrect method of accomplishing something. I may or may not change or omit names or details to protect the guilty, depending on my mood. The correct or appropriate solution(s) will be left as an exercise for the reader.

Note that if you recognize yourself in one of these scenarios, remember that it is an opportunity to improve. If people say negative things about you, there are two basic possibilities: 1) they are right and you can learn, or 2) they are full of it and can be ignored. Decide honestly which category applies, and then move on.

Microsoft is a favorite target of many, but one particular recent practice has ticked me off enough that it was time to write the first of these lessons.

Saturday, June 10, 2006

Quality: The Index

Please link to this post. [permanent link]

After a few weeks of quality, here are convenient links to each of the separate articles:

Quality: An Introduction
Quality: The Process, Part I
Quality: The Process, Part II
Quality: The Process, Part III

One can follow the links at the bottom of each section to read each article in its entirety.

Whew!

Thursday, June 08, 2006

Quality: The Process, Part III [Think Quality.]

[continued from Gamma testing?]

Think Quality.

The most important aspect of the entire quality assurance process is attitude. We all need to strive for quality and refuse to accept anything less. There is no process or technique that can overcome a lack of commitment to quality, but the right mindset will make the process and decisions that much easier.

As [independent software developers], we can work together to improve our own products and, in so doing, raise the bar for other software.

Think Quality.


Gregg Seelhoff is an independent game developer, and the results of [a previous] beta test can be found at www.goodmj.com.

Wednesday, June 07, 2006

Quality: The Process, Part III [Gamma testing?]

[continued from Something different]

Gamma testing?

Software is not complete when it is released, and this is especially true for shareware offerings, since updates are relatively simple when compared to retail products. Some quality assurance professionals use the term "gamma testing" to refer to the process of improving and evolving products after the initial release. Unfortunately, this also refers to checking for radiation, so I only use the term jokingly.

The concept, however, is fundamentally sound. It can be summed up with the following saying:
"The Customer is always right, even when he is wrong."

This phrase is often taken to suggest that one must appease every customer, and while this is a reasonable goal for good customer relations, it is not the only interpretation.

This saying also means that all feedback is valid, no matter how unreasonable it seems. In terms of software, it means that every time a customer has a complaint or comment, it indicates a portion of the product or process that could be improved. As much as you know about your own software, you can never be the customer, so you need to listen to the feedback. For every person who contacts you about a problem, there are possibly hundreds of others with the same problem who do not bother.

For the same reasons, all reviews are beneficial. Good reviews are nice, but poor reviews, in fact, can do more to help you improve the quality of your product, if not your bottom line. Any negative aspects of a review can be corrected, and it will improve the software. Even where the reviewer makes an incorrect statement, such as overlooking a feature, this just shows that the interface or documentation should be improved to prevent that mistake from happening.

Note that there is no obligation to distort your software according to the whims of customers and reviewers. In fact, this can have detrimental effects on the product. You should be the "keeper of the vision" for your product and reject inappropriate suggestions. However, it is imperative to listen and consider.

[continued in Think Quality.]

Tuesday, June 06, 2006

Quality: The Process, Part III [Something different]

[continued from Standard treatment]

Something different

There are a number of other testing techniques that are used during development, and I want to touch briefly on a few.

One essential technique is known as "compatibility testing". As the name implies, this is testing the software for compatibility on a variety of different system configurations. There are companies that will perform extensive compatibility testing, but this is not inexpensive. Alpha and beta testing should cover a range of systems, but it will be far from comprehensive.

For a Windows product, one must test on some flavors of Win9x and NT, at an absolute minimum, and preferably on every supported operating system. Game and multimedia products need to be tested with different video cards and sound cards. Products with printing features need to be checked on different types of printers, including at least a color inkjet and a laser printer, from different manufacturers. In short, you must cover as much of your target audience as possible.

Another external testing technique, related to compatibility testing, is product certification. This involves submitting your software for certification according to the rules of some program. Instead of checking different system configurations, product certification programs check other criteria, depending on the goals of the particular certification. These range in cost from free to very expensive.

For a slightly less formal review of the usability and general quality of the software, one can conduct "focus group" testing. Focus groups are essentially a collection of people in the target audience who are brought together in one location specifically to give their opinions and feedback. Professional firms can conduct such groups with quasi-scientific questionnaires, hidden cameras, and written analysis, for a tidy sum.

The easier and, in my experience, no less effective method to perform focus group testing is to find a location, such as the computer lab in a local school, and advertise free pizza and drinks for computer users who will show up and try your new product. I cannot comment on how this would work for business products, but it works well for games.

Finally, throughout the entire testing process, you need to conduct "regression testing". Regression testing is a method of making sure that bugs that were fixed are not reintroduced into the program. This concept is really as simple as trying to reproduce each of the fixed bugs and making certain that they have not reappeared.

My first exposure to regression testing was a spiral notebook into which every bug was written as it was reported and checked as it was solved. Before we would send a game build to the publisher, we simply tested each item in the notebook as part of the test plan. It hardly needs to be more complicated than that.

[continued in Gamma testing?]

Monday, June 05, 2006

Quality: The Process, Part III [Standard treatment]

[continued from Beta move on]

Standard treatment

In most cases, companies use closed beta testing, limiting and controlling the distribution of beta versions of the software. Finding and managing beta testers becomes an issue, and finding good testers is a difficult challenge, so we need to discuss the closed beta process in more detail.

The unfortunate fact is that few users know how to properly test software, so if you are lucky enough to find a good tester, make certain that you keep that person happy. Useful feedback should be rewarded with a free copy of the program, at a minimum, and the tester should always be invited to participate in future beta tests. A good tester will outperform a dozen mediocre testers and, therefore, is very valuable.

A related problem is that many prospective testers will not provide any feedback at all, so it is necessary to invite more beta testers than you expect to need. You can anticipate that roughly half of the beta testers in a closed beta will not report anything at all, and some of the others will not be useful. In most cases, it is difficult to find enough beta testers, so it is unlikely that a product will get too many volunteers.

When looking for beta testers, cast a wide net. It is important to have as large a range of experience levels, methods of use, and system configurations as possible. It is a good idea to ask potential beta testers not only for contact information, but also about system configurations and software experience.

Remember, some of your potential customers are likely to be struggling with computer illiteracy, so it makes sense to have some less experienced testers as well. Knowledgeable users will often figure out how to do something, or find a workaround, on their own without indicating that there may be a problem. Neophytes, on the other hand, will ask questions that customers would ask. Do not rely solely on other developers for testing unless your product can only be used by programmers.

The best means of communication for a closed beta process is a beta forum of some kind, in which beta testers can interact with each other. This helps establish a sense of community that works to support tester involvement and breeds loyalty to the product. From a practical standpoint, this also allows problems to be independently verified by other testers, and they will often work together to help you replicate a bug. There should also be an email address for bug reports, but forum participation should be encouraged.

It is important to remember that beta testing is not an adversarial process. Let me say that again. Beta testing is not an adversarial process. It can sometimes be very difficult to take criticism, but you must be certain not to get defensive. Always wear a (virtual) smile. Beta testers are there to help you, and it is far better to hear about problems now rather than after release.

All feedback is beneficial, so you should listen to everything that is reported. Try to respond to every report so that testers know you are listening and involved, which gives a psychological incentive to do a better job. Avoid being dismissive, as that discourages participation. Also, make it clear that you appreciate the reports, even the negative ones, since some testers are reluctant to report bugs or bad impressions if they feel that you will be insulted. Many reports are preceded by apologies.

One technique for keeping testers involved is to provide means of communication that do not necessarily involve bug reports. Informal surveys about aspects of the program or system hardware questionnaires give testers a chance to participate even if they cannot find any bugs (which is the goal, after all). In my last beta test, I decided to try a little contest: I found three unreported bugs in different areas of the game and challenged the testers to find them. The number of valid bug reports increased measurably.

[continued in Something different]

Sunday, June 04, 2006

Quality: The Process, Part III [Beta move on]

[continued from Greek to me]

Beta move on

When the program is feature complete, or approaching that stage, it is time to consider taking the next step. One step from alpha is beta, so we should now look at "beta testing".

Beta testing is the most recognized form of black box testing, in which the software is submitted to users outside the company for additional testing and feedback. Generally, these testers are not professionals, but rather should represent a typical cross-section of potential customers and users.

Since beta testing is often the first external exposure of your product, it is important that the alpha testing and glass box techniques have produced a reasonably solid program. It may be a cliché, but there is not a second chance to make a first impression. When a tester's first experience with a product is lousy, he or she will be less likely to get comfortable with it. If you know that there are lots of bugs, then your software is probably not ready for beta testing.

A practical reason for making sure the software already shows a standard of quality when beta testing begins is that obvious bugs will be reported multiple times, and less severe bugs will be overlooked. When a tester finds a number of problems, he or she may relax the reporting or assume that one bug is caused by another. Also, some bugs do cause a multiplicity of symptoms, and tracking becomes more convoluted.

There are two primary forms of beta testing, "open" and "closed". In open beta testing, the developer announces the availability of a "public beta" version of the software, and any interested party can download and test the software. For closed beta testing, the developer provides a "private beta" version of the software to a limited number of known testers.

Companies may use either or both forms of beta testing. The main advantage of open beta testing is that the software can be tested by lots of people to cover a wide array of systems and uses, at the expense of control and a possible impact on the marketing plan. On the other hand, closed beta testing provides the developer with better control of the process, but the disadvantage is that it is hard to find testers.

Some companies use both forms of beta testing, starting with a closed beta and then expanding to an open beta program once the program is closer to release. Microsoft, for example, runs an extensive closed beta testing program for DirectX, including the SDK and the runtimes, which lasts for several months each version, but near the end of this process, the beta runtimes are made available for public download. [Note: Microsoft has since ceased proper testing of DirectX SDK releases and is now a counter example, not to be followed.]

For either form of beta testing, you should insert a "drop dead" date in the code, so the program will not run after a certain fixed date. This prevents the beta from entering general circulation and reduces testing of outdated versions. Note that this technique should never be used for release versions, so you must remember to remove it before the final version. You must also remember to update the date with each new testing version, lest a valid beta time out prematurely.

Just as a feature complete product signals the approaching end of the alpha testing phase, the impending completion of the beta testing phase is signaled by a "release candidate". A release candidate is a version of the product that is potentially the release version of the software. At this point, testers should be instructed to report every bug they find, even if they have reported it previously, since all bugs should have been eliminated. If bugs are corrected, another release candidate should be created and tested.

For the first release of a product, the traditional beta version numbers start at 0.90 and approach 1.0, the release version. I know of one game product, on which I did not work, that had so many beta versions that the producer gave the team shirts that read "Version 0.99999999..." with the nines running all of the way down one of the sleeves.

[continued in Standard treatment]

Saturday, June 03, 2006

Quality: The Process, Part III [Greek to me]

[continued from Quality: The Process, Part III]

Greek to me

Every program with more than seven lines of source code has bugs. It is important that software developers do whatever is feasible to eliminate bugs. With mass market software, one can be confident that even rare bugs, when multiplied by thousands of users, will be discovered. Bugs in some specialized and vertical market software could actually cause damage or injury. In any case, when distributing shareware, bugs will cost you sales, so quality will directly help your bottom line.

The most innovative approach to elimination of bugs, which I must credit to Barry James Folsom, involved a simple corporate proclamation. As the new President, he called for a meeting and all of the several dozen developers in the company were gathered. After an introduction, he declared that none of our software would have "bugs". From that point forward, it could only have "defects".

It may not be terribly practical to simply redefine terms and create quality, but this dubious proclamation did have a point. When a customer or, in the case of shareware, a potential customer is using the software and it fails to work properly, that is a problem. "All software has bugs," is not comforting, so we need to look at the software from the perspective of a user.

Let's start at the very beginning, with alpha, or more specifically, "alpha testing".

Alpha testing is a form of black box testing that is performed in-house. In practical terms, alpha testing is simply the developer using the software in the same way that a customer would, prior to making the software available to others.

After each version of the software is ready, I close all my development tools, clear the registry and data files, and pretend to be a user seeing the program for the very first time. I start by running the program installer, and then launching the game (in our case) using the installed shortcut, as opposed to the debugger. I will then just play the game for a while, recording any problems that arise.

Once I am comfortable that the program is working as intended on my development system, I then copy the installer to at least one other test system. Rather than install the software myself, though, I enlist somebody else to do it. This can be a colleague, friend, spouse, child, parent, pet, or benevolent stranger. I provide no other instruction, and note where any questions are asked. Any problems witnessed here will also be experienced by users on a larger scale.

In a formal testing environment, alpha testing involves testers systematically checking the software according to the specified test plan, combined with actual use of the software. In a corporate environment, the test plan is executed by the QA department. In small businesses, it generally falls on the programmers to follow the test plan. In either case, anybody willing should try using the software. In a larger company, I would throw an "open house" to show the software to other employees. As an independent, simply having the game available for play is sufficient.

Alpha testing should begin as soon as the software is usable, and this will necessarily overlap with program development. At some point during the alpha phase, the software should become "feature complete". This means that all intended features for this version are in the program and functional. It does not mean that the performance is optimized, nor does it mean that the interface is finalized, but it should do everything that it was intended to do.

[continued in Beta move on]

Friday, June 02, 2006

Quality: The Process, Part III

[This article was originally published in the January 2003 issue of ASPects.]

Good things come in threes. Literature is rife with examples. Jack (of Beanstalk fame) received exactly three magic beans for a reason. However, with deference to Sigmund Freud, sometimes an article is just an article.

In the first installment of this trilogy, I introduced some foundational concepts for testing, including planning, some quality assurance terminology, and classification and tracking of bugs. The second part, the story bridge, covered general tools and techniques that can be utilized during product development. In this, the conclusion, I will discuss testing methods used as the software reaches a functional stage.

[continued in Greek to me]

Thursday, June 01, 2006

Quality: The Process, Part II [Getting some help]

[continued from Automatic or manual]

Getting some help

Up to this point, I have discussed a variety of methods for improving the quality of software that can be implemented solely by the programmer during the development. However, as the program gets closer to completion, it becomes important to enlist the help of others for black box testing and feedback. That will be the topic for my next installment.

In the meantime, there is an opportunity to implement some of the above tools and practices into your development process.


Gregg Seelhoff is an independent game developer and charter member of the Association for Professional Standards [now defunct].