Tuesday, March 11, 2008

The Percentile Realism Myth

My high school chemistry teacher was quite a character. Among other things he was (in)famous for starting class the first day of school by announcing, "In this classroom I am God, and [holding up our textbook] this is the Good Book." He was a very demanding teacher, but he was also very smart and treated us like young adults rather than children. One of the lasting things he taught me was the difference between accuracy and precision. In the decades that I have been around gamers, I have often heard the perspective that role-playing games using percentile dice are superior because of the increased realism. To understand why this is a fallacy, one need only look back to my high school chemistry class.

According to Mr. Klaus and the Good Book, accuracy and precision are distinct, and the distinction is important to understand. The term accuracy refers only to the correctness of a measurement. I could measure my desk accurately using feet, centimeters, or popsicle sticks; as long as other capable people would get the same number I did, the measurement is accurate. Precision, on the other hand, is a statement of how finely the instrument can measure. In the preceding example, centimeters are the most precise unit because they are the smallest. I could be more precise still by using millimeters or an even smaller scale. Notice that precision says nothing about my ability to measure: if I hold the measuring stick upside-down, the result could still be precise, but it would be wildly inaccurate.
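
To put rough numbers on that last image, here is a small Python sketch (my own illustration with made-up figures, not anything from class). One instrument reads only to the whole centimeter but has no bias; the other reads to the millimeter but is held wrong, so every reading carries the same five-centimeter error.

    import random

    TRUE_LENGTH_CM = 120.0  # the desk's actual length (made-up figure)

    def coarse_but_accurate():
        # Reads only to the nearest centimeter (low precision), no bias.
        return round(TRUE_LENGTH_CM + random.gauss(0, 0.4))

    def fine_but_inaccurate():
        # Reads to the nearest millimeter (high precision), but the stick
        # is held wrong, adding a systematic 5 cm error.
        return round(TRUE_LENGTH_CM + 5.0 + random.gauss(0, 0.05), 1)

    print([coarse_but_accurate() for _ in range(5)])   # scatters around 120
    print([fine_but_inaccurate() for _ in range(5)])   # clusters tightly near 125.0

The second instrument's readings agree with each other to a tenth of a centimeter, yet every one of them is wrong by five centimeters: precise, but not accurate.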

Now consider the mechanics of your favorite RPG. It is likely that characters' abilities are rated with a number on a certain scale. The d20 systems like D&D use 3-18 for normal attributes and 1-20 for most skill rolls. Other games, like RoleMaster and WarHammer Fantasy RPG, use percentile dice, giving these numbers a range of 1-100. At first glance it seems like the percentile systems would provide superior realism. To think this, however, is to confuse precision with accuracy.

Let's look at a real-world situation for some perspective. Testing of human beings is certainly a complicated topic, but the Intelligence Quotient (IQ) is a well-known measure that should suffice to illustrate the point. IQ tests are designed to assign a number to a person's general intelligence, with 100 as the average. The high and low scores vary by test, but for the purpose of this discussion, assume the range is 30 to 200, from non-verbal to super-genius.

One would assume that grades earned in school could be predicted by looking at IQ. Yet researchers have found that only 25% of the variance in students' grades can be attributed to IQ. Even more surprisingly, barely half of the variability in college entrance exam scores for mathematics and English correlates with IQ. (For references and an in-depth discussion of IQ, see the Wikipedia entry.)

The underlying problem is that we have no way to measure general intelligence both precisely and accurately. Even though an IQ test assigns a particular number as the score, that number does not necessarily reflect the level of performance, even in areas that seem to depend closely on intelligence. In addition, repeated testing can give results that vary fairly widely.
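
Here is a back-of-the-envelope Python sketch of that retest problem. The sitting-to-sitting spread of about seven points is an assumption I am making for illustration; real figures vary by test.

    import random

    TRUE_SCORE = 112   # one person's "true" general intelligence (hypothetical)
    RETEST_SD = 7      # assumed standard deviation between sittings

    sittings = [round(random.gauss(TRUE_SCORE, RETEST_SD)) for _ in range(10)]
    print(sittings)
    print("spread:", max(sittings) - min(sittings))
    # The spread between sittings will usually be far larger than a single
    # point on the scale, so reporting the score to a finer grain buys
    # precision, not accuracy.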

Now let's apply this information to gaming. Most systems give characters an Intelligence score. Imagine that your character is a computer hacker in a cyberpunk-style RPG universe, attempting to crack a security system to open a door. This would reasonably call for a test of your character's Intelligence. As we have just seen, however, performance on a specific task cannot be predicted accurately from a measure of general intelligence. Having a score of 10 on a scale of 3-18 is no more meaningful than having a 53 on a scale of 1-100.
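
To see why, compare the grain of the two scales against that kind of measurement noise (again just a sketch, in the same Python as above):

    # A 3-18 attribute has 16 distinct values, so each step covers roughly
    # 100 / 16 = 6.25 points of a 1-100 scale. If the rating itself wobbles
    # by more than that from one "measurement" to the next, as the retest
    # sketch above suggests, then the finer scale is recording noise, not
    # ability.
    steps_on_coarse_scale = 18 - 3 + 1
    grain = 100 / steps_on_coarse_scale
    print("one coarse step spans about", grain, "percentile points")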

If this is the case, then what are we left with? Do we have to throw away numbers entirely and go with a system like The Window? Don't despair. My next post will look at some basic statistics that might help with this dilemma.
