- Confirmation Bias
- Assimilation Bias
- Outcome Bias
- Naïve Realism
- Availability Cascade
- Gambler’s Fallacy
- Over-Inference from Small Samples
- Premise 1: I played against xxxxxx team on the ladder yesterday and I won easily.
- Premise 2: Any team I can easily defeat is bad.
- Conclusion: xxxxxx team is bad.
Formal logic is a set of rules that allows us to draw valid conclusions from a series of premises. It questions the validity of a conclusion based on whether or not it follows from the premises. These premises are ideas or thoughts that we assume to be true and that are relevant to the eventual conclusion. If the conclusion cannot logically be deduced from the premises, the argument is known as a deductive fallacy.
Today we’re going to be talking about informal logic, which, unlike formal logic, examines the validity of the premises from which a conclusion is drawn. Examine the argument above. According to formal logic, the argument is valid: the conclusion follows from the premises. According to informal logic, however, the argument is unsound, as premise 2 is false. We’ll be discussing a case very similar to the one above in the section on Naïve Realism. More specifically, I’ll be talking about the common and predictable sources of error in informal reasoning that are brought about by the natural mental shortcuts we are predisposed to use.
For those of you who don’t know me, I’m Jiwa; VGC player since 2015 and psychology student with a particular interest in cognitive psychology. In this article, I’ll be discussing seven different informal logical fallacies or cognitive biases, how they most commonly arise in the context of competitive Pokémon, and how to avoid falling victim to them.
Confirmation Bias

The inclination to recruit and give weight to evidence that is consistent with the hypothesis in question, rather than search for inconsistent evidence that could falsify the hypothesis.
In the context of competitive Pokémon, this concept is most applicable to the teambuilding and play-testing phase. This is more likely to happen when testing a specific Pokémon or move that the individual believes to be particularly interesting. More specifically, Confirmation Bias may influence an individual’s recollections of particular games, resulting in a tendency to recall examples of instances in which said Pokémon or move choice was pivotal in winning a game, and a tendency to ignore situations in which an alternative would have been far more useful.
Recalling VGC 15, one example of this may be the choice to use Crunch over Sucker Punch on Mega Kangaskhan. A biased tester may recall an incident in which they were able to KO an unsuspecting Aegislash in Blade Forme as it attempted to use Substitute, but disregard the numerous situations in which Sucker Punch would have been far more useful. Confirmation Bias can also arise when an individual has an emotional attachment to a given Pokémon due to previous success or favouritism, causing them to want to perceive said Pokémon in a favourable light.
Confirmation bias in Pokémon can be circumvented somewhat by keeping track of statistics whilst testing. What’s my win percentage when bringing this Pokémon? How often would a particular alternative move/Pokémon be more useful? However, when doing this it’s important to be wary of Over-Inference from Small Samples.
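The kind of record-keeping described above can be as simple as a small script. The sketch below is purely illustrative (the class and field names are my own invention, not part of any real tool); it tracks per-Pokémon wins so the numbers, rather than selective memory, answer "what's my win percentage when bringing this Pokémon?".

```python
from collections import defaultdict

# Minimal win-rate tracker for play-testing sessions.
# All names here are illustrative, not taken from any real tool.
class TestLog:
    def __init__(self):
        self.games = defaultdict(lambda: {"wins": 0, "games": 0})

    def record(self, pokemon, won):
        """Log one game in which `pokemon` was brought."""
        entry = self.games[pokemon]
        entry["games"] += 1
        if won:
            entry["wins"] += 1

    def win_rate(self, pokemon):
        """Observed win rate, or None if the Pokémon hasn't been tested."""
        entry = self.games[pokemon]
        if entry["games"] == 0:
            return None
        return entry["wins"] / entry["games"]

log = TestLog()
for won in [True, True, False, True]:
    log.record("Kangaskhan", won)
print(log.win_rate("Kangaskhan"))  # 0.75
```

Even a log this crude forces you to count the games where the choice didn't pay off, which is exactly the evidence Confirmation Bias discards.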
Assimilation Bias

The inclination to interpret ambiguous evidence in a manner that supports an initial hypothesis or supposition.
We hate being wrong. As humans, we instinctively protect our opinions and ideas from information that could potentially be interpreted as contradictory; instead, we often assess information from an angle that allows it to support our initial hypothesis. This is not to say that individuals are incapable of accepting that their initial opinions were misinformed, but that there is a clear tendency to defend our convictions by interpreting information that doesn't obviously contradict them in a way that supports our initial thoughts.
This is relevant to competitive Pokémon in a similar way to the previously discussed Confirmation Bias, but rather than the weighting of evidence being skewed by whether or not it supports a hypothesis, it is the interpretation of the evidence itself that is biased. For instance, if a player were to have a theory that Gunk Shot was more useful than Poison Jab on their Alolan Muk, they might test this theory by playing some games with Gunk Shot on their moveset in place of Poison Jab. If the player were to play against a Tapu Lele, they might use Gunk Shot and successfully OHKO the Tapu Lele. A player influenced by Assimilation Bias would attribute the OHKO to their use of Gunk Shot instead of Poison Jab, despite the fact that only the bulkier variants of Tapu Lele survive Poison Jab. Furthermore, by using Gunk Shot, the player accepted a 20% chance that their attack would miss altogether, so there were both positives and negatives to this decision. Rather than interpreting this evidence as ambiguous, with the optimal choice dependent on the opposing Tapu Lele's EV spread, they would chalk it up as evidence supporting their initial theory.
Consistently avoiding Assimilation Bias would require an extremely high degree of self-awareness and a very analytical perspective. However, merely questioning one's own conclusions can often be enough to notice situations in which we are interpreting information in a biased way. Try to remain as objective as possible, and don't get too attached to ideas or theories.
Outcome Bias

The tendency to evaluate a decision based on its eventual outcome, rather than the quality of the decision at the time it was made.
Given the vast number of events in the VGC circuit (in some regions), it's easy to become highly outcome-focused, allowing opinions about the viability of any given Pokémon or archetype to be formed solely by the results they achieve. Though viewing usage stats and metagame analyses is a great way to keep track of current threats to consider when teambuilding, there are simply too many variables at play for results to be a perfect illustration of viability. For example, you may have heard statements in the community along the lines of the following:
“How can you claim that xxxxxxx archetype/team is bad, this person won a regional/MSS with it?”
This approach is inherently flawed, as there are a number of alternative explanations for the given team’s success. For example:
- The player was fortunate in that they managed to avoid playing against any of the numerous tough match-ups the given archetype has.
- The team in question was a very specific meta-call based on information or expectations about the teams that others would bring to the event, rather than being used on the basis of inherent strength.
- The player using the winning team was extremely experienced with the given team or archetype and used it not because it was particularly good, but because they happened to be extremely well practised with it.
- The majority of the crucial RNG rolls fell in their favour during the event, causing them to win games they would not reliably have won otherwise.
Outcome Bias within Pokémon is related to more than just event results, however. Even within a single game, we retrospectively evaluate our decisions based on their outcomes. To illustrate this, picture this scenario: Each player has one remaining Pokémon; player 1 has Alolan Muk at 70% health, with its Figy Berry already consumed, and player 2 has Tapu Lele at full health. Player 1 has both Poison Jab and Gunk Shot on their Muk and knows that it will faint to two Moonblasts. Based on current metagame trends, player 1 expects the opposing Tapu Lele to be a bulky variant, running 252 HP EVs, possibly with some EVs also invested in Defence. As such, they decide that Gunk Shot is their best option, as Poison Jab only has a 50% chance to KO 252 HP Tapu Lele. Gunk Shot misses and player 1 loses the game. Instinctively, player 1 may regret their choice to use Gunk Shot, despite the fact that whether or not Gunk Shot misses has no bearing on the quality of the original decision. In comparison, should Gunk Shot hit, player 1's original decision is immediately validated, and they may not think twice about whether or not it was the correct choice.
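The quality of player 1's decision can be separated from its outcome by comparing the two moves on expected KO probability. The figures below come from the scenario itself, not from real damage calculations: Gunk Shot is assumed to always KO the bulky Tapu Lele when it hits (80% accuracy), while Poison Jab never misses but only KOs on half its damage rolls.

```python
# Expected KO probability = P(hit) * P(KO | hit).
# Numbers are the assumptions from the scenario above, not exact calcs.
gunk_shot = 0.80 * 1.00    # 80% accuracy, KOs the bulky Lele on any hit
poison_jab = 1.00 * 0.50   # never misses, KOs on 50% of damage rolls

options = {"Gunk Shot": gunk_shot, "Poison Jab": poison_jab}
best_move = max(options, key=options.get)
print(best_move, options[best_move])  # Gunk Shot 0.8
```

Under these assumptions Gunk Shot wins the game 80% of the time and Poison Jab only 50%, so the decision was correct at the moment it was made; the miss changes the outcome, not the analysis.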
The avoidance of Outcome Bias involves a more analytical approach, considering potential alternate explanations to outcomes, and drawing conclusions from larger samples wherever possible. Watching entire games or reading full team reports is likely to give a better picture from an analytical standpoint than simply viewing final standings.
Naïve Realism

The conviction that one perceives objects and events as they are, rather than as they appear in light of one's vantage point, prior beliefs, and expectations.
Individual experiences when playing Pokémon vary greatly from person to person. A person's experiences shape their opinions and the way in which they perceive the world. Within the context of competitive Pokémon, our opinions of a given Pokémon or team archetype are formed by our interactions with it. An example of the expression of an opinion formed in such a way might look something like this:
“I played against xxxxxx archetype on the ladder a couple of times and won easily, that archetype is bad.”
As discussed, such an attitude is inherently flawed, as one’s own interactions with the given Pokémon or archetype may not be truly representative of its strength. Consider these alternative explanations:
- The team I was using was a particularly tough match-up for that archetype/Pokémon.
- My opponent didn’t play optimally and could have won if they had done so.
- My opponent’s build of that archetype/Pokémon was suboptimal, and with some tweaks, it could be good.
Each of these explanations is entirely plausible, but each takes more effort to consider than the initial attitude, so they are generally not given much consideration.
There are a few simple and effective ways to avoid Naïve Realism. Discussing opinions and experiences with a number of other people is a good way to ensure that your own views are supplemented by a wider array of sources. Assessing the possibility of alternative explanations is also a potential method, though this may be difficult, as the list above is not necessarily comprehensive. In the case of assessing the viability of a particular Pokémon, combining one’s own experiences with a review of the given Pokémon’s usage in recent high-level tournaments or high-ladder usage stats is a good method to provide a broader perspective from which to draw a more accurate opinion.
Availability Cascade

A self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception increasing plausibility through its rising availability in public discourse.
Much of the VGC community is founded on the mutual sharing of opinions and information between friends, top players, small communities, etc. It is not uncommon for opinions to be expressed and repeated to the point that they become almost dogmatic in nature: simply common knowledge, not worth questioning or querying because they are just true, and those who disagree are misinformed or out of the loop. This ties in with group polarisation, a phenomenon by which opinions, when discussed in groups of people who hold similar beliefs, become more extreme than those truly held by each individual group member.
A sceptical mindset is a great tool for combating the Availability Cascade: constant questioning of opinions, no matter how common or obvious. This does, however, have the potential to be extremely difficult and draining. A simpler alternative is to avoid "incestuous" teambuilding and testing by ensuring that the circle of people you interact with when teambuilding and play-testing is not too narrow or repetitive. This is not to say that performing these tasks within small groups is necessarily a bad thing; having a close understanding of each other's opinions can be extremely useful, but it's important to be wary of the potential for an Availability Cascade when doing so.
Gambler’s Fallacy

The tendency to believe that if a particular outcome has not occurred for a while, it is “due” – i.e. the occurrence/non-occurrence of past events influences the probability of future events.
Given the significant element of chance involved in competitive Pokémon, the ability to confidently and rationally assess probabilities is an extremely useful skill when aiming to play optimally. For example, any Pokémon player should grasp that Scald's 30% chance to burn does not mean that exactly 3 of any given 10 Scalds will burn the target. Despite this, even those with a concrete understanding of probability may find themselves succumbing to Gambler's Fallacy, particularly when they feel stressed or under pressure. Gambler's Fallacy is most likely to arise whilst playing, particularly when the player in question feels they have been unlucky. Instances of this may manifest in thoughts similar to the following:
“The last 5 times I’ve used Scald, it hasn’t burned the target. I should be due a burn around about now.”
“Last time I faced Ninetales it froze both of my Pokémon with Blizzard on turn 1, therefore I should lead with my Pokémon with Wide Guard this time even though it’s otherwise pretty useless in this match-up.”
Most players who have these thoughts are perfectly aware of their irrational nature, but it's hard to alleviate them entirely while playing. It's also important to bear in mind that though these thoughts are not particularly useful, they are only outright detrimental when they influence a player's actions, as in the second example above.
Being aware of this phenomenon is perhaps the easiest and most effective way of preventing it from influencing your play, as this awareness may help you notice and suppress associated thoughts.
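The independence of each Scald roll is easy to demonstrate with a quick simulation; this is a hand-rolled sketch, not game code. It tracks how often Scald burns immediately after a streak of five non-burns, the exact situation in which a player feels "due" a burn.

```python
import random

random.seed(0)
TRIALS = 200_000
BURN_CHANCE = 0.30  # Scald's burn rate

# Count how often a burn follows a streak of 5 non-burns.
# Independence says this conditional rate should still be ~30%.
streak = 0
after_streak = 0
burns_after_streak = 0
for _ in range(TRIALS):
    burned = random.random() < BURN_CHANCE
    if streak >= 5:
        after_streak += 1
        if burned:
            burns_after_streak += 1
    streak = 0 if burned else streak + 1

print(burns_after_streak / after_streak)  # close to 0.30
```

The burn rate after a five-miss streak comes out at roughly 30%, the same as on any other turn; the dice have no memory.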
Over-Inference from Small Samples
The tendency to underestimate potential variance in small samples.
Play-testing is a huge part of competitive VGC, both on Battle Spot and Pokémon Showdown. It allows players to test ideas and practise with teams prior to events. However, not everyone is able to play as many games as they would like, and extrapolating more than is justifiable from small samples is a common issue. Pokémon is a game of high variance in a number of different areas, e.g. quality of opponents, frequency of favourable RNG, percentage of correct predictions in 50/50 situations, etc. As such, small samples are unlikely to accurately reflect true win rates, and much of the analysis of such samples is arbitrary and useless.
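How misleading a small sample can be is easy to see by simulation. The sketch below (assumed numbers, chosen only for illustration) gives a player a true 55% win rate and compares the spread of observed win rates over many 10-game sessions against many 200-game sessions.

```python
import random

random.seed(1)
TRUE_WIN_RATE = 0.55  # illustrative "true" skill level

def observed_rate(n_games):
    """Win rate observed in one session of n_games coin-flip-style games."""
    wins = sum(random.random() < TRUE_WIN_RATE for _ in range(n_games))
    return wins / n_games

spreads = {}
for n in (10, 200):
    rates = [observed_rate(n) for _ in range(1000)]
    spreads[n] = max(rates) - min(rates)
    print(f"{n} games: observed win rate ranged "
          f"from {min(rates):.2f} to {max(rates):.2f}")
```

Over 10-game sessions the observed win rate swings wildly, while over 200-game sessions it stays close to 55%; a handful of ladder games simply cannot distinguish a good team from a lucky one.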
Play-testing is not the only phase of competitive Pokémon in which over-inference can occur from small samples. When analysing a metagame, the analyst has to be careful not to be overzealous in the conclusions they draw from small samples of events. For example, the following is a statement similar to those that you might see floating around the community after a given event:
“I saw the results from the last regionals, it seems pretty clear that Tapu Fini + Kartana + Arcanine is the CHALK of this format.”
Any particular regionals involves a small sample of players (globally speaking), many of whom are likely to teambuild and play-test together, competing in what we have established to be a high-variance game with a constantly changing metagame. It is therefore extremely inadvisable to use the usage stats of a single event to draw sweeping conclusions such as the one above.
Circumventing over-inference is quite easy to do; simply being aware of the limited conclusions that can feasibly be drawn from small samples should be enough to prevent most people from making this mistake.
Cognitive biases are entirely normal occurrences, created by natural mental shortcuts that allow us to evaluate complex situations more quickly by simplifying them. This simplification process is generally effective, but can cause predictable errors. Being aware of the situations in which these errors most commonly arise is often enough to avoid many of them, alongside the other potential solutions discussed under the relevant subheadings of this article.
Finally, I’d like to give credit to these articles from Paulo Vitor Damo da Rosa and Werford for providing inspiration for this article, and this chapter from Risen & Gilovich for definitions of the cognitive biases and logical fallacies listed above.