Perceiving The Emotions of Pokémon

Ben J. Jennings1

1 Centre for Cognitive Neuroscience, Brunel University London, London, U.K. E-mail: ben.jennings (at) brunel.ac (dot) uk


The ability to reliably perceive the emotions of other people is vital for normal social functioning, and the human face is perhaps the strongest non-verbal cue that can be utilized when judging the emotional state of others (Ekman, 1965). The advantages of possessing this ability to recognise emotions, i.e., having emotional intelligence, include being able to respond to other people in an informed and appropriate manner, assisting in the accurate prediction of another individual’s future actions, and facilitating efficient interpersonal behavior (Ekman, 1982; Izard, 1972; McArthur & Baron, 1983). In the current experiment, the consistency with which emotions displayed by a human female face and by a Pokémon character are perceived was investigated.

General Methods

The current study employed 30 hand drawings of Pikachu, a first generation electric-type Pokémon character, depicting a range of emotions (images used with permission from the illustrator, bluekomadori [https://www.deviantart.com/bluekomadori]; based on the video game characters belonging to The Pokémon Company); see Fig. 1a for examples. Also, 30 photo-quality stimuli displaying a range of emotions, expressed by the same female model, were taken from the McGill Face Database (Schmidtmann et al., 2016); see Fig. 1b for examples. Ratings of arousal (i.e., the excitement level, ranging from high to low) and valence (i.e., pleasantness or unpleasantness) were obtained for each image using a similar method to Jennings et al. (2017). This method involved the participants viewing each image in turn in a random order (60 in total: 30 of Pikachu and 30 of the human female from the McGill database). After each image was viewed (presentation time 500 ms) the participants’ task was to classify the emotion being displayed (i.e., not their internal emotional response elicited by the stimuli, but the emotion they perceived the figure to be displaying).

The classification was achieved by “pointing-and-clicking”, with a computer mouse, the corresponding location within the subsequently displayed 2-dimensional Arousal-Valence emotion space (Russell, 1980). The emotion space is depicted in Fig. 1c; note that the red words are for illustration only and were not visible during testing; they are supplied here to give the reader the gist of the types of emotion that different areas of the space represent. Data for 20 observers (14 females), aged 23±5 years (Mean±SD), were collected using a MacBook Pro (Apple Inc.). Stimulus presentation and response collection were handled with the PsychToolbox software (Brainard, 1997).

Figure 1. Panels (a) and (b) illustrate 3 exemplars of the Pokémon and human stimuli, respectively. Panel (c) shows the response grid displayed on each trial for classifications to be made within (note: the red wording was not visible during testing). Panels (d) and (e) show locations of perceived emotion in the human and Pokémon stimuli, respectively. Error bars represent one standard error.

Results

The calculated standard errors (SEs) serve as a measure of the classification agreement between observers for a given stimulus and were determined in both the arousal (vertical) and valence (horizontal) directions for both the Pokémon and human stimuli. These are presented as the error bars in Figs. 1d and 1e. The SEs were compared between the two stimulus types using independent t-tests for both the arousal and valence directions; no significant differences were revealed (Arousal: t(58)=-0.97, p=.34; Valence: t(58)=1.46, p=.15).

Effect sizes, i.e., Cohen’s d, were also determined; Arousal: d=0.06, and Valence: d=0.32, i.e., effect sizes were within the very small to small, and small to medium ranges, respectively (Cohen, 1988; Sawilowsky, 2009), again indicating a high degree of similarity in precision between the two stimuli classes. It is important to note that the analysis relied on comparing the variation (SEs) for each classified image (reflecting the agreement between participants) and not the absolute (x, y) coordinates within the space.
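As an illustration of this style of analysis, the sketch below computes per-image SEs across observers for each stimulus class and compares them with an independent t-test and Cohen’s d. It is a minimal Python reconstruction, not the study’s actual analysis code, and the random arrays are placeholders standing in for the real 20-observer × 30-image rating matrices:

```python
import numpy as np
from scipy.stats import sem, ttest_ind

# Placeholder data: 20 observers x 30 images of valence (or arousal) ratings.
rng = np.random.default_rng(0)
ratings_human = rng.normal(size=(20, 30))
ratings_pokemon = rng.normal(size=(20, 30))

# One standard error per image, computed across observers; these SEs are the
# agreement measure compared between the two stimulus classes.
se_human = sem(ratings_human, axis=0)
se_pokemon = sem(ratings_pokemon, axis=0)

t, p = ttest_ind(se_pokemon, se_human)  # independent-samples t-test, df = 58

# Cohen's d using the pooled standard deviation of the two SE distributions.
pooled_sd = np.sqrt((se_human.var(ddof=1) + se_pokemon.var(ddof=1)) / 2)
d = abs(se_pokemon.mean() - se_human.mean()) / pooled_sd
print(t, p, d)
```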

Discussion

What could observers be utilizing in the images that produces such a high degree of agreement on the emotion expressed by each stimulus class? Is all the emotional information contained within the eyes? Levy et al. (2013) demonstrated that when observers make an eye movement to either a human with eyes located, as expected, within the face, or to a non-human (i.e., a ‘monster’) that has eyes located somewhere other than the face (for example, the mythical Japanese Tenome, which has its eyes located on the palms of its hands; Sekien, 1776), the eye movements are nevertheless made in both cases towards the eyes; i.e., there is something special about the eyes that captures attention wherever they are positioned. Schmidtmann et al. (2016) additionally showed that accuracy for identifying an emotion was equal whether an entire face or a restricted stimulus showing just the eyes was employed. The eyes of the Pikachu stimuli are simply black circles with a white “pupil”; however, they can convey emotional information, for example, through the position of the pupil, the orientation of the eyelid, and how much the eye is closed. It is hence plausible that the arousal-valence ratings are made on information extracted from the eyes alone.

However, for the Pokémon stimuli, Pikachu’s entire body is displayed on each trial, and it has been previously shown that when emotional information from the face and body is simultaneously available, the two can interact. This has the effect of intensifying the emotion expressed by the face (de Gelder et al., 2015), as perceived facial emotions are biased towards the emotion expressed by the body (Meeren et al., 2005). It is therefore likely that holistic processing of the facial expression, coupled with signals from Pikachu’s body language, i.e., posture, provides an additional input into the observers’ final arousal-valence rating.

Conclusion    

Whatever the internal processes responsible for perceiving emotional content, the data points to a mechanism that allows the emotional states of human faces to be classified with a high precision across observers, consistent with previous emotion classification studies (e.g., Jennings et al., 2017). The data also reveals the possibility of a mechanism present in normal observers that can extract emotional information from the faces and/or bodies depicted in simple sketches, containing minimal fine detail, shading and colour variation, and use this information to facilitate the consistent classification of the emotional states expressed by characters from fantasy universes.

 

References

Brainard, D.H. (1997) The psychophysics toolbox. Spatial Vision 10: 433–436.

de Gelder, B.; de Borst, A.W.; Watson, R. (2015) The perception of emotion in body expressions. WIREs Cognitive Science 6: 149–158.

Ekman, P. (1965) Communication through nonverbal behavior: a source of information about an interpersonal relationship. In: Tomkins, S.S. & Izard, C.E. (Eds.) Affect, Cognition and Personality: Empirical Studies. Spinger, Oxford. Pp. 390–442.

Ekman, P. (1982) Emotion in the Human Face. Second Edition. Cambridge University Press, Cambridge.

Izard, C.E. (1972) Patterns of Emotion: a new analysis of anxiety and depression. Academic Press, New York.

Jennings, B.J.; Yu, Y.; Kingdom, F.A.A. (2017) The role of spatial frequency in emotional face classification. Attention, Perception & Psychophysics 79(6): 1573–1577.

Levy, J.; Foulsham, T.; Kingstone, A. (2013) Monsters are people too. Biology Letters 9(1): 20120850.

McArthur, L.Z. & Baron, R.M. (1983) Toward an ecological theory of social perception. Psychological Review 90(3): 215–238.

Meeren, H.K.; van Heijnsbergen, C.C.; de Gelder, B. (2005) Rapid perceptual integration of facial expression and emotional body language. Proceedings of the National Academy of Sciences 102: 16518–16523.

Russell, J.A. (1980) A circumplex model of affect. Journal of Personality and Social Psychology 39(6): 1161–1178.

Schmidtmann, G.; Sleiman, D.; Pollack, J.; Gold, I. (2016) Reading the mind in the blink of an eye – a novel database for facial expressions. Perception 45: 238–239.

Sekien, T. (1776) 画図百鬼夜行 [Gazu Hyakki yagyō; The Illustrated Night Parade of a Hundred Demons]. Maekawa Yahei, Japan.


About the Author

Dr. Ben Jennings is a vision scientist. His research psychophysically and electrophysiologically investigates colour and spatial vision, object recognition, emotions, and brain injury. His favourite Pokémon is Beldum.



 

Cosplay at Armageddon Expo*

Paul Mountfort1, Anne Peirson-Smith2 & Adam Geczy3

1 Auckland University of Technology, Auckland, New Zealand. Email: paul.mountfort (at) aut.ac (dot) nz

2 City University of Hong Kong, Kowloon, Hong Kong. Email: enanneps (at) cityu.edu (dot) hk

3 University of Sydney, Sydney, Australia. Email: adam.geczy (at) sydney.edu (dot) au

* This is an extract from Chapter 3 of Planet Cosplay: Costume Play, Identity and Global Fandom, by Paul Mountfort, Anne Peirson-Smith and Adam Geczy (Bristol, UK: Intellect Books; Chicago, US: University of Chicago Press, 2018). Reprinted with permission by Intellect Books. Note that this version may display minor editorial differences to the final published version.


Cosplay is a performance medium in which embodied textual citation and photographic practices come together and sometimes collide. Moreover, photography both documents and preconditions elements of the cosplay performance, via visual genres typically spanning those of the fashion runway, studio and ‘hallway’ shoots. This chapter brings these textual and visual analyses together to present a situated photo-essay shot in the candid style. It documents five years of an Australasian-based fan convention that celebrated its twentieth anniversary in 2015, the Auckland Armageddon Expo. In doing so it offers a snapshot, as it were, of a half decade of ‘glocalized’ cosplay practice. The term ‘glocalization’ refers to twin processes at work in late capital. Firstly, capital and regulatory frameworks elide from the national upwards to the global scale and reciprocally downwards to the scale of the local. Secondly, economic activities and networks between business entities become simultaneously more localized, regionalized and transnational.[i] This model has been widely applied to the sphere of cultural capital and is of particular relevance to cosplay, which tends to grow by osmosis out of local conditions but owes its provenance to wider networks of cultural production and associated fandoms.

Armageddon is an instance of the organic way in which glocalized conventions develop and proliferate. It began as a comics and trading card event in Auckland, New Zealand, in 1995 with follow-ups in 1997, and within a few short years had spread to the capital city, Wellington (1998), and on to Melbourne, Australia (1999).[ii] Starting off in small community venues, progressing to more major urban events centres, and on to large-scale convention spaces, the Expo has evolved into a major regional sci-fi, comics and gaming convention with over 80 events to date, some 70,000 annual visitors in its home city and 130,000 across its Australasian diaspora. In aggregate, it is, therefore, close in scale to San Diego’s annual Comic-Con and exhibits a similar mix of cultural and industry practices. While the Auckland Expo has some factors that are specific to its geographic location, genealogy as a gaming and fan con, specifics of the main site and its mix of events, the photos in this chapter could have been taken at almost any con in the western world, both in terms of the diversity of participants and the franchises, storyworlds and other source media texts represented in the costumes on display. The first part of the commentary, which follows, discusses the range of sources being cited—the individual trees amid the forest of citations—along with some identifiable trends in the 50 photographs that comprise this selection.

Figure 1. Q from [C] The Money of Soul and Possibility Control (2011), contest event, Auckland Armageddon Expo 2012. © Paul Mountfort.
Figure 2. ‘Heath Ledger’s’ Joker from The Dark Knight (2008), Auckland Armageddon Expo 2012. © Paul Mountfort.

Figure 3. Applying prosthetics, Auckland Armageddon Expo 2012. © Paul Mountfort.

Figure 4. Scene outside the convention space, Auckland Armageddon Expo 2013. © Paul Mountfort.

Figure 5. Bane from The Dark Knight Rises (2012) and Harley Quinn from DC universe, Auckland Armageddon Expo 2013. © Paul Mountfort.

Figure 6. Display mannequin, Auckland Armageddon Expo 2013. © Paul Mountfort.

 

Figure 7. Thorin Oakenshield (left) from The Hobbit (2012–2014) and steampunk cosplayer (right), Auckland Armageddon Expo 2013. © Paul Mountfort.

 

Figure 8. Namine and Roxas (left and centre) from Kingdom Hearts (2002–), with Korra (right, background) from Legend of Korra (2012–14), Auckland Armageddon Expo 2013. © Paul Mountfort.

 

A cosphoto-essay

With the identity of the cosplayers included in this chapter being anonymous, the focus of discussion here is on the characters and source texts identifiable in the sample of photos on display, and the popular cultural milieu out of which they have arisen. Many of the sources being mined here are comparatively ‘timeless,’ harking back decades to milestones in their respective media, such as the San cosplay and crossplay from 2014 and 2016 (Figures 23 and 40) inspired by Studio Ghibli’s Princess Mononoke (Mononoke Hime) (1997).[iii] Half a decade is long enough, however, for micro-historical forces to operate in fan cultures, wherein recent movies, games and media elements enjoy rapid waves of meme-like popularity. Of course, even the most up-to-the-minute sources being cosplayed may spring from long-lived media franchises. For instance, Marvel or DC’s blockbuster transmedia storyworlds have comic book precursors going back to the 1930s and 1940s. However, particular movie or game adaptations are often very specific: for example, a 2012 costume of The Joker (Figure 2) is not any old joker but identifiably Heath Ledger’s Joker from Nolan’s The Dark Knight (2008). Similarly, the 2016 release of the movie Suicide Squad, set in the DC Comics universe, indelibly marked the portrayal of Harley Quinn in that year.[iv] Nor do new waves of influence always overwhelm old favourites: stormtroopers and even sets from the original Star Wars (1977–83) trilogy jostle alongside Sith and other characters from the more recent prequels and sequels (Figures 25, 35 and 36).[v]

Identifying the ‘trees’ in the forest of citations that comprise even a medium-size convention would prove a challenging, if not impossible, task for even the most pop culturally literate geek or otaku. This is because, as we have seen, cosplay draws on multiple media sources: comics, movies, manga, anime, games, pop idols and other media identities, as well as online memes. Most, though not all, of the costumes in this essay proved readily identifiable.[vi] However, others were more elusive, with some cosplay being, in any case, modelled after what Matthew Hale terms a generic (as opposed to discrete) character type[vii] or fashion style rather than a titular protagonist—though these two dimensions (character type and style) often go hand in hand. Common western character types include vampires, zombies and other genera of the undead, who shuffle through convention spaces alongside Japanese-inspired samurai, ninjas, shōnen (boys) and shōjo (girls), including sub-types such as bishōnen (beautiful boys) and mahō shōjo (magical girls). Among the most important generic styles—which may comprise not just fashion but lifestyles—are Lolita and steampunk. As previously discussed, these styles have often infected source media, such as anime and manga. Furthermore, crossovers and mash-ups abound, especially at larger cons with more established player communities who have the confidence to push cosplaying boundaries. This said, superhero action franchises, sci-fi and fantasy television shows, multi-season anime series and protagonists from popular gameworlds tend to be the dominant fauna at most cosplay cons.

Figure 9. Menma from Anohana (2011), Auckland Armageddon Expo 2013. © Paul Mountfort.

 

Figure 10. Naruto from Naruto Shippuden (2007–), Auckland Armageddon Expo 2013. © Paul Mountfort.

Figure 11. Vendor with mood-reading nekomimi (cat ears), Auckland Armageddon Expo 2013. © Paul Mountfort.

Figure 12. Onision ‘I’m a banana’ meme cosplay, Auckland Armageddon Expo 2013. © Paul Mountfort.

Figure 13. Samurai cosplay, Auckland Armageddon Expo 2013. © Paul Mountfort.

Figure 14. Zipper face nurse meme cosplay, Auckland Armageddon Expo 2013. © Paul Mountfort.

Figure 15. Colossal Titan (centre, foreground) from Attack on Titan (2009–), Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 16. Armoured anime cosplay, Auckland Armageddon Expo 2014. © Paul Mountfort.

 

There are identifiable cultural fashions within cosplay, and one of the affordances of an extended photographic study is that we are able to see how the portrayal of certain characters, or iterations of certain characters, spike in relation to recent film, game and other media releases. Photos from Armageddon taken between 2012 and 2016 document a number of character iterations from Marvel and DC. Both are deep-rooted comics franchises from the early twentieth century that have had many iterations, adaptations and spin-offs over the decades, and which are now the subject of multiple big movie and television series versions. Marvel exerts a particularly powerful gravitational pull on western cosplay today, with Avenger’s franchise characters such as Captain America (Figure 32) much in evidence in the wake of the Captain America: The First Avenger (2011), The Winter Soldier (2014) and Civil War (2016) instalments.[viii] The interconnected nature of the Marvel universe, where the storylines of characters from discrete shows intersect at various junctures, rewarding fans focused on the detailed timelines and backstories, provides the perfect template for the kind of vast inter-referential networks that operate within the cosphere.

In recent years DC has made serious moves to mimic Marvel’s integrated storyworlds in an attempt to establish its own universe, though with mixed success. As mentioned, ‘Heath Ledger’s’ Joker (Figure 2) was cited at Armageddon in 2012, four years after the release of DC’s The Dark Knight (2008). Ledger’s Joker attained iconic status not just through his riveting performance and the relative critical acclaim of Christopher Nolan’s Batman trilogy[ix] but also due to the actor’s tragic death in the same year as the movie’s release, which cemented his cult following in popular culture and ensured both actor and character iteration a viral afterlife. Nolan’s trilogy restored a cachet to the Batman storyworld notably lacking for DC in the pantheon of contemporary popular culture, including cosplay circles. Hence characters such as the Scarecrow (Figure 22),[x] the only villain of genuine vintage to star in the entire rebooted Batman trilogy (2005–12), and Bane and Harley Quinn (Figure 5) showed up in cosplaying circles following the 2012 release of The Dark Knight Rises, even though Quinn does not appear in that particular trilogy. She has had many iterations and her popularity spiked at 2016’s Armageddon in response to Suicide Squad’s (2016) fishnet-stockinged, baseball-bat-toting version (Figures 39, 50), even though the movie itself was ambivalently received. Superman and Wonder Woman undergo periodic revivals, with 2016’s Armageddon showcasing both female and crossplaying versions (Figure 48) in anticipation of Wonder Woman’s 2017 Warner Brothers reboot directed by Patty Jenkins, while the Green Arrow (Figure 44) from DC’s The Arrow (2012–) television series reboot also put in a guest appearance.[xi]

Figure 17. Horse mask meme cosplay, Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 18. Madame Vastra from Doctor Who Series 6 (2011), Auckland Armageddon Expo 2014. © Paul Mountfort.

 

Figure 19. Cosplayer with police jacket, Cloud from Final Fantasy (1987–) left shoulder plate, and convention merchandise, Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 20. Quidditch player from Harry Potter (2001–11) franchise, Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 21. Ring Wraith from The Lord of the Rings (2001–03) movie trilogy, Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 22. Titular lead character from DC Comics’ Scarecrow (1941–), Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 23. San from Mononoke Hime (1997) Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 24. Titular lead character from Skulduggery Pleasant (2007–), Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 25. Stormtroopers with fan-constructed backdrop from Star Wars IV: A New Hope (1977), Auckland Armageddon Expo 2014. © Paul Mountfort.

 

While some character iterations clearly follow more or less ephemerally on the heels of a movie or other media release, others enjoy relative longevity. For example, at Armageddon 2014 stormtroopers from the first Star Wars (1977) movie (Figure 25), a Ringwraith (Figure 21) and a Quidditch player (Figure 20) were in evidence despite the original Star Wars trilogy dating back to 1977–83, The Lord of the Rings to 2001–03 and Harry Potter to 2001–11.[xii] Of course, like the DC and Marvel storyworlds, these cinematic works have deep and massive roots in popular culture, functioning practically as cultural mythologies in the west, and continue to have currency courtesy of the follow-up Star Wars prequels, sequels and spinoffs (1999–), The Hobbit movie adaptation (2012–14) and the Potter prequel (2016).[xiii] The troupe of stormtroopers who posed in 2014 against a lovingly re-created backdrop from the original Death Star returned in 2015 to find themselves joined by a red guard (Figure 35) from Star Wars II: Attack of the Clones (2002) and a scruffy ‘sandtrooper’ from the extended Star Wars universe (Figure 36). Characters from the wider Star Wars universe may also make cameos, such as the Twi’lek woman from Armageddon 2014 (Figure 27). Although not an identifiable character from the canon, such as Aayla Secura, she is clearly a member of the alien species that figures in the television series Star Wars: The Clone Wars (2008–15). Creative adaptations from the storyworld are fairly common in cosplay, and could be described as fan-driven spinoffs, akin to fanfiction’s world building.

Legacy movies that are not part of a larger franchise or storyworld can also provide cosplayers with material, especially where the imagery is iconic or has proved to ‘have legs’ in popular culture. Examples include the ubiquitous V for Vendetta (2006) masks that reference not only the film, but the Occupy movement, the cyber-insurgent group Anonymous and, more recently, NBC-Universal’s hacktivist drama Mr. Robot (2015–), in a feedback loop of popular cultural inter-referentiality (Figure 29).[xiv] Of course, Anonymous masks may also be a cheap and easy way to simulate cosplay while retaining an aura of subcultural capital that other mass-produced masks do not convey. A movie’s cult status may ensure the relative immortality of its characters in the cosphere, such as the appearance of the eponymous heroine (Figure 37) from Tim Burton’s Corpse Bride (2005) coming back to life in 2015.[xv] Long-running movie series spread out over years mean that the distinction between legacy and current characters is often fluid. Pirates of the Caribbean’s (2003–)[xvi] Jack Sparrow is the source of numerous memes and has been widely cosplayed; there is even a professional cosplayer in Italy who has based his career on playing Sparrow. ‘Jack’s’ appearance at Armageddon in 2016 could be a back reference to instalments 1–4 of the seemingly endless Pirates movie franchise mill, or may have anticipated 2017’s much dreaded Dead Men Tell No Tales.

Figure 26. An Ood from Doctor Who Series 4 (2006), Auckland Armageddon Expo 2014. © Paul Mountfort.

 

Figure 27. Twi’lek woman from the Star Wars: The Clone Wars (2008–15), Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 28. Fantasy figure, Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 29. V for Vendetta (2006) mask, Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 30. Yami from Yu–Gi–Oh franchise (1998–), Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 31. Zombie nurses cosplay meme, Auckland Armageddon Expo 2014. © Paul Mountfort.

Figure 32. Captain America (2011–), Auckland Armageddon Expo 2015. © Paul Mountfort.

Figure 33. Yukata and kimono cosplaying pair, Auckland Armageddon Expo 2015. © Paul Mountfort.

 

There are character iterations, and then there are regenerations (when dealing with a certain 2822-year-old Timelord). Among the many television shows that jostle for attention with characters from live action movies, the long-running British sci-fi series Doctor Who (1963–) is a particularly popular media source. Contemporary characters (e.g. Madame Vastra, Figure 18) rub shoulders with both ‘classic’ and more recent iterations of the Doctor, as do daleks and newer menaces such as the Weeping Angels, the Master in ‘his’ gender-bending guise of Missy and the Ood (Figure 26). Along with sci-fi shows, quasi-historical series such as Spartacus (2010–13), represented by a slave gang (Figure 38), and, particularly, fantasy TV shows have massive constituencies, with Game of Thrones (2011–) being a major source of cosplay performance.[xvii] Occasionally, characters from popular novels that are not transmediated, such as the titular hero (Figure 24) from Skulduggery Pleasant (2007–), are cosplayed, ostensibly based on book cover and fan art.[xviii]

Western animation is sometimes adapted for cosplay, notable examples being Avatar: The Last Airbender (2005–8) and The Legend of Korra (2012–14) (Figure 8).[xix] However, Japanese visual media comprise the twin lodestar, along with western live action films and television, around which contemporary cosplay gravitates globally. This is doubtless due to the sheer profusion of visual riches and the subcultural cachet afforded by Japanese manga, anime and gaming. As with live action, characters from classic anime staples continue to appear, such as the face-painted, dagger-wielding San (Figures 23 and 40) from Princess Mononoke (Mononoke Hime) (1997), along with many other Studio Ghibli characters and those from other anime studios, such as Toei Animation, Sunrise, Production I.G., Madhouse, Manglobe, Studio Pierrot, PA Works, Kyoto Animation and Bones. Characters from anime TV series spotted at Armageddon include Menma (Figure 9) from A-1 Pictures’ Anohana: The Flower We Saw That Day (Ano Hi Mita Hana no Namae o Bokutachi wa Mada Shiranai) (2011), Q (Figure 1) from [C] The Money of Soul and Possibility Control (2011), Mami Tomoe (Figure 42) from Puella Magi Madoka Magica (Mahō Shōjo Madoka Magika) (2011), along with abundant fauna from big ticket franchises such as One Piece (Wan Pīsu) (1997–), Bleach (Burīchi) (2001–) and Naruto (1999–) (Figure 10).[xx]

Figure 34. Sakura kimono cosplay, Auckland Armageddon Expo 2015. © Paul Mountfort.

Figure 35. Red Guard from Star Wars II: Attack of the Clones (2002), Auckland Armageddon Expo 2015. © Paul Mountfort.

Figure 36. Sandtrooper from Star Wars universe (2015), Auckland Armageddon Expo 2015. © Paul Mountfort.

Figure 37. Titular character from Corpse Bride (2005), Auckland Armageddon Expo 2015. © Paul Mountfort.

Figure 38. Slave gang cosplay from Spartacus (2010–13), Auckland Armageddon Expo 2015. © Paul Mountfort.

Figure 39. Harley Quinn, Suicide Squad (2016) iteration, Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 40. San crossplay from Princess Mononoke (1997), Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 41. Lara Croft from the Tomb Raider (1997–) franchise, Auckland Armageddon Expo 2016. © Paul Mountfort.

 

Figure 42. Mami Tomoe from Puella Magi Madoka Magica (2011), Auckland Armageddon Expo 2016. © Paul Mountfort.

Game characters are a widely represented—and perhaps the fastest growing—fictional demographic at cosplay cons, doubtless due to the massively increased penetration of gaming platforms into people’s homes in the early twenty-first century. Among the many examples of stand-alone game series characters in 2016, for example, was Shay Patrick Cormack (Figure 49) from Assassin’s Creed (2007–).[xxi] However, games are widely transmediated and evince complex relations with other media. There are, of course, the manga/anime/trading game tie-ins, resulting in cons being stacked with endless Pokémon (1995–)[xxii] characters along with identities from other systems such as Yami (Figure 30) from Yu–Gi–Oh! (Yū Gi–Ōh!) (1996–).[xxiii] These franchises are truly gargantuan, with Pokémon alone having grossed close to US $50 billion prior to the release in 2016 of the short-lived augmented reality (AR) craze Pokémon GO.[xxiv] Their reach and formative influence on Millennials and Generation Z make it unsurprising that they constitute a major source for cosplay performance. Many characters and storyworlds migrate from manga to anime and on to gaming platforms, such as Naruto and One Piece. Indeed, the anime/games crossover is a huge subject that could easily comprise a book in itself.

Quite apart from trading games, there is a broad distinction between games that have evolved out of manga/anime source-texts and those that were games first but have subsequently been made into movies or television series. Thus, for example, the Colossal Titan (Figure 15) from Attack on Titan (Shingeki no Kyojin) (2009–) references an acclaimed series that has also spawned official and unofficial games, while Namine and Roxas (Figure 8) are avatars from Kingdom Hearts (Kingudamu Hātsu) (2002–), a role-playing action game in the crossover genre—in this case Japanese studio Square Enix’s characters occupying a setting from the Disney universe.[xxv] Final Fantasy (Fainaru Fantajī) (1987–) is a long-running gaming franchise that was transmediated from the original games into films, while Tomb Raider (1996–) started as a game and was adapted to comics and into movies.[xxvi] Lara Crofts of various iterations remain a convention favourite throughout the west (Figure 41), though she is not unknown in Asia. Some game characters riff off anime genres, such as the magical girl anime style of Monomi Usami (Figure 45) from Danganronpa 2: Goodbye Despair (Sūpā Danganronpa Tsū: Sayonara Zetsubō Gakuen) (2012),[xxvii] despite, or perhaps because of, the game itself being shōnen (young male). Indeed, the abstracted look of many avatars and certain generic conventions in the depiction of costuming and weapons both here and in some anime can make identification of such cosplay sources difficult. For example, some samurai cosplay (Figure 13) and fantasy figures (Figure 28) can be hard to distinguish from the general type. Similarly, it can be difficult without asking to tell at first glance if a particular player is Game of Thrones’ Jon Snow or The Hobbit’s (2012–14) Thorin Oakenshield (Figure 7). There are whole books devoted to making Japanese kimono-inspired costumes, ‘because doing so requires specialized dressmaking skills that are different from western dress-making techniques’[xxviii] and the resulting kimono and yukata cosplay (Figures 33 and 34) can be hard to distinguish as genera or specific character references.

Figure 43. Hatsune Miku, digital character from Hatsune Miku V4X Bundle (2007) synthesizer application, Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 44. The Green Arrow from DC’s The Arrow (2012–) television series reboot, Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 45. Monomi Usami from Danganronpa 2: Goodbye Despair (2012), Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 46. Unidentified cosplay, Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 47. Captain Jack Sparrow from The Pirates of the Caribbean (2003–) movie franchise, Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 48. Eponymous heroes from the long-running Superman (1938–) franchise and Wonder Woman (2017) reboot (left and right), Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 49. Shay Patrick from Assassin’s Creed (2007–) video game series, Auckland Armageddon Expo 2016. © Paul Mountfort.

Figure 50. Another Harley Quinn from Suicide Squad (2016), Auckland Armageddon Expo 2016. © Paul Mountfort.

 

In Japan, characters from transmedia storytelling franchises are sometimes also pop cultural idol (aidoru) figures who may embody, or are embodied by, real-life avatars, from media celebrities to café ‘maids’ and ‘butlers.’ Some also may be stand-alone complexes, so to speak. The Hatsune Miku cosplay (Figure 43) at Armageddon 2016 comes from a digital avatar used in the synthesizer application Hatsune Miku (2007–) by Crypton Future Media.[xxix] As a further complication, there are the previously mentioned generic character types such as zombies (Figure 31) and fashion subcultures, such as Lolita and steampunk (Figure 7), that may or may not allude to films and games in which specific Lolis and steampunk characters figure. In some cases one might initially mistake the sackcloth-and-noose-toting costume from 2014 that was DC’s Scarecrow (Figure 22) for a repurposed Halloween mask. Increasingly prevalent is meme cosplay, which is hard to identify for those not in on the joke, and which tends to have a fairly rapid turnover, though less so perhaps in coser circles than online. Examples of this include the Onision ‘I’m a Banana’ meme (Figure 12) from 2009 and the zipper-face (Figure 14) and zombie nurse (Figure 31) memes observed at Armageddon 2013 and 2014, respectively (the former meme dates back to at least 2011). More generic garb, such as the not-uncommon ‘horse head’ masks (Figure 17), may be adopted as an easy way to come costumed to a convention and to create dramatic effect on the cheap. Finally, where the current gallery of photographs is concerned, there are shots that document typical kinds of convention activity, from milling around outside the convention (Figure 4) to common commercial features of the covered exhibition halls. These include the promotional application of prosthetics (Figure 3), themed mannequins (Figure 6) and the sale of merchandise, such as mood-reflecting nekomimi (cat ears) sold at booths on the convention floor (Figure 11). These ‘costplay’ zones await further documentation within the archives of cosphotography, as do many other domains, both physical and virtual, of the ever-expanding cosphere.


Endnotes

Note: Many comic, film, television and game series have multiple directors and are the result of collaboration between several studios, production houses and distributors. For the sake of brevity, the following references limit credit to the main one or two directors, with additional directors noted by et al. Authors’ names appearing before titles refer to comics or literary works. Production credit is generally given to the distributor, often a dominant partner in the production, due to many works being the result of collaborations with multiple studios. Readers who wish to know more about the specific commercial and artistic collaborations that give rise to specific productions can find detailed information online.

[i] See Erik Swyngedouw, ‘Globalisation or “Glocalisation”? Networks, Territories and Rescaling,’ Cambridge Review of International Affairs 17, no. 1 (April 2004).

[ii] See Anon., ‘General-Info/History,’ accessed 1 January 2018, https://www.armageddonexpo.com/General-Info/History/

[iii] Princess Mononoke (Mononoke Hime), directed by Hayao Miyazaki (Tokyo: Studio Ghibli, 1997), Anime film.

[iv] Suicide Squad, directed by David Ayer (New York: Warner Brothers, 2016), Film.

[v] Star Wars I: The Phantom Menace, directed by George Lucas (Century City: 20th Century Fox, 1999), Film; Star Wars II: Attack of the Clones, directed by George Lucas (Century City: 20th Century Fox, 2002), Film; Star Wars III: Revenge of the Sith, directed by George Lucas (Century City: 20th Century Fox, 2005), Film; Star Wars IV: A New Hope, directed by George Lucas (Century City: 20th Century Fox, 1977), Film; Star Wars V: The Empire Strikes Back, directed by Irvin Kershner (Century City: 20th Century Fox, 1980), Film; Star Wars VI: Return of the Jedi, directed by Richard Marquand (Century City: 20th Century Fox, 1983), Film; Star Wars: The Clone Wars, produced by Dave Filoni (US: Disney/ABC, 2015), Film; Star Wars VII: The Force Awakens, directed by J. J. Abrams (Century City: 20th Century Fox, 2015), Film.

[vi] Grateful thanks to Jasmin Darnell, Fin Mountfort, Felix Mountfort and to Sye Johnson and his cosplaying circle, for assistance provided to the authors in the identification of cosplay characters and other storyworld, gameworld and media content for this chapter.

[vii] Matthew Hale, ‘Cosplay: Intertextuality, Public Texts, and the Body Fantastic,’ Western Folklore 73, no. 1 (2014): 10–14.

[viii] Captain America: The First Avenger, directed by Joe Johnston (Hollywood: Paramount Pictures, 2011), Film; Captain America: The Winter Soldier, directed by Anthony Russo and Joe Russo (Burbank: Walt Disney Studios, 2014), Film; The Avengers, directed by Anthony Russo and Joe Russo (Burbank: Walt Disney Studios, 2014), Film; Captain America: Civil War, directed by Anthony Russo and Joe Russo (Burbank: Walt Disney Studios, 2016), Film.

[ix] Batman Begins, directed by Christopher Nolan (New York: Warner Brothers, 2005), Film; The Dark Knight, directed by Christopher Nolan (New York: Warner Brothers, 2008), Film; The Dark Knight Rises, directed by Christopher Nolan (New York: Warner Brothers, 2012), Film.

[x] Scarecrow, Bob Kane and Bill Finger, et al. (Burbank: DC Comics, 1941), Comic book.

[xi] Superman, Jerry Siegel and Joe Shuster, et al. (Burbank: DC Comics, 1938–), Comic book; Wonder Woman, directed by Patty Jenkins (New York: Warner Brothers, 2017), Film; The Arrow, Greg Berlanti, Marc Guggenheim, and Andrew Kreisberg (New York: Warner Brothers, 2012–), TV series.

[xii] The Lord of the Rings: The Fellowship of the Ring, directed by Peter Jackson (Wellington, New Zealand: Wingnut Films, 2001), Film; The Lord of the Rings: The Return of the King, directed by Peter Jackson (Wellington, New Zealand: Wingnut Films, 2003), Film; The Lord of the Rings: The Two Towers, directed by Peter Jackson (Wellington, New Zealand: Wingnut Films, 2002), Film; Harry Potter and the Philosopher’s Stone, directed by Chris Columbus (New York: Warner Brothers, 2001), Film; Harry Potter and the Chamber of Secrets, directed by Chris Columbus (New York: Warner Brothers, 2002), Film; Harry Potter and the Prisoner of Azkaban, directed by Alfonso Cuarón (New York: Warner Brothers, 2004), Film; Harry Potter and the Goblet of Fire, directed by Mike Newell (New York: Warner Brothers, 2005), Film; Harry Potter and the Order of the Phoenix, directed by David Yates (New York: Warner Brothers, 2007), Film; Harry Potter and the Half-Blood Prince, directed by David Yates (New York: Warner Brothers, 2009), Film; Harry Potter and the Deathly Hallows—Part 1, directed by David Yates (New York: Warner Brothers, 2010), Film; Harry Potter and the Deathly Hallows—Part 2, directed by David Yates (New York: Warner Brothers, 2011), Film.

[xiii] The Hobbit: An Unexpected Journey, directed by Peter Jackson (New York: Warner Brothers, 2012), Film; The Hobbit: The Desolation of Smaug, directed by Peter Jackson (New York: Warner Brothers, 2013), Film; The Hobbit: The Battle of the Five Armies, directed by Peter Jackson (New York: Warner Brothers, 2014), Film.

[xiv] V For Vendetta, directed by James McTeigue (New York: Warner Brothers, 2006), Film; Mr. Robot, Sam Esmail (US: NBC/Universal Television, 2015), TV series.

[xv] Corpse Bride, directed by Tim Burton (New York: Warner Brothers, 2005), Film.

[xvi] Pirates of the Caribbean: The Curse of the Black Pearl, directed by Gore Verbinski (Burbank: Walt Disney Studios, 2003), Film; Pirates of the Caribbean: Dead Man’s Chest, directed by Gore Verbinski (Burbank: Walt Disney Studios, 2006), Film; Pirates of the Caribbean: At the World’s End, directed by Gore Verbinski (Burbank: Walt Disney Studios, 2007), Film; Pirates of the Caribbean: On Stranger Tides, directed by Rob Marshall (Burbank: Walt Disney Studios, 2011), Film.

[xvii] Doctor Who, created by Sydney Newman, C. E. Webber and Donald Wilson (London: BBC, 1963–), TV series; Spartacus, Steven S. DeKnight (Meridian: Starz, 2010–13), TV series; Game of Thrones, directed by David Benioff and D. B. Weiss (New York: HBO, 2011–), TV series.

[xviii] Derek Landy, Skulduggery Pleasant (London: Harper Collins, 2007).

[xix] Avatar: The Last Airbender, Michael Dante DiMartino and Bryan Konietzko (US: Nickelodeon, 2005–8), Animated TV series; The Legend of Korra, Michael Dante DiMartino and Bryan Konietzko (US: Nickelodeon, 2012–14), Animated TV series.

[xx] Anohana: The Flower We Saw That Day (Ano Hi Mita Hana no Namae o Bokutachi wa Mada Shiranai), directed by Tatsuyuki Nagai (Tokyo: A-1 Pictures, 2011), Anime TV series; [C] The Money of Soul and Possibility Control, directed by Kenji Nakamura (Tokyo: Fuji TV, 2011), Anime TV series; Puella Magi Madoka Magica (Mahō Shōjo Madoka Magika), directed by Akiyuki Shinbo (Tokyo: Shaft, 2011), Anime TV series; One Piece: Defeat Him! The Pirate Ganzack! (Wan Pīsu: Taose! Kaizoku Gyanzakku), directed by Gorō Taniguchi (Tokyo: Fuji TV, 1998), Anime TV film; Eiichiro Oda, One Piece (Wan Pīsu) (Tokyo: Jump Comics, 1997), Manga; One Piece (Wan Pīsu), directed by Konosuke Uda et al. (Tokyo: Jump Comics, 2003), Anime TV series; One Piece: Romance Dawn Story! (One Piece: Romansu Dōn Stori), directed by Katsumi Tokoro (Tokyo: Toei Animation, 2003), Anime film; Tite Kubo, Bleach (Burīchi) (Tokyo: Jump Comics, 2001), Manga; Bleach (Burīchi), directed by Noriyuki Abe (Tokyo: TV Tokyo, 2004–12), Anime TV series; Bleach Nintendo Home Console (Sega, 2005), Console game; Masashi Kishimoto, Naruto (Tokyo: Shōnen Jump, 1999–2014), Manga; Naruto, directed by Hayato Date (Tokyo: TV Tokyo, 2002–7), Anime TV series; Naruto Shippuden, directed by Hayato Date (Tokyo: TV Tokyo, 2007–), Anime TV series.

[xxi] Assassin’s Creed (Carentoir, France: Ubisoft Entertainment SA, 2007–), Computer game.

[xxii] Pokémon, directed by Kunihiko Yuyama et al. (Tokyo: The Pokémon Company International, 1997–), Anime TV series.

[xxiii] Kazuki Takahashi, Yu-Gi-Oh (Yū Gi-Ōh!) (Tokyo: Weekly Shōnen Jump, 1996–2004), Manga; Yu-Gi-Oh (Yū Gi-Ōh!), directed by Hiroyuki Kakudō (Tokyo: Toei Animation, 1998), Anime TV series; Yu-Gi-Oh (Yū Gi-Ōh!) Duel Monsters, directed by Kunihisa Sugishima (Tokyo: TV Tokyo, 2000–4), Anime TV series.

[xxiv] ‘Pokémon,’ accessed 1 January 2018, http://vgsales.wikia.com/wiki/Pok%C3%A9mon

[xxv] Hajime Isayama, Attack on Titan (Shingeki no Kyojin) (Tokyo: Bessatsu Shōnen Magazine, 2009–), Manga; Kingdom Hearts (Kingudamu Hātsu), Tetsuya Nomura and Shinji Hashimoto (Tokyo: Square, 2002), Console game.

[xxvi] Final Fantasy (Fainaru Fantajī), created by Hironobu Sakaguchi (Tokyo: Nintendo Entertainment System, 1987), Console game; Tomb Raider (London: Eidos Interactive, 1996–), Console game; Tomb Raider (Los Angeles: Top Cow, 1997), Comic book; Tomb Raider, directed by Simon West (Hollywood: Paramount Pictures, 2001), Film.

[xxvii] Danganronpa 2: Goodbye Despair (Sūpā Danganronpa Tsū: Sayonara Zetsubō Gakuen) (Tokyo: Spike Chunsoft, 2012), Computer game.

[xxviii] Yuniya Kawamura, Fashioning Japanese Subcultures (London: Berg, 2012), 79.

[xxix] Hatsune Miku V4X Bundle (Chūō-ku, Sapporo, Japan: Crypton Future Media, 2007–), Synthesizer application.



 

Who is that Neural Network?

Henrique M. Soares

Independent researcher. São Paulo, SP, Brazil.

Email: hemagso (at) gmail (dot) com


Pokémon has been an enormous success around the globe for more than 20 years. In this paper, I tackle the “Who’s that Pokémon?” challenge from a machine learning perspective. I propose a machine learning pre-processing and classification pipeline, using convolutional neural networks for classification of Pokémon sprites.

COMPUTING AND IMAGE RECOGNITION

Since they were invented[1], computers have become increasingly present in our everyday life. Initially restricted to mathematical problem-solving and military applications in ballistics and cryptography, their applications become more diverse by the minute. As of today, machines beat humans at lots of tasks, one of the most recent examples being AlphaGo’s victory over the Go world champion (Go Game Guru, 2017).

This achievement is a testament to the remarkable advances machines have sustained towards intelligent applications. Go, with its almost infinite combinations[2], is not an easy problem to solve by “brute force”[3], the strategy usually employed by computers against humans in other perfect information games.

But do not despair, for not all is lost in our fight against our future robot overlords, as computers still struggle with a task that humans were quite literally born to do: image and pattern recognition. However good a computer may be today, humans are still way better at noticing that, even though Figure 1 shows a car, something quite unusual happened to it.

Figure 1. Crashed car against a tree. This text was definitely not written by a robot overlord (yet). (Image extracted from Wikimedia Commons; Thue, 2005).

But computers are catching on! Advances in machine learning techniques, especially in supervised learning methods, and the ever-growing data available for feeding these algorithms have been enabling giant leaps in this field. In 2015, a 150-layer residual neural network ensemble, trained by the MSRA team, achieved a 62% average precision in the 2015 image classification challenge, on a dataset with more than 1,000 different object categories (Large Scale Visual Recognition Challenge, 2015).

Figure 2. Some simple things may be hard to a computer. (“Tasks”; XKCD, available from https://xkcd.com/1425).

So, we wonder… How would our machines fare against a challenge tackled by children around the world for the last 22 years?

Figure 3. Who’s that Pokémon? (Screenshot from the Pokémon animated series.)

POKÉMON

Pokémon is an extremely successful franchise of games and animated series targeted at young audiences (although some people, the author included, disagree with this classification). The franchise was created by Satoshi Tajiri in 1995, with the publication of two games for Nintendo’s handheld console Game Boy. In the game, the player assumes the role of a Pokémon trainer, capturing and battling the titular creatures. It was an enormous success, quickly becoming a worldwide phenomenon (Wikipedia, 2017b).

The franchise started with a total of 151 monsters (Fig. 4), but the games have now reached their seventh iteration, bringing the total to 802 monsters.

Figure 4. Left to right: Bulbasaur, Charmander and Squirtle. (Official art by Ken Sugimori; image taken from Bulbapedia, 2017).

Each Pokémon belongs to one or two types indicating its “elemental affinity”, as well as its strengths and weaknesses against other types. This feature is essential to the gameplay, establishing a deep and complex rock-paper-scissors mechanic that lies at the foundation of the combat system. There are 18 types (there were only 15 in the first games), as seen in Figure 5 (Bulbapedia, 2017).

Figure 5. The 18 Pokémon types, depicted with their usual background colors.

In this paper, I examine the performance of convolutional neural networks (also known as ConvNets) in a Pokémon Type classification task given a Pokémon game sprite. I will present the data collected, the pre-processing and training pipelines, ending with the performance metrics of the selected model. All the data, implementation code and results, as well as a Jupyter Notebook with the explanation of all the steps, are available in a GitHub repository (https://github.com/hemagso/Neuralmon).

DATA PREPARATION

Dataset Features

To train the models, I am going to use game sprites. The dataset (the sprite packs) was obtained from Veekun (2017). These packs contain sprites ripped from the games’ so-called generations 1 to 5. Although there have been new games (and new monsters) released since then, they use tridimensional animated models, making it harder to extract the resources from the games and to make them available in a format that can be fed to a machine learning method. As such, in this paper we will only use Pokémon up until the fifth generation of the games (649 in total).

Figure 6 depicts the sprites of the three first-generation starters throughout all the games considered in this study.

We can immediately see that detail level varies between games, due to the different hardware and capabilities of the gaming consoles. The first generation, released for Nintendo’s Game Boy, has almost no hue variation in a single sprite, although there is some hue information in the dataset (for instance, Bulbasaur is green, Charmander is red and Squirtle is blue; Fig. 6). As we go on, through Game Boy Advance to Nintendo DS, we see that the level of detail skyrockets, not only in terms of hue, but also in shapes.

At a first glance, we can also identify some typical problems encountered in image classification tasks. The images have different sizes. Even though the aspect ratio of all images stays at one-to-one, we have images ranging from 40 pixels wide in the first generation to 96 pixels wide in the fifth one (pay attention to the scales on the border of each sprite in Figure 6).

Figure 6. Example of the variation of the sprites for three Pokémon, as seen throughout games and generations.

Also, not all sprites fill the same space in each image. Sprites from the oldest generations seem to fill, in relative terms, a bigger portion of their images. This also happens within the same generation, especially in newer games, relating, in general, to the differences in size of each Pokémon and its evolutions (Fig. 7).

Figure 7. Bulbasaur’s evolutionary line, as seen in the game’s 5th generation. As the Pokémon evolves and gets larger, its sprite fills up a larger portion of the image.

Image Centering

To solve this problem, let’s apply some computer vision techniques to identify the main object in the image, delimit its bounding box and center our image on that box. The pipeline for that is:

  1. Convert the image to grayscale.
  2. Apply a Sobel filter to the image, highlighting the edges of the sprite. The Sobel filter is a 3×3 convolutional kernel (more about these handy little fellows later, but see also Scikit-Image, 2017) that seeks to approximate the gradient of an image. For a given image $A$, the Sobel operator is defined as:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A, \qquad G = \sqrt{G_x^2 + G_y^2},$$

where $*$ denotes the 2-dimensional convolution operation and $G$ approximates the gradient magnitude at each pixel.

  3. Fill the holes in the image, obtaining the Pokémon’s silhouette.
  4. Calculate the convex hull of the silhouette, that is, the smallest convex polygon that includes all pixels of the silhouette.
  5. Define the square bounding box from the convex hull calculated before.
  6. Select the content inside the bounding box, and resize it to 64 × 64 pixels.

Figure 8. Examples of all steps of the sprite centering pipeline.

After following the pipeline outlined above, we obtain new sprites that maximize the filling ratio of the sprite on the image. Those steps were taken using skimage, an image processing library for the Python programming language. Figure 8 shows the results of our pipeline for the sprites of the three 1st generation starters and Venusaur.

Our proposed pipeline is extremely effective at the task at hand. That is to be expected, as our images are very simple sprites, with a very clear white background.
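For concreteness, a minimal sketch of the pipeline in skimage (plus scipy for the hole filling) is given below. The function name, the edge-binarization threshold and the way the square box is clamped to the image are illustrative assumptions; the exact implementation lives in the GitHub repository:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.morphology import convex_hull_image
from skimage.transform import resize

def center_sprite(image, out_size=64, edge_thresh=0.05):
    gray = rgb2gray(image)                          # 1. grayscale
    edges = sobel(gray) > edge_thresh               # 2. Sobel edges, binarized
    silhouette = binary_fill_holes(edges)           # 3. fill holes
    hull = convex_hull_image(silhouette)            # 4. convex hull
    rows, cols = np.nonzero(hull)                   # 5. square bounding box
    r0, c0 = rows.min(), cols.min()
    side = max(rows.max() - r0, cols.max() - c0) + 1
    r0 = max(0, min(r0, image.shape[0] - side))     # keep the box in-bounds
    c0 = max(0, min(c0, image.shape[1] - side))     # (the sprites are square)
    crop = image[r0:r0 + side, c0:c0 + side]        # 6. crop and resize
    return resize(crop, (out_size, out_size))
```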

Finally, let’s apply our method to all our monsters and images. Figure 9 shows the results for a bunch of Pokémon.

Figure 9. Centering results over various 5th gen Pokémon.

Target Variable

Now that we have all our Pokémon images to build our image dataset, we have to label them in accordance with the variable that we want to predict. In this paper, we will try to classify each Pokémon according to its correct type using only its image. For example, in Figure 10 we try to use the image inside the bounding box to classify the Pokémon into one of the 18 types, trying to match its true type (shown below each Pokémon).

Figure 10. Example Pokémon and their respective types. Top row: Ivysaur (left) and Pidgey (right). Bottom row: Steelix (left) and Lord Helix (right), praise be unto him.

But there is a catch. A significant portion of the Pokémon, like all those from Figures 9 and 10, have a dual type. That is, its true type will be a combination of two different types from that list of 18 types. In Figure 10, for instance, Ivysaur is both a Grass type and Poison type, and has the strengths and weakness of both types.

To take this into account, we would have to make our target classifications over combinations of types. Even if we disregard type order (that is, consider that a [Fire Rock] type is the same class as a [Rock Fire] one), we would end up with 171 possible classes: the 18 single types plus the 153 unordered pairs of distinct types. (Actually, this number is a little bit smaller, 154, as not all combinations exist in the games.)

To make things worse, some combinations are rare (Fig. 11), with only one or two Pokémon, thus limiting the available samples to learn from.

Figure 11. Some unique type combinations. Top row: Magcargo (left) and Surskit (right). Bottom row: Spiritomb (left) and Heatran (right).

Due to the reasons outlined above, I opted to disregard type combinations in this paper. As such, we are only taking into account the primary type of a Pokémon. For instance, in Figure 10 we would have: Ivysaur: Grass; Pidgey: Normal; Steelix: Steel; Lord Helix: Rock.

MODEL TRAINING

Chosen Model

I used a convolutional Neural Network as a predictor on our dataset. Neural networks are one among many kinds of predictive models usually used in machine learning, consisting of an interconnected network of simple units, known as Neurons. Based on a loose analogy with the inner workings of biological systems, Neural Networks are capable of learning complex functions and patterns through the combination of those simple units (Wikipedia, 2017a).

In its simplest form, a Neuron is nothing more than a linear function of its inputs, followed by a non-linear activation function (Fig. 12). However, through the combination of several layers, neural networks are capable of modelling increasingly complex relationships between the independent and dependent variables at hand (Fig. 13).

Figure 12. The basic unit of a Neural Network.

Figure 13. A slightly more complex architecture for a neural network, with one hidden layer.
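As a toy illustration of the unit in Fig. 12, the snippet below implements a single neuron as a weighted sum of its inputs plus a bias, passed through a non-linear activation (ReLU is used here purely as an example of such a function):

```python
import numpy as np

def neuron(x, w, b):
    # Linear function of the inputs (dot product plus bias) followed by a
    # non-linear activation function, here ReLU: max(0, z).
    return np.maximum(0.0, np.dot(w, x) + b)

print(neuron(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, 0.2]), 0.05))
```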

Neural networks are not exactly new; research on them has existed since the 1940s (Wikipedia, 2017a). However, it was only with recent computational advances, as well as the development of the backpropagation algorithm for their training, that their use became more widespread.

OK, this is enough to get us through the Neural Network bit. But what the hell does “convolutional” mean? Let’s first talk a little about Kernels.

In image processing, a Kernel (also known as Convolution Matrix or Mask) is a small matrix used in tasks such as blurring, sharpening and edge detection, among others. The effect is obtained by calculating the matrix convolution of the image against the appropriate Kernel, producing a new image. We have already seen a Kernel used in this paper, in our pre-processing pipeline, where we applied a Sobel Kernel to detect the edges of a sprite.

Figure 14. Sobel Kernel effect on Venusaur’s sprite.

The convolution operation may be thought of as a sliding of the Kernel over our image. The values in the Kernel multiply the values below them in the image, element-wise, and the results are summed to produce a single value of the convolution over that window. (A much better explanation of the convolution operation can be found at http://setosa.io/ev/image-kernels/.) In Figure 15, we apply a vertical Sobel filter to detect sharp variations in color intensity (ranging in our grayscale images from 120 to 255).

Figure 15. Convolution example. The red area highlighted in the image is being convoluted with a Vertical Edge detector, resulting in the red outlined value on the resulting matrix.
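The sliding-window description above translates almost directly into code. The naive “valid” convolution below is a sketch for clarity rather than an efficient implementation:

```python
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # "valid" output size: the result shrinks
    ow = image.shape[1] - kw + 1   # by the kernel size minus one in each axis
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i:i + kh, j:j + kw]
            out[i, j] = (window * kernel).sum()  # element-wise product, summed
    return out

# A Sobel kernel that responds to vertical edges (cf. Figure 15).
sobel_vertical = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]])
```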

But what the heck! What do those Kernels have to do with neural networks? More than we imagine! A convolutional layer of a neural network is nothing more than a clever way to arrange the Neurons and their interconnections to achieve an architecture capable of identifying these filters through supervised learning. (Again, a way better explanation of the whole convolutional-network business may be found at http://cs231n.github.io/convolutional-networks/.) In our pre-processing pipeline, we used a specific Kernel because we already knew which one would excel at the task at hand, but in a convolutional network, we let the training algorithm find those filters and combine them in subsequent layers to achieve increasingly complex features.

Our Neural Network’s Architecture

I used a small-depth convolutional network for our Pokémon classification task (Fig. 16).

Figure 16. Architecture of the Neural Network used here.

Each layer in the figure represents a layer in our convolutional network. After each layer, we obtain a state tensor that represents the output of that layer (the dimensions of the tensor are listed on the right side of each layer).

A convolution layer applies the convolution operation described above. In the first layer, we apply 32 Kernels of size 5 to the input image, producing 32 outputs of size 60 x 60 (with each convolution, the image size diminishes due to border effects).

We also use max pooling layers, which simply reduce a tensor region to a single value by taking its maximum (Fig. 17). As such, after the application of a 2 x 2 max pooling layer, we get a tensor that is a quarter of the size of the original.

Figure 17. Example of the max pooling operation.

At the end, we flatten our tensor to one dimension and connect it to densely connected layers for prediction. Our final layer has size 18, matching the number of primary types in our output domain.
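
In Keras (which, as mentioned in the endnotes, is what I used for training), a network of this general shape looks like the sketch below. Only the first convolution layer (32 Kernels of size 5), the 2 x 2 max pooling and the 18-unit output come from the description above; the 64 x 64 input size and the widths of the remaining layers are assumptions for illustration — see the companion notebook for the real architecture.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # 32 Kernels of size 5; a 64 x 64 input shrinks to 60 x 60 (border effects)
    Conv2D(32, (5, 5), activation='relu', input_shape=(64, 64, 1)),
    MaxPooling2D((2, 2)),                   # 60 x 60 -> 30 x 30, a quarter the size
    Conv2D(64, (5, 5), activation='relu'),  # assumed second convolution layer
    MaxPooling2D((2, 2)),
    Flatten(),                              # collapse the tensor to one dimension
    Dense(128, activation='relu'),          # assumed dense layer width
    Dense(18, activation='softmax'),        # one output per primary type
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```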

Train and Validation

To train our model, we are going to split our dataset into two parts: (1) the ‘training dataset’ will be used by our training algorithm to learn the model parameters from the data; (2) the ‘validation dataset’ will be used to evaluate the model performance on unseen data. In this way, we will be able to identify overfitting issues (trust me, we are about to see a lot of overfitting[4]).

But we can’t simply select a random sample of our sprites. Sprites from the same Pokémon in different games are very similar to each other, especially between games of the same generation (Fig. 18).

Figure 18. Sprites of Bird Jesus from Pokémon Platinum (left) and Diamond (right). Wait… was it the other way around?


Box 1. Performance Metrics

In this article, we used three performance metrics to assess our model performance:

(1) Accuracy: the percentage of predictions that got the right type classification of the Pokémon;

(2) Precision: the percentage of images classified as a class (type) that truly belonged to that class;

(3) Recall: the percentage of images of a class (type) that were classified as that class.

While accuracy enables us to gauge the overall quality of our model, precision and recall are used to gauge our model’s predictions for each class.
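
All three metrics are standard; scikit-learn, for instance, computes the overall accuracy and the per-class precision and recall in a couple of calls. A sketch, assuming y_true and y_pred are hypothetical arrays holding the true and predicted primary types:

```python
from sklearn.metrics import accuracy_score, classification_report

print(accuracy_score(y_true, y_pred))         # overall accuracy
print(classification_report(y_true, y_pred))  # per-class precision and recall
```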


If we randomly select sprites, we run the risk of tainting our validation set with sprites identical to ones in the training set, which would lead to a great overestimation of the model’s performance on unknown data. As such, I opted for Pokémon-wise sampling; that is, I assigned whole Pokémon to each set, instead of assigning individual sprites. That way, if Charizard is assigned to the validation set, all of its sprites follow it, eliminating the risk of taint.

I used 20% of the Pokémon for the validation sample and 80% for the training set, which leaves us with 2,727 sprites for training.
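
A minimal sketch of such a Pokémon-wise split, using scikit-learn’s GroupShuffleSplit (here sprites and pokemon_ids are hypothetical arrays with one entry per sprite):

```python
from sklearn.model_selection import GroupShuffleSplit

# Every sprite of a given Pokémon carries the same group id, so the whole
# Pokémon lands in either the training set or the validation set, never both.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(splitter.split(sprites, groups=pokemon_ids))
```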

First Model: Bare Bones Training

For the first try, I fed the training algorithm the original sprites, while keeping the training/validation split. The algorithm trained over 20 epochs[5], which took about a minute in total[6]. The results obtained in this first training session are presented in Figure 19 (see also Box 1 for an explanation of the performance metrics).

Figure 19. Performance of the training set in the first try.

Impressive! We got all the classifications right! But are those metrics a good estimate of the model’s performance on unseen data? Or are they showing us that our model learned the training sample by heart, and will perform poorly on new data? Spoiler alert: it will. Let’s take a good look at it: Figure 20 exhibits those same metrics for our validation set.

It seems that our model is indeed overfitting the training set, even if it’s performing better than a random guess.

Figure 20. Performance of the validation set in the first try.

But wait a minute… why haven’t we got any Flying-type Pokémon? It turns out that there is only one monster with Flying as its primary type (Tornadus; Fig. 21), and he was included in the training set.

Figure 21. Tornadus is forever alone in the Flying type.

Second Model: Image Augmentation

The poor performance our first model obtained on the validation set is not a surprise. Image classification, as said in the introduction, is a hard problem for computers to tackle. Our dataset is too small and does not have enough variation to enable our algorithm to learn features capable of generalizing to a wider application.

To solve at least part of the problem, let’s apply some image augmentation techniques. This involves applying random transformations to the training images, thus enhancing their variation. A human being would be able to identify a Pikachu no matter its orientation (upside down, tilted to the side etc.), and we would like our model to achieve the same. As such, I applied the following range of transformations to our training dataset (Fig. 22; see also the sketch below): (1) random rotation up to 40 degrees; (2) random horizontal shifts up to 20% of the image width; (3) random vertical shifts up to 20% of the image height; (4) random zooming up to 20%; (5) reflection over the vertical axis; and (6) shear transformation over a 0.2 radian range.

Figure 22. Images obtained through the image augmentation pipeline for one of Bulbasaur’s sprites.
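
In Keras, this pipeline maps almost one-to-one onto the ImageDataGenerator class; a sketch:

```python
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=40,       # (1) random rotation up to 40 degrees
    width_shift_range=0.2,   # (2) horizontal shifts up to 20% of image width
    height_shift_range=0.2,  # (3) vertical shifts up to 20% of image height
    zoom_range=0.2,          # (4) random zooming up to 20%
    horizontal_flip=True,    # (5) reflection over the vertical axis
    shear_range=0.2,         # (6) shear over a 0.2 radian range
)
```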

I applied this pipeline to all sprites in our training set, generating 10 new images for each sprite. This way, our training set was expanded to 27,270 images. But will it be enough? After training over 30 epochs (this time it took slightly longer, a little over 10 minutes in total), I obtained the following results (Fig. 23).

Figure 23. Performance of the training set for the second model.

Wait a minute, has our model’s performance decreased? Shouldn’t this image augmentation thing make my model better? Probably, but let’s not start making assumptions based on training set performance. The drop in overall performance is due to the increase in variation in our training set, and this could be good news if it translates into a better performance on the validation set (Fig. 24).

Figure 24. Performance of the validation set for the second model.

And here we have it! Image augmentation actually helped the model’s performance. The accuracy rose by 14 percentage points, to a total of 39%. We could keep trying to get a better model, fiddling with hyper-parameters or trying other network architectures, but we are going to stop here.

Taking a Closer Look at the Classifications

There are some things I would like to draw your attention to. The types with the greatest Precision (see Box 1) are: Fire (61%), Water and Poison (54% each), Grass (47%) and Electric (46%). The types with the greatest Recall are: Dark (92%), Fire (74%), Water (55%), Normal (49%) and Grass (42%).

It’s no surprise that the three main types (Fire, Water and Grass) are among the top five in both metrics. These types have very strong affinities with colors, information easily obtained from the images. They are also abundant types, with lots of training examples for the model to learn from.

Now let’s look at some correctly and incorrectly classified Pokémon (Figs. 25 and 26, respectively).

Figure 25. Some correctly classified Pokémon. Top row: Squirtle (left), Pikachu (center), Weepinbell (right). Bottom row: Moltres (left), Tyranitar (center), Shedinja (right).

Figure 26. Some incorrectly classified Pokémon. Top row: Machoke (left), Our Good Lord Helix (center), Lugia (right). Bottom row: Gardevoir (left), Seviper (center), Vaporeon (right).

Even in this small sample, we can see that color plays an important part in the overall classification. For example, among the incorrectly classified Pokémon, Machoke was given a good chance of being a Poison type, possibly due to its purple color. Likewise, Seviper was classified as a Dark type, probably due to its dark coloration.

And why is that? Well, we may never know! One of the downsides of using deep neural networks for classification is that the model is something of a “black box”. There is a lot of ongoing research trying to make sense of what exactly the network is searching for in the image. (I recommend that you search the Internet for “Deep Dream” for some very trippy images.)

For now, we can look at the first-layer activations for some of the Pokémon and try to figure out what it is that each Kernel is looking for. But as we go deeper into the network, this challenge gets harder and harder (Fig. 27).

Figure 27. First layer activations (partial) for the three 1st Gen starters.
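
Extracting those activations is straightforward in Keras: build a second model that shares the trained network’s input but stops at the first convolution layer. A sketch, assuming model is the trained network from before and sprite is a single pre-processed image of shape (1, 64, 64, 1):

```python
from keras.models import Model

# Reuse the trained network, but output the first convolution layer instead
activation_model = Model(inputs=model.input, outputs=model.layers[0].output)
activations = activation_model.predict(sprite)  # shape (1, 60, 60, 32)
# activations[0, :, :, k] is the response map of the k-th learned Kernel
```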

CONCLUSION

An accuracy of 39% may not seem that impressive. But an 18-class classification problem with as little data as this is a hard one, and our model achieves a 20-percentage-point gain over a Zero Rule baseline, which is to guess the most frequent class for every Pokémon. Table 1 lists the frequencies of each class in the validation set, which gives us a 19.5% accuracy for Zero Rule.

Table 1. Type frequency for the validation dataset.

But of course, we shouldn’t be measuring our machines against such clumsy methods if we expect them to one day become the dominant rulers of our planet, and computers still have a long way to go if they expect to beat my little brother in the “Pokémon Classification Challenge” someday. On the bright side, they probably already beat my old man. But this is a topic for another article…

REFERENCES

Bulbapedia. (2017) Type. Available from: http://bulbapedia.bulbagarden.net/wiki/Type (Date of access: 20/Jan/2017).

Go Game Guru. (2017) DeepMind AlphaGo vs Lee Sedol. Available from: https://gogameguru.com/tag/deepmind-alphago-lee-sedol/ (Date of access: 07/Mar/2017).

Large Scale Visual Recognition Challenge. (2015) Large Scale Visual Recognition Challenge 2015 (ILSVRC2015). Available from: http://image-net.org/challenges/LSVRC/2015/results (Date of access: 20/Jan/2017).

Scikit-Image. (2017) Module: filters. Available from: http://scikit-image.org/docs/dev/api/skimage.filters.html#skimage.filters.sobel (Date of access: 07/Mar/2017).

Tromp, J. & Farnebäck, G. (2016) Combinatorics of Go. Available from: https://tromp.github.io/go/gostate.pdf (Date of access: 20/Jan/2017).

Veekun. (2017) Sprite Packs. Available from: https://veekun.com/dex/downloads (Date of access: 20/Jan/2017).

Wikipedia. (2017a) Artificial Neural Network. Available from: https://en.wikipedia.org/wiki/Artificial_neural_network (Date of access: 07/Mar/2017).

Wikipedia. (2017b) Pokémon. Available from: https://en.wikipedia.org/wiki/Pok%C3%A9mon (Date of access: 20/Jan/2017).


ABOUT THE AUTHOR

Henrique wants to be the very best, like no one ever was. When he isn’t playing games, devouring sci-fi literature or writing awesome articles for an obscure geek journal on the Internet, he works a full-time job applying machine learning to the banking industry. Sadly, he got misclassified by his own creation. – Grass? Come on!?


Gotta Train ’em All 

I wanna be the very best / Like no one ever was

To model them is my real test / To train them is my cause

 

I will travel across the data / Searching far and wide

Each model to understand / The power that’s inside

 

Neural Net, gotta train ’em all / It’s you and me / I know it’s my destiny

Neural Net, oh, you’re my best friend / The world we must understand

Neural Net, gotta train ’em all / A target so true / Our data will pull us through

 

You teach me and I’ll train you

Neural Net, gotta train ’em all / Gotta train ’em all

Yeah


[1] The exact date for the invention of the computer is quite difficult to pin down. Helpful devices for calculations have existed for centuries, but truly programmable computers are a recent invention. If we take as a cutoff criterion that the first computer must be Turing complete (that is, able to compute every Turing-computable function), our first examples would be placed around the first half of the twentieth century. The first design of a Turing-complete machine is attributed to Charles Babbage in the nineteenth century. His Analytical Engine, had it ever been built, would have been a mechanical monstrosity of steel and steam that, although not very practical, would certainly be awesome.

[2] It is estimated that the game space of Go comprises around 2.08·10^170 legal positions or 208,168,199,381,979,984,699,478,633,344,862,770,286,522,453,884,530,548,425,(…)639,456,820,927,419,612,738,015,378,525,648,451,698,519,643,907,259,916,015,628,(…)128,546,089,888,314,427,129,715,319,317,557,736,620,397,247,064,840,935, if you want to be precise (Tromp & Farnebäck, 2016).

[3] Brute force search is a problem-solving strategy that consists in enumerating all possible solutions and checking which solves the problem. For example, one may try to solve the problem of choosing the next move in a tic-tac-toe game by calculating all possible outcomes, then choosing the move that maximizes the chance of winning.

[4] Ideally, we would split our dataset into 3 separate datasets: (1) the ‘training dataset’ would be used to learn the model coefficients; (2) the ‘validation dataset’ would be used to calibrate model hyperparameters, such as the learning rate of the training algorithm or even the architecture of the model, selecting the champion model; (3) the ‘test dataset’ would be used to evaluate the performance of the champion model. That way, we avoid introducing bias into our performance estimates through our model selection process. As we already have a way too small dataset (and we aren’t tweaking the model that much), we can disregard the test dataset.

[5] In machine learning context, an epoch corresponds to an iteration in which all the training data is exposed to the learning algorithm (not necessarily at once). In this case, the neural network learned from 20 successive iterations in which it saw all the data.

[6] I trained all models on Keras using the TensorFlow backend. The training was done on a GPU, an NVIDIA GTX 1080, in a PC running Ubuntu. For more details, see the companion Jupyter Notebook on GitHub (https://github.com/hemagso/Neuralmon).



Is the Great Attractor a Tengen Toppa Gurren Lagann?     

João V. Tomotani

Universidade de São Paulo; São Paulo, Brazil.

Email: t.jvitor (at) gmail (dot) com


Space is a big place. Quite big, actually. Huge may be a more appropriate adjective. So huge that it received the title of “final frontier” from a famous television series, since you are not really supposed to traverse it. (The question remains, though: for it to be a frontier, isn’t it supposed to have something on both sides?) The vastness of space is both mysterious and fascinating. Man, in his ceaseless curiosity and desire for knowledge (and also the need to understand the universe around him to avoid the discomfort of death by starvation or cold), developed the science known today as Astronomy as an attempt to unveil the mysteries of the universe. Since space is such a humongous place, one can expect it to be full of mysteries. Enter the Great Attractor.

THE GREAT ATTRACTOR

The Great Attractor is a gravitational anomaly, a massive (and controversial) one. It is indirectly “observable” by its effect on the motion of galaxies, and its presence, mass and position were estimated based on the peculiar velocity of the Local Group, the galaxy group that includes the Milky Way (Kocevski & Ebeling, 2006). A nice video explaining it was made by SciShow Space (www.youtube.com/watch?v=N9qeOhJ9dbg). It is also quite a funny name, with potential for several jokes (Fig. 1 is not one of them, but it could be).


Figure 1. “Dark Flow” (XKCD, 2008; available from https://xkcd.com/502/). Note that the (rather controversial) Dark Flow phenomenon is not the same as the Great Attractor. I included this strip, though, because it is funny.

Observation of the Great Attractor is difficult at optical wavelengths due to the presence of the Milky Way. The plane of the Milky Way outshines (with its stars) and obscures (with its dust) many of the objects behind it (NASA, 2013). This unobservable region is called the Zone of Avoidance (ZOA; a very neat name, by the way), or the “Zone of few Nebulae”, as initially proposed by Proctor in 1878 (Kraan-Korteweg, 2000). Not to be confused with the Phantom Zone, the prison dimension to which the people of Krypton sent their prisoners.

The ZOA was “avoided” by astronomers because of the difficulties in analyzing the obscured galaxies known to be there (Kraan-Korteweg, 2000). Figure 2 shows a picture from the NASA/ESA Hubble Space Telescope, taken in 2013, focusing on the Great Attractor. The region behind the center of the Milky Way, where the dust is thickest, is very difficult to observe at optical wavelengths.

Figure 2. “Hubble Focuses on ‘the Great Attractor’”. This field covers part of the Norma Cluster as well as a dense area of the Milky Way. The Norma Cluster is the closest massive galaxy cluster to the Milky Way. The huge mass concentrated in this area, and the consequent gravitational attraction, make this region of space known to astronomers as the Great Attractor. Picture retrieved from http://www.nasa.gov/mission_pages/hubble/science/great-attractor.html (NASA, 2013).

In this study, I shall propose a slightly unusual hypothesis as to what the Great Attractor could be. Could this gravity anomaly be an outcome of the presence of a very big robot?


Box 1. The Local Group, Clusters, Superclusters and other ginormous things

The Great Attractor’s location is estimated to be at a distance of somewhere between 150 and 250 million light-years from the Milky Way (something between 1.4 x 10^24 and 2.4 x 10^24 meters, which is quite far indeed). But both the Great Attractor and our own Milky Way belong to the same structure, known as the Laniakea Supercluster (“Laniakea” means “immense heaven” in Hawaiian). The Milky Way resides in the outskirts of this supercluster, whose diameter is 500 million light-years, while the Great Attractor resides closer to its center. A supercluster is a (very) large group of smaller galaxy clusters or galaxy groups (like our own Local Group) and is among the largest known structures in the cosmos. The Laniakea Supercluster was discovered in 2014; it encompasses 100,000 galaxies, contains the mass of one hundred million billion Suns, and consists of four subparts previously known as separate superclusters (The Daily Galaxy, 2015).


THE TENGEN TOPPA GURREN LAGANN

Japan has a peculiar relationship with robots, which have an important and established position in the country’s pop culture. The word mecha (an abbreviation of “mechanical”) now refers to a whole genre of movies, manga, anime and live-action series (the tokusatsu) involving mechanical objects (vehicles, robots etc.), autonomous or manned, which quickly became popular in Japan and abroad.

While the first robot appearance in sci-fi culture is usually attributed to the tripods of H.G. Wells in 1897, the first appearance of a giant humanoid robot is attributed to Tetsujin 28-Go, a manga from 1956 by Mitsuteru Yokoyama. However, perhaps the greatest symbol of the mecha genre, and Japanese culture in general, is from a 1952 manga by Osamu Tezuka: the boy-robot Tetsuwan Atom (Astro Boy in the West). This manga was released in post-war Japan, a moment of drastic changes in culture, industry and society, where science and technology promised economic growth and transformation of social structures (Hikawa, 2013). Astro Boy was adapted into anime in the 1960s and quickly made its way to the West. Other works of the mecha genre, particularly those with giant robots (e.g., Gundam, Mazinger Z, Macross, Neon Genesis Evangelion) influenced many western works like the Transformers cartoon, the Power Rangers TV series and the movie Pacific Rim.

Giant fighting robots are already a reality, by the way. Groups of American and Japanese engineers, in their desire to hasten Judgment Day, built giant robots of a few tons, capable of firing missiles and engaging in heavy fighting (Fig. 3).


Figure 3. The American giant robot (Megabots Inc.) and the Japanese one, named Kuratas (Suidobashi Heavy Industries). Source: http://www.popularairsoft.com/megabot-challenges-japanese-kuratas-giant-robot-duel.

What does this entire story about Japanese robots have to do with the massive gravity anomaly from the introduction, you ask? Well, a 2007 Japanese animation called Tengen Toppa Gurren Lagann decided to explore how “giant” a giant robot could be.

The Tengen Toppa Gurren Lagann (henceforth TTGL; Fig. 4) is the largest mecha shown in the anime. According to the official series guide, the TTGL is about 10 million light-years tall (Gurren Lagann Wiki, 2016). That is somewhere around 9.46 x 10^22 meters, or about 100 times the diameter of the Milky Way. It is a fairly giant robot.


Figure 4. The Tengen Toppa Gurren Lagann, a quite big mecha. Official artwork from the series, available from Gurren Lagann Wiki.

The existence of a robot 10 million light-years tall is very questionable for some practical reasons, and the usefulness of a robot of this size is also doubtful. How could a robot of this size engage in combat (or do anything, actually)? Since nothing restricted by the physics of our universe can move faster than light, the act of throwing a single punch would take a few million years. It would take a few million years more for the pilot of this robot to find out whether the punch hit the target or not. It would be a long fight. These practical questions will henceforth be disregarded. The question posed is only one: could the Great Attractor be a consequence of the existence of the TTGL?


Box 2. The Super TTGL

In the follow-up movie, a version of the robot entitled Super Tengen Toppa Gurren Lagann was introduced, because why not? The Super TTGL is 52.8 billion light-years tall according to the official guide book, making it about 58% of the diameter of the observable universe. We shall not consider this robot.


SO… IS IT POSSIBLE?

Well, not exactly.

The first thing I need to do is estimate the mass of a robot this size. This is not that simple, since humanity has never built anything so gargantuan. A rather crude way to do it is by applying the square-cube law (see Box 3), scaling from smaller robots of known mass. Since we have the height and mass of the Japanese robot Kuratas (4 meters, 4.5 tons; Wikipedia, 2016a), we can use it for our estimate.
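
In code, the estimate is a one-liner: mass scales with the cube of the linear scale factor. This little sketch reproduces the TTGL figure used below (~6 x 10^67 tons):

```python
# Scale the Kuratas (4 m, 4.5 t) up to the TTGL by the square-cube law:
# mass grows with the cube of the linear scale factor.
kuratas_height_m, kuratas_mass_t = 4.0, 4.5
ttgl_height_m = 9.46e22                # about 10 million light-years
scale = ttgl_height_m / kuratas_height_m
ttgl_mass_t = kuratas_mass_t * scale ** 3
print(f"{ttgl_mass_t:.1e} t")          # ~6.0e67 tons
```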


Box 3. The square-cube law

The square-cube law was proposed by Galileo Galilei (1564–1642), who was apparently the first to notice that the volume of an object or being increases in cubic proportion to an increase in its linear dimensions, while strength increases only in square proportion (with the cross-section of the muscles). A review of this concept was conducted by Froese (2006).

The square-cube law has a number of practical applications, including studies in Biology and civil engineering, besides being a very interesting concept to assess in pop culture. It is not uncommon for superheroes’ strength and size to be treated almost synonymously. Heroes and villains (e.g., the Hulk, Giganta, and Apache Chief) constantly grow in size for fighting or performing feats of strength. In reality, reaching such absurd sizes would not be feasible, since the square-cube law implies that the heroes’ weight grows much faster than their strength (meaning they would be unable to even stand up). This law is, unfortunately, a significant impediment to building colossal robots.

Interestingly, the spell enlarge person from the tabletop RPG Dungeons & Dragons agrees with half of the law (Cook et al., 2003). To double in size, the target of the spell has its weight multiplied by eight, in accordance with the “cube” part. However, the target receives a fixed Strength modifier of +2, instead of an increase proportional to its base Strength value, which would make more sense.


Applying the square-cube law to estimate the mass of the TTGL, we get the results shown in Table 1. In addition to the mass of the TTGL, I estimated the masses of other fictitious robots. This comparison was made to assess whether the estimate is appropriate, given that several of these giant robots have established weights in official guides and other “literature”.

The robots chosen for comparison were: the ATM-09-ST VOTOM (Vertical One-man Tank for Offense and Maneuvers) from the anime Armored Trooper Votoms (1983); the Gundam RX-78-2 from the anime Mobile Suit Gundam (1979); the T800 from the movie Terminator (1984; the height was defined as that of the actor Arnold Schwarzenegger; the weight of a T800 is unknown, but it is thought not to exceed 1 ton, since the robot takes actions such as riding a motorcycle); the autobot Optimus Prime from the movie Transformers (2007); the jaeger Gipsy Danger from the movie Pacific Rim (2013); and the real robot from Megabots Inc. mentioned above (the weight of the Megabot is known; the estimate is only for comparison purposes). Moreover, Table 1 also includes the Sun and the Milky Way for comparison. We can see that, for the larger robots (Optimus Prime and bigger), the weight estimated by the square-cube law becomes much greater than that given by the official guides. These lighter weights can be partially “explained” for some robots by the use of unknown materials: Optimus Prime is made of Cybertron materials and Gundams of Luna Titanium (or Gundarium). In the case of a jaeger, I can only assume that the futuristic technology of Pacific Rim was able to develop lightweight robots to that extent (or that the movie producers just did not care).

Table 1. Heights and weights of giant robots and other things. The “Estimate” column is the weight estimated using the square-cube law (with Kuratas’ weight and height as the basis). The “Official” column is the official (or actual) weight.


The mass of the Great Attractor is estimated to be about 1,000 trillion (10^15) times the mass of the Sun (Koberlein, 2014). This is equivalent to circa 2 x 10^42 tons, well below the TTGL’s estimated mass of 6 x 10^67 tons. From this difference alone, it appears that the Great Attractor could not be a TTGL, or the gravitational attraction would be many times stronger than the one perceived (even allowing for the estimated weight being wrong by a few orders of magnitude). Moreover, this is not the only problem. Such a monstrous mass distributed in such a small space would probably collapse and become a black hole.

The Schwarzschild radius (or gravitational radius) is the radius of a sphere such that, if all the mass of an object were compressed within it, the escape velocity at the sphere’s surface would equal the speed of light (i.e., you would not be able to escape its gravitational field). When the remains of a star, for example, collapse below this radius, light can no longer escape its gravitational field and the object is no longer visible, becoming a black hole (Beiser, 2003). The Schwarzschild radius can be calculated as:

r_s = 2·G·M / c^2

where r_s is the Schwarzschild radius, G is the gravitational constant, M is the mass of the object, and c is the speed of light in vacuum.

An object whose real radius is smaller than its Schwarzschild radius is called a black hole. Calculating the Schwarzschild radius for the Milky Way, the Sun, and the TTGL gives us Table 2.

Table 2. Comparison of the Schwarzschild radii of the Sun, the Milky Way and the TTGL with their real radii. The real radius of the TTGL is taken as half its height.

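The numbers behind Table 2 are easy to reproduce; a minimal sketch (masses for the Sun and the TTGL as discussed in the text, outputs rounded):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light in vacuum, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c ** 2

print(schwarzschild_radius(1.989e30))  # Sun (~2e30 kg): ~2.95e3 m, about 3 km
print(schwarzschild_radius(6e70))      # TTGL (~6e67 t): ~8.9e43 m, far larger
                                       # than its real radius of ~4.7e22 m
```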

From Table 2, we can see that, in the case of the TTGL, the Schwarzschild radius is many times larger than its actual size (even allowing for the square-cube law having overestimated the mass of the robot by some orders of magnitude). This means that the robot, if it existed, would become a giant supermassive black hole.

Incidentally, the estimated mass of the TTGL is also many orders of magnitude greater than the estimated mass of the observable universe (considering only ordinary matter), which is about 10^50 tons. Thus, it is unlikely that a robot this big exists.

SO… IS IT IMPOSSIBLE?

Well, not necessarily.

As shown by Table 1, many other robots in fiction do not follow the square-cube law to the letter. Some reasons may be proposed: they are made of fictional materials (supposedly not yet discovered by man), such as Gundarium or some Cybertron material; they were built with advanced and/or alien technology; or they exist for magical/supernatural reasons.

The same can be valid for the TTGL, in a way. The robot is made of “a mass of continuously materialized Spiral Power”, according to the anime lore (Gurren Lagann Wiki, 2016). This Spiral Power (Fig. 5) is presented in the anime as a physical model, the connection between living beings and the universe (besides being a religion of sorts). Such a definition could make us treat the structure of the TTGL as strictly “magical”, discarding any physical interpretation of its existence. Nevertheless, the robot is composed of “mass”, so it has a gravitational field.


Figure 5. The protagonist of the Tengen Toppa Gurren Lagann anime overflowing with Spiral Power. Screenshot from the anime; image taken from Gurren Lagann Wiki.

As such, I propose a second analysis for the TTGL. Taking the estimated mass of the Great Attractor, 2 x 10^42 tons, I assume that to be the mass of the TTGL. Calculating the Schwarzschild radius for that mass, we have Table 3.

Table 3. Comparison of the Schwarzschild radius of the TTGL with its real radius, considering that the TTGL has the same mass as the Great Attractor.

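Re-running the same calculation with this mass (a sketch, constants as before):

```python
G, c = 6.674e-11, 2.998e8
r_s = 2 * G * 2e45 / c ** 2   # Great Attractor mass: ~2 x 10^42 t = 2e45 kg
real_radius = 9.46e22 / 2     # half the TTGL's height, in meters
print(r_s)                    # ~3.0e18 m, roughly 300 light-years
print(r_s < real_radius)      # True: no collapse into a black hole
```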

Thus, a mecha of this size and weight might not collapse into a black hole, while also having a “Schwarzschild radius / real radius” ratio not so different from those of the Sun and the Milky Way.

SO… IT IS POSSIBLE!

Well, not really.

You see, earlier this year, scientists managed to identify a whole bunch of galaxies hidden in the Zone of Avoidance (Staveley-Smith et al., 2016). These researchers used a multibeam receiver on the 64-m Parkes radio telescope and uncovered 883 galaxies, many of which had never been seen before.

Therefore, it is more likely that the gravity anomaly detected is due to this concentration of galaxies rather than to the existence of a giant robot 10 million light-years tall. But you never know…


Figure 6. You never know… Image adapted from the video “Laniakea: our home supercluster”, by Nature Video; available from: https://www.youtube.com/watch?v=rENyyRwxpHo.


Box 4. The Ring (1994)

In his 1994 novel “The Ring”, fourth book of the “Xeelee Sequence”, British hard science fiction writer Stephen Baxter proposed yet another interesting hypothesis for the origin of the Great Attractor. In his novel, the alien race Xeelee was losing a war against beings of dark matter and retreated through an escape hatch. This escape hatch (the Ring of the title) was made of something too small to be seen by the naked eye: a cosmic string, a flaw in space-time. A single inch of this “material” would weigh ten million billion tons on the surface of the Earth. The Ring had the mass of several galactic clusters and measured 300 light-years across, with a diameter of 10 million light-years. In Baxter’s book, it is discovered that this immense construction is the reason behind the Great Attractor (Orbital Vector, 2007).


ACKNOWLEDGEMENTS

I am grateful to Henrique M. Soares for helping to formulate this study’s question and develop the analysis; and to Gabriel K. Kiyohara for comments that helped put some things in perspective (pun intended).


REFERENCES

Beiser, A. (2003) Concepts of Modern Physics. 6th ed. McGraw-Hill, New York.

Blain, L. (2015) June 2016: America and Japan to face off in giant robot combat. Available from: http://www.gizmag.com/kuratas-suidobashi-megabots-giant-robot-battle-2016-gundams/38352/ (Date of access: 13/May/2016).

Cook, M.; Tweet, J.; Williams, S. (2003) Dungeons & Dragons Player’s Handbook: Core Rulebook 1, v. 3.5. Wizards of the Coast, Renton.

Daily Galaxy, The. (2015) The Great Attractor “exists within an Immense Supercluster of 100,000 Galaxies”. Available from: http://www.dailygalaxy.com/my_weblog/2015/03/the-great-attractor-exists-within-an-immense-supercluster-of-100000-galaxies.html (Date of access: 13/May/2016).

Froese, R. (2006) Cube law, condition factor and weight-length relationships: history, meta-analysis and recommendations. Journal of Applied Ichthyology 22(4): 241–253.

Gurren Lagann Wiki. (2016) Tengen Toppa Gurren Lagann. Available from: http://gurrenlagann.wikia.com/wiki/Tengen_Toppa_Gurren_Lagann (Date of access: 13/May/2016).

Gundam Wiki. (2016) RX-78-2. Available from: http://gundam.wikia.com/wiki/RX-78-2_Gundam (Date of access: 13/May/2016).

Hikawa, R. (2013) Japanese Animation Guide: The History of Robot Anime. Japan’s Agency for Cultural Affairs.

Kraan-Korteweg, R.C. (2000) Galaxies behind the Milky Way and the Great Attractor. Lecture Notes in Physics 556: 301–344.

Koberlein, B. (2014) What is the Great Attractor? Universe Today. Available from: http://www.universetoday.com/113150/what-is-the-great-attractor/ (Date of access: 13/May/2016).

Kocevski, D.D. & Ebeling, H. (2006) On the origin of the Local Group’s peculiar velocity. Astrophysical Journal 645: 1043–1053.

MAHQ. (2016) ATM-09-ST Scopedog. MAHQ Mecha and Anime Headquarters. Available from: http://www.mahq.net/mecha/votoms/atvotoms/atm-09-st.htm (Date of access: 13/May/2016).

NASA. (2013) Hubble focuses on “the Great Attractor”. Available from: http://www.nasa.gov/mission_pages/hubble/science/great-attractor.html (Date of access: 13/May/2016).

Orbital Vector. (2007) Xeelee Ring. Available from: http://www.orbitalvector.com/Megastructures/XEELEE%20RING.htm (Date of access: 16/May/2016).

Pacific Rim Wiki. (2016) Gipsy Danger. Available from: http://pacificrim.wikia.com/wiki/Gipsy_Danger_(Jaeger) (Date of access: 13/May/2016).

Staveley-Smith, L.; Kraan-Korteweg, R.C.; Schröder, A.C.; Henning, P.A.; Koribalski, B.S.; Stewart, I.M.; Heald, G. (2016) The Parkes HI Zone of Avoidance survey. The Astronomical Journal 151(3): 1–42.

Transformers Wiki. (2016) Optimus Prime. Available from: http://transformers.wikia.com/wiki/Optimus_Prime_(Movie) (Date of access: 13/May/2016).

Wikipedia. (2016a) Kuratas. Available from: https://en.wikipedia.org/wiki/Kuratas (Date of access: 13/May/2016).

Wikipedia. (2016b) Arnold Schwarzenegger. Available from: https://en.wikipedia.org/wiki/Arnold_Schwarzenegger (Date of access: 13/May/2016).

Wikipedia. (2016c) Sun. Available from: https://en.wikipedia.org/wiki/Sun (Date of access: 13/May/2016).

Wikipedia. (2016d) Milky Way. Available from: https://en.wikipedia.org/wiki/Milky_Way (Date of access: 13/May/2016).



The strongest starter Pokémon

Bruno L. Carli

Independent researcher, Curitiba, PR, Brazil.

Email: brunolcarli (at) gmail (dot) com


Earlier this year, an article entitled “Which is The Most Offensively Powerful Starter Pokémon?” (Codd, 2016) caused great controversy on the Internet among players and fans of the Pokémon franchise. This article compared the three classical starter Pokémon, based on the anime, and concluded that Charizard was the strongest one.

The present work aims to analyze and discuss the data presented by Codd (2016) regarding the following issues: (1) Does his anime-based data coincide with the game mechanics? (2) Can his study be applied to metagame prospects? (3) Is Charizard really the most “powerful” Pokémon in-game?

ABOUT POKÉMON

Pokémon™ is an entertainment franchise, created by Satoshi Tajiri in 1995, that started with video games but now includes an anime, a trading card game, clothing and several other products. Needless to say, the main products of the franchise (the games and the anime) have had a large impact on recent pop culture.

The first products to be released were the “twin games” Pokémon Red and Pokémon Green, in 1996 in Japan. These games were later (in 1998) released worldwide as the Red and Blue versions for Nintendo’s Game Boy console. (As a side note, in celebration of its 20 years of existence, earlier this year the Pokémon Company released a website containing a timeline of its products; Pokémon Company, 2016.)

On TV, Pokémon was first aired in Japan in 1997, with the episode “Pokémon – I Choose You” (released in the United States only in 1998; Wikipedia, 2016), triggering wide public attention. The franchise is now successful worldwide, attracting millions of fans and players of all ages, ethnic groups and social classes, and the games are often taken extremely seriously by players.

CODD’S THEORY

Codd (2016) concluded in his article that Charizard (the final form of the starter Charmander) was the most powerful of the three initial options (the grass-type Bulbasaur, the fire-type Charmander and the water-type Squirtle; Fig. 1). To reach this conclusion, Codd based his work on “data” provided by the anime, specifically (for Charizard) the episode “Can’t beat the heat!” (aired 17/Feb/2002), from which he estimated variables such as weight (body mass), height and width of the Pokémon. Through a series of calculations, all very well-founded in Physics, Codd determined that the offensive power of Charizard is well ahead of its competitors.

Codd’s calculations are in fact quite accurate and may well apply to the anime. But it behooves us to take a closer look at the applicability of his results to the game. At the very start of his article, Codd states:

At the start of each Pokémon game, the player is given a choice of starter Pokémon. The options are almost always a choice between a fire type, a water type and a grass type. In most ways the most iconic of the starter Pokémon across all Pokémon generations are the original three; Charmander, Squirtle and Bulbasaur, which will fully evolve into Charizard, Blastoise and Venusaur respectively.

― Codd (2016: p. 1), my highlight

Therefore, the first sentence of this quotation makes it clear that the author refers to the games, with their challenging proposition of having to choose one of three possible options to continue. In the same paragraph, Codd says:

Each of these Pokémon also have a signature move, one which is closely linked to them through the course of the anime and the games. For Charizard this is Flamethrower, for Blastoise this is Hydro Pump and for Venusaur this is Solar Beam.

― Codd (2016: p. 1), my highlight

Thus, the author establishes an intrinsic connection between anime and game. From this point on, he starts his analysis based on the sizes and proportions of the starter Pokémon gathered from the anime. Despite this, the author surmises that his calculations may be applied to the game. The discordance between Codd’s arguments and the games rests on a simple fact: he used estimates and variables that are not true (or accounted for) in the native mechanics of the game, and are thus irrelevant in determining the offensive capability of a given Pokémon. In the game,

Each Pokémon has six major Stats, which are as follows: HP, Attack, Defense, Special Attack, Special Defense and Speed. HP means ‘Hit Points’ and represents health (‘amount of vitality’) of a Pokémon. When it suffers damage, a numerical value is calculated by the game, and the result is subtracted from the current HP. When HP reaches zero, the Pokémon faints and is out of action.

― Vianna Sym (2015: p. 26), my translation

In the games, Pokémon are defined by certain features, among which are the above-mentioned Stats. Each Pokémon has a given number of points assigned differently to its Stats, making it tough, agile or strong. HP represents the Health Points (or Hit Points) of a Pokémon, and from the work of Codd (2016), it is understood that a “powerful” Pokémon is one that can take the opponent’s HP down to 0 most effectively.

Thus, to estimate how powerful a Pokémon is, one should not base one’s calculations on features estimated from the anime, but rather analyze the Stats distribution of a given Pokémon as it appears in the game. This study takes into account the Stats of each of the starter Pokémon to more thoroughly analyze how powerful each can become, that is, how much damage each can cause in a battle.

CASE STUDY

Let’s first set the game to be any of the so-called “Gen I” versions (Pokémon Red, Blue, Green or Yellow), released between 1996 and 1998. In these versions of the game, there were fewer Stats, namely: HP, Attack, Defense, Special and Speed (also, there were no mega-evolutions). The distribution of Stats among the starter Pokémon (in their final forms) can be seen in Figure 1.


Figure 1. Base stats of (from top to bottom) Venusaur, Charizard and Blastoise in Gen I. Source of the tables: Serebii.net. Original artwork of the Pokémon by Ken Sugimori; available through Bulbapedia.

By comparing the so-called Base Stats of the three starter Pokémon (from Fig. 1), we get the chart shown in Figure 2. This gives us a broader view of the Stats distribution of each Pokémon, distinguishing their higher and lower attributes. If we add up all the Base Stats of each Pokémon, we obtain a grand total of Stats points (Fig. 3). From Figure 3, it can be seen that all three Pokémon sum to the same value: 425 points. In the first versions of the games, the Stats were kept in balance during the development of these three Pokémon. Thus, the sum of Base Stats alone is not enough to show which starter is the strongest. There’s more to consider.
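
That balance is easy to verify; a quick sketch using the Gen I Base Stats as listed in Figure 1 (values from Serebii.net):

```python
# Gen I Base Stats of the three fully evolved starters (Fig. 1)
base_stats = {
    "Venusaur":  {"HP": 80, "Attack": 82, "Defense": 83, "Special": 100, "Speed": 80},
    "Charizard": {"HP": 78, "Attack": 84, "Defense": 78, "Special": 85, "Speed": 100},
    "Blastoise": {"HP": 79, "Attack": 83, "Defense": 100, "Special": 85, "Speed": 78},
}
for name, stats in base_stats.items():
    print(name, sum(stats.values()))  # all three total 425
```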


Figure 2. Chart comparing the Base Stats of the three starters in Gen I.


Figure 3. Sum of all Base Stats values of each starter Pokémon in its final form (Gen I).

Using the Base Stats, we can estimate the possible amount of damage (measured in hit points, or HP; Vianna Sym, 2015) that a Pokémon can cause with one of its moves. This is in fact based on a complex calculation depending on several variables, such as the attacking Pokémon’s level and offensive Stat and the opponent’s defensive Stat, alongside some occasional bonuses. By default, the formula is expressed as (Vianna Sym, 2015):

Damage = ( ( (2·Level/5 + 2) · AttackPower · AttackStat/DefenseStat ) / 50 + 2 ) · STAB · Weakness · RandomNumber/100

where “Level” is the current character level of the attacking Pokémon, ranging between 1 and 100; “AttackStat” is the Base Attack Stat or Special Stat (depending on the kind of move, Physical or Special, used) of the attacking Pokémon; “DefenseStat” is the Base Defense Stat or Special Stat (again, depending on the kind of move used) of the opponent; “AttackPower” is the power of the move used (this is pre-defined in the game and each move has its own power value), where a greater value represents a greater damage output; “STAB” is an acronym for “Same-Type Attack Bonus”, meaning that if the move used has the same type as the Pokémon using it, the damage increases by 50% (STAB = 1.5; otherwise, STAB = 1); “Weakness” is applied depending on whether the chosen move is super effective against the opponent (this variable can assume values of 0.25, 0.5, 1, 2 or 4, depending on the types of the move and of the defending Pokémon); and “RandomNumber” is simply an integer assigned randomly by the game, ranging from 85 to 100.

Other in-game factors may cause changes in damage output, for example: weather effects (rain and sunshine) and the so-called “buffs” and “de-buffs”, which are, respectively, temporary increases and decreases in a Pokémon’s Stats caused by moves such as Agility, Dragon Dance, Swords Dance etc. Weather effects were not yet present in the first versions of the game, so they will not be considered in this study. Moreover, to keep the analysis simple (not to say feasible), increases/decreases in Stats will also not be taken into account. The calculations here use only the Base Stats of the Pokémon in question and the set Power values of the moves. Weakness will also not be applied.
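
For reference, the formula above translates into just a few lines of code. A sketch (the example uses Venusaur’s Gen I Special of 100, Solar Beam’s Power of 120 and Chansey’s Special of 105, taken from Figs. 1, 4 and 5; the game’s exact rounding steps are ignored):

```python
import random

def gen1_damage(level, attack_stat, defense_stat, attack_power,
                stab=1.0, weakness=1.0):
    # The damage formula given above (critical hits and rounding quirks ignored)
    base = ((2 * level / 5 + 2) * attack_power * attack_stat / defense_stat) / 50 + 2
    return int(base * stab * weakness * random.randint(85, 100) / 100)

# Venusaur's Solar Beam against Chansey, both at Level 100 (STAB applies)
print(gen1_damage(100, 100, 105, 120, stab=1.5))  # roughly 125 to 147
```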

Codd (2016) considered the “signature moves” of the starting Pokémon as: Solar Beam for Venusaur (grass type), Flamethrower for Charizard (fire type), and Hydro Pump for Blastoise (water type).

The Power of each of these moves can be seen in Figure 4, alongside other data: “Battle Type” is the type of the move, which in this case is the same as the type of the starter Pokémon using it (so STAB = 1.5); “Category” refers to whether the move is a Physical Attack or a Special Attack (all three are Special and thus use the Base Special Stat); “Power Points” (PP) represent the number of times the move can be used; “Power Base” is the Power of the move (used in the equation above); “Accuracy” is the probability of success in hitting the opponent (in %).


Figure 4. From top to bottom, the moves Solar Beam (formerly rendered as “Solarbeam” or “SolarBeam”), Flamethrower and Hydro Pump, showing their in-game Power values and type (in Gen I). The symbol in the “Category” entry means that the moves are all Special Attacks. Source: Serebii.net.

CALCULATING THE DAMAGE

To calculate the damage dealt by each of the starter Pokémon with their signature moves, I used a virtual calculator available at Smogon University, the “Pokémon Showdown”. (Smogon University is a community dedicated to the competitive world of Pokémon games, giving players some useful tools.) The moves have the Power values shown in Figure 4, and the defending Pokémon will be a Chansey (see Fig. 5 for its Base Stats), which is neutral (that is, neither weak nor strong) towards the starters and their signature moves. All Pokémon are considered to be at Level 100.


Figure 5. Base stats of Chansey in Gen I. Source of the table: Serebii.net. Original artwork of the Pokémon by Ken Sugimori; available through Bulbapedia.

By putting all the values in the Pokémon Showdown calculator, we have:

  • Venusaur (Solar Beam): Note that the Gen I version of Solar Beam is not present in the Pokémon Showdown database, so I used the Gen II version instead (the Power is the same). The damage output falls in the interval 125 to 147 points, which represents 17 to 20% of Chansey’s total HP. Venusaur needs to land 5 blows to knock out its target.
  • Charizard (Flamethrower): The damage output falls in the interval 90 to 106 points, which represents 12 to 15% of Chansey’s total HP. Charizard needs to land 7 blows to knock out its target.
  • Blastoise (Hydro Pump): The damage output falls in the interval 113 to 133 points, which represents 16 to 18% of Chansey’s total HP. Blastoise needs to land 6 blows to knock out its target.

Just in case, these numbers were checked on another calculator, built by myself (Pokémon Damage Calculator; Carli, 2016). An algorithm was developed based on the damage equation above, implemented in some programming languages (available at: https://github.com/brunolcarli/pokeDamageCalc) and then packaged in APK format so it can be installed on any mobile device running the Android (Fig. 6) or Windows operating systems. Feel free to download the app at: https://build.phonegap.com/apps/1824036/install. The results were very similar (Fig. 6): 127 to 144 points of damage for Venusaur’s Solar Beam; 84 to 98 points of damage for Charizard’s Flamethrower; and 106 to 122 points of damage for Blastoise’s Hydro Pump.


Figure 6. Screenshots of the Pokémon Damage Calculator app (Carli, 2016: v. 1.0.0, running on Android OS), showing the maximum damage output for Venusaur’s Solar Beam (left), Charizard’s Flamethrower (middle) and Blastoise’s Hydro Pump (right).

Organizing all these numbers (from both the Pokémon Showdown and the Pokémon Damage Calculator) into a chart (Fig. 7), it is possible to clearly see the minimum and maximum damage each of the starter Pokémon can inflict with their signature moves against a neutral target. It can be seen that Charizard is actually the Pokémon that causes the least damage, while Venusaur can deal the greatest amount. Thus, Venusaur can be regarded as the “most potent” starter if we are referring to the sheer amount of damage caused.

CONCLUSION

The present study thus shows that Codd’s (2016) analysis is not applicable to the game itself, since it is not based on the variables and values present in the game mechanics. Also, as shown above, Venusaur, and not Charizard, is the “most potent” starter considering just the raw amount of damage it can cause. However, this is true only for a single attack in a single round of battle (which is important for the so-called “one-hit knockout”). Of course, as every player knows, one should not assume that damage output alone makes a Pokémon more effective in battle. The game has much greater complexity, and we would be reducing it to nothing if we considered only maximum damage. For instance, Solar Beam needs to spend 1 turn of the battle recharging, while both Flamethrower and Hydro Pump can be used every round. Furthermore, there are other factors, like Hydro Pump having an accuracy of 80% (meaning it misses one out of every five uses) and Flamethrower being able to leave the defending Pokémon with the burn status condition. However, this is a matter for another day; for now, Charizard has lost its crown.


Figure 7. Simple chart showing the maximum (red) and minimum (blue) points of damage each of the starters can inflict with their signature moves (Solar Beam for Venusaur, Flamethrower for Charizard, and Hydro Pump for Blastoise). The chart takes into account the values obtained by both the Pokémon Showdown and the Pokémon Damage Calculator.

REFERENCES

Bulbapedia. (2016) The Community-driven Pokémon Encyclopedia. Available from: http://bulbapedia.bulbagarden.net/wiki/Main_Page (Date of access: 08/May/2016).

Carli, B.L. (2016) Pokémon Damage Calculator, version 1.0.0. Available from: https://build.phonegap.com/apps/1824036/install (Date of access: 23/Apr/2016).

Codd, T. (2016) Which is the most offensively powerful starter Pokémon? Journal of Interdisciplinary Science Topics. Available from: https://physics.le.ac.uk/jist/index.php/JIST/article/view/153/94 (Date of access: 23/Apr/2016).

Pokémon Company. (2016) Pokémon 20th Anniversary. Available from: http://www.pokemon20.com/en-us/ (Date of access: 23/Apr/2016).

Pokémon Showdown. (2016) Pokémon Showdown! BETA. Available from: https://pokemonshowdown.com/damagecalc/ (Date of access: 23/Apr/2016).

Serebii.net. (2016) Serebii.net – Where Legends Come to Life. Available from: http://www.serebii.net/index2.shtml (Date of access: 23/Apr/2016).

Smogon University. (2016) Competitive Pokémon Community. Available from: http://www.smogon.com/ (Date of access: 23/Apr/2016).

Vianna Sym, Y. (2015) A Arte do Pokémon Competitivo [The Art of Competitive Pokémon]. 2nd ed. Available from: http://1.pokeevo.net/topic/8946178/1/ (Date of access: 23/Apr/2016).

