Portal 2 and transfer of learning in playful environments

Portal 2 is a 3D environmental puzzle game from Valve. The game has won several awards, including the coveted Game Developers Conference awards for excellence in game design, writing and audio. The gameplay revolves around solving navigational problems using portal guns and other devices available in each scenario. The improvisational use of the multi-functional portal gun is the core mechanic of the game. The interactions between portals and elements like light bridges, cubes, lasers and gels allow players to generate innovative solutions to overcome well-crafted puzzles.

Portal 2 has several well-designed elements, but the level design stands in a class of its own. From the controlled test chambers in the earlier levels to the extravagantly open spaces in the later scenarios, Portal 2 manages to pose challenging puzzles without overwhelming the players. The introduction of elements and abilities is managed through well-designed puzzles, which educate and challenge the players simultaneously. This allows Portal 2 to effortlessly train players without any explicit instructions. Portal 2's approach to level design is a great example of facilitating transfer of learning in a playful environment. In the following sections, I break down multiple aspects of the level design of Portal 2 that follow learning design principles for facilitating transfer of knowledge. Although the game also includes a well-designed cooperative mode, I only discuss the level design of the single-player campaign in this article.

Test chambers as controlled learning environments

A controlled learning environment plays a key role in efficient learning. The term 'control' refers to the intentional simplification of complex learning concepts. Consequently, controlling a learning environment refers to the process of identifying the core concepts of a topic and designing a curriculum around them. The challenge in this process is not limited to including the important parts; it also involves excluding irrelevant information.


As an analogy, imagine the body of knowledge of a topic as an iceberg. This body of knowledge is so vast that it is impossible to understand it all at once, especially for novice learners. Moreover, novice learners, who are still at the surface level, cannot comprehend the knowledge hidden in the depths of the water. Trying to explain the submerged part of the iceberg will only confuse them further. A controlled learning environment excludes the 'submerged' information and introduces 'surface level' concepts before delving deeper. It nudges learners towards productive initial interpretations, which guide how additional knowledge is understood. It creates a simplified conceptual model while ignoring the depths of knowledge beneath it. A common example of this is the falsification of previous knowledge as learners advance towards higher education. Concepts and facts learned at lower grade levels are constantly falsified and replaced with newer, conflicting information. This is not a mere mistake; rather, it is an intentional effort to simplify concepts and make them digestible for novice learners. In other words, it is a real-life example of a controlled learning environment.

Chapter 1: Test Chamber 0

Portal 2 implements controlled learning environments in the form of test chambers. The campaign begins in Aperture's relaxation chambers, which introduce players to the basic controls of the game. After the first level, players go through a series of test chambers. All test chambers have a clearly marked entry and exit and pose a simple goal for the player: to escape the chamber. This provides players with a clear sense of direction and a well-defined goal. As a result, players are able to direct all their cognitive resources towards problem-solving. This design decision allows players to quickly learn new concepts and apply them immediately without any major consequences. The deliberate inclusion of a small number of elements, and the intentional exclusion of the rest of the game's inventory of objects, makes the test chambers an efficient learning environment.

The test chambers are grouped into chapters, with each chapter containing about 7-8 chambers. Each chapter follows a design pattern conducive to transfer of learning, with several clever design decisions that ensure efficient learning while also providing engaging gameplay. Firstly, the first few test chambers in each chapter introduce the player to a new element. These chambers create an ideal situation to intuitively inform the player about the utility of the new item. For example, 'Test Chamber 1' introduces the player to navigating through portals. To accomplish this learning goal, the test chamber uses pre-deployed portals, which can be activated with buttons. Interestingly enough, button-triggered portals are hardly used in the game after this chamber. However, using them in the first test chamber creates an ideal situation to teach the concept of portal navigation without the additional effort of deciding where to place portals using the portal gun. The video walkthrough shared below demonstrates this design.

Secondly, each chapter references previously introduced elements and lets players explore the interactions between them. This provides a platform for deliberate practice and repetition, which enhances retention and understanding of learned concepts. For example, Test Chamber 1 in Chapter 2 requires players to use the portal gun mechanic in combination with the laser activation tool. This scenario introduces lasers while reinforcing the understanding of portal mechanics through repetition.

Thirdly, test chambers implement the worked-example principle described in Cognitive Load Theory. The principle states that cognitive load can be considerably reduced and learning enhanced when learners are shown a worked-out example of the type of problem they are trying to solve. Portal 2 manages to do this time and again by introducing semi-functional elements. For example, the test chamber shown above cinematically demonstrates how lasers work. The solution, then, is to manipulate an already functioning laser rather than to activate one. Moreover, the deliberate exclusion of other elements in the chamber helps guide the player's attention to the learning content.

To summarize, Portal 2's test chambers help players learn new concepts through controlled situations, retain learned content through repetition, and deepen their understanding by applying that knowledge in a variety of situations.

‘The Escape’ towards transfer of learning

After going through a series of test chambers in the first four chapters, the level design of Portal 2 takes a drastic turn. The familiarity of a marked entry, exit and goal is stripped away, and the noise of a 'real' environment comes into play. The fifth chapter, called 'The Escape', sets the precedent of pushing players to apply their knowledge in unfamiliar scenarios. This idea, commonly known as 'transfer of learning', refers to the application of knowledge in situations unlike the ones in which it was learned. It is an important factor in assessing the efficiency of learning, as the goal of education is to prepare learners to apply their knowledge in the real world.

Portal navigation

Through its level design, Portal 2 pushes players to demonstrate what is commonly known as 'near transfer'. Near transfer is the application of prior knowledge in situations that are different from, yet somewhat similar to, the learning context. It requires an understanding of the content and adequate experience practicing that knowledge in a variety of situations. In the case of Portal 2, the test chambers ensure that players meet these requirements. As a result, the following chapters are well positioned to challenge players to transfer their skills: they push players to search for necessary resources and ideate unique solutions using familiar tools.

The level design of these chapters is noteworthy because the challenges posed to the players are conducive to the abstraction of knowledge. They direct players to extract general rules and principles from their existing knowledge of the game. For example, the portal gun previously used for solving puzzles is now used for transportation, leading to the realization of the general principle of portal travel. Similarly, the principle of conservation of momentum is taught by posing challenges that are not as straightforward as before and require an understanding of the concept. The design of these levels is significantly different from that of the test chambers, yet equally well implemented. This is demonstrated by the subtle visual hints and environmental guidance provided by the levels. For example, puzzles that require projectile movement contain diagonally facing portals, which imply a portal jump as part of the solution. Similarly, arrows and signs guide the navigation of the players without spoon-feeding the answers.

These factors make Portal 2’s level design an important artifact for educators and learning scientists. It exhibits an efficient implementation of learning principles in a playful environment. The ease with which the game manages to train its players and impart a substantial amount of knowledge is noteworthy and makes it an inspirational piece in the domain of learning design.

Rocket League, the illusion of mastery, and the emergence of complexity

Since its release in July 2015, Rocket League has risen to the level of a gaming phenomenon, with a staggering 25 million downloads and 1.1 million active players per day reported in January 2017. The game has also managed to bag several awards, including The Game Award for Best Independent Game and BAFTA awards for Best Multiplayer and Best Sports Game. At first glance, Rocket League is just an imitation of soccer using rocket-powered cars, but there's much more to the game under the hood. In this post, I deconstruct some subsystems of Rocket League through the lenses of game design and learning sciences. The following sections include discussions of game modes, matchmaking, competitive play, aesthetics, level design and Rocket League communities.

Gameplay

Rocket League consists of car-based versions of well-known sports like soccer, basketball and ice hockey. The soccer version is significantly more popular than the others and is the only competitive mode of the game, so this discussion focuses solely on the soccer modes. The 'Standard' soccer mode consists of two teams of 3 players each competing in matches lasting 5 minutes. Other soccer modes include 1v1, 2v2 and a special 3v3 mode called Rumble; these modes are discussed in detail in the following sections. The core game dynamic of Rocket League is driving. During a match, each player controls a rocket-propelled car in an indoor stadium, trying to score goals with a massive soccer ball. In addition to driving, the cars are equipped with jumping and flying abilities. The three-dimensional movement of players and the ball is constrained by the ceiling and walls of the stadium. The combination of these three mechanics within the constrained space of an indoor stadium provides a deep and complex space of possibility for gameplay.

A fundamental skill in Rocket League is the ability to predict and react to the movement and trajectories of the ball and other players. It demands mastery of internal visualization of a 3D space. Thus, visuo-spatial skills including spatial perception, mental rotation and spatial visualization play a pivotal role throughout the course of the game. These skills are also typically challenged in first-person shooters.

The illusion of mastery and the emergence of complexity

One of the reasons behind the immense success of Rocket League is its low barrier to entry. Anyone who has driven a car in a videogame can pick up Rocket League within minutes. Games differ from other media like movies or books when it comes to getting people to start engaging with the content. Although movies and books also require cognitive effort, games demand intentional learning and understanding before the user can start enjoying the content. Casual games overcome this need with simple rules that build upon human intuition. For example, in Flappy Bird a single rule carries the whole game: avoid obstacles by tapping the screen to maneuver a bird. Similarly, Angry Birds starts off with the simple objective of flinging birds to hit pigs. These bite-sized concepts allow players to quickly understand a game and start playing immediately. In contrast, complex games like Chess, StarCraft and Magic: The Gathering have high barriers to entry, as players need to understand a significant amount of information before they can start playing. A high entry barrier is usually caused by the presence of multiple interconnected subsystems, all of which are crucial for meaningful gameplay. The trade-off for a high entry barrier is a broader space of possibility and complexity in the game, which facilitates intricate decision-making, learning, and depth in content and gameplay. In many cases, a high entry barrier is tackled with explicit tutorials in the form of special game modes, overt instructions or manuals. Another common way to overcome this issue is with implicit tutorials in the form of embedded learning levels, which slowly introduce the concepts and subsystems of the game. From a design standpoint, it is difficult to create a game with a low barrier to entry that is also deep and complex enough to provide long durations of gameplay.

Rocket League handles this problem magnificently. It extends the familiar driving interface of videogames and introduces the jumping and boost concepts in a quick tutorial. With some practice of these concepts, new players are on level ground with experts, in the sense that they have all the functional tools that experts have at their disposal. This relatively low barrier to entry allows new players to start competing in the multiplayer mode almost immediately. Leveling the playing field in terms of game abilities empowers new players, as they do not feel left behind in the game. Moreover, this decision promotes mastery of skills by rewarding gameplay ability rather than time investment. This setup allows the game to create an illusion of mastery for players at various levels of ability. New players who believe they have mastered the controls are matched with players of similar abilities and get enough matches to practice the basics. Then, as players get better, they are matched with higher-ranked opponents who play differently and do things with their cars that a beginner didn't even know were possible. This is the point where things get interesting. The beginner encounters new skills that were never introduced by the game, but were performed by an opponent or a teammate. At this point, the beginner's illusion of mastery is broken and new complexities emerge in gameplay. The beginner now understands and starts practicing these new skills, because her opponents are using them against her to gain the upper hand. An appealing factor of the game is that this process never stops. Even after hundreds of hours of gameplay, Rocket League has more to offer in terms of practice and perfection.

 

UX design for the Subconscious mind

“Man is a rational animal. So at least we have been told. Throughout a long life I have searched diligently for evidence in favor of this statement. So far, I have not had the good fortune to come across it.” – Bertrand Russell

We overestimate the role of rationality and meaning in our lives. It is easy to fall into the trap of believing that all our decisions are based on logical reasoning. Yet there is no conscious reason behind most of our preferences in color, fragrance and sound. Although you might have a reason for liking RED over GREEN, in most cases that reason is subconscious. One doesn't sit down and contemplate why one fragrance is better than another; it's just a natural feeling, and most of it happens below conscious awareness.

Understanding the role of the subconscious mind in decision making is an important step for designers. Spending a couple of hours on a color scheme is completely worth the time and might matter more than the awesome new feature you could have developed instead. Look and feel aren't things that JUST make a product 'pretty'; a 'pretty' design might play a pivotal role in a client choosing your product over the competition.

Donald Norman talks about three levels of design in his book Emotional Design: Why We Love (or Hate) Everyday Things. The first level is Visceral, which relates to our natural tendencies, all of which are subconscious. The second level is Behavioral, which we as designers tend to focus on the most. Behavioral design deals with the functionality and usability of products; it's about what a product can actually do for the user. This level, for the most part, is the conscious aspect of using products. However, as mentioned earlier, it is not the only important factor behind good design. The third level is Reflective, which is more about the social implications of the design. Hence, questions like 'How do people perceive your product?', 'What does owning your product convey?' and 'What type of people use your product?' play a major role at this level of design.

Reflecting on all these factors, it becomes clear that although functionality and usability are important, our natural tendencies also guide our choices. Having good features is crucial, but it is not the only thing that matters. Designers also have to address the subconscious mind to create a good product, and human-centered design is a good approach for doing so.

 

A short summary of committee feedback

For my thesis project, I have been approaching my thesis committee, field experts and peers to collect feedback on the design, development and conceptual framework of my project. Having an existing alpha version of the game has helped me immensely in collecting better quality feedback:

  • I received feedback from Clancy Blaire from NYU, an expert on Executive Functions, confirming that the underlying task in the game is representative of the Executive Function task it targets. Moreover, he suggested other tasks that could be incorporated into the game to trigger aspects of Executive Function that are currently missing.
  • I received feedback from Melissa Biles regarding game data logging and player modeling. We discussed different ways of adapting the game using game telemetry. On a conceptual level, she suggested using an emotional framework for adaptation instead of the messier construct of flow, which was my initial idea.
  • I received feedback from Jan Plass regarding the conceptual argumentation of my theory of change. After I presented my logic model, he pointed out that my argumentation was backed by theories from conflicting domains. This was because the theory of adaptivity in learning games didn't completely apply to my project, which is about cognitive training and not learning in the conventional sense. He also suggested that I look into research showing higher cognitive gains from difficult video games compared to simple games, and use that to back up the argument for using an adaptive algorithm to strain the cognitive abilities of players.
  • I received feedback from Ralph Vacca regarding the design of the existing game and the development of the procedural content generation algorithm for adaptivity. He pointed out possible UX issues in the game mechanics. Moreover, he helped me surface assumptions in my logic model and gave tips on eliminating possible confounding variables that might undermine my research findings.
Image Source: Scott Maxwell

My process of collecting and incorporating feedback

My game design endeavors have made me very receptive, and in some ways extremely greedy, about feedback on my work. In terms of game design, I believe that feedback-based design changes are what make or break a game. Due to the overwhelmingly positive results of this playtesting-and-feedback methodology, I now approach all of my work in a similar way. For the project at hand, I first figure out the optimal way to convey its core ideas and then build a prototype based on that. After that, it's all about carrying that prototype around and engaging people in discussion.

It might seem that this process is very specific to games or media, but it actually has wider applications. For example, when writing an academic paper, I have created prototypes in the form of logic models, index cards with the core ideas of paragraphs, and slideshows to convey my ideas to experts and peers alike. The end goal is to get my ideas across and hence receive better quality feedback from participants. In game design projects, it is common to create prototypes in the form of card games and board games even when the actual game is supposed to be digital. I did this with Monkey Swing over a year ago, testing a board game version with kids before I wrote a single line of code.

For my thesis project, my feedback collection approach has varied depending on the committee member or peer I'm getting feedback from and on the stage of the design. When I require feedback on the conceptual arguments in my paper, I usually present a visualization of my logic model. On the other hand, when feedback is required on technical aspects, I present the code or the flowchart of my algorithm. One common approach, however, is to provide context for the project by demoing my game to the expert, regardless of the type of feedback I'm looking for.

It goes without saying that I have incorporated tons of valuable feedback into my design. On a personal level, it is extremely difficult to explain exactly how the feedback influenced my design. This is because whenever someone throws out an idea, I try to build upon it and immediately start collaborating with them. This usually leads to a hybrid design change which is neither theirs nor mine. However, in some cases it is easier to differentiate ideas because they contradict each other completely. In such cases I write them down in my notebook and reconsider the reasons that led me to choose option A instead of option B.

To summarize, I have found the playtesting approach to feedback collection extremely helpful for my thesis project. I've created my own hybrid mess of a method for feedback collection, which makes it difficult for me to isolate feedback from original ideas, but helps me immensely in my design process. This is a trade-off I'm happy to accept, as I believe that good design and content are what matter most in the end.

Image Source: OpenClipartVectors

Using Procedural Content Generation (PCG) to implement DDA

Adaptivity in games generally refers to changing game content to meet the user's needs and preferences. In previous blog posts I have explained the reasoning behind adapting learning and cognitive training games. Moreover, I have also described the DDA framework for adapting games. In this post I will explain a method called Procedural Content Generation (PCG) and then discuss how it can be used to implement the DDA framework of adaptivity. Okay, so let's begin.

Procedural content generation (PCG) is an approach to creating game content automatically with algorithms instead of manual effort (Yannakakis & Togelius, 2011; Togelius et al., 2010). Developments in this field are driven by three main reasons (Togelius et al., 2010). First, PCG bridges the gap between the overwhelming demand for new game content and the lack of resources to create that content manually, and it improves the replayability of games by constantly generating new content. Second, PCG improves the performance of games in terms of memory consumption, as content is 'unpacked' only when it is required. Third, exploring PCG might lead to innovations in game design and hence promote the creation of completely new genres of games.

Essentially, PCG creates new game content like maps, enemies and weapons automatically using computer algorithms. These algorithms are constrained by parameters defined by the game designer so that they generate quasi-random yet appropriate and playable game content. As a result, the job of the game designer shifts from designing game content to designing algorithmic parameters. Parameter design defines the structure of the game and states how the algorithm will create content for the user.
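To make this concrete, here is a minimal sketch in Python (not taken from the cited work or from my own game) of what 'designing algorithmic parameters' can look like. The LevelParams structure and generate_level function are hypothetical names: the designer only authors the constraints, and the algorithm produces a quasi-random layout that respects them.

```python
import random
from dataclasses import dataclass

# Hypothetical parameter set a designer might author; none of these names
# come from the cited papers or from an actual game.
@dataclass
class LevelParams:
    min_platforms: int = 5
    max_platforms: int = 12
    max_gap: float = 3.0         # widest gap the player can jump, in world units
    hazard_density: float = 0.2  # fraction of platforms that carry a hazard

def generate_level(params, seed=None):
    """Generate a quasi-random but playable platform layout within the constraints."""
    rng = random.Random(seed)
    count = rng.randint(params.min_platforms, params.max_platforms)
    level, x = [], 0.0
    for _ in range(count):
        x += rng.uniform(1.0, params.max_gap)  # gaps never exceed the jumpable limit
        level.append({"x": round(x, 2), "hazard": rng.random() < params.hazard_density})
    return level

print(generate_level(LevelParams(), seed=42))
```

The point of the sketch is the division of labor: the designer never places a platform by hand, yet every generated level respects the playability constraints encoded in the parameters.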

PCG can also be used to support the cognitive and affective engagement of players by creating content based on the player's needs and preferences. This approach is described as Experience-Driven Procedural Content Generation (EDPCG; Yannakakis & Togelius, 2011) and can be utilized by serious game designers to create highly adaptive games which respond to player needs at a fine level.

Image Source: Artificial Intelligence by geralt

Dynamic Difficulty Adjustment (DDA) for Serious games

Adapting serious games using DDA is a way to adjust the difficulty of the game to match the competence of the play-learner. This doesn't always mean making the game easier; sometimes it means the exact opposite. In this post I will discuss the methods, affordances and constraints of using DDA for serious games.

A player becomes frustrated when a game is too difficult. This is not necessarily a bad thing, as an important aspect of games is to let players fail so that they can master the skills needed to advance a scenario or level. However, it is important to control the type of failure that occurs during gameplay. Good games provide an appropriate level of challenge (Clifford, 1984), which motivates players to overcome that challenge rather than get frustrated and give up. On the other hand, if the difficulty is too low, players get bored and disengage from the gameplay. Due to these factors, it is important to maintain an optimum level of difficulty to keep the player engaged.

However, it is extremely difficult for a designer to create a game which matches the skill level of each player at all times during the game. This is a bigger problem for serious games because they don't have the luxury of a self-selecting audience that picks a game based on its skill level. If we want to incorporate serious games in areas like classrooms, healthcare, and military organizations, it is important that we create games which are appropriate for all participants and not a select few. This is where DDA can play an important role.

A common approach to difficulty adjustment is to let the player choose a difficulty level at the beginning of the game. In some cases, games provide the option to change the difficulty level during gameplay. The problem with this type of difficulty adjustment is ambiguity. How is a player supposed to know what easy, medium or hard means before playing the game? Even with the option to change the difficulty later on, it is difficult to infer which aspects of the game will become more difficult or easier. What if a player is good at shooting enemies but bad at combo moves? Similarly, in a serious game for language acquisition, what if a player is good at writing but bad at pronunciation? Should she be made to replay a level just because she's bad at one aspect of the game? Or should she be given a level that focuses on developing her pronunciation while the writing remains challenging enough?

A DDA framework has two core components: player modeling and game content manipulation. Player modeling is the process of inferring the relevant skills, competencies and preferences of a player (Nguyen & Do, 2008). A player model is a persistent profile that stores relevant information about a player, such as their gameplay data, preferences, playing style and skill level in different aspects of the game. This allows the game to understand the player better and hence provide appropriate levels of difficulty.
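As a rough illustration, here is a minimal sketch in Python of what such a player model could look like. The field names and the simple skill-update rule are my own assumptions, not part of any cited framework; the point is just that the profile persists across sessions and stores per-skill estimates the game can adapt against (the writing/pronunciation skills echo the language-game example above).

```python
import json
from dataclasses import dataclass, field

# Hypothetical sketch of a persistent player model; all field names are illustrative.
@dataclass
class PlayerModel:
    player_id: str
    sessions_played: int = 0
    # Per-skill estimates in [0, 1].
    skill: dict = field(default_factory=lambda: {"writing": 0.5, "pronunciation": 0.5})

    def update_skill(self, name, success, rate=0.1):
        """Nudge a skill estimate toward 1.0 on success and toward 0.0 on failure."""
        current = self.skill.get(name, 0.5)
        target = 1.0 if success else 0.0
        self.skill[name] = current + rate * (target - current)

    def save(self, path):
        """Persist the profile so the game can keep adapting across sessions."""
        with open(path, "w") as f:
            json.dump({"player_id": self.player_id,
                       "sessions_played": self.sessions_played,
                       "skill": self.skill}, f)

model = PlayerModel("player_42")
model.update_skill("pronunciation", success=False)
print(model.skill)
```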

Game content manipulation is the process of changing elements of the game based on the player model and the game context in order to adjust the level of difficulty. This can be achieved by using adaptation algorithms to adjust variables like speed (in Tetris), number of enemies (in Plants vs. Zombies) or gravity (in platformers). The type of manipulation and the effect it has on difficulty depend on the type of game and on the type of player. This makes the process a little tricky, as the designer has to create levels and scenarios in such a way that the content can be adapted in real time. Moreover, it is a lengthy and sometimes impossible task for a designer to pre-calculate all such possibilities and design all the necessary content in advance. A newer approach that overcomes this problem, and provides several other affordances, is PCG; I talk more about this method in a dedicated blog post.
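For illustration, a minimal sketch of content manipulation driven by gameplay data might look like the following. The 70% target success rate and the scaling factors are assumptions chosen purely for the example; a real adaptation algorithm would be tuned to the specific game and player model.

```python
# Hypothetical adaptation rule: nudge content variables toward a target success rate.
def adjust_difficulty(level_config, recent_success_rate, target=0.7):
    """Return a copy of the level config with speed and enemy count adjusted."""
    error = recent_success_rate - target          # positive -> player is cruising
    adjusted = dict(level_config)
    adjusted["speed"] = max(0.5, level_config["speed"] * (1.0 + 0.5 * error))
    adjusted["enemy_count"] = max(1, round(level_config["enemy_count"] * (1.0 + error)))
    return adjusted

# A player winning 90% of recent attempts gets a slightly faster, denser level.
print(adjust_difficulty({"speed": 1.0, "enemy_count": 10}, recent_success_rate=0.9))
```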

Research on the use of DDA in serious games has shown positive results (Soflano, Connolly & Hainey, 2015; Hwang et al., 2012). Even simple adaptive algorithms based on basic player models have achieved these results (Sharek & Wiebe, 2015; Harrison, 2014). In terms of future research, further work using more sophisticated modeling and algorithms is needed to achieve larger gains in outcomes. With my thesis project I'm trying to address this need by creating an adaptive game for cognitive training.


Image Source: "Girl plays Pac Man" by Lars Frantzen - Own work. Licensed under CC BY-SA 3.0 via Commons

Eco-Duel: A brief intro to the prototype

We are working on a game which tracks household usage of resources and produces a daily rating from 1 to 10 for each player (the Eco-Score). This rating is then used as the player's strength in the game. The core idea of the game is 'Real life actions lead to Virtual Vulnerability'.

We are creating a paper prototype to test the core interactions of the game. The prototype is a live-action role-playing game in which players attack each other and wager 'gold' while trying to predict their opponent's score. The winner of each duel is the person with the better Eco-Score, but the amount won or lost depends on how the players interact: their confidence in their own score and their prediction of the opponent's score.
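As a speculative sketch only: the prototype rules above do not pin down an exact payout formula, but one way the duel resolution could work is shown below, where the better Eco-Score wins and the pot grows with the accuracy of the winner's prediction. All function names and numbers here are assumptions made for illustration, not the actual rule sheet.

```python
# Speculative sketch of duel resolution; the bonus scheme is an assumed example.
def resolve_duel(score_a, score_b, wager_a, wager_b, prediction_a_of_b, prediction_b_of_a):
    """The player with the better Eco-Score wins; their payout grows with how
    accurately they predicted the opponent's score."""
    if score_a == score_b:
        return {"winner": None, "payout": 0}
    winner_is_a = score_a > score_b
    prediction_error = (abs(prediction_a_of_b - score_b) if winner_is_a
                        else abs(prediction_b_of_a - score_a))
    accuracy_bonus = max(0, 3 - prediction_error)   # closer guesses earn a bigger pot
    payout = min(wager_a, wager_b) + accuracy_bonus
    return {"winner": "A" if winner_is_a else "B", "payout": payout}

print(resolve_duel(score_a=8, score_b=5, wager_a=10, wager_b=7,
                   prediction_a_of_b=6, prediction_b_of_a=9))
```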

Design narrative

    • Does the game work as intended? Are there loopholes? Are the rules ambiguous?
    • Is dueling fun? We are testing whether the core game mechanic is fun. This is important because we expect players to duel over and over again; if this interaction is boring, there is no reason for players to engage with the game.
    • How does the Eco-Score affect player actions? Will players want higher Eco-Scores? Will they be motivated enough to change their real-life actions to improve their progress in the game?

Method

This week we did an in-house playtest with one external participant to get some outside perspective. The goal of the playtest was to polish the gameplay derived from the rules created last week. We used the rule sheet to play the game to completion, with the aim of refining the rules to remove ambiguity and loopholes in the gameplay.

We plan to do an external playtest in the coming week with the following details:

  • Participants: 8 Players | Young adults who are acquaintances | Share a work place (Office / School/ Workshop)
  • Measures: Direct Observations | Video Recording (Single camera for group) | Semi-Structured Interview
  • Method: Fill out personal info through online form -> Brief intro to the game -> Start of gameplay with guided instructions from coordinators -> Play until completion or 20 minutes -> Players will be interviewed in 4 groups of 2 players each (By 4 coordinators).

Results

  • In this playtest we found several gaps and loopholes in the rules. During the playtest we refined the rules until we could play a game to completion. We also tested made-up scenarios that were vulnerable to ambiguity and contradictions.
  • We also found that the core game mechanic was not fun: walking up to an opponent and sharing scores wasn't engaging at all.
  • We also found that players with bad Eco-Scores quickly disengaged from the game.

Design Implications

  • We changed the rules to incorporate a more fun way to ‘duel’, so that the core game mechanic isn’t boring.
  • We refined the rules, and are considering making it a mobile RPG rather than a Simulation RPG, as we want to leverage the direct interactions between friends & acquaintances. We have changed our prototype to reflect this interaction.
  • Moreover, we want players to stay engaged, so we have added mechanics that support players even when they have a bad Eco-Score, so that bad Eco-Scores don't lead to withdrawal from the game but rather drive players to make gradual changes in lifestyle.
  • As mentioned above, the next key step is to do an external playtest to further examine the social-emotional aspects of our research questions.
Image Source: "Yevgeny Onegin by Repin" by Ilya Repin - [1]. Licensed under Public Domain via Commons

Interventions to improve Executive Functions

Executive functions can be developed at any point in a person's lifetime. Studies have shown improvements in executive functions through interventions at all of the life stages listed below:

  • Preschool age: Thorell et al. (2009); Dowsett & Livesey (2000)
  • School age: Karbach & Kray (2009); Klingberg et al. (2005)
  • Adolescence: Zinke et al. (2012)
  • Adulthood: Karbach & Kray (2009)
  • Old age: Basak et al. (2008); Karbach & Kray (2009)

Interventions which have been shown to be effective in improving EF include:

  • Physical exercise: Hillman, Erickson & Kramer (2008)
  • Musical training: Rauscher, Shaw & Levine (1997)
  • Martial arts and mindfulness training: Lakes & Hoyt (2004)
  • Computerized training: Holmes, Gathercole & Dunning (2009)

The most well-studied intervention for EF development has been an online cognitive training tool called CogMed, a computer-based working memory training program that guides its users through a rigorous routine of computerized game-like tasks. Several studies have shown that using CogMed leads to improvements in EF.

Recently, researchers have started to explore the use of existing video games for training executive functions (Strobach, Frensch & Schubert, 2012; Maillot, Perrot & Hartley, 2012; Andrews & Murphy, 2006), while others have used games specifically designed for this purpose. The reason behind this trend might be that games provide several affordances over other media, including but not limited to interactivity, intrinsic motivation, narrative design, timely feedback and well-paced difficulty adjustment.

Games which have been specifically designed for developing EF have shown positive results. A few examples of such games are:

  • BrainAge: Nouchi et al. (2012)
  • The Great Brain Experiment: McNab, Zeidman & Rutledge (2015)
  • Jungle Memory: Alloway & Alloway (2008)
  • Odd Yellow: Der Molen & Luit (2010)

Although these games have tapped into several affordances of games to facilitate EF training, they more or less fall short in creating engaging content, incorporating affective design, providing timely and relevant feedback and, most importantly, adjusting game difficulty based on the user's skill and affective state. Monkey Swing tries to overcome these shortcomings by providing an engaging, narrative-based game that adjusts its difficulty based on the user's competence, thereby facilitating better engagement and on-task behavior of the player.


Image Source: Efraimstochter

What role do Executive Functions play in our lives?

Executive functions (EF) are crucial in learning and cognition. They are the building blocks of cognitive control functions required to concentrate, think and regulate impulsive behavior (Diamond & Lee, 2011). EF have been found to be predictors of cognitive skills like:

  • Metacognition: Bryce, Whitebread & Szűcs (2014)
  • Language acquisition: Mary Wagner et al. (2014)
  • Math skills: Bull & Scerif (2001)
  • Theory of mind: Carlson, Moses & Breton (2002)

EF are better predictors than IQ of school readiness in preschoolers (Blair & Razza, 2007). Moreover, they have been shown to be strong predictors of students' academic outcomes (Yeniad, Malda & Mesman, 2013; Best, Miller & Naglieri, 2011), even in longitudinal (Bull, Espy & Wiebe, 2008) and cross-cultural (Thorell, Veleiro & Siu, 2013) studies.

Several studies have linked executive functions to a number of learning disabilities and mental health issues, including:

  • Schizophrenia: Nieuwenstein, Aleman & Haan (2001)
  • Parkinson’s disease: Dagher et al. (2001)
  • Antisocial personality disorder: Morgan & Lilienfeld (2000)
  • And, most widely, ADHD: Willcutt et al. (2005); Martinussen et al. (2005); Biederman, Monuteaux & Doyle (2004)

This research shows the impact EF have on our lives. They are at the core of our cognitive processes and broadly influence how we function. It is clear that low executive functions put individuals at a significant disadvantage in education and in life as a whole. As a result, it is important to design interventions that improve EF for children and adults who are lacking in these skills.


Image Source: derekdavalos