As the world in the 21st century has become more dynamic and unpredictable, the need for adaptive behavior in the military is of increasing importance. A serious game (SG) seems a suitable intervention for improving adaptability and thus preparing the military to deal with unpredictability. The purpose of this study is to explore game design for enhancing the adaptability of military personnel in an ill-structured, complex decision making context. We introduce rule changes in the game to stimulate learners' sensitivity to detecting such changes and to developing an appropriate strategy.
The design and development of our SG intervention are described within the frameworks of Cognitive Flexibility Theory and Reversal Learning. The Job Oriented Training approach as well as rule change are embedded in the game structure. This paper summarizes the results of a pilot (n=12) with the game. The participants' scores, the time spent to complete the game, and adaptive performance scores are described. Survey data show players' detection of the rule change and their experience of difficulty, engagement, motivation, and concentration during game play. Finally, we discuss issues and future directions of this study.
Introduction
Training military personnel to be flexible and to prepare them for unexpected, changing circumstances is of great importance for defense and security. Countries such as the U.S., Canada, the U.K., and many EU countries have been seeking ways to build adaptable forces that can effectively handle dynamic and unpredictable operational environments [1]. In the Netherlands, TNO is conducting a research program called 'Human and Organizational Adaptability' (HOA) for the Armed Forces. As part of this program, the current paper focuses on a Serious Game (SG) design that aims to improve the adaptability of military personnel to deal effectively with changing work environments.
What is adaptability?
Adaptability is defined as the ability to effectively adjust to novel, unforeseeable and changing situations [2]. Pulakos et al. [3] list eight dimensions of adaptability. These are (1) creative problem solving, (2) dealing effectively with unpredictable and changing situations, (3) learning new skills, knowledge, and procedures, (4) interpersonal adaptability, (5) cultural adaptability, (6) dealing with emergencies, (7) coping with stress, and (8) physical adaptability. All eight dimensions of adaptability are relevant to military operations [4]. For instance, military personnel have to be creative in making strategic plans during unpredictable missions, they have to adapt to other cultures in foreign countries during missions, and they have to physically adapt to extreme situations such as heat.
Serious Gaming for learning adaptability
Our assumption is that adaptability of military personnel could be improved by training in order to prepare them optimally for unforeseen situations. One of the interventions could be a serious game. SGs have been used to provide an authentic context and natural learning environment. Using SGs in military training is deemed beneficial in terms of time and cost compared to field training [5]. Moreover, SGs, in particular wargames, have been used in military training for at least 200 years. Therefore, military personnel are familiar with learning through games, be they board games or digital games.
Some SGs aiming to improve adaptability apply changes of environment during game play, in addition to other types of interventions. For example, a SG called 'Team Wargame Interaction Simulation Training (TWIST)' [6] forces players to be flexible and to adapt to new and different settings by having them conduct tank operation tasks in various locations, such as open pasture, jungle, or archipelago, in order to complete tasks successfully. Another SG, 'Apache attack helicopter' [7], creates a learning environment that encourages adaptive performance by providing a variety of terrains in which to operate an attack helicopter. In the above-mentioned games, learners are given tasks or missions. While performing such missions, learners face situations in which the environment suddenly changes. As learners are not explicitly trained in how to perform in the new environment, they have to find that out by themselves. Adaptability is applied when learners adjust to the new environment and continue the task in different ways. However, it is not clear whether performing learned tasks in different environments is sufficient for adaptability training. Therefore, there is a strong need for designing and developing a serious game that trains adaptability with more ill-structured tasks and more fundamental changes (i.e., rules that influence complex decision making). To this end, we used concepts developed in Cognitive Flexibility Theory and Reversal Learning, discussed in the next section.
Rationale
Cognitive flexibility
Cognitive flexibility (CF) is strongly related to adaptability, especially to the more cognitive dimensions of adaptability [3]. It can be regarded as a predictor that strongly influences adaptive behavior [8]. CF is defined as “the ability to spontaneously restructure one’s knowledge, in many ways, in adaptive response to radically changing situational demands” [9]. CF theory has been used in various fields to explain and improve learning in ill-structured and complex domains. When situations change, cognitively flexible individuals recognize that the situation has changed. After assessing the new situation, they are capable of adjusting their strategies to deal with it, and they can provide non-routine (adaptive) responses to perform successfully in the new situation. To effectively train CF, it is important to focus on learning how to detect situational change and how to (re)define strategies according to the change [10].
Our assumption is that a SG (with ill-structured, complex tasks) which can enhance players’ flexible thinking should improve their adaptability. We developed a complex decision making game with the aim to enhance the CF of higher-level military officers.
Reversal learning
In line with CF, Reversal Learning refers to behavioral change [11], [12]. Several studies have examined how individuals adapt their behavior in changing environments. Reversal learning focuses on how quickly people learn rules and, subsequently, how quickly they adjust to changing rules. It can be considered a specific form of cognitive flexibility that focuses on learning existing and changing rules. This is a slightly different approach from learning how to adapt to changing environments, although both contribute to becoming more cognitively flexible and adaptive.
Testing cognitive flexibility
Various tests have been developed to measure how cognitively flexible people are, such as the Wisconsin Card Sorting Test (WCST), the Iowa Gambling Task (IGT), and the reversal learning task [12]. These tests share some characteristics. Usually, individuals learn to perform a simple task (i.e., card sorting in the WCST or IGT). Direct feedback, such as right or wrong (WCST) or a financial reward (IGT), is given after each action. At a certain point, the rules suddenly change (i.e., a different rule is introduced for sorting cards in the WCST) and the learner is not informed of the change. The learner must detect the rule change from the negative, direct feedback now received for the same behavior that was positively rewarded before the change. CF is assessed by measuring how long it takes a learner to detect the rule change and adjust their behavior, and how many correct answers the learner gives. However, these tasks are simple and procedural, with testing rather than learning as their main purpose. As the tasks are presented without a real-life context, learning CF through them has its limits. Therefore, the assumption underlying our game design is that a SG with a rule reversal learning mechanism (similar to that of existing CF tests), yet requiring learners to perform complex tasks (complex decision making) in a rich military context, can improve CF. Hence, the hypothesis is that the adaptability of military officers can be increased via SG-based rule reversal learning in a realistic context, relying on the main principle of rule change adopted from CF testing.
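To make this test mechanism concrete, the sketch below (Python) simulates a minimal WCST-like reversal task: feedback follows a hidden rule, the rule flips silently partway through, and flexibility is read from how many trials are needed to recover after the flip. All names, trial counts, and the simplistic one-error switch strategy are illustrative assumptions of ours, not taken from the cited tests.

```python
import random

def run_reversal_task(n_trials=40, reversal_at=20):
    # Two candidate sorting rules; the active one is hidden from the learner.
    rules = {"color": lambda s: s["color"], "shape": lambda s: s["shape"]}
    active = "color"
    believed = "color"            # the learner's current belief about the rule
    history = []

    for trial in range(n_trials):
        if trial == reversal_at:  # silent rule reversal, as in the WCST
            active = "shape"

        stimulus = {"color": random.choice(["red", "green"]),
                    "shape": random.choice(["circle", "square"])}
        response = rules[believed](stimulus)
        correct = response == rules[active](stimulus)   # direct right/wrong feedback
        history.append((trial, correct))

        if not correct:           # negative feedback prompts a strategy switch
            believed = "shape" if believed == "color" else "color"

    # Flexibility measure: trials after the reversal until the first correct response.
    post = [c for t, c in history if t >= reversal_at]
    return next((i for i, c in enumerate(post) if c), None)

print(run_reversal_task())
```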
Game design
Didactical approach: Job Oriented Training
Job Oriented Training (JOT) has been recognized as a successful military training method and is claimed to accelerate learners' adaptability [13]. By using SGs in a JOT setting, military students are encouraged to learn and perform in a safe yet realistic environment [14]. Hence, our game design embeds some of the JOT characteristics. These are:
- Planning-execution-reflection: The game starts with a briefing and ends with a reflection phase.
- Active learning: Learners are active decision makers during the game play. They learn by trial-and-error. Explicit instruction is not present.
- Relevant reality: Learners play a company commander role, making decisions in the game to complete a military operation.
- Challenge: Learners need to plan and make decisions under time pressure while the situation is complex and information is missing.
- Cooperative and reflective learning: Individual reflection is conducted before the second briefing and at the end of the game play. Players have to answer questions regarding the rules and their decision making as a self-reflection moment. In JOT, individual reflection is followed by a group reflection phase in which learners discuss and share their strategies, thoughts, and decisions on the game play.
Although group reflection is a listed feature of JOT, we did not embed it within this particular game. As the game can be played individually or in a training session, separate questions were developed to facilitate group reflection upon completion of the game. A facilitator (or trainer) is required to lead the session and give appropriate guidance to players.
Game structure
Figure 1: Structure of the game
Figure 2: Examples of the game play
A PC-based decision making game was developed to enhance the adaptability of individual players. The game consists of five phases (see Figure 1). During the briefing, players are informed about the background, the current situation at the onset of the game scenario, and the objectives of the operation. Maps were added to the game to help players visualize the area. To complete the game, players have to make a total of 21 decisions (cases) by choosing answers based on a case description; a case is an assignment given to the player. Feedback is provided after every case to inform the player about the results of the chosen actions. During the rule learning phase, players learn three rules while solving nine cases. Four different courses of action are presented for each case, and players select two of them. The game provides feedback only on the chosen options, as the results of those actions. Cases 1 to 3, 4 to 6, and 7 to 9 are each designed to teach one of the three main rules; cases 3, 6, and 9 test whether players have learned them. During the consolidation phase, players practice the assignment under the original rules and are tested on the learned rules at each case. If players were not able to learn the rules in cases 1 to 9 (rule learning phase), the consolidation phase provides an extra opportunity to do so. During the rule change phase, the rules learned in the previous phases no longer apply, and players need to figure out the altered rules by assessing the feedback on their selected responses. The responses selected in cases 14, 15, 18, and 21 are used to measure whether players have mastered and adapted to the new rules.
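As an illustration of the phase and case structure just described, a possible encoding is sketched below (Python). The case numbering follows the text; everything else (field names, and the exact split between the consolidation and rule change phases) is a hypothetical assumption rather than the game's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Option:
    text: str
    feedback: str               # shown only if this option is chosen
    points: int                 # plus or minus points toward the 0-100 total

@dataclass
class Case:
    number: int                 # 1..21
    description: str
    options: List[Option]       # four courses of action; the player selects two
    tests_rule: Optional[int] = None   # rule (1-3) tested by this case, if any

# Phase layout as described in the text; the case numbers assigned to the
# consolidation and rule change phases are assumed here for illustration.
PHASES = {
    "briefing":      [],                  # background, situation, objectives, maps
    "rule_learning": list(range(1, 10)),  # cases 1-9 teach rules 1-3 (3, 6, 9 test them)
    "consolidation": list(range(10, 14)), # practice and testing under the original rules
    "rule_change":   list(range(14, 22)), # cases 14, 15, 18, 21 measure adaptation
    "reflection":    [],                  # individual reflection questions
}
```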
Figure 3: Hypothesized attentional process model of the game players
Building narrative
The game contains a rich narrative for ill-structured complex decision making. We created a fictitious scenario involving military operations against a robot army. The rationale for creating a fictitious scenario instead of using existing military scenarios is that in the latter, some players may have more background knowledge on the scenario than others, possibly confounding the results. It is important that players detect the rule change not by using their military experience, but by using the feedback (results of the action) in the game that contains situational cues.
In the game, the player is the commander of a Dutch military unit deployed in a fictitious country in 2030. The enemy has an advanced robot army, and the mission is to defeat the enemy and evacuate civilians. During game play, players have to discover three rules: 1) the behavior of the turrets (weaponized mounts) guarding the walls of the target location, 2) the function of each robot type (red, blue, and green robots), and 3) the specific vulnerabilities of each of these robot types. After the consolidation phase, a solar storm event is introduced in the story. Players are not told that this event changes the rules governing the behavior of the turrets and robots and the vulnerabilities of the robots; rather, they have to discover the changed rules from the feedback on their selected responses. Figure 2 shows an example of the game play. The green bar represents the remaining time to complete the game; its color changes to orange and then red as the player approaches the time limit.
Rule change
As discussed in the rationale section of this paper, rule change (based on reversal learning) is the crucial element of this game for training CF. Players are not informed of the rule change; they have to detect that the rules governing turret and robot behavior have changed and adjust their decision making accordingly.
Our focus within the game design is players' detection of the changes occurring in turret and robot behavior (rule change) and whether players adapt their responses (choosing actions appropriate to the changed situation). Therefore, only a minimal amount of situational cues was given, provided gradually so that learners could actively figure out the rule change. Figure 3 shows the hypothesized attentional process model of the game players before and after the rule change; the model is taken from the CF theoretical framework [10].
Game mechanics
In this section, we describe the game mechanics.
- Role-play: Players take the role of commanders and need to make decisions to successfully conduct a mission. The role-playing element aims to achieve realism as well as immersion in the game play.
- Selecting two actions: For every case, players have to choose two options (actions to take) out of the four given. This gives players a feeling of active control over the game: it is the player who actively creates the story. It also provides a learning environment that allows players to try out different strategies. Moreover, the limited selection forces players to prioritize and make the best decisions.
- Feedback: The types of feedback available in the game are negative, positive, and neutral. Negative feedback indicates that the chosen actions caused negative results; for example, assault vehicles were destroyed due to an action taken by the commander (the player). Positive feedback shows positive results from the chosen actions; for instance, ordering the unit to stay covert when facing combat robots results in no casualties and allows the unit to continue the mission with limited loss of time. Neutral feedback provides information that can be helpful for learning the rule or the situation.
- Noise options: Every time players choose an action, they receive relevant feedback. However, some feedback contains information that is not directly relevant to the rules (hence called a 'noise option'). For example, when a player orders a unit to search nearby empty houses for civilians, he receives feedback that only a cat was found in the houses. The noise options add realism: in reality, not all actions have directly actionable results.
- Low physical fidelity: This game intends to train players’ cognitive skill (decision making). Hence, high physical fidelity is unnecessary for this game and might even confuse players.
- Fog of war: As frequently used in many war games, this element adds realism and increases the difficulty of the game. We purposely limited players’ access to information (i.e., clouds around some locations so players find out about the whole area gradually by playing the game). Also, it provides an opportunity for players to deal with unknowns and make decisions in circumstances where information is missing.
- Time pressure: In reality, military officers make decisions under time pressure. Time pressure is added to the game through the narrative (i.e., 'It is urgent for the remaining units to get safely into the target location and evacuate the citizens as soon as possible.') for realism and to set the difficulty of the game.
- Visual aids: The game puts a high cognitive load on players because they have to constantly make decisions in complex and unknown situations. It is not our intention to measure either memory or cognitive load. Thus, we provide visual aids (i.e., maps) to help players with their decision making.
- Scoring system: Scores are calculated automatically within the game system based on the actions chosen by the players. Both the quality of actions and the inclination to build up situational awareness are measured. For example, plus points are given for situational awareness actions (i.e., actions to get more intel). Minus points are given for wrong actions (i.e., ordering a transport helicopter when the turrets will destroy it) and plus points for correct actions (i.e., ordering the capture of green robots knowing that green robots carry intel). During the rule change phase, adaptive actions (actions that apply the new rules) are given plus points and actions based on obsolete rules are given minus points. The final score is calculated automatically by the system, ranging from 0 to 100 (a minimal scoring sketch follows below).
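The sketch below (Python) illustrates the point logic described above. The actual point weights, option attributes, and the normalization of the total to the 0-100 range are not specified here, so everything in the sketch is an assumed simplification rather than the game's implementation.

```python
def score_selection(option, rule_change_active):
    """Points for one chosen option (two options are chosen per case)."""
    points = 0
    if option.get("situational_awareness"):   # e.g., actions to gather more intel
        points += 1
    if rule_change_active:
        if option.get("adaptive"):            # consistent with the changed rules
            points += 1
        elif option.get("obsolete"):          # based on rules that no longer apply
            points -= 1
    else:
        if option.get("correct"):             # e.g., capture green robots for intel
            points += 1
        elif option.get("wrong"):             # e.g., send a helicopter into turret fire
            points -= 1
    return points
```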
Game testing
The purpose of the game testing to be described below was to validate the game design rather than to find statistically significant effects of the game play on adaptability. Therefore, we used a convenience sample of students rather than military personnel.
Participants
Twelve ‘Game Study’ Master’s students (one female, eleven male) play-tested the game during the Game Master’s introductory workshop at a university in the Netherlands. The students had no military background; all of them had extensive gaming experience and knowledge.
Procedure
First, the students were informed about the purpose of the game testing and received a brief introduction to the game. The topic of rule change was intentionally not mentioned during the introduction. Subsequently, an overview of how to play the game and of the procedure of the testing session was given using PowerPoint slides. All students used laptops or tablets to play the game. We provided a paper-based glossary with descriptions and pictures to help students with the concepts and entities used in the game. Pens and blank paper were also distributed so that students could take notes of relevant information during game play; the rationale is that memory should not play a role in the game play. A survey was conducted after the game testing session. Due to time limitations and for practical reasons, no formal group reflection was conducted during the session; however, one of the game designers held informal group reflections (with a few students at a time) upon completion of the survey. The testing session took approximately 90 minutes. Due to a technical problem, one student could not download his game play data file. Therefore, we present 11 game play data results and 12 survey results.
Results
The students’ total game scores (n=11, M=72, SD=9.08) ranged from a minimum of 54 to a maximum of 86 out of a maximum score of 100. The time students took to complete the game varied widely, from 23 to 67 minutes. The Pearson correlation between time spent completing the game and total game score (r=.30, p=.38) was positive but not significant, given the low number of participants. The high standard deviation of the game scores and the low correlation between total time and total score might be explained by individual differences among players, such as differences in information processing, decision making, detection of the rule change, and cognitive load.
Figure 4: Correlation between players’ total game scores and time spent
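For completeness, the reported statistic is a plain Pearson correlation over the eleven per-player totals; a minimal way to compute it is sketched below (Python with SciPy). The numbers are placeholders, not the study data.

```python
from scipy.stats import pearsonr

# Per-player totals; placeholder values, not the study data.
minutes = [23, 31, 45, 28, 52, 38, 60, 35, 67, 41, 48]
scores  = [54, 70, 62, 75, 80, 66, 72, 86, 78, 59, 84]

r, p = pearsonr(minutes, scores)
print(f"r = {r:.2f}, p = {p:.2f}")   # the paper reports r = .30, p = .38 for n = 11
```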
The game data file gives insight into which options players chose throughout their game play. Cases 14, 15, 18, and 21 contain adaptive options (options that contradict the previous rules but are appropriate to the changed rules); adaptive performance can therefore be measured by examining those answers. 27% of the students chose adaptive answers in case 14 and 63% in case 15. Since cases 14 and 15 both pertain to the changed rule 1 (behavior of the turrets), the results show gradual detection of the rule change (27% -> 63%, i.e., the number of students choosing adaptive options increased), with some individual differences (not all students chose adaptive options). Thirty-two percent of the students selected adaptive options in case 18, which concerns the revised rule 2 (functions of the robots), and 68% chose adaptive options in case 21, which concerns the changed rule 3 (vulnerability of the robots). As rule 3 is closely related to rule 2 (the functions and vulnerabilities of the robots change depending on their color), players evidently also detected and applied the new rules gradually, similar to cases 14 and 15, although the point of application differed per individual.
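A sketch of how adaptive performance could be derived from the logged choices is given below (Python). The case numbers follow the text, but the log format and the 'adaptive' flag are assumptions for illustration, not the actual data format.

```python
# Map each measurement case to the changed rule it probes (per the text).
ADAPTIVE_CASES = {14: 1, 15: 1, 18: 2, 21: 3}

def adaptive_rate(logs, case):
    """Percentage of players whose two selections in `case` include an adaptive option."""
    hits = sum(1 for player in logs if any(opt["adaptive"] for opt in player[case]))
    return 100.0 * hits / len(logs)

# Example: logs is a list of dicts mapping case number to the two chosen options.
logs = [{14: [{"adaptive": True}, {"adaptive": False}],
         15: [{"adaptive": False}, {"adaptive": False}]}]
print(adaptive_rate(logs, 14))   # -> 100.0 for this single hypothetical player
```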
The survey was conducted to examine the students' game play experiences. Students were asked whether they detected any changes in the scenario (i.e., the behavior of the turrets) and when they detected the change for the first time. Out of 12, 11 students reported that they detected the changes in the scenario; the moment of detection varied per player. Most of the students said detecting the changes was obvious and easy. However, the student who reported not detecting the change described being too overwhelmed by the amount of information in the game to detect any changes. This participant scored a total of 59 out of 100 and left the comment 'I had a long day today.'
The visibility and usability of the game were assessed by means of open questions. Some students reported that the maps were unclear and that the game contained too much text and information; others reported that the maps were useful and the game was very easy to use. Students were then asked an open question about what they thought the game was about. Most of the students mentioned complex decision making, and one participant specifically wrote that the game is about dealing with situational change.
Figure 5: Results of students’ assessment on the game
Figure 5 shows the students’ ratings of difficulty, engagement, motivation, and concentration during game play. Overall, engagement and motivation were scored positively. The perceived difficulty of the game differed per individual. The game may have been difficult for some students because of their unfamiliarity with decision making in a military context; another possible reason is the amount of complex information and missing information (fog of war) while decisions had to be made under time pressure. As for concentration, some students reported that they were distracted. Fatigue is a plausible explanation for low concentration, as the testing session started at 15:30 and was the last session of the introductory workshop, which had started at 09:00.
Conclusion
In this paper, we reported the objective, theoretical framework, game design, and game testing results of a SG developed to train the adaptability of military officers. The testing results cannot automatically be regarded as representative of the military population, as the participants were Master's students in Game Studies; their knowledge of, and experience with, games could yield results different from those of military personnel.
This game will be improved based on the game testing results and comments. Afterwards, the game will be used during the training at the Major’s school in the Netherlands in order to increase the adaptability of the officers [15]. Future studies should examine whether this game can improve adaptability and investigate the learning effects of the game intervention, as this research was an exploratory study on the game design only.
Note
THIS PAPER WAS ORIGINALLY SUBMITTED FOR, AND PUBLISHED BY, THE 2016 MSG 143 SYMPOSIUM, NATO STO
(COPYRIGHT NATO STO — PAPER STO-MP-MSG-143-04, ISBN 978-92-837-2060-7). IT IS REPRINTED WITH PROPER PERMISSIONS.
References
- Dandeker, C. (2006). Building flexible forces for the 21st century. In G. Caforio (Ed.), Handbook of the Sociology of the Military (pp. 405-416). New York, NY: Springer.
- Kozlowski, S. W., Gully, S. M., Brown, K. G., Salas, E., Smith, E. M., & Nason, E. R. (2001). Effects of training goals and goal orientation traits on multidimensional training outcomes and performance adaptability. Organizational Behavior and Human Decision Processes, 85(1), 1-31.
- Pulakos, E. D., Arad, S., Donovan, M. A., & Plamondon, K. E. (2000). Adaptability in the workplace: Development of a taxonomy of adaptive performance. Journal of Applied Psychology, 85(4), 612.
- White, S. S., Mueller-Hanson, R. A., Dorsey, D. W., Pulakos, E. D., Wisecarver, M. M., Deagle III, E. A., & Mendini, K. G. (2005). Developing adaptive proficiency in Special Forces officers. Arlington, VA: Personnel Decisions Research Institutes Inc. Retrieved from http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA432443
- Roman, P. A., & Brown, D. (2008). Games–Just how serious are they. In The Interservice/Industry Training, Simulation & Education Conference (I/ITSEC) (pp. 1-11), Orlando, FL.
- Marks, M. A., Zaccaro, S. J., & Mathieu, J. E. (2000). Performance implications of leader briefings and team-interaction training for team adaptation to novel environments. Journal of Applied Psychology, 85(6), 971-986.
- Chen, G., Thomas, B., & Wallace, J. C. (2005). A multilevel examination of the relationships among training outcomes, mediating regulatory processes, and adaptive performance. Journal of Applied Psychology, 90(5), 827-841.
- Good, D. (2014). Predicting real-time adaptive performance in a dynamic decision-making context. Journal of Management & Organization, 20(6), 715-732.
- Spiro, R. J., & Jehng, J. C. (1990). Cognitive flexibility and hypertext: Theory and technology for the nonlinear and multidimensional traversal of complex subject matter. In D. Nix & R. J. Spiro (Eds.), Cognition, education, and multimedia: Exploring ideas in high technology (pp. 163-205). Hillsdale, NJ: Lawrence Erlbaum Associates.
- Cañas, J. J., Fajardo, I., & Salmeron, L. (2006). Cognitive flexibility. In W. Karwowski (Ed.), International encyclopedia of ergonomics and human factors (2nd ed., pp. 297-301). Boca Raton, FL: CRC Press.
- Cools, R., Clark, L., Owen, A. M., & Robbins, T. W. (2002). Defining the neural mechanisms of probabilistic reversal learning using event-related functional magnetic resonance imaging. The Journal of Neuroscience, 22(11), 4563-4567.
- Ronay, R., & von Hippel, W. (2015). Sensitivity to changing contingencies predicts social success. Social Psychological and Personality Science, 6(1), 23-30.
- van der Hulst, A., Muller, T., Besselink, S., Coetsier, D., & Roos, C. (2008). Bloody serious gaming: Experiences with Job Oriented Training. In The Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) (pp. 1-11), Orlando, FL.
- Buiel, E. F. T., van der Hulst, A., Voogd, J., & Oprins, E. (2013). Public order management in a virtual world. In NATO Modelling and Simulation Group Multi-Workshop, Sydney, Australia.
- Mun, Y., Oprins, E., van den Bosch, K., van der Hulst, A., & Schraagen, J.M. (2017). Serious gaming for adaptive decision making of military personnel. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 61, No. 1, pp. 1168-1172). Austin, TX: SAGE Publications.