Our prototype went through three rounds of user testing. In the first round, we tested a variety of game concepts before narrowing down to one that combined the strongest elements of each based on user feedback. We found that many users were confused by the rules we provided, making clarity a priority in subsequent iterations. We also found that users became fatigued when reading too many articles during gameplay, so we focused on intermixing our main intervention with other game mechanics to make players more comfortable.

In the second iteration, participants reacted positively to the social dynamic of the game, which we simulated by having group members play along as other players during the session. We wanted players to work together and vote on the piece of evidence that seemed most suspicious, giving them practice in singling out unreliable sources of information. These mechanics were still being fleshed out, and many users reported being confused by the voting process at this stage.

Our final prototype was still operated via Wizard-of-Oz mechanics by group members behind the scenes, but we were able to test pairs of participants in single sessions, showing us how true group dynamics would play out within the game. Our findings were overwhelmingly positive: players worked together to discuss the merits of their collected evidence and even used tactics directly applicable to the real world, such as researching the source of a piece of evidence to determine whether it was reliable.