Design Development


Members of our user group, young urban professionals, typically enjoy going out to eat with their friends. The problem we have diagnosed is a lack of decisiveness and creativity in the places that friends go to together. Even when members of a group want to try something new, they often find themselves drifting back to a more familiar location due to several factors discussed in our needs analysis. To solve this problem, we prototyped the “Hot Potato” app described in the needs analysis. In this system, a leader sets up the game and sends it to the rest of the group. Once the game is set up, the players take turns suggesting ideas within an allotted time. Once enough ideas have been suggested, the players take turns picking their favorites, again within an allotted time. If a user exceeds the time allotted for either task, that person is “burned”. At the end of the game, the winning idea is shown to the friend group, and the person with the most burns has to do something pre-determined in the set-up phase. This system was evaluated in the user testing described below.

The prototype was tested with the following in mind: moving through the set-up phase should be intuitive, provided the users already know whom they want to play with and for how long. We also wanted to find out whether the extended potato metaphor was understandable and added to the appeal of the app, or simply hindered it. The primary objective in the second and third tasks was to discover whether users could understand the current state of the game from what is displayed on screen (ideas already in play, time remaining). The idea is that if certain points of the interface are confusing, the testers will stumble or hesitate. From there, we can elicit more articulate feedback and redesign the interface.

Narrowing our Scope

In the process of the design development, we decided to narrow the problem space we were working in from "planning an outing among friends or colleagues" to "proposing and choosing an activity and location". The other sub-tasks of planning an outing are already addressed by other applications (for example WhenIsGood and Doodle for scheduling arrangements). We decided to focus on this particular area of opportunity, instead of attempting to include a large number of features at once.

We initially created storyboards for three ideas: BFFL, Group Street, and Hot Potato (see needs analysis). We critically examined each idea for boldness and for overlap with existing applications; for instance, we didn't pursue BFFL further because it met neither criterion. We then further defined the functionality of Group Street and Hot Potato by creating task flow diagrams for each (shown above). It was in this process that we realized the complexity of Group Street's underlying logic and thus focused on Hot Potato. Finally, we created a prototype with one screen for each task on Hot Potato's task flow chart.



Our prototype app features a phone home screen to simulate how the app's notifications appear on the user's device. There is also a start screen (the app's home screen) that shows which games the user is currently in with different friend groups (see slide #3). From there, the game is split into three distinct phases: set-up, idea pitching, and voting. To reflect these phases in our prototype, the first four screens were devoted to different aspects of game set-up. We split these across separate screens to avoid overwhelming the user with too much information at once. They include a group selection screen, where the user can select a pre-existing group of friends or create a new one from scratch (see slide #4). We tried different ways of displaying information on this screen: one showed group members in a drop-down menu; the other used a ? button that opened a pop-up listing the members of the group. Once a group is selected, the user is directed to the “Bake Time” screen, where they choose how long the game lasts in total (see slide #5). As the user scrolls through the hours and minutes, the “turns are…” dialogue updates dynamically. On the next screen, the user picks what the loser will do (see slide #6): they can either type their own idea into the textbox or pick from a pre-populated list of ideas below. Once the user selects an idea, they are shown the “game recipe”, the combined outcome of the last three screens (see slide #7).
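The “turns are…” readout implies a simple relationship between the total bake time, the group size, and the length of each turn. The prototype doesn't pin down an exact formula, so the sketch below is only illustrative; in particular, the assumption that each player gets one pitching turn and one voting turn is ours, not the prototype's.

```python
def turn_length(total_minutes, num_players):
    """Split the total 'bake time' evenly across all turns.

    Assumes (hypothetically) that each player takes one pitching
    turn and one voting turn, so the game has 2 * num_players
    turns in total.
    """
    total_turns = 2 * num_players
    return total_minutes / total_turns


# e.g. a three-hour game with six players
print(turn_length(180, 6))
```

A real implementation would re-run this computation as the user scrolls through the hours and minutes, updating the “turns are…” dialogue live.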

When everything is set up, the recipe is sent to everyone in the group specified by the user (see slide #8). The next screen the user sees (after the notification on the home screen) is the “pitch idea” screen, where the user can type an idea into the red textbox (see slide #9). To make it easier to suggest ideas relevant to what the group wants to do, ideas already suggested are displayed below the textbox. This screen features a timer that counts down until the user is out of time to suggest an idea. In the second round of pitching, the user fails to notice the notification that it is their turn and gets “burned”, which opens a burn pop-up on the home screen (see slide #12). Once this notice is dismissed, the next phase commences and the user sees the picking-favorites screen (see slide #13). Here, the user can upvote or downvote ideas. Once the user has upvoted at least one idea, the OK! button is no longer grayed out and the user can end their turn. This screen again features a timer to let the user know when the turn is up. The final screen is a game-over screen that shows the result of the voting phase, as well as who collected the most burns and what the loser will do (see slide #14).
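The mechanics above (the gated OK! button, the vote tally, and the burn count that decides the loser) can be sketched in a few lines. This is an illustrative model only; the names and the tie-breaking behavior are our assumptions, not something the prototype specifies.

```python
def ok_enabled(votes_cast):
    """The OK! button stays grayed out until the user has
    upvoted at least one idea."""
    return any(v == "up" for v in votes_cast)


def game_over(ideas, burns):
    """Summarize the game for the game-over screen.

    `ideas` maps an idea name to an (upvotes, downvotes) pair;
    `burns` maps a player name to their burn count. Ties are
    broken arbitrarily here; the prototype leaves this open.
    """
    best_idea = max(ideas, key=lambda name: ideas[name][0] - ideas[name][1])
    loser = max(burns, key=burns.get)
    return best_idea, loser
```

For example, `game_over({"Karaoke": (4, 1), "Mini golf": (2, 0)}, {"Billy": 2, "Louis": 1})` would report “Karaoke” as the winning idea and Billy as the loser.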

User Testing


Vinny, Nancy, Adam, and Louis

We interviewed four people (3 male, 1 female) for our user tests who are between the ages of 20 and 23 and are either working professionally or in college. These four people fit four of our five personas nicely, so we had feedback from all personas except Penelope. In selecting our test users, we realized that it is not feasible to design an app for a wide range of personas, so we removed Penelope from our design process since she would benefit least from our product.

Task Scenarios

We had our users go through four tasks which took them through the features of the app: setting up a game, suggesting an idea, voting on an idea, and interacting with phone notifications. More detail about each task (e.g. instructor notes, motivation, expectations, and evaluation) is included in the appendix (see user test script).

  • Setting up a game: We asked the user to set up a new game with a specific group of people, a specific bet, and a duration of three hours. The purpose of this task is to see whether the user can navigate the setup process, follow the hot potato game analogies, and understand which elements of the game they are manipulating on each setup page.
  • Suggesting an idea: For this task, we asked the user to propose a new idea that has not already been suggested by another user within the time allotted. This is an essential task that allows us to see if it is intuitive for the user to submit a new idea and not be confused by the ideas already suggested.
  • Interacting with phone notifications: This task takes the user through the experience of receiving notifications and getting “burned.” The user begins the task having already run out of time, facing the burn notification. We observed how the user responded to this negative feedback and tried to gauge whether it would motivate them to answer more quickly in the next round.
  • Voting on an idea: The last task tests whether it is clear to the user how to vote on proposed ideas. We wanted to see if the upvote/downvote concept and the displayed time remaining were intuitive. We also tested different behaviors for the grayed-out OK! button to determine which user input should allow the user to continue to the next screen.


We completed two interviews off campus with the non-college students; the other two were on campus. At each interview, we split the roles of facilitator, note taker, and “computer”; sometimes the same person served as both facilitator and computer. The roles allowed each of us to concentrate on specific tasks, making the entire process more efficient. It was very important to have a single facilitator, who did not reveal too much about how to complete each task, to maintain consistency throughout an interview. We took each user through the four tasks with our paper prototype, letting them think aloud and comment throughout. Each interview took roughly an hour and ended with general feedback on the project concept and suggestions for improvement. This allowed the user to explain in more detail which parts of the app were confusing once they had a holistic view of the concept.

Test Measures

We were looking to see how intuitive the different aspects outlined in our task descriptions were to the user. We took note of when the user was confused and which aspects were most difficult to understand, and of whether the potato metaphors made sense or added confusion. If the user tapped on something that was not a button, or took a while to work out the next step, we rated that negatively; actions that were simple, and whose purpose the user easily understood, we rated positively. We recorded these positive and negative reactions so that we could determine what needed fixing and which kinds of interfaces we should continue using.


Raw Feedback

We compiled the verbal feedback of our test users into a set of raw comments (see user test feedback in the appendix).

Condensed Feedback

We condensed the raw feedback into the most representative and relevant quotes from our user interactions and created the simplified representation on the left. For more complete feedback, see the raw feedback referenced above.


We have divided our results into two distinct iterations in order to reflect a critical shift in our design methodology. In constructing our first prototype, we became increasingly minimalistic: the user cannot err if there is only one possible action, the correct action, at any given time. While this led to smooth, easy experiences for our users, we realized that part of this smoothness came at the cost of functionality. We were, in effect, avoiding important design decisions by simply not supporting some plausible user goals and tasks. Of course, there is a balance to be struck here: support too many possible actions and the user will be overwhelmed; support too few and they will have little reason to use our app. Our second iteration attempted to correct this balance by fleshing out previously unimplemented functionality and tackling more complicated tasks (like helping the user explore for new ideas). Hence, it would be inappropriate to merge the user feedback for these two implementations.

First Iteration

Our users generally found the first iteration understandable, focused, and simple. On occasion, they were frustrated by our “one way to do it” approach, such as when Louis attempted to edit the individual membership of the “Bar Buddies” group. However, most of their feedback pertained either to the “hot potato” metaphor or to uncertainty about the effects of their actions.

  • Hot Potato Metaphor: Once users caught the reference to the physical game of “hot potato”, they were able to use it as a conceptual framework that guided them through each task, with a few significant exceptions. Interestingly, Vinny, who had never heard of hot potato (he is not a US citizen), came to understand the physical game after completing several tasks in our app. Louis, Vinny, and Adam were all able to understand that being “burned” was bad, but did not understand why it was so named or its connection to the game. Similarly, Nancy found the “potatoes in play” label confusing. As a whole, users thought the potato theme was appropriate, but specific instantiations needed clarification.
  • Pitching Ideas vs. Evaluating Ideas: The two actions of pitching an idea and evaluating proposed ideas form the core of the app, and we have thus far kept them as two distinct phases. However, we found that in our first implementation, users tended to assume the two could be accomplished simultaneously. We included a list of ideas already suggested by the other players to give the user a way to gauge group interest (i.e. a quiet dinner or a night out?), but this had the unintended consequence of encouraging users to try to “like” an existing idea rather than propose their own. Every single user either tried to “like” an existing idea or asked why the suggestions were shown. This critical issue was addressed in the second iteration.
  • Burning: User feedback was consistent with respect to the “burn” counter—they liked the competitive aspect, but were sometimes confused about what was actually happening behind the scenes. For example, upon seeing the “You got burned!” screen, Louis said, “I don’t really know what ‘burned’ means, but I know Billy has more than me, so I’m in good shape.” Adam wondered if some burns were worse than others, and if so, how the loser (if any) would be calculated. Regardless, the interface clearly conveyed a negative result.

Making Changes

During our initial round of user testing, we noticed an opportunity to increase the boldness of our implementation. We subsequently aimed to explore additional features and ways of implementing the Hot Potato game that weren't necessarily conventional. For example, one idea that came up but was soon dismissed was a text-based version of the app for non-smart phone users. As we continued the brainstorming process, we considered exploration and real-time tracking as possible areas for growth. In the end, we decided to focus on improving the experience of exploring a variety of possible activities to pursue. This idea is shown below under "visual exploration".

Second Iteration

Having recognized the need for a shift in our design, we took the feedback from the first iteration and developed a more functional and tuned second iteration. We also took the time here to align ourselves more closely with the Android style guidelines in preparation for our high-fidelity prototype. Our specific changes were as follows:

  • Old welcome screen, compare with Slide #3 of the prototype.

    Welcome Screen: We adapted the welcome screen to the Android guidelines and clarified it by specifying the column names of the current games list. This addressed Nancy’s confusion over whether the displayed time was for an individual turn or for the game as a whole.

  • Old group selection, compare with Slide #4 of the prototype.

    Group Selection: We completely re-implemented group selection to support more functionality and to clarify which users belong to which groups: no longer is there a single drop-down list of manually created groups listed only by name. Users can now drag Facebook groups, individual friends, or previously selected groups from the app's history into a collection box at the bottom of the screen to form the final group. These three sources of group members are separated into a tabbed view pane that shows the name and profile picture of each member.

  • Old group recipe, compare with Slide #7 of the prototype.

    Game Recipe: Previously, the summary page (or “game recipe”) shown just before launching a new game was entirely static. If a user saw that certain information was incorrect, they had to navigate back with the back button to the right setup page to fix the problem. Many of our test users attempted to tap on the three summary components anyway, so we co-opted this interaction to bring the user back to the appropriate setup page.

  • Old idea pitching, compare with Slide #9 of the prototype.

    Pitching an Idea: This screen underwent significant revision, as it was by far the most confusing aspect of our interface for our test users. Users had assumed that they could support someone else's idea instead of having to pitch their own. In addition, while the app previously supplied ample motivation to think of an idea, it did little to help the user actually explore for new ideas. In the new prototype, three options are shown: one takes the user to a visual exploration interface (see Visual Exploration), another displays a pop-up of suggested ideas, for inspiration only, and the final list item chooses a random idea weighted by the ideas already suggested.

  • Visual Exploration: To make it easier for users to think of ideas, we implemented a dynamic screen filled with a hierarchy of idea categories (see slides #10 and #11 of the prototype). If the user taps on a category, or zooms in on an area, the screen focuses on more specific categories related to the parent category and begins to show ideas that fit the current region of categories (e.g. zooming from “Food” to “Italian” to “Expensive” to “Magiano's”). This continues until the user zooms in far enough to pick a specific idea by tapping on it.
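The zoom-to-refine behavior of the visual exploration screen can be modeled as a walk down a category tree. Below is a minimal sketch; the hierarchy and venue names are illustrative placeholders, not content from the prototype.

```python
# Hypothetical category hierarchy: inner dicts are sub-categories,
# lists are concrete, pickable ideas.
CATEGORIES = {
    "Food": {
        "Italian": {
            "Expensive": ["Magiano's"],
            "Cheap": ["Pizza Corner"],
        },
        "Mexican": {
            "Cheap": ["Taco Truck"],
        },
    },
    "Nightlife": {
        "Bars": ["The Tap Room"],
    },
}


def zoom(path, tree=CATEGORIES):
    """Return what the screen would show after zooming along `path`:
    either a list of more specific sub-categories or the concrete
    ideas the user can finally pick from."""
    for step in path:
        tree = tree[step]
    return sorted(tree) if isinstance(tree, dict) else tree
```

Zooming from “Food” into “Italian” first shows the sub-categories “Cheap” and “Expensive”; one more step into “Expensive” yields a pickable idea, “Magiano's”.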

Limitations of this evaluation

Although we were able to test much of the functionality of our interface through this simple paper prototype, we were limited in certain key areas. One of our more involved interfaces, the visual exploration of new ideas, could only be tested in brief snapshots, which both limited the searchable area and precluded the dynamic panning and zooming that touch gestures would allow on a real platform. Similarly, our lack of physical hardware limited our ability to simulate the haptic feedback for turn and burn notifications.

Much of the success of our app hinges on its ability to manage group creativity and teamwork. While we could test its isolated impact on individuals, our methodology restricted us from examining how the app would fare in a true group environment. Just as the group is more than the sum of its individuals, the group interaction is more than the sum of individual interactions.

Finally, we assumed that each user had already used the app to play a game of hot potato before our test, so that they had already integrated it with Facebook and each of their friends had done the same. This pre-setup phase raises a host of issues concerning account creation, Facebook privacy settings, managing Facebook groups, and so on that could plague a real user interface.

Conclusion & Discussion

Throughout this design phase, we learned multiple lessons with regard to our initial design. For example, while the decision to embrace minimalism within the interface was a conscious one, it limited our ability to include potentially interesting features. In fact, we discovered that we still had to make hard design decisions about complex functionality that we couldn't abstract away. With regard to our design choices, we briefly considered switching to a different game model that combined both modes of suggesting and voting on activities, but found that a two-phase solution was actually more in line with our users' expectations. Lastly, the potato metaphor proved effective, although it may have been overused on occasion.

Next Steps

In preparation for the design refinement phase, we will continue our current efforts to redesign the user interface based on the feedback we gathered in our user interactions. In the following, we'll outline the changes we're planning on making to the individual screens. On a larger level, we'll be preparing for the cognitive walkthrough and the initial creation of a prototype.

  • Welcome Screen: We will articulate the login procedure for the application. For example, does it use Facebook or different means for authentication? Similarly, we will also define the experience for a first-time user. If there's to be a tutorial guiding them through the application, we will need to describe it.
  • Who's Playing: The feedback we received for this screen indicates that we will have to lay out the process of creating groups and design the interaction within the "friends" tab.
  • Pitch Screen: One of the approaches we discussed was the ability to switch to a map view showing suggested activities. In a similar vein, we will have to decide exactly how much information we're giving our users on the final (game over) screen.
  • Pick Your Favorites: This screen will only undergo minor changes, as we're still considering the functionality of the OK button. It may also see minor layout changes and receive the ability to display additional information about each suggested idea to the user.
  • Hardware: As this application will be running on phones and tablets, we have an opportunity to make use of additional hardware features of the devices, such as a flashing LED or vibration signal. We can also specify the functionality of the back button on the phone or tablet.

Work Breakdown

                         Chase   Elliott   Sam   Rachel   Sebastian
Usability Interviews        36         9    27       18          10
Prototype Construction      22        22    22       22          12
User Test Script            25        25    25       25         N/A
Brainstorming               22        22    22       22          12
Write-up                    30        10    20       10          30
Website                     75         0     0        0          25

All values are percentages of each task.


  • The user test script is a full description of our testable tasks and goals for feedback and evaluation.
  • The user test feedback contains the raw feedback we noted at each interview.