Our system is a redesign of the Microsoft Word sidebar for performing mail merges. Our goal is to make mail merge work best for power users, since they merge often and would thus benefit the most. Our redesigned sidebar makes fields visible and readily placed within the document. We also designed an intuitive system for managing conditionals, a feature requested by the power users we interviewed. Another goal was to make organizing the user's database easy and manageable entirely within our tool.

The goal of the interviews was to find out which portions of our deliverable were most useful and which needed the most work. We allowed each user to choose her own goal to accomplish with the paper prototype and asked her to talk us through her thoughts during the process. We observed the user and helped when she got stuck.


The first system we created is shown in prototype1. The user would open the mail merge process the same way she was accustomed to in Word. Our sidebar would then appear, organized so that the user could work from top to bottom. The directions button would bring up the dialog in directions1. Our goal with the directions dialog was to tailor the directions to the user's specific goal. For instance, if the user wanted to write a letter, she could click that button to display a set of directions. In our actual interviews, however, clicking one of these buttons prompted us to describe the steps necessary to perform the action.

The user would then have the choice of writing the document or managing databases. Part of our model was a feature called a data collection, which would stay associated with the Word document. First, the user would select a database from which to pull the original data. The system would then prompt the user for which fields to include. After selecting the pertinent fields to be stored in the data collection, the user could modify this data without changing the data in the original database. An option for updating the data collection with the data from the original source is also available here. The headers for the fields are textboxes, prompting the user to rename the fields so that they make sense. The user could also pull up recently used data collections from this screen.
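As a rough sketch of how the data-collection behavior described above might work (the class and field names here are hypothetical illustrations, not part of the prototype):

```python
class DataCollection:
    """Snapshot of selected database fields, editable independently of the source."""

    def __init__(self, source, fields):
        self.source = source                  # original database: list of dicts
        self.names = {f: f for f in fields}   # source field -> display header
        self.records = [{f: row[f] for f in fields} for row in source]

    def rename_field(self, old, new):
        # Headers are textboxes in the prototype; renaming affects only the copy.
        src = next(k for k, v in self.names.items() if v == old)
        self.names[src] = new
        for rec in self.records:
            rec[new] = rec.pop(old)

    def update_from_source(self):
        # Refresh values from the original database, keeping renamed headers
        # but discarding local edits to the data.
        self.records = [{disp: row[src] for src, disp in self.names.items()}
                        for row in self.source]
```

Local edits to `records` never touch `source`; `update_from_source` re-pulls the original values, mirroring the update option in the sidebar.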

After setting up the data collection, the user would see the "Fields" portion of the sidebar fill with buttons labeled with the headers of the fields in the data collection. The user could then drag a field into the document to insert it; alternatively, she could double-click the field button to insert it at the position of the cursor. A user could create a group of text and fields by highlighting the text and then clicking the button under the "Groups" portion of the sidebar. Groups would allow a user to bundle the fields and formatting for, say, an address into a single item.
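A group like the address example would behave much like a small template expanded once per record. This sketch uses Python string formatting and illustrative field names, not anything from the prototype itself:

```python
# A "group" bundles static text, fields, and formatting into one reusable item.
# Field names here are illustrative.
address_group = "{First} {Last}\n{Street}\n{City}, {State} {Zip}"

record = {"First": "Ann", "Last": "Lee", "Street": "1 Main St",
          "City": "Springfield", "State": "IL", "Zip": "62704"}

rendered = address_group.format(**record)  # expands the whole group at once
```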

The "if" button under "Conditions" would allow the user to create conditions for fields. For instance, the user might specify that if a given field is "yes," then a piece of personalized static text is displayed.
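In effect, a condition attaches an if/then test to a field. A minimal sketch of that behavior (the function and field names are hypothetical, and this is not Word's actual field syntax):

```python
def render_condition(record, field, equals, then_text, else_text=""):
    """Insert then_text only when the record's field matches the given value."""
    return then_text if record.get(field) == equals else else_text

record = {"Name": "Violet", "Newsletter": "yes"}
line = render_condition(record, "Newsletter", "yes",
                        "Thanks for subscribing, " + record["Name"] + "!")
```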

Finally, the pages of the document would default to being sorted by last name and first name. In preview and edit mode, a user would see the name of the current record display in the box in the lower right and could move back and forth across records. Next to the previewing buttons are the final options of printing or emailing the merged document.
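The default ordering amounts to a two-key sort over the records. A quick sketch with illustrative field names:

```python
records = [
    {"first": "Carol", "last": "Smith"},
    {"first": "Alan",  "last": "Jones"},
    {"first": "Betty", "last": "Smith"},
]

# Default page order: sort by last name, then first name.
ordered = sorted(records, key=lambda r: (r["last"], r["first"]))
# Preview mode would step back and forth through `ordered` one record at a time.
```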

The second prototype, prototype2, incorporated some changes from feedback in our first two interviews. The data collection page is now the first screen to come up during a mail merge. From here, a user can see which Word documents are associated with a selected data collection, and the mechanism for adding multiple databases is refined. The Conditions group is replaced with a single button that creates a condition. This prototype also gives more attention to sorting and printing, allowing the user to set filters on which records to print or email.



Before making the original paper prototype, we decided to design for our persona Violet. We determined that the power user would gain the greatest benefit from our changes, and Violet is the persona that embodies the power user. Of course, we also wanted to make sure that the system was still usable and intuitive for the infrequent user. So, we interviewed a power user twice and an infrequent user once, for a total of three interviews. We felt that this balance of users reflected our design focus, and that these interviews gave us sufficient data with which to move forward and make informed design decisions.

Task Scenarios

There were two basic tasks performed, one by each of the user types.

The infrequent user merged labels, which was relatively straightforward: the participant simply had to create the appropriate document, populate it with the desired fields, and then print. We wanted to confirm that the interface did not present any large obstacles for someone new to the system, so that completing the task required little direction and caused as little confusion as possible. This was a streamlined path that did not involve any of the advanced features.

The power user, on the other hand, created a more complicated document that mixed elements of letters and labels. The participant was consistently prompted to perform tasks that explored the advanced and admittedly more difficult-to-grasp features. These tasks included defining a new customized field, filtering several fields for different output, sorting the print order, emailing the final output, and importing and linking multiple data sources. By doing this, we were trying to determine whether our advanced features were sufficiently understandable, customizable, and powerful, and easy enough to reach from the main interface.


After finishing the prototype, we ran through a sample scenario without any users to prepare for the interviews. This helped us determine which supplies to bring along and how long an average user would take to go through a scenario. We had talked with all of the interviewees in a previous phase, so they were already familiar with us, the project, and the general concept. We began each interview by presenting the participant with the paper prototype and explaining how to interact with the "computer." All of the options were set to their defaults, and we indicated who would play the computer. We then asked the user to perform a task that they were comfortable with in their original mail merge application (these tasks are described above in the Task Scenarios subsection).

During the actual interaction, we let the user stumble for a while, in the hope that they would eventually explore our prototype and discover the solution to a particular problem on their own. If this went on too long, to prevent frustration, we would intervene by playing the part of an "incredibly intelligent help system" and have them ask the computer questions. If the task still seemed too difficult, the computer would explain the general case and concepts, and then ask the user to work out the individual steps needed to complete the task on their own. This was usually enough for the participant to figure it out; when it was not, we stepped through the problem with the user and moved on.

When the scenario was completed, we asked the user to perform additional, more difficult tasks (usually linked to the first scenario, so as not to start over from the beginning of the process) in order to gauge the effectiveness of our advanced features; we focused more on this with the power user. With the interaction finished, we asked the participant for feedback on the new system our prototype presented and for any recommendations or requested features. Each interview lasted about an hour.

Test Measures

In addition to the task-specific points above (see the Task Scenarios subsection for what we were looking for with our tasks), for both users we also watched consistently for more mundane observations that would help us improve our design. Specifically, we looked for items that could be better labeled or better positioned, and for points where indications of mode or functionality would be useful. In general, we kept a lookout for areas where tasks could be performed with greater simplicity, less time, and less cognitive friction, and we made sure to note any trouble spots for the user.

To read about what happened and what we took away from these interviews, go to the results and discussion page.