Obesity in America
This project created a mock website presenting American obesity statistics to test how interactive features, narratives, and data visualization affect user empathy. The team included two other graduate researchers and one faculty member; my contributions were the statistical analysis and the website mock-ups.
According to the CDC, obesity affects approximately 40% of U.S. adults. Health risks increase with obesity, and the economic impact places a significant burden on the American health care system, with related medical costs in the billions. Health campaigns aimed at preventing obesity can use many different formats and designs to communicate their message, and the specific impact those designs have on user empathy, behavioral intentions, and perceptions of severity can make all the difference.
We measured these variables in response to a mock website with obesity information. I designed each experimental condition, specifying for the contracted web designer how the page should look and how interactive it would be. One condition included features like hovering for more information, a slider, and multiple clickable areas on the same page; the other conditions were bare bones, with no interactive features. Participants clicked through all of the webpages and then answered a short survey on the key variables. Once we reached the target sample size, we statistically compared the condition groups on the scale measures to test the influence of interactivity on persuasion.
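The between-groups comparison can be sketched as a one-way ANOVA across the experimental conditions. The condition names and empathy ratings below are invented for illustration; only the shape of the analysis mirrors the project, not the actual procedure or data.

```python
# Hypothetical sketch: comparing condition groups on a scale measure
# with a one-way ANOVA. All group names and ratings are invented.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over lists of scores."""
    all_scores = [x for g in groups for x in g]
    n, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n
    # Between-group sum of squares: group means' variation around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: scores' variation around their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Invented 7-point empathy ratings for three hypothetical conditions.
interactive = [6, 5, 6, 7, 5, 6]
narrative_only = [5, 4, 5, 5, 4, 5]
static = [3, 4, 3, 2, 4, 3]

f_stat = one_way_anova_f([interactive, narrative_only, static])
print(round(f_stat, 2))  # large F suggests the condition means differ
```

A significant F would then be followed up with pairwise comparisons to see which conditions differ from which.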
The full results can be found in the academic paper, but the central takeaway is that more interactive features made users less defensive about their own obesity, which can help persuasive health campaigns reach more at-risk populations. Additionally, the interactive-narrative condition performed statistically better than the groups who received only the interactive data visualization. Interactive maps and infographics are fun, but they are less effective if you want your users to act on the message.
Implications + Improvement
As a research article, this project is done. As a UX project, it’s only just started. This is the information I would take to designers and developers. If the team were working on a limited budget, paying engineers to build the data visualization would not be worth the effort. Instead, my recommendation would be to drive home the message, create better opportunities for conversion (customer behavior), and improve user empathy through an interactive narrative.
Improvements to this project would include qualitative interviews before the survey described here, to confirm that the key variables in our survey actually mattered to users, as well as working with business strategy to align the research with their OKRs.
A Taste of Home
This project focused on a mobile app for international students to order box lunches from select restaurants specializing in authentic cuisine that reminded them of home. It was completed for an upper-level graduate capstone course, and the full deliverable is a 56-page report covering everything from concept statement to a finished prototype. Two other graduate students and I worked on the project over a full semester. Although it was built for a class and never deployed, the work sparked my interest in UX research; it represents my passion for applied research, in contrast to other projects done with academic journals in mind.
Each day, hundreds of Chinese students order food from authentic Chinese restaurants for a taste of something familiar. These restaurants have built a system in which a daily menu is texted out to a pre-made list; students on that list text their choices back, and a pickup time and location is set for the next day. Because food is paid for at pickup, confirming that your order is still there, and correct, can be a hassle. Food2You would streamline this existing service through a mobile application where students can view, order, schedule, and pay for their meals in seconds. The tool focuses on three main issues the current system does not address: ease of menu browsing, simplified ordering of multiple items, and payment security.
Although the restaurants are an important piece of the puzzle, the initial scope of the app focused on the students: serving users well at this stage of development was critical to widespread adoption. Eight students already familiar with the text-based ordering process were recruited for in-depth interviews to establish the context of use and surface pain points in the existing system. From there the team sketched an initial flow model, and an affinity-diagramming exercise among collaborators revealed user behaviors around the existing ordering system, which let us create a task structure model to inform the design and app architecture.
The team had a clear picture of what we wanted to build. From that point, wireframe sketches, storyboards, and prototypes were created for a pilot test. Participants were directly observed completing tasks while they talked through their thought process. Qualitative responses to the tasks indicated no difficulties, and quantitative ratings on a 5-point scale (1 being very easy and 5 being very hard) averaged 1.83 across all participants.
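That summary score can be sketched as a simple aggregation over per-task ratings. The task names and individual ratings below are invented (chosen so the overall mean lands at the reported 1.83); only the reporting approach reflects the project.

```python
# Hypothetical pilot-test difficulty ratings on a 5-point scale
# (1 = very easy, 5 = very hard). Tasks and scores are invented.

ratings = {
    "browse menu": [1, 2, 2, 1],
    "order multiple items": [2, 2, 3, 2],
    "schedule pickup": [1, 2, 2, 2],
}

# Per-task means flag which flows might need another design pass.
per_task = {task: sum(r) / len(r) for task, r in ratings.items()}

# The single overall mean is the kind of summary score reported above.
all_scores = [x for r in ratings.values() for x in r]
overall = sum(all_scores) / len(all_scores)

print(per_task)
print(round(overall, 2))
```

In practice, per-task means are more actionable than the overall score, since one hard task can hide behind several easy ones.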
Implications + Improvement
This project allowed me to get my hands dirty with UX research and see the design process unfold from evidence-based insight. The team started with a problem, systematically gathered evidence, came up with a structure that would ease some of our users' anxiety around the current ordering process, and iteratively built on ideas to bring our prototype to life. If this were a product we wanted to fully develop, the next step would be to set up trials of the app with real-world users and then continue building on feedback to improve the overall user experience.
Science Audience Feedback Tool
This project was the focus of my dissertation at UT and involved months of prior research just to establish the argument for the study in the first place. Academic research has a bad habit of not trusting expertise without pages of justification. The meat of the project lies in the interview, survey, and analysis stages. Because the work was presented at my doctoral defense in early 2021, I carried out every stage of the research on my own.
Scientists committed to communicating their research often have a difficult time measuring their successes and areas for improvement. One way they maintain a positive impact on their audiences (through social media or in-person events at museums) is through public engagement. The term is thrown around in different contexts, but experts across fields agree that high-quality public engagement centers on two-way dialogue with audiences, elements of interaction and participation, and trustworthiness. Scientists are also always trying to communicate new research and frame science in a way that is exciting and resonates with people’s past experiences, and this is difficult to measure adequately. On top of their institutional obligations, scientists have little time or incentive to improve problem areas in their engagement efforts. One way to ease the burden is to present their audiences with a short questionnaire measuring responses to key elements of engagement. This response scale would be a key deliverable for communication training centers and scientists.
Prior research was gathered to determine the key areas of public engagement with science. From there, an initial list of 41 items, built from existing scales as well as newly created items, served as the starting point. In traditional scale-development procedures, these items must be validated by subject-matter experts, so to confirm that I was measuring what I claimed to measure, I conducted 13 unstructured interviews with science communication practitioners and researchers. The transcripts from these interviews were analyzed for emergent themes consistent with the areas represented by the 41 items. The items were then distributed through Qualtrics to a national quota sample, along with other items for comparison against the new scale. Exploratory Factor Analysis and Confirmatory Factor Analysis were performed on survey results from 400 respondents.
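The winnowing from 41 items down to a short scale can be sketched as an item-retention rule applied to the factor loadings. The item names, loading values, and cutoffs below are invented (the .40 primary-loading and .30 cross-loading thresholds are common rules of thumb, not necessarily the ones used in the dissertation).

```python
# Hypothetical sketch of the item-retention step after an EFA.
# All item names and loadings are invented for illustration.

loadings = {
    # item id: (loading on factor 1, loading on factor 2)
    "dialogue_1": (0.72, 0.12),
    "dialogue_2": (0.65, 0.28),
    "trust_1": (0.18, 0.81),
    "trust_2": (0.35, 0.44),    # cross-loads on both factors: drop
    "novelty_1": (0.38, 0.22),  # weak primary loading: drop
}

def retain(item_loadings, primary=0.40, cross=0.30):
    """Keep items with a strong primary loading and no sizable cross-loadings."""
    kept = []
    for item, ls in item_loadings.items():
        mags = sorted((abs(l) for l in ls), reverse=True)
        top, others = mags[0], mags[1:]
        if top >= primary and all(l < cross for l in others):
            kept.append(item)
    return kept

print(retain(loadings))
```

Applying a rule like this item by item, then re-running the factor analysis on the survivors, is what eventually yields a compact scale with one clean item per engagement area.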
The factor analysis produced a 12-item scale that adequately represented the key areas of public engagement with science, with one question from each area loading onto the same factor. The result is a simple but effective tool scientists can use after giving a short demonstration at a museum, presenting their research at a science café, or finishing a Reddit AMA. Survey results can show scientists which areas of their communication need work and which connected with audience members.
Implications + Improvement
This was a huge project to work on solo. Whenever friends and family asked me about the process of a PhD, I would compare the dissertation to the driving test for a driver’s license. By that point you’ve passed the test on the rules of the road and even driven with an instructor or experienced driver to guide you. The driving test, like a dissertation, is your time to show your skills. I’m extremely proud of this project, and although I welcome collaboration on all future projects, I’m glad I finished this one to demonstrate my expertise to other researchers. At the time of writing, the scale itself has never been implemented and the findings have not been formally published.