We designed something that couldn’t work — a UX case study
My Role: UX Researcher | Duration: 1 week | Project Status: Delivered

Project Summary

This case study covers the process of creating a web app that worked on desktop and mobile devices. Our research consisted of various workshops and collaborative methods alongside students in General Assembly's software engineering boot camp. It also covers how our first solution was based on non-existent technology and what we did about it in our final design. Spoiler: this story lacks a happy ending. Our process followed four phases.

Discover

We began with a brainstorming technique called topic mapping. The goal was to identify a potential area of life where people might have a problem. We chose the general area of digital trials and subscriptions.

We drew up a hypothesis: we think users easily lose track of digital subscriptions and end up paying for services they don't use. People want a way to better control these subscriptions.

And a few assumptions that supported it:

- Users are paying for subscriptions to products or services they don't use.
- Users are highly likely to forget they have a subscription to a digital service.
- Service providers communicate with subscribers in a way that makes subscriptions difficult to keep track of.

We then summarized our work in a paragraph: many companies have adopted a digital subscription model as their main form of revenue. However, service providers communicate with subscribers in a way that makes it difficult for users to keep track. As a result, users are paying for subscriptions to products or services they don't use.

And then we asked ourselves: HOW MIGHT WE help users manage their digital subscriptions?

Screener Surveys

Our next step was to interview people and see if and how this problem manifested in the real world.
To find people to talk to, we created a brief survey on Google Forms that screened out people who had little to no experience managing trials and subscriptions. Our goal was to find at least five people to interview who had some experience in this area and could provide us with insight.

Interviews

We interviewed five people who shared their experiences managing digital trials and subscriptions. Here are three key quotes we pulled out:

"I assume they're not going to say like, 'Hey, you haven't used this in four months. Are you sure you want to still keep giving us money?' It's kind of an out of sight, out of mind thing." — Peter the Elder

"I got the free trial but you also have to put your credit card down. And after the trial some months went by, and I was like paying and paying and paying." — Mark

"In terms of managing my subscriptions, I don't do it. I'm sure other people do. But that's not really my forte, to be on top of that sort of thing." — Jackie

Affinity Mapping

We used a process called affinity mapping to capture trends across our interviews. One consistent trend was that everybody had, at some point, discovered they were paying for a subscription they weren't using anymore. Typically, people would join a service intending to cancel it soon after, then forget about it. Months would go by, and people would only notice they were being charged after checking their bank statements.

We grouped these trends using a method called I-statements. The goal was to frame the comments, quotes, and observations from all five interviewees into a first-person narrative.

Define

Persona

We then designed a fictitious character based on our insights whom we named Jamie. The goal was to further humanize our data so that we could be sure our solution was addressing a real problem in the real world.
From here on in the case study, we will reference Jamie as the typical user experiencing the trial/subscription problem and the person the solution is catered to.

Journey Map

We used a method called journey mapping to illustrate Jamie's experience of discovering and canceling a subscription. Our journey map captured Jamie's emotional experience as well as the technology he was using at specific points in the journey. We realized that we probably couldn't design a solution that canceled Jamie's subscriptions for him. Instead, we decided to design a solution that could help Jamie identify subscriptions he was paying for but wasn't using.

Design

First Design

We first designed a mobile web app that allowed Jamie to create an account, find subscriptions he wasn't using, and cancel them. Our design worked on the premise that there was an API out there that could retrieve data from Jamie's bank account to discover what trials and subscriptions he was paying for. We also assumed there was a way to track how often Jamie used specific subscriptions, regardless of what device he was using them on. This latter assumption proved to be very wrong, as we will soon see.

Testing

We tested our first design with five people. The test consisted of walking through three tasks: account setup, sorting subscriptions, and canceling a subscription. We measured how long participants took to complete each task and how easy they found the design to use during each one. We learned that our design fell short in two major areas:

- It looked gimmicky and untrustworthy. The amateurish branding, which referenced the Ghostbusters movies, was lackluster, and test participants were uncomfortable with the idea of handing their sensitive financial information over to the site.
- It had odd naming conventions, leaving test participants confused about what would happen when they clicked on certain buttons.
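The first design's bank-account premise can be illustrated with a small sketch: given a feed of card transactions, flag merchants that charge a fixed amount on a roughly monthly cadence. Everything here is a hypothetical stand-in — the transaction shape, merchant names, and thresholds are illustrative assumptions, not the API we imagined existed.

```javascript
// Sketch: flag likely subscriptions by looking for merchants that charge
// the same amount roughly once a month. The transaction shape below is a
// hypothetical stand-in for whatever a real bank-data API would return.
function findLikelySubscriptions(transactions) {
  const byMerchant = new Map();
  for (const tx of transactions) {
    if (!byMerchant.has(tx.merchant)) byMerchant.set(tx.merchant, []);
    byMerchant.get(tx.merchant).push(tx);
  }
  const results = [];
  for (const [merchant, txs] of byMerchant) {
    if (txs.length < 2) continue; // need at least two charges to see a cadence
    const amounts = new Set(txs.map((t) => t.amount));
    if (amounts.size !== 1) continue; // subscriptions charge a fixed amount
    const times = txs
      .map((t) => new Date(t.date).getTime())
      .sort((a, b) => a - b);
    const gapsInDays = [];
    for (let i = 1; i < times.length; i++) {
      gapsInDays.push((times[i] - times[i - 1]) / 86400000); // ms -> days
    }
    // Accept gaps of roughly a month (28-32 days)
    if (gapsInDays.every((g) => g >= 28 && g <= 32)) {
      results.push({ merchant, amount: txs[0].amount });
    }
  }
  return results;
}

const sample = [
  { merchant: "StreamFlix", amount: 9.99, date: "2020-01-05" },
  { merchant: "StreamFlix", amount: 9.99, date: "2020-02-05" },
  { merchant: "StreamFlix", amount: 9.99, date: "2020-03-05" },
  { merchant: "Coffee Shop", amount: 4.5, date: "2020-01-07" },
  { merchant: "Coffee Shop", amount: 4.5, date: "2020-01-09" },
];
// Logs only StreamFlix: fixed amount, monthly cadence.
console.log(findLikelySubscriptions(sample));
```

Even this sketch shows why the idea was only half of our design: it can find what Jamie pays for, but says nothing about what he actually uses.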
The Big Mistake

Before moving on to our final design, we met with a couple of senior UX designers to share our research and solution. We discussed our design and how it would work. Or, as we discovered, how it couldn't work. They pointed out that the main feature of our app, the one that tracked Jamie's use of his subscriptions, was not possible because the technology does not currently exist. Team morale was at an all-time low. There we were, at the bottom of the ninth, discovering that our main idea was invalid.

Second Design

We were too far in to start from scratch. We had to see our idea through to the end. We consulted with software engineers and kicked around some ideas about how to make our current solution work. Nothing worked. We decided to rely on the all-pervasive power of a browser plugin to track which subscription services Jamie used. This was only a partial solution, because many digital subscriptions are accessed through their own apps rather than in a browser. Our second design addressed the feedback from our first round of testing as well as how Jamie would install the browser plugin.

Testing

Since our first design was not feasible, we had to rethink and redesign the tasks from our first round of testing as well. This made it impossible to compare the results from the first round of testing to those from the second round and track progress. So while the second round was useful for identifying how our redesign worked, it didn't help us understand whether the redesign was better than the first design. We did learn a few things from this round:

- The Ghostbusters-themed branding was still not taking off. We would have to remove the Ghostbusters references.
- Vague naming conventions continued to be a problem. We would have to rethink how buttons were named and where they were placed for maximum clarity.

The End

The project was a learning experience.
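The bookkeeping a browser plugin would have to do can be sketched as a pure function that tallies time spent on known subscription sites. The domain list and visit records are illustrative assumptions; a real extension would feed in tab events, and, as noted above, it would miss anything Jamie used outside the browser.

```javascript
// Sketch: the core bookkeeping a browser plugin could do to estimate how
// much a user actually visits subscription services. The domain list and
// visit shape are illustrative; a real extension would feed in tab events.
const SUBSCRIPTION_DOMAINS = ["streamflix.example", "tunebox.example"];

function tallyUsage(visits) {
  // visits: [{ domain, minutes }]
  const usage = {};
  for (const domain of SUBSCRIPTION_DOMAINS) usage[domain] = 0;
  for (const visit of visits) {
    if (visit.domain in usage) usage[visit.domain] += visit.minutes;
  }
  return usage;
}

const usage = tallyUsage([
  { domain: "streamflix.example", minutes: 45 },
  { domain: "news.example", minutes: 10 }, // not a subscription; ignored
  { domain: "streamflix.example", minutes: 30 },
]);
// tunebox.example stays at 0 minutes: a candidate to flag for cancellation.
console.log(usage);
```

A subscription domain with zero accumulated minutes over a billing period is exactly the "paying but not using" signal the design needed.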
We started off thinking we were solid and good to go, but slowly we realized we had gotten ourselves to a place where we would have liked to restart, yet that wasn't an option. We did what we could in the time that we had.

We learned that we should have spent more time planning and considering how our app would work before diving into the design. We learned that it doesn't hurt to consult with senior designers earlier in the process. And we learned how to bounce back, recoup momentum, and do the best we could to get the project done, even though we knew it might not be portfolio-worthy in the end.

We designed something that couldn’t work — a UX case study was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.