
History Vault: A Simplified Browsing Experience for Video Streaming


Read about my UX research work in general at A+E here

Unlimited video streaming for history buffs looking to learn

Check out the prototype!
App for iOS:
 
Overview

This project for A+E Networks was designed for their History Vault product, an ad-free, subscription video streaming app built around earlier History Channel content. History Vault is available on iOS, Android, web, Roku, and Apple TV, and will shortly be available via Samsung’s smart TV platform, Tizen. Within the app, the main navigation comprises Home (containing featured video playlists) and All Collections (containing all videos grouped into themed or edited categories). For this project, we replaced All Collections with a new Browse experience.

Team

Me (UX research and design) and Adam Kendall (UI design)

Timeframe

1 sprint for UX work only, followed by 1 sprint for UI, approximately 4 weeks in total. Approximately 1 additional week for card sorting was allocated after the initial timeline, plus time as needed for handover and UAT.

Limitations & Parameters

Because this was a 1-sprint project for UX and I was the only researcher on the project, the main challenge was time management. In particular, time was very limited for user research, so the majority of the insights were pulled from previous research projects. Ultimately, 1/3 of the original 1-sprint timeframe went to research, 1/3 to design, and 1/3 to usability testing. For more on scoping the project to set limitations, see the Getting Started section below.

Resources & Materials

We started our project management using a whiteboard and sticky notes, then moved into Jira. We made low-fidelity sketches on paper, then used Sketch to create our mid- and high-fidelity wireframes, linking them with InVision. For research we used InVision Boards and Optimal Workshop. We used UserTesting for usability testing. Finally, we used Zeplin for the handover to developers.

 

Getting Started

The project started as an initiative from product, with prior input from design, to offer a different and better user experience for browsing content. We’d noticed many challenges specific to the History Vault product that we weren’t experiencing with another product under the same streaming video on demand umbrella, Lifetime Movie Club. LMC featured a simpler, genre-based browsing experience. Our hypothesis was that the Collections moniker was causing confusion among users, and that offering Browse instead would improve the user experience.

 

We began by assembling all design team members plus product for a collaborative mapping exercise to visualize all possible elements of Browse and determine scope, both initially and further out. With sticky notes, we mapped out each potential piece of the experience and how they might fit together. Eventually, we grouped these into MVP, second priorities, and third priorities. We determined that each of those subsets would be one sprint’s worth of work for UX and UI each, in that order. The MVP set would be designed right away, while the second and third priorities would be put in the backlog, pending results from the MVP.

 

Based on the exercise, our initial focus was on the mandatory elements of Browse, with limited filters and sorts, for a single platform: iOS. The filters chosen were for videos that were expiring or not yet watched. The sorts were alphabetical order, recently added, popular, and air date. The second and third priorities would be additional filters and sorts, and other platforms. To assess the success of the design, Browse would replace All Collections for a limited run in an A/B test; replacing the tab outright would prevent the two experiences from causing confusion or competing with each other. We would then observe how Browse performed as an experience, and how it performed in comparison with Collections.

 

Research & Synthesis

Given the short time frame, I decided to lean heavily on user research collected over time through various sources.

 

The majority of the qualitative research was drawn from a long-term user interview project we were running around the same time, with broad goals of learning about user preferences and behavior on H-Vault. At the time, we had the most data on current subscribers, many of whom provided information relevant to the Browse project. A short questionnaire preceding the interviews provided additional data. Secondarily, we had plenty of relevant snippets gathered from research projects for other features and from usability tests.

 

There were two sources of quantitative data I drew from. The primary source was data analytics from our partnership with Amplitude, which allows us to see usage statistics, create and analyze subgroups, make funnel charts, and much more. The secondary source was a long questionnaire sent to users and former users annually, which was then summarized in a set of data visualizations.

 

Opting for an informal synthesis over a formalized process, I noted the major takeaways:

  • Users are confused about what Collections are

  • Users are confused about how Collections work

  • Users believe there’s much less content than there is (the only way to see the full catalog is to go into all the Collections)

  • Users are disoriented by Collections that come and go on the app (they’re refreshed periodically)

  • Users don’t use search often

  • Users are disappointed that search doesn’t work for topics (MVP search only finds titles)

  • Users are frustrated that they can’t find topical information, such as a specific war or region

  • Collections has low usage volume

  • Users want to see all the content available for streaming

  • Users want to organize the content and have strong opinions on which filters and sorts they would use

 

The findings validated our theory that a new Browse experience devoid of Collections might serve as a successful middle path between the editorially driven Home, with its predetermined playlists perfect for users wanting a lot of guidance, and the user-initiated search bar, made for those who don’t want any guidance but need to know exactly what they’re looking for. Browse, ideally, serves the user who has an idea of what they want but doesn’t know the exact title.

 

To supplement the user research, I conducted a competitive analysis of 11 diverse competitors, some across multiple platforms (iOS + Android or web). As usual, I targeted a mix of large-volume mainstream video streaming apps, like HBO, and smaller niche apps, like Curiosity Stream. This approach often reveals both the most technologically innovative and the most applicable ideas, and sometimes the downsides of different designs.

 

These are the competitors targeted for analysis: HBO Now, Netflix, Prime Video, FilmStruck, Hallmark Movies Now, Curiosity Stream, Hulu, Sundance Now, Acorn TV, Spotify, and Mubi.

I opened each of the apps to look at their browsing experience, taking screenshots and organizing them by competitor on InVision Boards. I then took some notes to synthesize the information.

 

The main finding was that most competitors don’t offer robust filtering and sorting. For one reason or another, they rely on just a few functions, if they offer them at all. A few of the apps, such as Mubi, have a highly curated catalog that may not need filtering or sorting, but for some of the larger competitors, it was a surprise. Where it was offered, browsing and filtering by genre was the most common method, a good indication that users mentally break down general content by genre. However, given the specificity of History Vault, we would have to adapt categories that made sense for our content. In all, no single competitor offered a comprehensive, integrated experience. I chose to draw inspiration from Netflix and Hulu for sorting, and from FilmStruck and Amazon Prime for filtering. I made some notes on their functionality and appearance, and wrapped up research.

 

Ideation & Design

There were a couple of elements I wanted to consider in early ideation:

 

First, the function of filter and sort: I explored the idea of single vs. stacking filters and sorts, examining logically what would be possible. I determined that stacking filters would be ideal, but stacking sorts would be too complex, so a single sort would be sufficient (see the sketch after this list).

 

Second, the style of the filter and sort menus: I explored 7 variations based on design patterns such as checkboxes and radio buttons, each in 4 states: unselected, open menu, option selected, and final state.

 

Third, the number of menus: I explored separate menus vs. a single combined menu for filter and sort.

 

Fourth, the location on screen: I explored 4 variations on where to place the menu.

 

Fifth, the scope: I explored whether to offer both forwards/backwards sorting and filtering in/out, or one-way refining. I determined that for the MVP a single direction was sufficient.

 

Sixth, the copy: for five terms that might be used in the app, I explored two copy variations each.
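As a rough illustration of that first point, here is a minimal sketch in Python (the app itself is native iOS, and the Video fields and sample data below are hypothetical, not H-Vault’s actual data model). Stacked filters combine cleanly as a logical AND, while a second sort key would simply override the first, which is why one sort is enough:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical video record; fields are illustrative, not H-Vault's schema.
@dataclass
class Video:
    title: str
    added: date      # date added to the catalog
    watched: bool    # has the user watched it?
    expiring: bool   # is it leaving the catalog soon?

def browse(videos, filters, sort_key, reverse=False):
    """Apply any number of stacked filters (logical AND), then exactly one sort."""
    for keep in filters:
        videos = [v for v in videos if keep(v)]
    return sorted(videos, key=sort_key, reverse=reverse)

catalog = [
    Video("Knights of Camelot", date(2018, 3, 1), watched=True, expiring=False),
    Video("Freedom Summer", date(2018, 9, 15), watched=False, expiring=True),
    Video("Mega Tsunami", date(2019, 1, 2), watched=False, expiring=False),
]

# Filters stack: expiring AND not yet watched. Sorts don't stack the same way;
# a second key would override the first, hence a single "recently added" sort.
results = browse(
    catalog,
    filters=[lambda v: v.expiring, lambda v: not v.watched],
    sort_key=lambda v: v.added,
    reverse=True,
)
print([v.title for v in results])  # ['Freedom Summer']
```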


With each of these elements, I wrote or sketched out a few variants on paper. When it was helpful, I drew out full paper wireframes held together with tape. I then reviewed the variations with the whole design team and product. After discussing, I decided to move forward with two main ideas, a full-screen style and a drop-down style. Both of them featured a single refine option encompassing filter and sort, located on the top right-hand side. I transferred the ideas into Sketch, making two sets of low-fi wireframes and two InVision prototypes.

 

Key wireframes from the low-fi set:

Style A: full-screen; Style B: drop-down

Testing & Iteration

With the low-fi prototypes done, I wrote a script for the first round of usability testing. The main objectives were to find out whether the Browse experience worked as intended and how easy it was to use. The script comprised some data-gathering questions about streaming usage, a set of 4 tasks with an eye on task success and ease of use, and some questions about the refine features, followed by a similar set of tasks for the alternate prototype and a few closing questions comparing the two. I alternated which style, A or B, testers saw first.

 

For round 1, I chose to do in-person usability testing with a small pool of volunteer testers. I ran recruitment through a listserv and scheduling with Calendly. What’s valuable about in-person testing is that you can see body language and movement for a fuller picture of user reactions. In addition, the ability to redirect the user or slightly improvise on the script can be useful. Since it was the first test for Browse, I made sure to pay extra attention to the unspoken details.

 

In this case, the initial testing validated the placement of the refine menu on the top right, the indicator on the top left when refine has been applied, and the concept and function of applying multiple filters. Most users expected those elements to behave the way they did, and expressed positive emotions about the design. Users expressed a preference for the drop-down style, but not universally, so I decided to continue testing to gather more data. A few other points of confusion or negativity came up in testing, particularly around deselecting items and resetting after filtering and sorting, so I decided to tweak the prototype and address them in the next round.

 

For round 2, with the help of UI, we made a set of mid-fidelity prototypes that added a Browse landing page and a topic detail page listing different categories. I used UserTesting to recruit a large pool of remote testers, using a similar script; its Balanced Comparison feature alternated showing the A or B style first. As usual with UserTesting, I watched the recorded tests, making notes and clips when useful, then synthesized the findings and shared them with my team. This time around, we got further validation for the drop-down model. However, there was some confusion about the full-screen model only having an X to exit (rather than an apply button, which we had deemed redundant), and no way to go back from a topic detail page (a small oversight on the mid-fi prototype). In response, we made each filter or sort apply on a single tap in the full-screen style and added the missing back button on the topic detail page.

 

In all, we received generally positive feedback on the actual filters and sorts. As one user put it: “I’ve never seen these filters before. I’d love filters like these!”

 

For round 3, I created a further refined set of prototypes, along with a simplified task script, since the core hypotheses had been validated in earlier rounds. I used a medium pool of remote testers on UserTesting. We received the final validation needed to go with the drop-down style, plus a couple of user behavior notes around resetting and navigating back that we will be watching at launch. All in all, the three rounds of testing gave us broad validation of the design’s function and style, and we were able to refine usability over the multiple iterations.

 

A user quote about the drop-down style: “Seeing the real-time filtering going on below is so helpful.” This was particularly useful because our initial hypothesis was that the full-screen style would be more popular, given its sleek look and the way it prevents distraction. Happily, usability testing steered us toward the option users preferred: the drop-down. See the final prototype at the top of the page.

 

Summary

Throughout, we saw high user demand for filtering and sorting while browsing, as a way for users to customize their experience. We feel that with Browse, we were able to create a new path between the fully user-directed search and the editorially directed homepage. In this sense, the project delivers utility not previously leveraged. In addition, it stands out from the competition, which we saw hadn’t capitalized on this user need.

Post-Design Card Sorting

After the design was complete, I put aside time in a different sprint to conduct a card sort on H-Vault videos. We had never gathered data on user-made groupings before, and I thought it might be useful since Browse was in the development backlog and editorial would therefore have to put together some topics for the Browse page. Although design doesn’t drive editorial decisions, I thought providing user research would help editorial make their choices and justify them to the management team. It turned out that I was introducing a new technique to most of the team, and a new tool (OptimalSort) to all, so I documented the process as thoroughly as possible to share later.

 

My main objective was to gain insight into user-perceived groupings and categories around H-Vault content, and secondarily to gain insight into dissimilarities and divergences in content.

 

The potential applications were broad, but I initially envisioned the sort informing Topics under Browse. Additionally, it could inform editorial choices around Collections in the meantime and, further on, inform search by keyword if that’s implemented.

 

For the internal setup, I had to narrow the pool of approximately 630 unique H-Vault video titles down to 140 to sort. 140 is the upper limit allowed by the sorting platform, and also a good upper bound on how much you can ask a user to do before they quit; it takes a long time and a good amount of thinking to sort over 100 cards. From the list of 630, I selected based on two criteria: maximum range (time, location, theme, etc.) and minimum redundancy (e.g., two titles in the same series). I wanted a good spread of titles mirroring the range of H-Vault, and an even sample set.

 

Here are a couple selected titles that became cards in the sort:

  • Secret Service: JFK to Watergate

  • The Royal Navy: England’s Wooden Walls

  • Angkor Wat: The Eighth Wonder

  • Knights of Camelot

  • Secrets of the Seven Seals

  • Freedom Summer

  • Korea: The Forgotten War

  • Mark Twain: His Amazing Adventures

  • 9/11: The Days After

  • Mega Tsunami

  • Da Vinci’s World

  • Old Testament Heroines

 

I then set up the two tools, UserTesting and OptimalSort. In UserTesting, I screened for a panel that included all of the following traits, as a proxy for an H-Vault subscriber:

  • US-based

  • Subscribes to video streaming services

  • Has watched a documentary/nonfiction video in the past month

  • Has watched history-related content in the last 3 months

 

UT would be used primarily for recruitment, screen/audio recording, and follow-up, paired with OptimalSort for sorting logistics and data analysis. In OptimalSort, I selected the open-style sort, in which participants name all categories themselves, because we were looking for users to generate topics and reveal how they grouped videos within those topics. As with all sorts, there is one card per title, no limit on the number of categories or cards per category, and each card may only be sorted into one category.

 

After launch, our metrics yielded 15 sorts officially completed as marked by UT (the number was bounded by the maximum allowed per test on UT), 3 additional mostly complete sorts that were counted toward the final data set, and 6 partially complete or abandoned sorts that were not counted. The median completion time was 29 minutes.

 

To synthesize the results, I initially watched the UT videos and made notes, which was interesting because some users changed their minds midway through sorting or posited several different theories of how to sort their cards. In the end, given the number of cards and users, I leaned more on the summaries created by OptimalSort to finish my synthesis.

 

Right off the bat, I was able to see the most popular user categories:

  • Action

  • American History

  • Ancient Empires

  • Biographies

  • Crime

  • Historical Drama

  • Medicine and Disease

  • Military History

  • Mysteries

  • Myths

  • Nature

  • War and Military

 

Optimal Workshop creates a similarity matrix, which calculates the percentage of users who grouped any two cards together. It’s presented visually as a triangle, with the highest similarities along its hypotenuse. Scores are color-coded, with a darker color indicating higher similarity.
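To make that calculation concrete, here is a minimal sketch of how such a matrix can be computed, using made-up sort data rather than the study’s actual results:

```python
from itertools import combinations

# Each participant's open sort: category name -> cards in that category.
# Data is made up for illustration, not the study's results.
sorts = [
    {"Disasters": ["Mega Drought", "Hurricane Katrina", "Mega Tsunami"],
     "Crime": ["Inside Alcatraz"], "Medical": ["Plague"]},
    {"Nature's Fury": ["Mega Drought", "Hurricane Katrina"],
     "Disease": ["Plague", "Mega Tsunami"], "Prisons": ["Inside Alcatraz"]},
]

def similarity(card_a, card_b):
    """Percentage of participants who put both cards in the same category."""
    together = sum(
        any(card_a in cards and card_b in cards for cards in sort.values())
        for sort in sorts
    )
    return 100 * together / len(sorts)

cards = sorted({c for sort in sorts for group in sort.values() for c in group})
for a, b in combinations(cards, 2):
    print(f"{a} + {b}: {similarity(a, b):.0f}%")
```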


We can see that 77% of users grouped Mega Drought and Hurricane Katrina together, indicating similarity.

Mega Drought, Mega Tsunami, Glacier Meltdown, and Hurricane Katrina have relatively high correlation and may make a good grouping.

In contrast, 0% of users grouped Inside Alcatraz and Plague together, indicating dissimilarity.

 

Additionally, OptimalSort presents what’s called a dendrogram (this one using the actual agreement method, one of two methods), an illustration of data clusters that shows the percentage agreement between users. The actual agreement method works best with large volumes of data (30+ users); since we had only about half that number, in our case it shows conservative groupings.

 

 

17% of users grouped the 6 cards in green under Ancient/Ancient Greece and Rome/Myth.

 

22% of users grouped the 4 cards in green under the subset Ancient Greece and Rome/Myth.

 

More useful for this project is the other dendrogram, generated with the best merge method. This method makes assumptions based on pairings and works well with smaller data sets; in our case, it shows more liberal, extrapolated groupings.
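Optimal Workshop doesn’t publish the internals of either method, so the sketch below shows only the general idea behind a dendrogram: treat the similarity matrix as a distance matrix and cluster it hierarchically. SciPy’s average linkage stands in for the proprietary methods, and the numbers are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

# Illustrative similarity scores (% of users pairing each two cards);
# not the study's actual numbers.
cards = ["Mega Drought", "Hurricane Katrina", "Mega Tsunami", "Plague"]
similarity = np.array([
    [100,  77,  70,  10],
    [ 77, 100,  65,  12],
    [ 70,  65, 100,   8],
    [ 10,  12,   8, 100],
], dtype=float)

# Convert similarity to distance, condense it, and build the cluster tree.
# Average linkage is a stand-in; OptimalSort's actual agreement and best
# merge methods are not reimplemented here.
distance = squareform(100 - similarity, checks=False)
tree = linkage(distance, method="average")

# no_plot returns the tree structure; drop it to draw with matplotlib.
info = dendrogram(tree, labels=cards, no_plot=True)
print(info["ivl"])  # leaf order after clustering
```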


50% of users grouped the large section of cards in green under Space/Mysteries/Science.


61% grouped the subset of cards under Mysteries/Aliens/Supernatural and the Occult/Ancient Mysteries.

 

Based on these graphics and additional analysis, I designated some validated groupings, defined as groupings that had been presented as Collections and that users also recreated in sorting, indicating they could be potential topics under Browse. Each topic below is followed by the similar Collection name in parentheses:

  • 60s (The 1960s)

  • Ancient (Ancient Discoveries)

  • Battles (Battles of the Deep)

  • Cars (Planes, Trains, and Automobiles)

  • Civil War (American Civil War)

  • Disasters (Nature’s Fury)

  • Military (Military Leaders)

  • Mysteries (Unsolved Mysteries and Legends)

  • Space (Modern Marvels: Space Tech)

  • Technology & Engineering (Modern Marvels: Engineering)

  • Wild West (Wild about the West)

  • WWI and WWII

 

There were also a host of new groupings that hadn’t been curated before:

  • Action

  • American History

  • Art & Architecture

  • Biography

  • Classics

  • Crime

  • Early America

  • Europe

  • Fantasy

  • Historical Figures

  • Horror

  • Medical

  • Middle East

  • Movies

  • Politics

  • Religion

  • Sci Fi/Science

  • Sports

  • Travel

  • War & Military

  • World History

 

In addition, some users made creative groupings that I wanted to call out:

  • Aircraft, Vessels, Structures, and Vehicles

  • Aliens, Supernatural, and the Occult

  • American Presidents & Founding Fathers

  • Ancient Mysteries

  • Art & Culture

  • Augmented Reality

  • Classic Literature

  • “Edutainment” History

  • Epics/Tales of Honor

  • Fantasy & Mythology

  • Gladiator

  • History Conspiracies

  • Midwest

  • NASA and Space

  • Native Americans

  • Special and Secret Forces

  • Speculation of Reality

  • Tragedy

 

In all, we got plenty of preliminary data to consider how to shape topics under Browse, or a revision of Collections.

Next Steps

In terms of next steps for the Browse design, following development, an A/B test will be run against the current Collections to see how Browse performs. In addition, we will examine the analytics on how much filtering/sorting is applied by users to determine whether or not to add a reset button, the final small question that surfaced in testing.

 

In terms of card sorting, the next steps may include collaborating between design and editorial to shape new or revised groupings. There’s also a lot of room for additional card sorting—potential future sorts for H-Vault titles that didn’t make the cut, additional sorts to gather more data volume, and potential future sorts for other products like LMC.

Reflection

Despite being a theoretical project, in that we don’t yet know if it will ship, Browse was a great feature to work on because I feel it addresses the Collections issues noted in research. We got positive feedback about it in testing, and I’d love to see it tested with actual users. We had a great collaboration with UI and with the team in general, and happily achieved a good amount of depth (3 rounds of testing on progressively higher-fidelity prototypes) in only a sprint’s worth of time. All in all, a positive experience, and one I hope will serve A+E well.

 

In addition to the design work, the week spent card sorting, a side project I’d envisioned as an addendum, ended up being as important as the design work, if not more so. Editorial and management will be seriously rethinking Collections regardless of whether Browse is implemented, and it was great to work on content-related issues that designers are not usually privy to. I think setting a user-driven tone is a great start to that conversation.
