Hulu: Ad Campaign Review


Position: Product Design Intern, Enterprise Product Design

Project Type: Internal Tool Redesign

Project Role: Sole contributor working under an advising lead

Timeline: June 2021 - Sept. 2021

Users: Ad Campaign Reviewers

Project Background

Hulu was investing in the modernization of its ten-year-old ad platform. One of the platform’s tools, Campaign Review, is used by a specialized team, called Campaign Reviewers, to carry out the inspection of ad campaigns prior to launch.

 
 
 
 

Project Summary

I was tasked with redesigning the Campaign Review tool to streamline the existing inspection process by eliminating the team’s reliance on manual data entry and external tools.

Ultimately, I fulfilled this goal with a feature called Error Flagging, which automated the identification of errors and collected feedback directly within the Campaign Review dashboard. This reduced the time a Campaign Reviewer spent on the review process by roughly 40%, or two and a half hours per day.

 

The User: Campaign Reviewer

 
 

The Process

Discovery & Definition

Upon joining the team, I learned that Enterprise Product Design had just completed an extensive research phase. As such, the majority of my Discovery & Definition efforts would go towards explicitly defining the problem and identifying applicable design patterns for a solution.

User Research

To understand the problem, I began with an end-to-end analysis of the Campaign Reviewer’s workflow. This included an extensive audit of the existing tool’s information architecture and task flows. I sourced insights from documented research interviews that had been carried out by a design lead prior to my involvement. I also relied on my own knowledge, given that I had three years of experience as a Campaign Reviewer at Hulu myself.

I devoted extra effort to developing very granular workflow diagrams to capture the complexity and specialized nature of the Campaign Reviewer’s work. This would equip me down the line to discuss nuanced design decisions and concerns with stakeholders.

Key Insights

1) The Campaign Reviewer relied on an outdated external CRM & Ticketing platform to receive their inspection requests, which leadership wanted to phase out as a part of the unification effort.

2) The Campaign Reviewer’s most impactful pain points stemmed from the use of Google Sheets to store inspection feedback, as well as the manual drafting of complex emails detailing feedback to the team upstream.

Side Quest: Re-evaluating Project Scope

Ad platform leadership decided they wanted to phase out the existing CRM/Ticketing platform that Campaign Reviewers used to receive their inspection orders, which meant that the project might now also require the construction of an entire ticketing platform. This had some serious implications for an engineering team that was already spread thin, so I’d have to determine if this was worth their time (and mine). As such, I set out to understand:

  • If we built a ticketing platform in our new unified tool suite, where would it live and how would it behave?

  • How tightly or loosely would it have to be integrated with Campaign Review?

  • Would it be simpler to adopt an existing, more efficient task management tool like JIRA?

  • What specialized tasks needed to be carried out that could not be accomplished by JIRA?

Given that the Enterprise Product Design team entrusted me with this key design decision, I wanted to ensure that my final rationale was air-tight.

As such, I developed a series of detailed task flow maps to inform myself fully. Armed with these diagrams, I could also give the product manager and software developers enough context for us to have constructive conversations about an in-house ticketing system’s feasibility (given our resource limitations) and about whether JIRA could accommodate the Campaign Review team’s needs just as well.

This task flow map illustrated the movement of a Campaign Review ticket from the Campaign Management team to the Campaign Reviewer team.

This “ticket task flow” map illustrated the lifecycle of a Campaign Review ticket within our imagined “future state” ticketing system.

 

Side Quest: Completed

As a result of this deliberation process, I arrived at a very clear understanding of the problem I would be tackling. I also sourced a multitude of useful design patterns that fueled the ideation process for the Error Flagging feature down the line. Ultimately, we decided that JIRA would be a good solution to bridge the gap between AOS’ needs and the bandwidth limitations of our dev team.

With a custom solution for ticketing off the table, I could focus solely on improving the Campaign Review tool/workflow for the AOS team.

 

The Search for Design Patterns

This comment flag in Google Sheets prompted me to get comfortable with the idea of components only appearing in particular states.

To answer some of the questions above, I set out to find patterns that could be applied to an in-house ticketing system.

I also took time to look for design patterns that could be applied within Campaign Review to address the existing pain points that Campaign Reviewers were experiencing.

To do this, I carried out an analysis of various websites, tools, and platforms that had “commenting”, “flagging”, “ticketing”, or “markup” functionality, collecting a list of these patterns and loosely documenting potential solutions.

This floating comment from Google Docs has a “kebab” menu, which proved to be very useful later during the ideation of the “Error Card” component.

 
 
 

I ultimately sourced patterns from learning management systems, word processing tools, design tools, and task management tools.

 
 
 

The Solution: Error Flagging

Campaign Reviewers use Campaign Review to assess the accuracy of 30+ attributes that determine things like run dates (start/end), the number of ads to be served, or the audience segment being targeted.

With the Error Flagging feature, the reviewer can identify, document, and export any incorrect ad campaign attributes directly within the Campaign Review interface.

The process of flagging a single error can be accomplished with as few as three clicks and no typing.

The Campaign Reviewer can now provide feedback to the Campaign Manager effortlessly, eliminating the need for manually drafted emails and for Google Sheets to record details.

See this feature in action below:

Develop & Deliver

Frequent iteration, establishing a feedback loop, and delivering a working prototype.

Challenging Design Constraints

Aside from the technical limitations that the project was facing (low dev resources/bandwidth), there were some design constraints:

  • I was designing for a tool that, by design, was very information-dense, so I’d have to work with precision to avoid cluttering the existing interface.

  • Each table in the Campaign Review interface looked and behaved differently, so I’d have to make the feature flexible enough to capture data from each table.

Iteration

I kicked off the iteration process with rough sketching and low-fidelity wireframing in Lucidchart.

The development phase of this feature proved to be the most challenging portion of the project.

Ultimately, I found that by allowing myself to ideate in a particular direction without being certain I’d find my answer, I would at least end up unearthing valuable insights that would contribute to my final design solution.

I converged with more senior members of Enterprise Product Design several times throughout the iteration process to get seasoned critiques and analyses of my work.

1st Iteration: Establishing Core Components

Because this feature had to fit in a complicated interface, the only reliable way to evaluate its success was to design a functioning prototype for each iteration and then converge for feedback. In this case, I was converging with fellow designers and developers who could give me feedback on feasibility and usability.

For my first iteration, I created a mid-fidelity working prototype. I was looking to establish basic design components that could be thoughtfully iterated upon to effectively capture error information within Campaign Review.

How would I gather and then export error information? How would I share it with the Campaign Manager? Two key design components, the “error card” and the “side panel”, were the answer to these questions:

 

These components contained error information, such as affected campaign name and error type.

 

Because the goal was to eliminate the use of manual entry (in this case, manually typing up an error and its details), I’d have to find a way to automate the gathering of QA feedback into the error cards.

I applied these two components to Campaign Review in Figma and began experimenting with different design patterns to auto-fill the error cards by simply selecting impacted areas. For this, I used a system of checkboxes applied to the top of each table column and row.

While the checkbox solution worked well in one table, it was not as easy to adapt to the other tables in Campaign Review:

 
 

As such, I experimented with other ways of capturing errors in more complex tables, by adding drop-downs and checkboxes in a separate section of the table. While this attempt was unsuccessful, a similar approach would be applied in V2.

 
 

The error card and the error card side panel appeared to be a good vehicle for gathering error information, but there were clear issues with usability and flexibility that needed to be addressed in the next iteration.

2nd Iteration: Maximizing Flexibility

To address the issue of having to adapt to different tables, I decided to experiment by leaving the existing tables in Campaign Review as they were (no column header checkboxes) and instead adding more complexity to the error side panel. The idea was to have dynamic dropdowns in the panel whose selection options would change based on the type of table the error was located in.
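The dynamic-dropdown idea amounts to a lookup keyed by table type. A minimal sketch follows; the table types and error option labels are hypothetical placeholders, not Hulu’s actual schema:

```typescript
// Hypothetical table types and error options; illustrative only.
type TableType = "flight" | "creative" | "targeting";

const errorOptionsByTable: Record<TableType, string[]> = {
  flight: ["Start Date", "End Date", "Impression Goal"],
  creative: ["Missing Asset", "Wrong Duration"],
  targeting: ["Audience Segment", "Geo Mismatch"],
};

// The side panel asks for the options matching the table the error came
// from, so a single dropdown component can serve every table in Campaign Review.
function optionsFor(table: TableType): string[] {
  return errorOptionsByTable[table];
}
```

One component, many tables: adding a new table type only means adding one entry to the lookup, rather than redesigning the panel.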

 
 

I iterated on this error side panel design for some time, but after receiving feedback from the larger design team, determined that there was too much clicking involved.

 
 

This pushed me to think harder about the interaction design: Dropdowns, buttons, and checkboxes could only be configured in so many ways.

3rd Iteration: Experimenting with Interaction Design

I wanted to be able to collect data from differing data tables without confusing the end-user. Originally, the idea of checkboxes seemed like the least confusing route. But despite my best efforts, I still had not arrived at an effective design scheme using checkboxes or dropdowns that felt both intuitive and smooth.

 
 
 
 
 

The execution in V3 was still rough, so I took a step back to search for more design patterns out in the world, to see if I could find something to tie it all together.

4th Iteration: Key Patterns & Style

After a secondary research phase and sharing my progress with other designers, I landed on some design patterns that made all the difference in the success of the proposed solution:

 

Mouse-Over Flags

These are selection components that appear in Campaign Review tables on mouse-over. They “float” above tables and table rows, acting as selection buttons and eliminating the need to alter the functionality or layout of any of Campaign Review’s tables.

 

States: Active, Pressed, Hovered

 

Using Figma’s variants functionality, I was able to design a version of each flag for each of its interaction states.

 

Refined Error Card and Side Rail UI

These error card and side rail UI components were built to align with the Hulu Tools Style Guide. In addition, I simplified how cards displayed complex information. For example, in the first iteration, the Error Card listed the full name of the impacted campaign. In the fourth iteration, error cards did not, as the impacted campaign name was already highlighted in blue in the respective table.

 

Copy To Clipboard

By capturing error information in an “error card”, we would be storing this info directly on the server (as opposed to in a Google Sheet). A user could then easily export nuanced error details by simply clicking “copy contents”.

This would make a call to the backend, requesting the specific feedback details and copying them to the clipboard automatically. From there, a Campaign Reviewer could simply paste them into an email and hit send.
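The export flow could look roughly like the sketch below. The interface fields and the endpoint path are hypothetical, not Hulu’s actual API; the pure formatting step is separated out from the browser-only clipboard call:

```typescript
// Hypothetical error-card shape; fields are illustrative only.
interface ErrorCard {
  campaign: string;
  errorType: string;
  notes: string;
}

// Format fetched error cards into the plain-text block a reviewer
// would paste into an email to the Campaign Manager.
function formatForEmail(cards: ErrorCard[]): string {
  return cards
    .map((c) => `${c.campaign} | ${c.errorType}: ${c.notes}`)
    .join("\n");
}

// In the browser, the "copy contents" handler would fetch the cards and
// write the formatted text to the clipboard (endpoint path is made up):
//   const cards: ErrorCard[] = await (await fetch("/api/review/errors")).json();
//   await navigator.clipboard.writeText(formatForEmail(cards));
```

Keeping the formatter pure makes the email layout easy to test and tweak without touching the clipboard integration.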

Pain Points: Solved

By capturing all error information directly in Campaign Review, we effectively eliminated the key pain points of the existing Campaign Review process: manually drafting emails and logging error information in a Google Sheet.

The Results

Looking at the Stats

With the updates to the Campaign Review tool, the Campaign Reviewer is afforded a much more streamlined review process.

The Error Flagging feature successfully shortens the total click-path for the entire review workflow from 25 to 15 clicks. It also eliminates the need for extensive/manual typing and manual data entry (Google Sheets).

These changes roughly equate to a 40% reduction in time spent reviewing an ad campaign, saving the Campaign Reviewer ~2.5 hours of work per day or around 12 hours per week.
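A quick back-of-envelope check of those figures (assuming a five-day work week):

```typescript
// Sanity-check the stats above.
const clicksBefore = 25;
const clicksAfter = 15;
const clickReduction = (clicksBefore - clicksAfter) / clicksBefore; // 0.4, i.e. 40%

const hoursSavedPerDay = 2.5;
const hoursSavedPerWeek = hoursSavedPerDay * 5; // 12.5, i.e. "around 12 hours"
```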

The review process is carried out by each reviewer 1-125 times per day. This solution makes the reviewer’s job faster, less tedious, and more enjoyable.

 

Click Here to use the Figma prototype.

 

Next Steps

Due to time constraints, I was unable to carry out any subsequent usability testing with the Campaign Review team.

In the hypothetical scenario where I could test with real users, my plan would have consisted of the following:

  • Scheduling timed usability testing sessions with members of Campaign Review, providing instructions to carry out specific critical tasks.

  • Gathering feedback from observation and post-testing interviews. Using cluster mapping techniques to distill the information into actionable insights.

  • Collecting SUS survey data for the original Campaign Review process, followed by a SUS survey for the future-state prototype (post-usability testing).

  • Carrying out iterations based on this feedback and repeating as needed.

 

“The challenges Pablo tackled in this short time would have been difficult for someone with years of professional design experience. The fact that he was able to get to an elegant solution for as complex a feature as error entry [Error Flagging] is seriously impressive, and I know he’s only going to get better and better from here.”

Lily Lapidese, Senior Product Designer, Hulu

Thank You for Reading.
