Challenge Statement
Gousto is a recipe box provider whose mission is to become the nation's favourite way to eat dinner. Once a customer has subscribed to Gousto, they'll open the menu each week and land on a personalised section called 'Chosen For You'. This section uses an algorithm that shows up to 15 relevant recipes to the customer based on their order history. It is designed to help prevent choice overwhelm and inject personalisation into the browsing experience.
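Gousto's actual ranking algorithm isn't public, but the idea of surfacing up to 15 recipes based on order history can be sketched as a simple affinity score. Everything below (the attribute tags, the function name, the scoring) is a hypothetical illustration, not the real implementation:

```python
from collections import Counter

def chosen_for_you(order_history, menu, k=15):
    """Illustrative sketch: rank menu recipes by overlap with past orders.

    order_history: list of past recipes, each a set of attribute tags
    menu: dict mapping recipe name -> set of attribute tags
    Returns up to k recipe names, highest affinity first.
    """
    # Count how often each attribute appears in the customer's history
    affinity = Counter(tag for recipe in order_history for tag in recipe)
    # Score each menu recipe by the customer's affinity for its attributes
    scored = {name: sum(affinity[tag] for tag in tags)
              for name, tags in menu.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]
```

A customer who has repeatedly ordered quick chicken dishes would see similarly tagged recipes rise to the top of their 'Chosen For You' section.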
However, when talking to customers about the usefulness of this section, I discovered:
These qualitative findings were supported by behavioural data. When a customer adds their first recipe, only 32% of these adds happen in the 'Chosen For You' section, yet 80% of first recipes added are from the top 15 ranked recipes across the whole menu. In other words, customers were choosing the recipes we recommended, just not through the section built to surface them.
With this in mind, how might we increase awareness, knowledge and trust of our recommendations?
Defining Trust
Working closely with the Product Manager and UX researcher, we carried out desk research and gathered literature on what trust really means to a customer. Key findings included:
• Trust is cyclical – it requires consistent positive experiences with low to no friction
• Trust is almost impossible to repair once broken
• First impressions count – the first interaction is the most important for trust building
With this in mind, we started thinking about how this applies to browsing the menu and built the following strategic pillars:
1/3: Improving Awareness
Focusing on the awareness pillar was a logical first place to start as it was the foundational pillar to work from – customers need to know that recommendations exist before we start explaining how they work or how they can be influenced. It was important to:
a) Tell customers that recommendations exist
By briefly explaining why customers are seeing these specific recipes within the 'Chosen For You' section, customers will be aware recommendations exist and find the section more useful.
b) Encourage customers to start using the 'Chosen For You' section
Including the customer's name in the section title (e.g. 'For Harriet') immediately made it feel more personalised and alluring, encouraging higher use of the section.
Impact
These small copy changes made for an extremely lean experiment, and the resulting A/B test on iOS proved very powerful. The test generated £0.8m in profit for the company as well as an improved customer experience, so the change was productionised across iOS, Android and responsive web. By monitoring customer feedback through a custom tool at the end of the choosing journey, I could also collate verbatim comments from customers, and noticed a positive increase in awareness of recommendations.
2/3: Improving Knowledge
Customers now knew that recommendations existed but still didn't know how they worked across the menu. When we asked customers how they thought the recipes on the menu were ordered, and why they were seeing specific recipes at the top of the list, common answers included popularity, calorie count and stock levels 🤦♀️
Ideating with the team
I facilitated an ideation workshop with engineers, data scientists, analysts and product folks to generate a wide range of ideas. The team ideated on the following problem statement:
"How might we increase knowledge that recipes on the menu are ordered by recommendations?"
I applied the same level of design exploration across all other relevant touch points before stitching together the strongest solutions across an entire customer journey. The red boxes below highlight all the nods to explaining recommendations, from when a customer rates a recipe, to when a customer finishes placing an order.
For this to be successful, it was integral that the copy made sense to customers. I ran a series of user tests focused solely on content testing in order to boost confidence in the copy. Customers were asked to describe in their own words how the recipes had been organised, as well as to fill in missing words in sentences so we could better understand their natural language.
Impact
The bundle of design changes launched as an A/B test on iOS, and despite the experiment returning flat results from a commercial perspective, customer feedback channels once again signalled a positive trend in satisfaction when browsing the menu. The amount of positive customer feedback received warranted a decision to productionise these changes on both iOS and Android.
3/3: Improving Influence
The final pillar of trust on the Gousto menu revolved around allowing customers to influence their recommendations. One of the main reasons the recipes in the 'For You' section were not appealing was that customers saw recipes they either couldn't or wouldn't eat. Data and insights also showed:
• Customers often complain that they order recipes and then realise at the point of cooking that they don't have the necessary equipment
• 33% of Gousto customers have a dietary requirement
• Seeing recipes with ingredients that their household doesn't like or couldn't eat was a top reason for not using the 'For You' section
I took these insights to a team ideation session with engineers, product folks and data scientists and asked the group to ideate around the customer problem: "I don't trust the 'For You' section as it shows me recipes I can't or won't eat".
Prioritising an idea
Ideation produced unified agreement around asking customers to input their preferences in some way. I explored a range of ideas in Figma, from filtering and sorting to setting preferences within a profile, before partnering with my Product Manager to prioritise them. We whittled down concepts using the following principles (based on learnings from previous experimentation on the menu):
✅ Make it easy to change preferences when browsing the menu
✅ Avoid the perception of restricting choice
✅ Keep the cognitive load of a customer to a minimum
✅ Personalise wherever possible
✅ Don't interrupt a customer's flow once on the menu
Understanding feasibility
It was clear from customer insights that dietary requirements, ingredient dislikes, allergies, missing equipment and spice levels were important preference attributes to capture. However, after talking through ideas with the engineering team, it became clear that ingredient dislikes, allergies and spice levels would be difficult to capture initially due to complex technical constraints. This meant dietary requirements and missing equipment were prioritised for the MVP.
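The MVP scope above amounts to a simple compatibility check over two preference attributes. A minimal sketch of that check follows; the data model (per-recipe diet and equipment tags) is an assumption for illustration, not Gousto's actual schema:

```python
def filter_menu(menu, dietary, owned_equipment):
    """Keep recipes compatible with the customer's MVP preferences.

    menu: list of dicts with 'name', 'diets' (set of diets the recipe
          satisfies) and 'equipment' (set of equipment the recipe needs)
    dietary: set of dietary requirements every kept recipe must satisfy
    owned_equipment: set of equipment the customer has at home
    """
    return [
        recipe for recipe in menu
        # The recipe must cover all dietary requirements...
        if dietary <= recipe["diets"]
        # ...and must not need equipment the customer lacks
        and recipe["equipment"] <= owned_equipment
    ]
```

Keeping the check to two attributes mirrors the MVP decision: ingredient dislikes, allergies and spice levels could later slot in as further set comparisons once the underlying data exists.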
Testing with users
Next up was testing this concept with users. There were a handful of risky assumptions attached to this design, including:
• Whether users would be satisfied (i.e. not frustrated) with a screen asking for preferences before the menu
• Whether users who don't have any preferences would be happy to complete the preferences flow
• Whether users actually want a personalised, re-ordered menu
I shared a prototype of the flow with 15 users and asked them to complete a series of tasks and follow-up questions. Their responses gave me confidence that introducing a screen before the menu would go down well: participants believed it was a useful feature that would both save time and make their menu more personalised.
Impact
The design was launched as an A/B test on iOS and both commercial metrics and customer satisfaction were measured. Interestingly, the test had a negative commercial impact but an extremely positive impact on customer satisfaction.
Considering the experiment was so loved by customers, the team and I dug deeper into the commercial results by running a debrief. We collectively scrutinised the data and formed hypotheses as to why the result was negative. Two main hypotheses emerged:
1. The re-ordering of the menu was a worse experience for customers (we'd messed with their recommendations order)
2. We'd asked about kitchen equipment that didn't prominently feature on the menu, therefore increasing effort levels before reaching the menu
By maintaining recommendation order on the menu and stripping back unnecessary kitchen equipment from the list, I could make minimal tweaks to the design and create a lean, simple and effective iteration. This iteration was A/B tested and proved commercially successful, increasing our metric of Average Orders Per User (AOPU) by 1%, equating to just over £1m in EBITDA 🎉.