Hackathon Hotel Comparison Tool
The team: product manager, developers (front end and back end), QA and product analytics.
My role: Product designer, organising and conducting user research, debriefing the team and stakeholders.
Metrics for success: longer time spent on the search results page, fewer property detail pages viewed and better-qualified customers progressing to booking (this would show up in the stats as fewer visits).
Intro
From previous research we know customers compare multiple properties across multiple online travel agencies, over multiple visits, before selecting the right one. The number of data points they need to weigh up, such as location, reviews, price and amenities, makes the task even harder. Hotels.com does not currently offer anything to make this process easier; instead we have seen customers rely on spreadsheets, Word documents and WhatsApp groups to remember what they have seen. As a potential solution to this problem, my team decided to create a comparison tool.
Challenge
We know from analytics that 62% of orders happen over multiple visits and over 30% happen across multiple devices (more when families are involved). For customers, comparing and remembering properties is extremely hard, especially when the differences between them are so subtle. The challenge we set ourselves was: how do we make this easier for customers?
Approach
The team worked extremely closely because the time was limited to only two days. We split the project into a few stages: ideation, pre-grooming, refinement, design/build, remote testing to evaluate the feature, and a final site-wide multi-variant test (MVT).
Ideation / Pre-grooming
We worked with customer analytics to get an idea of what customers were doing (for example, the number of properties viewed and the key attributes they compared on), used that to set some rules around our thinking, and then held a whiteboard session to generate ideas. Involving the developers at this point was vital: it allowed us to essentially pre-groom all the possible solutions, which saved valuable time in the long run.
Design
After our whiteboard sessions we dot-voted on the initial ideas and I set about creating wireframes for the most popular one. At this point I refined the interaction patterns, outlined the data points we wanted customers to be able to compare by, ensured the design would work across all our points of sale and made sure we stayed true to the initial problem we wanted to solve. The final design consisted of a heart icon over the image (a highly visible area, to aid discoverability of the feature) and an overlay which allowed customers to compare key attributes such as images, location, price, reviews and amenities.
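For illustration only, the sketch below shows one way the comparison data could be modelled; the TypeScript names and fields are assumptions made for this write-up, not the production data model.

```typescript
// Illustrative sketch of the attributes the comparison overlay showed side by side.
// All names here (ComparableProperty, etc.) are assumptions, not the real schema.
interface Review {
  score: number; // e.g. 8.6 out of 10
  count: number; // number of guest reviews behind the score
}

interface ComparableProperty {
  id: string;
  name: string;
  imageUrls: string[];   // gallery images shown in the overlay
  location: string;      // neighbourhood or distance to a point of interest
  pricePerNight: number; // in the customer's point-of-sale currency
  currency: string;
  review: Review;
  amenities: string[];   // e.g. ["Free WiFi", "Pool", "Parking"]
}

// The saved list that backs the overlay: the properties a customer has hearted.
type ComparisonList = ComparableProperty[];
```

Keeping the model down to a handful of comparable fields reflects the design intent: the overlay was meant to surface only the attributes customers actually weigh up, not the full property record.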
Validation
I firmly believe that quantitative data only tells half the story and should always be cross-referenced with qualitative data. With this in mind, before we launched the MVT we wanted to run some user testing sessions to validate the feature and to test for comprehension and discoverability. We also conducted a short interview at the end of each session, asking how customers felt about the comparison tool. During this process we documented all bugs, noted any customer pain points (and how frequently they occurred), and identified areas for improvement before the final test.
Test setup
Two variants, shown to all customers who landed on a search results page. Variant 1 displayed the heart icon over the image on the property listing. Variant 2 displayed the same heart icon and also pinned a comparison modal to the bottom of the page.
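For context, a minimal sketch of how a variant flag like this might drive what the results page renders; the variant names, the assumed control group and the bucketing approach are illustrative assumptions, not the experimentation platform we actually used.

```typescript
// Illustrative sketch only: variant names, the control bucket and the hashing
// scheme are assumptions for this write-up.
type Variant = "control" | "heart_icon" | "heart_icon_with_pinned_compare";

// Deterministically bucket a visitor so they see the same variant on every
// visit for the duration of the test.
function assignVariant(visitorId: string): Variant {
  const variants: Variant[] = [
    "control",
    "heart_icon",
    "heart_icon_with_pinned_compare",
  ];
  let hash = 0;
  for (const char of visitorId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple stable hash
  }
  return variants[hash % variants.length];
}

// The search results page then toggles the two pieces of UI under test.
function renderSearchResults(visitorId: string): void {
  const variant = assignVariant(visitorId);
  const showHeartIcon = variant !== "control";
  const showPinnedCompareModal = variant === "heart_icon_with_pinned_compare";
  console.log({ variant, showHeartIcon, showPinnedCompareModal });
}
```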
Outcome
Although the first test was negative, we are now iterating on this feature: we believe it adds value to the shopping process, and comments from the user testing sessions were extremely positive. The analytics also showed some encouraging signals. 18.07% of customers who saved a property went on to compare (open the overlay), with a conversion rate of 23.64%. However, this was only 0.39% of visitors, which suggests the feature suffered from low discoverability. This did not come out in the initial user testing, so it will now be a point of focus in the next iteration. Customers who used the comparison tool also converted 7% more, but again, because of the low volume (0.39% of visitors), the test ended with an inconclusive but trending negative result.
Final thoughts
Given the limited time frame for this challenge, there are obviously things we could have done better. For example, I would have liked to run an additional moderated study, which might have caught the discoverability issue we noticed in the production test. In terms of implementation, it would have been better to give customers more control over the attributes they compared and to let them share the list of properties they had saved.
As for my own contribution to the project, I learnt a lot about unmoderated studies and how important the setup is, for example developing an appropriate screener to ensure you recruit the best possible candidates. Also, had we had more time, I would have liked to run an extra user testing session on the initial designs specifically to probe the discoverability of the feature.