STARBUTTER

UX Research 2017

GOAL

  • Identify user pain points within the mortgage selection process.

  • Recommend appropriate changes to the chatbot.

CONTEXT

  • Worked on a team of 4 in Berkeley Innovation on a 5-month client project.

  • Starbutter creates chatbots that match people with mortgages and credit cards that are right for them.

  • Methods & Tools: user interviews, usability tests, Figma, Keynote.

FINAL DELIVERABLE

  • User study complete with insights (frustrations, user personas, journey map, dialogue samples) and recommended changes to the chatbot, based on 20+ user interviews and 10+ usability tests.

USER RESEARCH

We conducted 20 interviews to understand user pain points.

We interviewed first-time home buyers, refinancers, and mortgage brokers. We wanted to understand the process in terms of steps involved, sources of information, key frustrations, and key decision-making factors.

Our findings generally fell into 4 categories:

  • Banks = Not Trustworthy

    Users described large banks as cold, stiff, and untrustworthy, while characterizing family, close contacts, and agents as trustworthy, personable, and transparent.

  • Options Related to Time Spent Researching

    Users generally considered either just 2-3 options or many more. Users who considered only 2-3 options were happy with their initial interest rates and wanted to finish the process quickly, while users who considered more options were looking for the best possible rate and spent more time finding it.

  • Want Quick Leads, More Reviews, Loan Estimate, & Less Paperwork

    Users wanted a tool that surfaced leads quickly, let them view all of their mortgage options, and included trusted reviews. They also wanted an accurate estimate of how large a loan they could get based on their current financial situation. Lastly, users wanted far less paperwork.

  • Policies & Paperwork Are Confusing

    Users felt most confused by financial/mortgage policies and paperwork.

Affinity diagramming insights from the first 20 interviews.

PERSONAS

We further broke down our findings into the following personas.

  • Researchers

    Seek out online resources over friends/banks. Spend at least 1 month researching. Find the process stressful. Get 5-7 quotes before choosing one. Move slowly through the process. Interestingly, tended to be immigrants.

  • Networkers

    Trust realtors and brokers to have their best interests in mind. Prefer information from their network over online resources. Move more quickly through research process. Find it relatively stress free. Will get 2-3 mortgage quotes from their realtor or broker.

  • Refinancers

    Move quickly and don’t consider many options. Due to previous experience with mortgages, have a good understanding of the process. More inquisitive than stressed during the process. Once they find a suitable option, likely to stop looking, unlike Researchers.

JOURNEY MAPPING

We created a journey map with 3 key stages: Informal Research, Professional Research, and Decision.

General journey map outlining the mortgage research process across the 3 types of users. It also indicates levels of “pain” (i.e. frustration) at each stage.

MVP CRITIQUE

We made changes to the existing bot based on user insights.

Trust & Transparency: Be the “Mutual Friend”

Our challenge was figuring out how to build trust when users interact with a faceless algorithm. We recommended making the experience as transparent as possible and positioning the bot as the “mutual friend” between lenders and users, using a conversational, light tone.

Screenshot of existing dialogue. The current copy feels like too much; we hypothesized that a toned-down version explaining how loans are calculated, kept more casual and conversational, would create more trust.

More Context, Progress Indicators, & Support

Finding the best personalized loan requires a lot of information, and questions become increasingly more personal. To fight user drop-off, we recommended adding more context, progress indicators, and edge case support.

The existing bot lacked progress indicators. Additionally, it did not handle ambiguity or uncertainty well and accepted only a narrow range of responses, making it difficult for users to engage with and increasing the chance of drop-off at this point.

Tile Answers

We recommended letting users answer with tiles pre-populated with common replies, to make interaction more intuitive.

Quick, easy-to-read tiles would make interacting with the chatbot easier.

First Interaction

Users should be able to quickly get the answers they need during their first interaction with the mortgage helper. Anything else would increase user drop-off.

Screenshot of the response after typing “hi” to the chatbot. It is not immediately helpful. Additionally, the transition from “us” to “I” is confusing and doesn’t tell users who they’re talking to from the get-go. The product seems confused, and thus less trustworthy.

USABILITY TESTING

We conducted 10 usability tests with a version of the product that had been updated based on our suggestions from our MVP critique.

  • Users saw the chatbot as an ADDITIONAL resource alongside sources they were already using (e.g. online sites, banks, agents).

“I would use this tool alongside other resources, and this might be a good starting point.”

  • People felt uneasy providing personal details without context about why they were needed.

  • Incorrect grammar and typos contributed to a lack of professionalism and trust.

  • People would be more likely to trust the bot if recommended to them by an agent or friend. 

Note: The feedback we received was highly dependent on how well the bot ran technically. Some users experienced major glitches that negatively impacted their experience.

FINAL RECOMMENDATIONS

Based on our critiques, we proposed final recommendations.

  • Be the Mutual Friend Between Lenders and Users

    Evoke a friendly, conversational feeling with users through dialogue tone, dialogue copy, and chatbot profile picture.

  • Partner with Lenders/Agencies for Referrals

    Users mentioned they would trust the bot if it were recommended to them by a trustworthy source (e.g. agents, someone in their network). Referrals could therefore increase adoption of the bot.

  • More Transparency for More Trust

    Be upfront about why certain information is required. People aren’t very willing to provide personal info (e.g. birthday, last name, email, home address) when they don’t know how it will be used.

  • Better Navigational Freedom

    Previously, the chatbot didn’t let people correct typos or mistakes, or head back to the original menu, which caused frustration. Add navigation options so users can get the info they need.

  • Proofread Copy

    Typos undermined users’ sense of trust and reliability. An obvious, but important, recommendation.

IF I COULD DO THIS OVER AGAIN...

  1. I would use quantitative methods such as A/B testing to gather data on user perceptions and see how it compared to our qualitative findings.

  2. I would explore the chatbot’s “face” (i.e. profile picture) more to understand how it impacted users’ feelings about the bot.

  3. I would explore copy design a bit more to see how tone specifically impacted the user experience.

