Freeline
Role: UX researcher and designer
Teammate(s): Ashish Sur (machine learning & software engineer)
Methods & tools: Qualitative interviews, survey, focus group, wireframing, Sketch
the problem
With the constant influx of unwanted calls today, most people feel confident they can spot a scam call. Yet people are still easily deceived when they encounter a sophisticated phone scam.
Phone phishing is the use of social engineering over the phone to gain access to personal and/or financial information. In 2018, phone calls were the most common contact method for imposter scams reported to the Federal Trade Commission (FTC), and losses to imposter phone scams totaled $429 million, with a median loss of $840.
the opportunity
How might we better protect consumers against phone phishing scams?
the approach
To better understand phone phishing and consumer phone behaviors, we drew on multiple sources: existing cybersecurity literature, consumer anti-phishing resources, targets of imposter phone scams, everyday consumers, and social engineers themselves.
I'll focus on the qualitative interviews, survey, and focus group findings for this case study.
Qualitative Interviews
We wanted to learn about the lived experiences of people who had been targeted by phone phishing, and how they were manipulated from their own perspective. Understanding a person's journey through a scam call would give the team direction toward a potential solution.
I conducted five 60-minute qualitative interviews that focused on the strategies attackers used to build the targets’ trust and on how the targets perceived the attackers.
Key findings
“It's just the sheer amount of information they had on me - it just put doubt in my mind.”
Attackers used a variety of tactics to manipulate targets into trusting them, such as expressing sympathy to build rapport and leveraging information they already had about the target to strengthen their perceived credibility.
“I noticed that [an email from] Paypal showed up in my inbox, but it was deleted right away…”
Targets clearly recalled the moment when something the attacker said, or asked them to do, triggered them to think critically about the situation and eventually realize they were being scammed.
“These guys were good enough that I got scammed.”
All of the interviewees said they had never heard of anything like the scams they experienced and never expected to encounter one. Before being scammed themselves, they held an attitude of disbelief that people actually fall for these situations.
After learning about the experiences of people who had fallen victim to scam calls, we wanted to balance that understanding with the perspective of people who hadn't experienced them, and to dig deeper into the attitude of disbelief the interviewees described.
Survey
The survey aimed to paint a picture of people’s current mental models of phone scams - their attitudes toward phone scams, their knowledge of caller ID spoofing, and the protections they use against phone scams. We were also interested in people’s current phone call behaviors and how they decide which calls to trust.
In total there were 148 respondents, recruited on a college campus via social media, physical flyers, and word of mouth.
Key findings
The survey results solidified the need to design a solution that not only counters the scammer’s mental model, but also fits the mental model of a target who is unaware of the magnitude of the problem.
Focus group
We hypothesized that people might be uncomfortable with a real-time scam call detection app, and we wanted to test this assumption and understand the sources of that discomfort.
Since we were a team of two, we recruited additional researchers to help conduct a participatory design focus group. I worked with them to identify the research objectives and mentored them in designing the session.
Key findings
Participants were very resistant to having their calls recorded for machine learning purposes, and they generally did not trust a third party to collect and handle their data.
Recommendation: It will be important to research data privacy laws around call recording and to establish trust with users around the concept of call recording.
Some participants were concerned that the application would provide unnecessary interference during calls from trusted people (e.g., friends, family).
Recommendation: The solution should allow users to easily opt out of call recording.
the solution
Our user research showed that potential users deeply value privacy, while cybersecurity experts call for stronger protections. Throughout the project, a difficult challenge was answering the question: at what point does a user’s need for privacy outweigh the need for protection?
Privacy-by-design
We took a privacy-by-design approach to the application: deciding where in the user flow to give users control over defaults, providing consent disclosures so users are aware of what is happening when they use the app, and keeping data processing on the client (the phone) rather than sending it to external servers.
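To make these principles concrete, below is a minimal sketch of how privacy-respecting defaults could look in an Android client. All names and values here (PrivacySettings, ScamDetector, and so on) are hypothetical illustrations under these assumptions, not Freeline's actual implementation.

```kotlin
// Hypothetical sketch of privacy-by-design defaults for an Android client.
// Names and values are illustrative, not the actual Freeline implementation.

// Defaults favor privacy: no analysis until the user explicitly consents,
// detection runs on-device, and calls from saved contacts are never analyzed.
data class PrivacySettings(
    val callAnalysisEnabled: Boolean = false,   // off until consent is given
    val processOnDeviceOnly: Boolean = true,    // audio never leaves the phone
    val excludeSavedContacts: Boolean = true,   // skip calls from known contacts
    val retainAudioAfterCall: Boolean = false   // discard audio once the call ends
)

class ScamDetector(private val settings: PrivacySettings) {

    // Decide whether a call should be analyzed at all.
    fun shouldAnalyze(callerIsSavedContact: Boolean): Boolean {
        if (!settings.callAnalysisEnabled) return false
        if (settings.excludeSavedContacts && callerIsSavedContact) return false
        return true
    }

    // Run inference locally; nothing is uploaded when processOnDeviceOnly is true.
    fun analyze(audioFrame: ByteArray): Float {
        require(settings.processOnDeviceOnly) { "Server-side processing is disabled by design" }
        return runOnDeviceModel(audioFrame)
    }

    // Placeholder for an on-device model (e.g., a bundled classifier).
    private fun runOnDeviceModel(audioFrame: ByteArray): Float = 0.0f
}
```

Keeping analysis off until the user consents and running inference on the device reflects the consent disclosure and client-side processing choices described above.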
the impact
This project raised critical awareness on the issue of phone phishing within our community. It was awarded a research grant by the Center for Long-Term Cybersecurity in 2019.
Many of the lessons learned over the course of this project centered on the design tradeoffs in consumer cybersecurity products. Our user research found that people are highly concerned about the data privacy implications of a real-time phone scam detection application, yet we decided to proceed with the design because, as security researchers, we understood the problem better than the typical consumer does.
We attempted to design Freeline with privacy and security in mind. However, we believe security professionals must consider: at what point does a user’s desire for privacy outweigh the need for protection? A follow-up question is who should make that decision - the user or the security expert?
Further usability and desirability testing will be necessary to evaluate whether our privacy and security controls satisfy user needs.
At what point does a user’s desire for privacy outweigh the need for protection?