What is 'Fake Door' Testing in UX?

Fake door testing in UX involves creating dummy interface elements to gauge user interest and collect data.

Photo by Dima Pechurin / Unsplash

I used to work with a startup that was ‘all in’ on UX design. From the top down, in an admittedly small company, they were invested professionally and financially in connecting with users. They specialized in educational technology, so many students were willing to participate in user testing sessions. Great! Or so I thought, because in reality we often struggled to get realistic feedback from users during testing sessions.

This wasn’t the fault of the participants, of course. It’s human nature to modify your behaviour in certain social situations, especially if you’re getting a £40 Amazon gift card at the end of the session! On top of that reactivity, we also found it very difficult to work out what users want, because more often than not, users don’t actually know.

The quote famously attributed to Henry Ford about faster horses holds true in UX: users don’t know what they want until they see it.

Fundamentally, two main issues were stopping us from getting really great insights:

  • People aren’t comfortable rocking the boat; they would rather give ‘nice’ responses to questioning, even if that’s not how they really feel
  • Asking users directly what they want, or what they think of a feature, is a waste of time, because they’re likely to feel led towards a particular answer, or they honestly don’t know what they want

What we found: it was common for participants to be conscious of the controlled environment they were in, often saying that they felt as though they were being tested, which introduced potential bias into their responses. Because of this, you could argue that all the results were affected (as in any participant study) by what’s known as the Hawthorne effect.

The Hawthorne effect refers to the phenomenon in which individuals modify their behaviour or performance when they know they are being observed, leading to biased results in research or evaluation. I wrote about it 5 years ago, in 2018, and it’s a phenomenon that I still think about when I have a conversation with a user.

Overhead view of student on a laptop
Source: Pexels

One memorable instance involved a student who was hesitant to be completely honest about her struggles with a particular feature. No amount of reassurance from us as facilitators put her at ease or reduced the pressure she felt when we prompted her with certain questions.

As the session continued, our questions and prompts only seemed to increase the pressure she felt, and it became clear that her awareness of being observed was hindering her feedback. We ended the session earlier than planned — as nothing is more important than the comfort and well-being of participants.

But this was becoming a bit of an issue and a talking point amongst the team.

“What can we do about this? It’s fantastic that we have so many willing participants, but are they afraid to be honest because of the £40 Amazon voucher? Are they afraid it’ll be taken off them if they offend us? We have a fantastic opportunity to talk with so many people here, so how can we make the most of it?”

These discussions went on for some time amongst the team.

We had to find an alternative method to get more representative, more honest feedback. Humans are messy and imperfect, and they make mistakes; those are the people I want to see during a session.

It’s very difficult to capture that within a short timeframe, and frankly it may never be fully possible, as participants put on their social masks and keep their cards close to their chests.

We took this as an opportunity to reflect on our research methods and how we could improve our testing approach to ensure that all participants felt at ease and could provide accurate feedback. Fortunately, there is a method that can lead to potentially more reflective insights: fake door testing.

What is fake door testing?

Fake door testing is a research method used to test demand for a product or feature without actually building it. Essentially, it involves creating a landing page or dashboard prototype that alludes or points to a feature that doesn’t exist yet.

The theory is that by measuring user behaviour and interest, either in conversation or through their actions toward that feature, we can see whether it’s worth pursuing.

If I could give UX professionals just one tip, it’s this: remember that what people say they’ll do and what they actually do can be quite different. By observing how users interact with these ‘features’, designers can gain valuable insights into what works and what doesn’t.

Ultimately, the goal of user research is to create products and experiences that meet users’ needs, and observing their behaviour is a critical part of achieving that goal.

How would a fake door scenario have helped when I worked at an educational startup? Well, instead of investing significant time and resources in designing and building the feature, we could have created a dashboard mockup that included a call-to-action button such as “Access your learning materials now”.

Remember that what people say they’ll do and what they actually do can be quite different.

If a participant clicked the learning materials button, they would be redirected to a message indicating that the feature was not yet available. This strategy would have allowed me to measure a participant’s interest in the feature by tracking how many times they interacted with it, which would validate (or invalidate) the need for such a feature.

The research session itself would have centred around this concept, with the ability to tailor non-leading questions for a feature we now know has some significance to the participant.

This approach is favoured by teams that follow an exceedingly lean approach to design and development, and it’s great for cash-strapped startups.

More importantly, this method often has the benefit of relieving any pressure within the room for participants. As a moderator, it puts less emphasis on the need for questioning, and more on taking a step back as a user navigates the application or website in front of them.

A green door covered by the shade of a large tree
Source: Pexels

How to conduct a fake door test

Want to conduct a test yourself? Here’s how:

  1. Identify the feature or product you want to test: Start by identifying the feature or product you want to test. This could be a new feature for an existing product or a new product altogether. Referring back to the educational startup I worked with, we had a JIRA board full of potential ideas, prioritized by team effort and impact on users.
  2. Create a landing page or prototype: Create a landing page or prototype for the feature or product. Keep it simple and focus on the key value proposition. Make sure to include a call-to-action button that leads to a message saying the feature or product is not yet available. Pro-tip: Tell a little white lie to the participants to put them at ease. Suggest you have nothing to do with the product or service, and that you’re only there to moderate and communicate anonymous feedback back to the company.
  3. If unmoderated remote testing, drive traffic to the landing page: Once the landing page or prototype is created, start driving traffic to it. This can be done through social media, email marketing, or other digital channels. Caution: If using this method unmoderated, or as a beta scenario for some users, be aware that some users might grow frustrated when a feature they want doesn’t exist!
  4. Measure user behaviour: Measure user behaviour by tracking how many people click on the call-to-action button and how long they spend on the page. If remote, you can include a survey or questionnaire to gather qualitative feedback from users. If in-person, ensure you and your team have clear guidance and expectations for how to question this particular scenario.
  5. Analyze the data: Analyze the data to gauge the demand for the feature or product. If there is high interest and engagement, it may be worth pursuing. If not, you may want to reconsider the feature or product. Rinse and repeat with other features and ideas.
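For step 5, one simple way to quantify demand is a click-through rate compared against a threshold your team agrees on before the test. This is a sketch under assumed metrics (page views and CTA clicks); the 10% threshold is an illustrative default, not a standard:

```typescript
// Sketch of gauging demand from fake-door metrics (assumed data shape).

interface FakeDoorStats {
  views: number;  // how many people saw the page
  clicks: number; // how many clicked the fake CTA
}

// Fraction of visitors who tried to open the fake door.
function clickThroughRate(stats: FakeDoorStats): number {
  return stats.views === 0 ? 0 : stats.clicks / stats.views;
}

// Hypothetical decision rule: pursue the feature only if interest
// clears a threshold agreed before the test (default 10%).
function worthPursuing(stats: FakeDoorStats, threshold = 0.1): boolean {
  return clickThroughRate(stats) >= threshold;
}
```

For example, 30 clicks out of 200 views gives a 15% click-through rate, which would clear a 10% threshold; whatever numbers you pick, decide the threshold before running the test so the data can’t be argued into the answer you wanted.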

What’s the verdict?

There’s no doubt fake door testing is a powerful research method that can help UX and product designers make informed decisions about new features and products.

By testing the demand for a feature or product before investing resources into building it, designers can save time and money and ensure that they are building features that users actually want and need.

But remember: if used too often, on the same individuals, and without moderation, this method could lead to some users becoming frustrated.

There’s nothing more honest than a person’s actions — especially if a person doesn’t want to be as brutally honest as you want them to be. As with everything in UX, it’s not about choosing one methodology and using that in isolation.

UX is the sum of its parts, and by using fake door testing along with other methodologies, you and your team are much more likely to get an accurate portrayal of user behaviour.

This post was originally shared on UX Collective, over at Medium.