Create a Prototyping Plan

Learning Objectives

After completing this unit, you’ll be able to:

  • Identify the key considerations for planning a prototype testing session.
  • Run a successful prototype testing session.
  • Synthesize your prototype test results.

Sounds Like a Plan

The Cloud Kicks team is now all set to build a prototype. They’ve identified the question they’re looking to answer and the format most likely to answer it with the least amount of work. They’ve even defined the feedback and signals they’ll look for to gather insights.

The next thing for the Cloud Kicks strategy designer to consider is how they’ll gather feedback on their prototype. Designing the prototype session is a small design challenge of its own. Here are some key considerations when planning prototype testing sessions.

  • Prototype audience: Who can act as a proxy for your customers or users and see the promise in a rough, scrappy experience? Whose perspective makes them most likely to have strong opinions on the question you’re answering? How will you recruit participants for testing?
  • Payment: Make sure to compensate your testing participants for their time. If your prototype testing experience requires them to do emotional labor, make sure to value that work. Budget can be a constraint for the scale of your prototype testing plan.
  • Group or 1:1 testing: Watching groups of people use your prototypes offers the benefit of hearing participants’ conversations about it, and it feels more efficient because you can hear from more people in less time. However, beware of group dynamics that may affect participants’ ability to offer your team independent feedback. You can mitigate this by creating a scorecard for participants to fill out as individuals before leading a group discussion. Also, the larger the group, the more difficult it will be to hear from every participant and the less time they will get with a low-fidelity prototype. You may want to build multiple prototypes to enable parallel use.
  • Setting: Where will you conduct prototype testing? Make sure your setting is chosen intentionally for its function and feeling. Will testing be done virtually or in person? If it’s remote, which platform will you use? If it’s in person, do you want to create a casual space for discussion or a formal setting for analysis? Do you want the testing environment to mimic the real-world environment where your product or service will be used? Do you need special elements in the testing environment, like furniture that cues participants on how you want them to engage or a noise level that supports focus? If your user base is as broad as “the general public,” do you want participants to discover your team at a pop-up in a public place, rather than being invited to a closed feedback session?
  • Sessions and iterations: Consider how many testing sessions you’ll run, and whether you’ll make changes to the prototype along the way or use the same version from beginning to end. If you change the prototype along the way, you’ll leave the prototyping stage with a better model than you started with. But you won’t build quantitative evidence around people’s feedback, since each session can potentially use different stimuli. Running prototype testing is hard work for teams. Make sure you don’t schedule sessions too close together, or it will be hard to process what you’re learning.
  • Materials: In addition to the prototype itself, you may need to gather or create materials that help the testing and feedback collection processes go smoothly and easily. Do you want participants to fill out a scorecard or feedback survey? Do you want to create a feedback grid with some expected responses so you can just add a hash mark when you hear or observe one? (One way to keep such a grid is sketched after this list.) Do you plan to record the session or take pictures? At a minimum, you’ll want to have participants sign a release form that explains how you’ll use their feedback and images.
  • Team roles: Who will recruit and schedule participants? Who will facilitate, and how will you capture insights during the sessions? What will you do with new questions that come out of these sessions? Do you need a timekeeper or an observer of signals like body language? Will someone need to refresh supplies or iterate on prototypes mid-session or between sessions? Who will ensure the comfort of participants if they need water or restrooms? Does anyone need to run a camera or audio recorder?
  • Show flow: What is the agenda for each session? How much will you tell participants about what you’re testing before they see it? Will there be multiple activities, such as using the prototype, providing feedback on it, and having a group discussion or having an interview? How long will each activity take, and how will you move a group through them? Will you guide the users during testing, or will they be allowed to explore independently? Make sure to leave plenty of time for each activity and in between activities.
  • Post-test processing: When will you synthesize insights as a team, and how will you decide what feedback to act on and when? How will you know what feedback to leave behind and why?
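
If your team prefers a lightweight digital version of that feedback grid, here’s a minimal sketch in Python. It assumes you track a few expected responses as tally counts and keep one-off surprises in a separate list; the theme names and observations shown are hypothetical examples, not from Cloud Kicks.

```python
# A minimal sketch of a digital feedback grid, assuming your team tallies
# expected responses and logs one-off surprises separately. The theme names
# and observations below are hypothetical examples.
from collections import Counter

expected_themes = [
    "Understands the core concept",
    "Confused about where to start",
    "Wants to share it with a colleague",
]

tally = Counter()   # hash marks per expected response
surprises = []      # unexpected comments worth discussing later

def record(observation: str) -> None:
    """Add a hash mark if the observation matches an expected theme;
    otherwise keep it as a surprise."""
    if observation in expected_themes:
        tally[observation] += 1
    else:
        surprises.append(observation)

# Example use during a session
record("Confused about where to start")
record("Confused about where to start")
record("Asked if they could try it with their own team")  # surprise

print(tally.most_common())
print(surprises)
```

Even a simple tally like this keeps the group discussion anchored in what participants actually said rather than in the loudest voice in the room.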

A well-thought-out plan can mean the difference between agile learning and anecdotal opinions, between adaptive iteration and chaotically chasing the latest direction.

Now that you’ve built your prototype, planned your testing sessions, and scheduled your testing participants, it’s time for prototype testing.

Put It to the Test

Prototype testing is, no surprise, the process of testing your prototype with real users. While you’ll do some learning during the prototype-building process as you encounter and solve issues, the bulk of prototype learning comes from the feedback and observations you gather in prototype testing.

If you’ve ever done UX user testing before, you may be picturing a process where a user is assigned a task to complete, and the evaluation of the prototype is based on usefulness, clarity, usability, and successful task completion. Since the nature of the questions in prototyping for strategy is more abstract, and the format of the prototypes can vary radically, the testing sessions need a more varied and dynamic approach too.

You’ll still get a chance to see how users interact with your prototype, and learn from their experiences and reactions. You’ll learn where your strategic concept resonates and what aspects of your thinking need improvement. Keep an eye out for any surprises your team or your testing participants express. These surprises unlock deeper understanding of expectations and needs, and offer another opportunity to ask questions and discover nuances.

When running a session, make sure to:

  • Make participants feel comfortable: If participants aren’t comfortable, they won’t share their thoughts freely. Make sure they understand that there are no wrong answers, and you’re not trying to validate a decision or design. You’re legitimately curious about their experience and their thoughts and sentiments about it. Some participants even want assurance that they won’t hurt designers’ feelings. Create psychological safety within a group session by encouraging all types of feedback and being a great listener. Don’t tolerate any disrespect between participants, either.
  • Be present: Make sure the facilitator is not multitasking or distracted during a prototype testing session. Turn off notifications and ask participants to do so as well. You have a limited time together, so make the most of it!
  • Engage your whole team: There are lots of roles to play during prototype testing. Even after you’ve assigned roles like facilitator, timekeeper, and camera operator, make sure everyone on your team knows what part they play. And remember that people from each discipline will hear different implications in the same feedback, so don’t hesitate to assign several people listening roles.
  • Record insights in real time: Details can be hard to remember accurately later. Make sure you have a dedicated note-taker, a self-reporting system, or technology like video recording, audio recording, or transcription to capture the conversations.
  • Collect storytelling assets: You’re going to report insights from prototype testing back to your extended team and stakeholders, so make sure you take pictures of each participant and accurately record key quotes that exemplify what you’re learning.

Synthesize Your Results

As with any research activity, you collect raw data in a prototype testing session. That data may be qualitative, quantitative, or a combination of both. The way to make that data valuable to your project is to synthesize it.

Just as you did in Research for Strategy Design, and again after ideation sessions, you take the individual ideas you collected—in this case, pieces of feedback and observations—and write the most important ones on sticky notes so you can cluster them by theme. 

The sentiments that were repeated most often are trustworthy guides. They can be about what’s working or what’s not working, and the number of times they came up shows their importance.

But don’t throw away the one-off comments or sentiments. Sometimes a participant will remark about something your team has been feeling unresolved about, provide an insight that represents a blind spot your team has had, or come up with an idea that inspires you. Judge each statement you collect based on what feels important to your team and to the participants.
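
As one illustration only: if your team captures feedback digitally, a rough sketch like the following (in Python, with hypothetical theme tags and notes) mirrors the sticky-note exercise by clustering feedback by theme and surfacing how often each theme came up, while keeping the one-off comments visible.

```python
# A minimal sketch of post-session synthesis, assuming each piece of feedback
# has already been tagged with a theme. Themes and notes are hypothetical.
from collections import defaultdict

notes = [
    ("community", "I'd come back if my coworkers joined too"),
    ("community", "Felt awkward showing up alone"),
    ("logistics", "Hard to find the room in the tower"),
    ("community", "Loved meeting people from other floors"),
]

clusters = defaultdict(list)
for theme, note in notes:
    clusters[theme].append(note)

# Most-repeated themes first; one-off themes stay visible at the bottom.
for theme, items in sorted(clusters.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme} ({len(items)} mentions)")
    for item in items:
        print(f"  - {item}")
```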

Once you’ve done one round of synthesis looking for key insights, go back to the initial question you were trying to answer with your prototype. Did you get a clear answer? If not, did you learn anything that can help you adjust the testing plan or prototype design to answer that question?

You may need to run a prototype testing session multiple times to feel confident that you’ve exposed the prototype to enough people and gotten a clear answer to the question.

Create an action plan to address feedback that can’t be addressed quickly and easily. You may need to go back to research or ideation if you learn that your foundational assumptions were incorrect. (In which case, you’ve saved your organization a lot of time and money by running that prototype—celebrate that!) Or, depending on how abstract your question is, your prototype may come to a natural conclusion, and you can simply take the insights forward as you develop your concept.

For example, remember our live action event + survey prototypes? When one customer was exploring proximity-based community building, they offered a free yoga class (a collective activity) and only advertised it within a large office tower (proximity). They learned a lot about community building in their specific context, and didn’t need to evolve their prototype. Instead, they took what they learned and started brainstorming community-building programs that were closely tied to their core business. For them, it was never really about yoga.

If your prototype is an early version of the product or service you’re proposing, you likely want to keep iterating it. Every round of sessions is likely to answer your key question, and raise others that will require a bit more fidelity to answer, until you’re satisfied that your strategy is sound. 
