Reassess and Scale Your Experiment
After completing this unit, you’ll be able to:
- Describe the next steps after the initial experiment.
- Explain how to reassess and scale your experiments.
Employ Your Experiment Results
If you’ve had the opportunity to run your first experiment using this content as your guide, congratulations. In this unit, we explain how to take the successes or learning opportunities from your first experiment and apply them in a way that’s valuable for your business.
It is important to remember that the outcome from your first experiment sets the tone for what to do next.
Change Your Approach
You may find that the experiment hasn’t quite worked out. That’s okay. If you think your idea is still potentially valuable, you may just need to tweak the way you approached and executed the experiment. If so, consider changing one or more of the following.
- The test group you used in the experiment (for example, the number of users, their demographics, and so on)
- The configuration of the idea
- The way in which the test was set up
Let’s say you’ve proven your hypothesis. Great, but what happens if you make some adjustments to your experiment? What happens when you change the group you tested or the experiment setup? Do these changes provide opportunities to scale? Let’s look at how we’ve run experiments to find new Chatter solutions for customers.
Case Study: Experimenting with Chatter
Deploying Salesforce features with some customers can be complex because of legal and regulatory constraints. In this use case, we had a customer who needed a different Chatter solution, so we decided to experiment with various configurations. Before we started, we asked the following questions to help form the hypothesis.
- How can we lock down profiles for certain users?
- How can we use groups for cross-collaboration and limited collaboration?
- How can we ensure certain users can moderate Chatter?
We configured environments that supported the customer’s desired outcomes, tested them in a sandbox with a subset of users, and reviewed those users’ experiences.
Then we changed our approach. In all, the configuration was mapped out three times before the team decided on the optimal one. Each time, the team learned something new: the optimal group setup, how the features could work for the customer’s sales reps, and the impact of the configuration on other areas. The team learned what can and cannot work.
Had the team not run those experiments, they might have wasted time and money developing a solution that did not fit the customer’s business purpose, the configuration of their bespoke environment, or even regulatory requirements.
Now the team was in a better position to scale their experiment globally.
If the hypothesis has been proven but it’s not a landslide victory, consider rerunning and expanding the experiment. You likely kept your initial experiment small and simple. Now is when you begin to think bigger.
If you have proven the value yet need more clarification, you can choose different paths. In expanding your experiment, you can:
- Choose to expand the test group.
- Look at similar or related datasets to determine whether the value extends beyond the base case.
- Rerun the exact same experiment under different conditions (that is, more users, different demographics, and so on).
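How much bigger should the expanded test group be? The unit doesn’t prescribe a number, but classical statistics offers a common rule of thumb: the sample size needed per group to reliably detect a given change in a success rate. Here’s a minimal sketch using hypothetical satisfaction numbers (all figures below are illustrative assumptions, not from the case study).

```python
import math

def sample_size_per_group(p1, p2):
    """Approximate users needed per group to detect a change in a
    satisfaction (or conversion) rate from p1 to p2, using the
    standard two-proportion formula at 95% confidence and 80% power."""
    z_alpha = 1.96  # two-sided 95% confidence
    z_beta = 0.84   # 80% power
    pooled_variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * pooled_variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical goal: detect a lift from 70% to 80% satisfaction.
print(sample_size_per_group(0.70, 0.80))  # → 291 users per group
```

Notice that smaller expected effects require much larger groups, which is one reason a rerun with “more users” often makes a marginal result conclusive.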
Case Study: Mocking Up a Chatbot
Here’s a Salesforce customer challenge: Its current business involved hands-on service renewals that took time away from its customers’ core operations. Although its customers had to engage to continue the relationship, survey results and interviews showed that they did not want face-to-face conversations. So the business put together a small budget to run a chatbot pilot.
The customer had the idea to invest the funds in building a chatbot solution on the Salesforce platform. Building it would have taken the customer’s time, their customers’ time, and the Salesforce team’s development time, all at a cost. Salesforce’s goal was to reduce the time, cost, and risk through experimentation. First, we defined the hypothesis.
By introducing a chatbot into the Small Law Sales Renewals Channel, we can at least maintain the current level of customer satisfaction compared with in-person interactions.
Once the hypothesis was defined, we outlined a plan of action to test the hypothesis.
- Obtain existing data on the current face-to-face experience.
- Obtain the sales script.
- Build mock-ups of the Chatbot interaction in PowerPoint®.
- Arrange customer interviews.
- Relay the experience and harness feedback.
The experiment proved valuable. The feedback showed no drop in customer satisfaction ratings compared with traditional in-person interactions, and in some areas ratings even improved.
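The unit doesn’t show how the team compared the two sets of ratings. One common way to check whether a difference in survey scores is more than noise is a simple difference-of-means check with Welch’s t statistic. This sketch uses entirely hypothetical 1–5 ratings, not the case study’s actual data.

```python
from statistics import mean, stdev
from math import sqrt

def compare_groups(control, variant):
    """Compare mean satisfaction ratings (e.g. 1-5 survey scores) for the
    existing experience (control) vs the experiment (variant).
    Returns the difference in means and Welch's t statistic; a t near 0
    suggests the difference is likely noise."""
    diff = mean(variant) - mean(control)
    se = sqrt(stdev(control) ** 2 / len(control)
              + stdev(variant) ** 2 / len(variant))
    return diff, diff / se

# Hypothetical ratings: in-person renewals vs the chatbot mock-up.
in_person = [4, 4, 3, 5, 4, 3, 4, 4]
chatbot = [4, 5, 4, 4, 5, 3, 4, 5]
diff, t = compare_groups(in_person, chatbot)
print(f"mean difference {diff:+.2f}, t = {t:.2f}")
```

With numbers like these, the chatbot group scores slightly higher, but the t statistic is small, which is consistent with the case study’s “no change in satisfaction” conclusion.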
Through this experiment, we learned a lot, all while saving the business considerable time and budget. The experiment clarified customers’ view of chatbots in renewal experiences while providing extremely valuable feedback on ways to optimize the experience.
Scaling was an important goal. The idea had been proven, but only at a basic level. Rather than developing a full-blown solution, we recommended that the customer rerun the experiment with a larger test group and consider separate experimental use cases, such as password resets.
Calling It a Day
When it comes to scaling, you might find that your idea doesn’t work in its current form. Just remember, if your experiment doesn’t prove your idea, that may not mean the end. Try to get creative before you close it down for good. It may be the way you framed the experiment or the people you engaged that led to the results. Consider whether you inadvertently introduced bias. Try again if needed. But don’t be afraid to let go and recognize it as a lesson learned.
So, you now know how to get started experimenting in your business. You are able to build a strong hypothesis around your ideas, and you understand what it means to execute on your ideas with an experiment.
Good luck and don’t forget… you can #TestAnyIdea