Testing and launching an experiment


Once you've written your code, it's a good idea to test that each variant behaves as you'd expect. If you discover a bug in your implementation after you've launched the experiment, you lose days of effort because the results can no longer be trusted.

The best way to do this is to add an optional override to your release conditions. For example, you can create an override that assigns a user to the test variant if their email is your own (or that of someone on your team). To do this:

  1. Go to your experiment feature flag.

  2. Ensure the feature flag is enabled by checking the "Enable feature flag" box.

  3. Add a new condition set with the condition email = your_email@domain.com. Set the rollout percentage for this set to 100%.

    • In cases where email is not available (such as when your users are logged out), you can use a parameter like utm_source and append ?utm_source=your_variant_name to your URL.
  4. Set the optional override for the variant you'd like to assign these users to.

  5. Click "Save".

Once you've tested that it works, you're ready to launch your experiment.

Notes:

  • The feature flag is activated only when you launch the experiment, or if you've manually checked the "Enable feature flag" box.
  • While the PostHog toolbar enables you to toggle feature flags on and off, this only works for active feature flags and won't work for your experiment feature flag while it is still in draft mode.

Viewing experiment results

While the experiment is running, you can see results on the experiment details page.


Ending an experiment

After you've analyzed your experiment metrics and you're ready to end your experiment, you can click the "Ship a variant" button on the experiment page to roll out a variant and stop the experiment. This button only appears once the experiment has reached statistical significance.


If you want more precise control over your release, you can also set the release conditions for the feature flag yourself and stop the experiment manually.

Beyond this, we recommend:

  1. Sharing the results with your team.

  2. Documenting conclusions and findings in the description field of your experiment. This helps preserve historical context for future team members.

  3. Removing the experiment's and losing variant's code (see the sketch after this list).

  4. Archiving the experiment.
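
For step 3, cleanup usually means deleting the flag check and keeping only the winning code path. A minimal sketch, assuming the same hypothetical `my-experiment` flag with a winning `test` variant:

```typescript
import posthog from 'posthog-js'

// Before cleanup: the app still branches on the experiment flag.
function renderOnboardingDuringExperiment(): void {
  const variant = posthog.getFeatureFlag('my-experiment')
  if (variant === 'test') {
    renderVideoTutorial() // winning variant
  } else {
    renderTextTutorial() // losing variant
  }
}

// After cleanup: the flag check and the losing variant are gone,
// and the winning experience is the only code path.
function renderOnboarding(): void {
  renderVideoTutorial()
}

// Hypothetical render functions for illustration.
function renderVideoTutorial(): void { /* video onboarding */ }
function renderTextTutorial(): void { /* text onboarding */ }
```

Once nothing in your code references the flag, removing it keeps both your codebase and your flag list tidy.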

Further reading

Want to learn more about running successful experiments in PostHog? Check out the experiment tutorials in the PostHog docs.
