Identify path to enable A/B test "success" case from petition/signup submissions #3573

Closed
alanmoo opened this issue Aug 26, 2019 · 9 comments

Comments

alanmoo (Contributor) commented Aug 26, 2019

There's a desire to A/B test pages that have petitions, and perhaps signups. In order to do this, the Wagtail Experiments framework looks for a specific endpoint to be hit, identifying a "successful" test. Let's figure out what needs to happen, and what questions still need to be answered in order for us to come up with a reasonable solution to this.

User story: "As an advocacy team member, I want to create multiple versions of a page and figure out which one results in more actions taken via the CTA on the page".

Short term: This is just the "sign our petition" button that triggers it.
Longer term: We can trigger this in other ways, perhaps using a new streamfield component.

For estimating, note that closing this issue only requires opening specific implementation tickets, not actually doing the work.

alanmoo added this to the Sep 3 milestone Aug 26, 2019
alanmoo changed the title from Enable A/B test "success" case from petition/signup submissions to Identify path to enable A/B test "success" case from petition/signup submissions Aug 26, 2019
gideonthomas modified the milestones: Sep 9, Icebox Aug 28, 2019
alanmoo modified the milestones: Icebox, Nov 25 Nov 8, 2019
alanmoo modified the milestones: Nov 25, Dec 9 Nov 26, 2019
Pomax (Contributor) commented Dec 4, 2019

Taking into account our idea to switch the JSX "thank you" update to a real "thank you" page, I would recommend we move ahead with this by using wagtail-experiments, combining subroutes for our pages with the JavaScript trigger functionality that wagtail-experiments supports:

Completing the CTA on some f.moz/campaigns/some-campaign page would redirect users to f.moz/campaigns/some-campaign/thank-you, with the content for that page determined "at random" by wagtail-experiments.

This would involve:

  • adding wagtail-experiments to the app list
  • adding a test-completion route to our urls.py
  • adding "thank you" subroutes to the (bannered) campaign/opportunity models (we've already done subroutes before, for /blog/ routing; see the sketch after this list)
  • updating our JSX finalizing buttons to trigger two things, in order:
    • save the campaign/opportunity identifier in sessionStorage
    • redirect the user to the thank-you page. This is where we, ideally, tap into wagtail-experiments to decide which (if more than one) thank-you page to load for them (that said, it's not 100% clear how to do this atm, so I've filed torchbox/wagtail-experiments#20 ("Additional documentation?") in the hopes of finding out whether this can be done)
  • the thank-you page should do two things:
    • consume (i.e. read + remove) the sessionStorage identifier. If there is none, the user did not get to this URL by finalizing a CTA and they should be redirected (either to the homepage or the campaign base page, probably the latter?). If there is an identifier:
    • trigger a call to the test-completion URL with the campaign/opportunity name as part of the URL
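
To make the routing pieces concrete, here's a minimal sketch, assuming Wagtail 2.x's RoutablePageMixin and the record_completion view that (as far as I can tell from its README) wagtail-experiments provides for JS-triggered goals; the URL pattern, model, and template names are placeholders rather than our actual code:

```python
# urls.py: one generic completion endpoint. The capture group is the
# experiment slug, so every page under test reports to the same route.
from django.conf.urls import url
from experiments.views import record_completion  # assumed per the wagtail-experiments README

urlpatterns = [
    # ... existing routes ...
    url(r'^experiments/complete/([^/]+)/$', record_completion),
]

# models.py: a "virtual" thank-you subroute on the campaign page, so no
# extra page needs to exist in the Wagtail page tree.
from django.shortcuts import render
from wagtail.contrib.routable_page.models import RoutablePageMixin, route
from wagtail.core.models import Page

class CampaignPage(RoutablePageMixin, Page):
    @route(r'^thank-you/$')
    def thank_you(self, request):
        # Served at /campaigns/<slug>/thank-you/; the JSX side is what
        # consumes sessionStorage and then calls the completion URL above.
        return render(request, "campaign/thank_you.html", {"page": self})
```

With routes like these, the JSX buttons only need to store the identifier and redirect; the thank-you page's script can then GET /experiments/complete/<experiment-slug>/ when (and only when) the sessionStorage marker is present.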

An alternative would be to create real "ThankYouPage" models, one for each test case, then create real pages in the CMS for each of those models so that they have real wagtail-managed URLs, and then set up wagtail-experiments A/B tests to manage which pages should all count as "possible pages to serve for URL xyz".

The problem I see with this alternative approach is one of workload: in addition to the several templates required in the first approach, we'd also have to define new page models for each of our tests, and then create concrete pages for each of those models. We would then either need to create one set of "possible thank you pages" for every single campaign/opportunity, to allow server-side resolution of "which thing was this a thank you for?", or a single set that does something similar to the first approach but relies on URL query arguments to let the server find the appropriate page (which means exposing some form of page id in the URL, which feels fragile).
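
For contrast, a sketch of what the alternative implies: a real page model whose instances editors create in the CMS, one per thank-you variant (again, every name here is illustrative rather than existing code):

```python
# models.py: the alternative approach's concrete page model. Each A/B
# variant is a real ThankYouPage created under its campaign in the admin.
from wagtail.admin.edit_handlers import FieldPanel
from wagtail.core.fields import RichTextField
from wagtail.core.models import Page

class ThankYouPage(Page):
    # Only allow these under campaign pages (illustrative constraint).
    parent_page_types = ["CampaignPage"]

    body = RichTextField(blank=True)

    content_panels = Page.content_panels + [
        FieldPanel("body"),
    ]
```

Every new test then means more of these concrete pages (and possibly more models) to create and maintain, which is where the workload concern comes from.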

Pomax (Contributor) commented Dec 4, 2019

/cc @alanmoo @cadecairos I'd love to hear your thoughts on this.

alanmoo (Contributor, Author) commented Dec 4, 2019

> adding a test-completion route to our urls.py

Is this a generic endpoint that the form submission on every page being tested would hit?

> save the campaign/opportunity identifier in sessionStorage

I'm not sure I understand what this is doing. If it's only to make sure we don't show the thank-you page to someone who hasn't actually submitted information...I'm not convinced that's a problem we need to solve.

> trigger a call to the test-completion URL with the campaign/opportunity name as part of the URL

This feels like a replication of what wagtail-experiments already does....though reading your alternative solution, I think that's kind of your point.

If I understand the proposal correctly, we'd end up with some generic Thank you page that isn't necessarily part of the page tree, but is only faked to exist when signing, by way of the RoutablePageMixin?

As to your comment on workload with the alternative approach: Are you saying that the extra work would be creating a Thank You page as a child of each petition? Or is it the version of each petition having its own Thank You page that would be cumbersome?

I was imagining something like the alternative, but I want to fully understand your approach. I think something that might help here would be an ASCII tree representing the Wagtail Pages that would need to be created for your proposal vs the alternative. That is to say, something like this, where each item is a real page in the Wagtail admin:

campaigns
|-campaign1 (url users get sent to)
|  |-thank-you
|-campaign1b (alternative version)
|-campaign2

Pomax (Contributor) commented Dec 4, 2019

> Is this a generic endpoint that the form submission on every page being tested would hit?

Correct. Wagtail-experiments allows you to define a URL for goals rather than pages.

> I'm not sure I understand what this is doing. If it's only to make sure we don't show the thank-you page to someone who hasn't actually submitted information...I'm not convinced that's a problem we need to solve.

No, it's to make sure that people hitting our thank you URL without ever submitting anything don't skew our metrics: we want to know the efficacy of our thank you pages within a specific user flow. It's not about "showing the thank you" page - anyone happening to come across one is perfectly allowed to load it - but about the fact that a thank you page loaded in isolation should not pollute our A/B statistics: a thank you page getting loaded without being prompted by a user completing a CTA action is not part of the test, and so should not hit the wagtail-experiments stats-recording endpoint.

Even so, we could still show the page itself (although there's very little value in doing so: what would we be thanking them for? Better to move them to the campaign page or homepage so they get something meaningful and can perhaps roll into our call-to-action activity instead).

> This feels like a replication of what wagtail-experiments already does....though reading your alternative solution, I think that's kind of your point.

Right, there is no "and then we do our own thing" so much as this is one of the two documented ways in which wagtail-experiments is meant to be used. I'm just listing it so we know what happens at each step.

I'll write a follow-up that goes into more detail on which files, what code, and what kind of process is required for both approaches, but I need to work that out further before I can condense it into something that fits in an issue.

alanmoo (Contributor, Author) commented Dec 5, 2019

> but about the fact that a thank you page loaded in isolation should not pollute our A/B statistics: a thank you page getting loaded without being prompted by a user completing a CTA action is not part of the test, and so should not hit the wagtail-experiments stats-recording endpoint.

I just checked by creating a test experiment on staging: if you hit the goal page without having visited the page that you're experimenting on, it doesn't count as a conversion. So it seems like this would be extra work to prevent people who...manually guess the URL of the thank-you page after hitting the landing page?

Pomax (Contributor) commented Dec 5, 2019

More like "people who copy-paste links" and "crawlers". That said, the sessionStorage note only applies to hitting the "goal" endpoint URL manually (e.g. via a JS call), where ruling out false positives (as far as I could tell) is still important. More detailed story still forthcoming.

alanmoo modified the milestones: Dec 9, Jan 20 Dec 10, 2019
alanmoo modified the milestones: Jan 20, Feb 10 Jan 16, 2020
Pomax (Contributor) commented Feb 4, 2020

This might need reevaluation.

Pomax (Contributor) commented Feb 10, 2020

Reviewing the state of wagtail-experiments: the project technically supports only up to Wagtail 2.3, and is known to break on Django 3 (which we'll want to move to sooner rather than later), so we may want to hold off on this entirely until we can get clarity on whether this solution is going to stay maintained.

Pomax mentioned this issue Feb 10, 2020
cadecairos modified the milestones: Feb 10, Icebox Feb 11, 2020
Pomax (Contributor) commented Mar 31, 2020

Closing this issue as it has gone stale. If this is still something we want to do, let's file a new issue with all the information necessary for triaging.

Pomax closed this as completed Mar 31, 2020