Identify path to enable A/B test "success" case from petition/signup submissions #3573
Taking into account our idea to switch the JSX "thank you" update to a real "thank you" page, I would recommend we move ahead by using "Wagtail Experiments", combining subroutes for our pages with the JavaScript trigger functionality that wagtail-experiments supports: running through the CTA on a page variant would fire the goal trigger.
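As a rough sketch of that JavaScript trigger (all names here are hypothetical, and the completion-endpoint path is an assumption about how wagtail-experiments' goal-recording view would be mounted in our urlconf — check the project's README for the real wiring):

```javascript
// Hypothetical helper: build the goal-completion URL for an experiment slug.
// The "/experiments/complete/<slug>/" path is an assumption about how the
// record-completion view gets mounted in the urlconf; adjust to match yours.
function completionUrl(experimentSlug) {
  return `/experiments/complete/${encodeURIComponent(experimentSlug)}/`;
}

// Called from the petition/signup submit handler, only after the submission
// succeeds, so that only real CTA completions register as conversions.
function recordGoal(experimentSlug) {
  return fetch(completionUrl(experimentSlug));
}
```

The key point is that `recordGoal` runs only in the success path of the form submission, which is what makes the "success" case measurable per variant rather than per page load.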
An alternative would be to create real "ThankYouPage" models, one for each test case, then create real pages in the CMS for each of those models so that they have real wagtail-managed URLs, and then set up wagtail-experiments A/B tests to manage which pages count as "possible pages to serve for URL xyz". The problem I see with this alternative approach is one of workload: in addition to the several templates required in the first approach, we'd also have to define new page models for each of our tests, and then create concrete pages for each of those models. We'd then either need to create one set of "possible thank you pages" for every single campaign/opportunity, to allow server-side fetching of the correct "which thing was this a thank you for?", or single sets that do something similar to the first approach but rely on URL query arguments to let the server find the appropriate page (which means exposing some form of page id on the URL, which feels fragile).
/cc @alanmoo @cadecairos I'd love to hear your thoughts on this.
Is this a generic endpoint that the form submission on every page being tested would hit?
I'm not sure I understand what this is doing. If it's only to make sure we don't show the thank-you page to someone who hasn't actually submitted information...I'm not convinced that's a problem we need to solve.
This feels like a replication of what wagtail-experiments already does... though reading your alternative solution, I think that's kind of your point. If I understand the proposal correctly, we'd end up with some generic endpoint.

As to your comment on workload with the alternative approach: are you saying that the extra work would be creating a Thank You page as a child of each petition? Or is it the version of each petition having its own Thank You page that would be cumbersome? I was imagining something like the alternative, but I want to fully understand your approach.

I think something that might help here would be an ASCII tree representing the Wagtail Pages that would need to be created for your proposal vs. the alternative, where each item is a real page in the Wagtail admin.
Correct. Wagtail-experiments allows you to define a URL for goals rather than pages.
No, it's to make sure that people hitting our thank-you URL without ever submitting anything don't skew our metrics: we want to know the efficacy of our thank-you pages within a specific user flow. It's not about hiding the thank-you page: anyone happening to come across one is perfectly allowed to load it. The point is that a thank-you page loaded in isolation should not pollute our A/B statistics: a thank-you page that gets loaded without a user completing a CTA action is not part of the test, and so should not hit the wagtail-experiments stats-recording endpoint, even if it still shows the page itself. (Although there's very little value in doing so: what would we be thanking them for? Better to move them to the campaign or homepage so they get something meaningful and can perhaps roll into our call-to-action activity instead.)
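A minimal sketch of such a gate (the flag name, the sessionStorage mechanism, and the function names are illustrative assumptions, not anything specified in this thread): the CTA submit handler sets a one-shot flag, and the thank-you page only pings the stats endpoint when that flag is present.

```javascript
// Hypothetical one-shot flag linking a CTA submission to the thank-you view.
const FLAG = "cta-completed";

// Called by the CTA submit handler once the submission succeeds.
// `storage` is injected (e.g. window.sessionStorage) so this is testable.
function markCtaCompleted(storage) {
  storage.setItem(FLAG, "1");
}

// Called by the thank-you page before hitting the stats-recording endpoint.
// Clears the flag so reloads and direct visits never double-count.
function shouldRecordConversion(storage) {
  const completed = storage.getItem(FLAG) === "1";
  storage.removeItem(FLAG);
  return completed;
}
```

With this in place, crawlers and copy-pasted links load the page but never trip the experiment's conversion count.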
Right, there is no "and then we do our own thing" step; this is one of the two documented ways in which wagtail-experiments is meant to be used. I'm just listing it so we know what happens at each step. I'll write a follow-up that goes into more detail on which files, what code, and what kind of process is required for both approaches, but I need to work that out further before I can condense it into something that fits in an issue.
I just checked by creating a test experiment on staging: if you hit the goal page without having visited the page that you're experimenting on, it doesn't count as a conversion. So it seems like this would be extra work to prevent people who... manually guess the URL of the thank-you page after hitting the landing page?
More like "people who copy-paste links" and "crawlers".
This might need reevaluation.
Reviewing the state of wagtail-experiments: the project only supports up to Wagtail 2.3, and is known to break on Django 3 (which we'll want to move to sooner rather than later), so we may want to hold off on this entirely until we get clarity on whether this solution is going to stay maintained.
Closing this issue as it has gone stale; if this is still something we want to do, let's file a new issue with all the information necessary for triaging.
There's a desire to A/B test pages that have petitions, and perhaps signups. In order to do this, the Wagtail Experiments framework looks for a specific endpoint to be hit, identifying a "successful" test. Let's figure out what needs to happen, and what questions still need to be answered in order for us to come up with a reasonable solution to this.
User story: "As an advocacy team member, I want to create multiple versions of a page and figure out which one results in more actions taken via the CTA on the page".
Short term: This is just the "sign our petition" button that triggers it.
Longer term: We can trigger this in other ways, perhaps using a new streamfield component.
For estimating, note that closing this issue only requires opening specific implementation tickets, not actually doing the work.