Adaptive response sampling for surveys #25722
Comments
ok, I spent some time thinking about this today and this is an approach that would work.

Changes to Survey data model

posthog-js changes

Distributing survey response limits over a time period

S = <total number of responses accepted>
D = <duration>
n = <number of days in duration>
u = <per-day response count>

If a customer wants to collect 500 responses over 30 days:
We would evenly distribute the number of responses we collect over the time period, giving us u = 500 / 30 ≈ 17 responses per day (rounded up). For example, on day 1 we hope to get 17 responses, and on day 2 we hope to have a total count of 34 responses to the survey.
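A small worked sketch of that arithmetic (the helper names are hypothetical, not code from any PR):

```python
import math


def daily_quota(total_responses: int, duration_days: int) -> int:
    """u = per-day response count: S / n, rounded up."""
    return math.ceil(total_responses / duration_days)


def cumulative_target(total_responses: int, duration_days: int, day: int) -> int:
    """Total responses we hope to have collected by the end of `day`."""
    return daily_quota(total_responses, duration_days) * day


# The example above: S = 500 responses over n = 30 days.
print(daily_quota(500, 30))            # 17
print(cumulative_target(500, 30, 2))   # 34 (day-2 target)
print(cumulative_target(500, 30, 30))  # 510 -- slightly over 500, hence the guard clause below
```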
Yes, the final cumulative number (17 × 30 = 510) is higher than 500, but we have a guard clause in the section below.

Turning off survey response collection if over the daily limit

We will run a periodic task, every 30 minutes, which will enable the flag as long as:

Once either of these conditions is false, we will turn off the flag.

Caveats
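A minimal sketch of what that 30-minute task could look like, assuming hypothetical helpers `days_elapsed`, `get_response_count`, and `set_flag_enabled`, plus `cumulative_target` from the sketch above; none of this is the actual PostHog implementation:

```python
def check_survey_response_limit(survey) -> None:
    """Runs every 30 minutes; keeps the survey's flag on only while both
    conditions hold, and turns the flag off as soon as either fails."""
    day = days_elapsed(survey.start_date) + 1                      # 1-indexed current day
    target = cumulative_target(survey.response_limit, survey.duration_days, day)
    collected = get_response_count(survey)                         # responses received so far

    within_duration = day <= survey.duration_days                  # survey still running
    under_daily_limit = collected < target                         # today's budget not used up

    set_flag_enabled(survey.linked_flag, within_duration and under_daily_limit)
```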
Nice! One caveat, related to (1): I'm not sure if toggling between 0 and 100% would work well here. The problem is, this effectively means we're hiding / showing the survey to all users at any time. This doesn't really help with the catch-up mechanism, because we don't know in advance how many users will respond to the survey.

For example, if a customer wants 500 responses, but there are only 20 people who will respond when they see the survey (this is unknown to all of us at time of creation, but probable in the real world), then showing the survey to everyone would mean we probably get [17-20] responses on day one, roll out to 0, get [0-3] responses on day two when we roll out to 100% again, and get none from day 3 onwards, because all the people who would've responded have responded. This problem is much less stark when we're dealing with lots of new incoming users, though.

I'd propose instead adjusting the throttling rollout based on n (= 30 days) as well. I.e. every day, by default, we increase the rollout % by (100/n). This ensures new users start seeing the survey every day, even if no new users are coming from the client. And now, if the expected number of responses is lower than what it should be today, we increase it further, based on how close we are to the target of 500. If the expected number of responses is higher, we don't increase the rollout % and keep it where it is.

The advantage is clients don't randomly see and then not see surveys based on other people's behaviours. It's sort of how I've seen @annikaschmid run the product feedback surveys internally too, and it seems to work well.
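A rough sketch of that daily adjustment; the function name and the weighting of the catch-up step are assumptions for illustration, not something settled in this thread:

```python
def next_rollout_percentage(
    current_rollout: float,   # today's rollout %, 0-100
    responses_so_far: int,
    target_responses: int,    # e.g. 500
    duration_days: int,       # n, e.g. 30
    day: int,                 # 1-indexed day of the survey
) -> float:
    """Grow the rollout by 100/n per day by default; grow faster when we're
    behind the expected pace, and hold steady when we're ahead of it."""
    expected_by_today = target_responses * day / duration_days

    if responses_so_far >= expected_by_today:
        # Ahead of (or on) pace: keep the rollout where it is.
        return current_rollout

    # Behind pace: baseline daily step plus a bump proportional to the shortfall.
    baseline_step = 100.0 / duration_days
    shortfall_ratio = (expected_by_today - responses_so_far) / target_responses
    return min(100.0, current_rollout + baseline_step + shortfall_ratio * 100.0)
```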
Thanks for the comments @neilkakkar! I like the idea of adjusting the rollout percentage of the feature flag. Given this, does the below section make sense?

Turning off survey response collection if over the daily limit

Every day, the rollout percentage of the feature flag is calculated by this formula:
Feature request
We currently allow %-based rollouts for surveys, as well as the option to set a response limit.
For some users with high volumes, that's not enough. Here is what we've heard:
One proposal for how this could be achieved, from @neilkakkar: "I think the naive targeting all users and adjusting the rollout % based on how many responses have been received so far would work pretty well - and better than trying to extra optimise it based on several other factors / guestimating cohort sizes who are active.. This is also reasonably straightforward to implement - we have all the pieces in place already."
More details in this Slack thread here: https://posthog.slack.com/archives/C07QD3LT8U9/p1728406374642269