Flash Forecasting
Hello! If you are at: “just tell me how to get my organization predicting the future”, you are at the right party. Welcome, I’m so excited to have you here. If you aren’t there yet, no worries, start with my first post here.
You’ve probably thought about trying to bring a prediction market to your organization, but stopped there. I bet it was unclear how exactly to make that work. I’ve developed a cheap and straightforward way to bring the benefits of forecasting to your org. Enter Flash Forecasting. Leaders and their teams quickly forecast the outcomes of their projects. A single individual or team can get started immediately without any additional buy-in.
I’ve built a culture of flash forecasting at Twitch without any direct authority over anyone, and you can do the same thing for your organization. The key is to have forecasting sessions crystallize and advance the participants’ thinking enough to immediately justify the investment of time.
In this world the entire organization levels up because of the intrinsic motivation to forecast: improved foresight, communication, and reflection. Here are the concrete steps you’ll need to bring your organization up to speed.
Overview
Start Predicting Yourself
Extract Predictions from Leaders
Get Stakeholders to Collaborate
Reflect on Predictions once they’ve Resolved
Market the Reflections Internally
Repeat/Spin up in Parallel
1) Start Predicting
Secure your oxygen mask before assisting others. Ideally you’ll make forecasts on important projects that are likely to succeed (>50%) and will resolve soon (months or less). The following questions should help you find projects suitable for forecasting: What work are you most excited about? What were your biggest wins in the last 6 months? Take the predictions through the rest of the process below.
If predictions improve your most important decisions, that’s enough. It’s easy for predictions on critical work to immediately pay off by giving you 10% more clarity. But if you are going to bring predictions to your org, you should make shorter-term forecasts while you are waiting for your big predictions to resolve. Short-term forecasting ability correlates with medium-term forecasting ability in a domain, so it’s a great way to practice. I predict what my queries will return (seconds), if I’ll deliver projects on time (days), and how product launches will go (weeks).
2) Extract Predictions from Leaders
Your organization is already pursuing projects, because your leaders believe they are worthwhile. You just need to facilitate the expression of those beliefs as predictions. Extraction is primarily a matter of asking the right questions. Start with the leaders that you expect to be the easiest sells. Emmett, Twitch’s CEO, was already making forecasts in his head, but he wasn’t sharing them with anyone. His cached forecasts came out effortlessly; he was the perfect early adopter. Call in some favors and ask potential allies for an hour of their time. If you can make that hour a success you have a clear path to building forecasting into your culture.
Extraction Script (1 hr max)
i) Set expectations with the project leader. Tell them: “I expect it to be clear by the end of the session whether or not this was a good use of time. By the end we should have a forecast for an important project you are pursuing, and the process of making that forecast should crystallize and advance your thinking enough to be worth the time.”
ii) Pick a project they are excited about that hasn’t launched (payoff still largely unknown). Ask the project leader: how will future you know if this project has succeeded? When would we be able to observe such a success? Timebox choosing a goal at 20 minutes; people can spend weeks arguing over metrics and goals. Just do your best and move on.
iii) Ask the leader how surprised they would be if they didn’t hit that goal. Move the goalposts around. Make the goal smaller, until they’d be surprised if they didn’t hit it.
iv) Help them make their statement of confidence quantitative. Tell them that “I’d be surprised if I didn’t hit my goal” generally translates to 60–80% sure. Ask “which of those probabilities feels the most right to you”.
v) Ask them to summarize why they believe their prediction. Prompt them to leave whatever breadcrumbs their future self will need to remember why they made this prediction. Evidence and models are dope, but the gist of their intuition is sufficient. Extraction complete!
vi) Start moving their beliefs around. Bring up whatever evidence you think is relevant. Bring up the outside view: How often do projects like this succeed at this level in our org? Give them a chance to update. This is the time to make your own prediction if you’d like.
vii) Once you’ve got a confident prediction in hand, it’s natural to ask them to forecast higher payoff outcomes that are more unlikely. I find 2–3 predictions to be a sufficiently rich representation of most payoff distributions. Making predictions for another project could make more sense with whatever time you have left.
viii) Evaluate the session’s success. Ask: “was the session clearly worth your time? Do you want to do this again for other projects on your plate?”
Too easy? I expect you’ll get stuck or off track somewhere in there. The countermeasures you’ll need are described at the end of the post.
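As a concrete illustration of step vii, two or three threshold forecasts are usually enough to sketch a project’s payoff distribution. The project, outcomes, and probabilities below are all hypothetical:

```python
# Hypothetical threshold forecasts for a single project (step vii).
# Every outcome and probability here is made up for illustration.
forecasts = [
    {"outcome": "feature ships this quarter",                "p": 0.80},
    {"outcome": "adds $1M in annual revenue",                "p": 0.60},
    {"outcome": "adds $5M in annual revenue (wild success)", "p": 0.15},
]

# Print the payoff distribution from most to least likely outcome.
for f in forecasts:
    print(f"{f['p']:>4.0%}  {f['outcome']}")
```

Note the probabilities fall as the outcomes get more ambitious; if a bigger outcome ever carries a higher probability than a smaller one it implies, that’s a sign the forecasts need another look.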
3) Get Stakeholders to Collaborate
Predictions are the clearest and most concise world models I expect you’ll ever get from your leaders. Write them down! Share the predictions with the leader’s team and manager. Stakeholders will be very interested in predictions. I’ve never had a clearer idea of Twitch’s vision than the last time Emmett rattled off predictions for all of Twitch’s initiatives. Ask stakeholders whether the prediction seems too high or too low. Get some of them all the way to making their own prediction.
Stakeholders are more easily sold on the idea of making predictions than leaders. Stakeholders get more from predictions with less effort. A gut reaction (too high or too low) will only take seconds to manifest. And they get a new right, the right to ask the hardest and most aggressive question: what concrete value do you expect this arc of work to produce?
Having clear expectations allows stakeholders to be surprised and to learn as much from the project outcome as the leader. Predictions enable them to push for their own ideas with increased strength and clarity. The collaboration will quickly improve the forecast. The wisdom of the crowd hones the prediction with questions, relevant evidence, and reasoning. The forecast becomes stable, less likely to change wildly when shown to a new party. Finally the team reaps the benefits of their leaders’ increased foresight.
I strongly recommend you find a better place to collaborate than a spreadsheet. Twitch uses Cultivate now, and it’s 5x’d the average number of contributors to a given prediction.
If things go off track, refer to the countermeasures at the end of the post.
4) Reflect
Abject failure? Wild success? The engine of science grinds to a halt if observations aren’t compared to predictions. That’s when you enrich and update your model of the world. What have you learned that you want to send back to your past self? How could your past self have figured out the lesson earlier on their own? Your prediction is your primary defense against hindsight bias. It wasn’t crazy, it’s the best codification your past self made of their beliefs.
Do at least 3 out of 5 of your 70% predictions come true? Are you, like most other forecasting novices, overly optimistic? Recalibrate appropriately given the strength of the evidence. Remember, forecasts are never “wrong”. Forecasts only provide evidence of foresight or the lack of it.
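If you keep a simple log of resolved predictions, checking your calibration takes only a few lines. A minimal sketch; the log entries are invented for illustration:

```python
from collections import defaultdict

# Hypothetical log of resolved predictions: (stated confidence, came true?).
resolved = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]

# Group outcomes by the confidence you stated.
buckets = defaultdict(list)
for p, hit in resolved:
    buckets[p].append(hit)

# Compare stated confidence to the observed hit rate in each bucket.
for p in sorted(buckets):
    hits = buckets[p]
    rate = sum(hits) / len(hits)
    print(f"said {p:.0%}: came true {rate:.0%} ({sum(hits)}/{len(hits)})")
# said 70%: came true 60% (3/5)
# said 90%: came true 75% (3/4)
```

With this made-up log, both buckets come true less often than stated, which is the typical novice pattern of overconfidence. With only a handful of resolutions per bucket the evidence is weak, so recalibrate gently.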
5) Market
Whatever the outcome, this is a chance to market predictions in your organization.
You crushed expectations: Make that clear by comparing the success to the expectations (the predictions) everywhere you report on success.
You met expectations: This worked just like I said it would, boom!
Failure: We learned from failure. Behold, this nugget of wisdom we overlooked. Lessons can easily push projects from failure to success.
6) Spin up in Parallel/Train Facilitators
Important predictions take a while to resolve, so get a bunch of these threads going in parallel. Once you are somewhat comfortable, start training facilitators to extract predictions from other leaders. Have future facilitators watch a couple of sessions. I wish I’d done that earlier. Look for the individual who generates evidence: data, anecdotes, whatever. They will be able to contribute to the prediction discussion, and chase down evidence that predictors will want. The goal of a training session is to hand off the entire practice to them: motivation to forecast, setting up the sessions, extraction, seeking collaboration, reflection, and marketing. Make sure they are bought into that goal before the session.
In a World
Join me in imagining a world. You are building flash forecasting at your organization. The foresight of your leaders is on the rise. Communication is smoother. Resource contention is easier to navigate. You are strongly influencing the beliefs of leaders throughout the organization by helping them make predictions.
Are you 70% confident you’ll devote 20 solid hours to making such a world in the next 6 months?
If you are in, I’m happy to help (sessions, talks, workshops, etc). You won’t be alone, Pinterest, GiveWell, Open Phil, and CFAR are all exploring forecasting. Hit me up @dannyhernandez on gmail. If you are still skeptical, no worries. I’d love to understand your hesitations.
Appendix: Extraction Script Countermeasures
Fuzzy Wuzzy: The biggest, baddest issue you’ll run into. Many organizations and sub-organizations don’t have a true north metric (observable, manipulation resistant). Even if they do, they might lack clear goals. Wild success, moving the needle, and a good reception are not observable goals. I find it helpful to push the goal conversation forward with a description of a past clear, observable success, and dial things back. For instance, this project could generate as much revenue (or any top-level metric) as the most successful project we had last year. It could be that they expect to have a meaningful impact, but they don’t expect it to be measurable (hard to A/B test, for example). If that’s the case, look for observable things that would be evidence of meaningful impact.
Skeptical Face: If they are dragging their feet, they probably don’t believe getting better is tractable. I’ve found the strongest counter to skepticism is the following evidence: Tetlock demonstrated a 10% improvement in forecasters who spent an hour reading these instructions in a controlled experiment.
Fraidy Cat: Specifically, if they’re worried their forecasts will be misused, I agree that it’s a valid concern and I bring up the weatherman. People think the weatherman is wrong if it rains when he forecasted a 10% chance of rain. Forecasts are never wrong; they only provide evidence of foresight or the lack of foresight. The solution is education. If it rained 4 of 5 times the weatherman made a 10% forecast, that’d be strong evidence that the weatherman is badly calibrated. Another cause of fear might be the concern that predictions demonstrate a lack of confidence. Address it with the outside view. Being 60% confident you’ll be as successful as the 90th percentile of projects at an organization actually expresses a lot of confidence.
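To see why 4 rains in 5 is strong evidence, you can compute how unlikely that record would be if the weatherman’s 10% forecasts were actually well calibrated. A quick sketch:

```python
from math import comb

# P(rain on at least 4 of 5 days) if each day's true chance of rain is 10%.
p = 0.10
prob = sum(comb(5, k) * p**k * (1 - p) ** (5 - k) for k in (4, 5))
print(f"{prob:.5f}")  # 0.00046 — roughly 1 in 2,000
```

A well-calibrated forecaster would produce that record about once in two thousand such five-day stretches, so observing it shifts a lot of belief toward “badly calibrated”; a single rainy day at 10% shifts almost none.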
Evidence Monster: They might say we don’t have enough evidence to make a prediction. The plan is already being pursued based on the available evidence, so there must be enough evidence to expect some success. Clearly refute the claim that predictions need to be rigorous and formal. They are an introspection tool. Review the existing evidence and do the best you can.
Shot in the Dark: You expect to fail. You don’t expect to observe success at a level the organization would care about, and it’s not clear the project should move forward. This is painful, but the earlier you know this the better. Once you own some domain, it’s easy to over polish it. You are better off expanding or changing your domain than over polishing something marginal.
Moonshot: Self-driving cars, general AI, existential risk, etc. Low-probability, long-time-horizon forecasting is out of scope for this post. Forecasting can still help, though. I’d recommend practicing with smaller and shorter-term forecasts in the same domain. For example: will OpenAI’s Concrete Problems in AI Safety get X citations or reads?