I absolutely love riddles. A few of my favorites include the Hobbits In a Line, 20 coins, and 5 Pirates. YouTube seems to have been keeping track of my riddle intrigue, and it recently suggested (and I subsequently watched) this very well-done video about the wizard standoff riddle:
If you want to solve the riddle on your own, go ahead and do so now. I have hidden the solution below and you can proceed at your own risk. The point of the rest of this post is not to solve the riddle, but rather to expand on two concepts: potentially infinite games and backwards induction which have several real-world applications. Unfortunately, this involves spoiling the riddle itself. Once you are ready to proceed, unhide the text below by clicking the button.
Bare Bones of the Riddle
Like any good riddle, the actual puzzle is dressed up in awesome but ultimately irrelevant details. The boiled down riddle is written out at 1:54 in the YouTube video. The abilities of the wands are irrelevant (being turned into a statue or being sent to a mountain are considered equivalent). I took the liberty of boiling down the already boiled down version, for clarity’s sake:
- You are in a duel with two other wizards. The turn order is you first, then the wizard with 70% accuracy, then the wizard with 90% accuracy.
- Each wizard may attack one other wizard on their turn. Moves occur sequentially.
- If all three wizards are still standing after the first full round of turns, everyone loses. As long as at least one wizard is eliminated in that first round, this rule never comes into play; it does not apply to any round after the first.
- The two other wizards behave rationally, and try to maximize their probability of winning.
- Your wand choice, and the accuracy that comes with it, is public knowledge.
You must decide the following:
- Which wand to pick. Your choices are a wand with 60% accuracy, a wand with 80% accuracy, and a wand with 100% accuracy.
- Your moves (who to attack, or whether or not to intentionally miss).
I only have one real problem with the riddle’s presentation: it is not apparent that intentionally missing is an option. My impression when solving it was that the wands were more like trap-door levers: you pull the lever of the person you are targeting, and they are either eliminated or not (with the stated probability). Apparently, however, you can intentionally miss, even with the 100% accuracy wand.
Solve the Riddle
In order to set up the rest of this post, I will solve the riddle by breaking the situation down into three wand choices and two first-turn strategies (six scenarios in total, labeled a through f below). Reading this is not strictly necessary, but it will set the stage for the main insights I dig into later:
- Scenario a: If you choose the 100% accuracy wand and do not miss on purpose, your optimal move is to target the 90% wizard first. You will, of course, succeed. Then you need to weather an attack from the 70% wizard; if you survive (30% chance), you win, since you can eliminate the 70% wizard for certain on your next turn. Your probability of success is 30%.
- Scenario b: It is actually not optimal to intentionally miss with the 100% wand, because your perfect wand makes you the target of both other wizards. If both fail to eliminate you, everyone loses; if either succeeds, you lose. Taking the perfect wand and intentionally missing therefore gives a probability of success of 0%. This appears to be an error in the original video, which presents this probability (Noether 9000, Miss on Purpose) as 1.6% in the table shown at 4:05.
- Scenario c: If you choose the 80% accuracy wand and do not miss on purpose, your optimal move is again to target the 90% wizard. If you succeed (80% probability), you then begin what I like to call a potentially infinite game with the 70% wizard. This potentially infinite game is the point of this blog post, and we will get into it in more detail soon. For now, accept that you win with a probability of 32.3% (as in the video’s table).
- Scenario d: If you choose the 80% accuracy wand and miss on purpose, the 70% wizard will also miss on purpose, because he/she knows that you will be the target of the 90% wizard. The 90% wizard will target you in order to avoid everyone losing. Either way, you lose: if the 90% wizard hits you, you are eliminated, and if he/she misses, everyone loses because no one was eliminated in the first round. This strategy gives you a 0% chance of victory.
- Scenario e: If you choose the 60% wand and do not miss on purpose, your optimal move is to target the 90% wizard. If you succeed, you enter a potentially infinite game with the 70% wizard. If you fail, the 70% wizard will also target the 90% wizard, knowing that he/she would otherwise be the 90% wizard’s target (since 70% > 60%). If the 70% wizard succeeds, you enter a potentially infinite game with the 70% wizard. If the 70% wizard fails, the 90% wizard will attack the 70% wizard. If the 90% wizard succeeds, you enter a potentially infinite game with the 90% wizard. If the 90% wizard fails, everyone loses. Again, for now, I wave my hands and say that your probability of success is 38.1% (the value reported in the table).
- Scenario f: If you choose the 60% wand and miss on purpose, the 70% wizard will target the 90% wizard. The rest proceeds in the same way as the previous scenario, including the potentially infinite games. You win with a probability of 64.6%.
You pick the wand and strategy that yield the highest probability of success, which is scenario f: the 60% wand, missing on purpose on your first turn.
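Before unpacking where these numbers come from, here is a minimal Monte Carlo sanity check of scenario f, assuming everyone plays the strategies described above (the code and its structure are mine, not anything from the video):

```python
# A rough simulation of scenario f: you take the 60% wand and miss on purpose.
import random

def scenario_f_trial():
    """Return True if you win one playout of scenario f."""
    # Round 1: you intentionally miss, so the 70% wizard fires at the 90% wizard.
    if random.random() < 0.70:
        rival = 0.70          # the 90% wizard is out; you duel the 70% wizard
    else:
        # The 70% wizard missed, so the 90% wizard must eliminate someone this
        # round; he/she targets the bigger remaining threat, the 70% wizard.
        if random.random() < 0.90:
            rival = 0.90      # the 70% wizard is out; you duel the 90% wizard
        else:
            return False      # no one eliminated in round 1: everyone loses
    # The potentially infinite duel: you shoot first, then the rival, and so on.
    while True:
        if random.random() < 0.60:
            return True       # your spell lands: you win
        if random.random() < rival:
            return False      # the rival's spell lands: you lose

trials = 200_000
wins = sum(scenario_f_trial() for _ in range(trials))
print(f"Estimated win probability: {wins / trials:.3f}")   # hovers around 0.646
```

Running it gives estimates right around 64.6%, matching the video’s table.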
Pulling Insight Out of a Wizard’s Hat
Now let’s pivot from the solution to the interesting insights embedded in this riddle:
Backwards Induction
If you look back at scenario d, I explain (and the video does too) that the 90% wizard, if they are left alive to take their turn, will not intentionally miss, because that option guarantees a loss for everyone. Further, the 90% wizard will definitely take a shot at the second most powerful wizard, which in scenario d is you. Knowing this, the 70% wizard will definitely miss on purpose: he/she knows the 90% wizard will not target him/her this round, and the 90% wizard is more likely to eliminate you than the 70% wizard is. This “working backwards” is called backwards induction in game theory, and it is a common method of solving sequential (or turn-based) games. Below is an illustration of backwards induction at work. The dark bold arrows represent the decision each rational, strategic actor would make if the game progressed to the given point. To see the diagram in a more readable, zoomed-in form, click on the image below to view it interactively on SmartDraw.com:
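If you prefer arithmetic to arrows, here is a minimal sketch of the same backwards induction for scenario d. The accuracies are the riddle’s; the duel() helper and the option values are my own bookkeeping, and they lean on the duel closed form derived in the next section:

```python
# Backwards induction for scenario d: you hold the 80% wand and have already
# missed on purpose, so the 70% wizard moves next and the 90% wizard moves last.

def duel(p_first, p_second):
    """Win probability of the wizard who shoots first in a two-wizard duel."""
    return p_first / (1 - (1 - p_first) * (1 - p_second))

YOU, W70, W90 = 0.80, 0.70, 0.90

# Step 1 (last mover): with all three alive, the 90% wizard must hit someone or
# everyone loses. Compare his/her winning chances for each possible target.
w90_shoots_you = W90 * (1 - duel(W70, W90))  # then duels the 70% wizard, who shoots first
w90_shoots_w70 = W90 * (1 - duel(YOU, W90))  # then duels you, and you shoot first
assert w90_shoots_you > w90_shoots_w70       # so the 90% wizard targets you

# Step 2 (working backwards): the 70% wizard anticipates that and compares
# his/her own three options.
w70_pass      = W90 * duel(W70, W90)         # let the 90% wizard take the shot at you
w70_shoot_w90 = W70 * (1 - duel(YOU, W70)) + (1 - W70) * w70_pass  # a miss leads back to the pass branch
w70_shoot_you = W70 * (1 - duel(W90, W70)) + (1 - W70) * w70_pass
print(f"pass: {w70_pass:.3f}, shoot the 90%: {w70_shoot_w90:.3f}, shoot you: {w70_shoot_you:.3f}")
# pass: 0.649, shoot the 90%: 0.299, shoot you: 0.245
```

Missing on purpose comes out well ahead for the 70% wizard, which is exactly the reasoning above.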
Potentially Infinite Games
If you read the six scenarios, you can see that three of them involve what I call a potentially infinite game. This is basically my name for a game that cannot be solved by backwards induction, at least not without drawing decision trees for the rest of eternity. I really like the video, but to the casual viewer it is not at all apparent how the videographers computed the probabilities in their final table (except for maybe the Noether 9000 without intentionally missing). Hopefully my explanation of these potentially infinite games will make this apparent.
Let’s take scenario f as an example (the wand and strategy that solve the riddle). The table in the video tells us the probability of success is 64.6%. But how did the author get this exact number? Well, we know that you miss on purpose, so it is certain that no one has been eliminated after your turn. The 70% wizard then tries to eliminate the 90% wizard. If he/she succeeds, we enter a potentially infinite game. Why? Because you and the 70% wizard are the only ones left, and unlike the situation where everyone is left standing (the rules state everyone loses if that happens in the first round), there is no rule that limits the number of turns for two wizards. If each of you keeps missing, the game will quite literally never end. Of course, the probability of both of you always missing gets smaller the more turns we consider, and on each individual turn you have some probability of winning. In order to get the 64.6% number, we need to pin down this infinite sum of ever-smaller probabilities. This is where things get cool (or boring, depending on your perspective).
Let’s assume we are in this exact situation: the 90% wizard has been eliminated, and you square off with the 70% wizard. It is now your turn again, so you cast a spell. Your spell hits with 60% probability, so you know you have at least a 60% chance of winning. If you miss, things are grim, but you aren’t completely doomed: the 70% wizard will attack you, you will survive with 30% probability, and then you get another 60% chance of winning. If you miss again, the cycle repeats. This means that, after three rounds, your probability of winning is at least:
\(0.60+0.40*0.30*0.60+0.40*0.30*0.40*0.30*0.60\)
But wait…that is a pattern! Condensing the terms:
\(0.60(0.40^0*0.30^0+0.40^1*0.30^1+0.40^2*0.30^2)\)
We can go even further, extend the pattern to infinitely many rounds, and write it as a series:
\(0.60\sum_{n=0}^{\infty} 0.40^n*0.30^n = 0.60\sum_{n=0}^{\infty} 0.12^n\)
This new representation is concisely packaged and looks pretty (we used a sigma!!!!), but it is still infinite. How do we calculate its value? Well, the reason we represented the series in this way is to show that it is in fact a geometric series (a series with a constant ratio between successive terms). The common ratio in this case is 0.12, which is less than one. It can be shown that any geometric series with a common ratio \(r\) such that \(|r|<1\) converges. For us, this means we can calculate the value analytically! The formula for the sum of a convergent infinite geometric series, \(\frac{a}{1-r}\) (where \(a\) is the first term), is probably familiar. It is the same formula used to compute the present value of an annuity (a stream of payments). In fact that is perhaps the coolest thing I got from this riddle: the sum of an infinite geometric series has uses well beyond annuities!
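If the convergence claim feels abstract, here is a quick numerical check: nothing but the partial sums of our particular series, using the riddle’s own 0.60 and 0.12.

```python
# Partial sums of 0.60 * (0.12**0 + 0.12**1 + ...) creep up on the closed form.
a, r = 0.60, 0.12            # first term (your hit chance) and common ratio

partial = 0.0
for n in range(10):
    partial += a * r**n
    print(f"after {n + 1} term(s): {partial:.6f}")

print(f"closed form a / (1 - r): {a / (1 - r):.6f}")   # 0.681818...
```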
Applying the formula to our series (and carrying the leading 0.60 factor through) yields:
\(0.60\sum_{n=0}^{\infty} 0.12^n = 0.60\frac{1}{1-0.12} = 0.60\frac{25}{22} \approx 0.60*1.1364 \approx 68.18\%\)
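As a sanity check, you can get the same number without summing anything at all. If \(p\) is your probability of eventually winning the duel at the start of your own turn, then either you hit right now, or you both miss and find yourself back in exactly the same spot:

\(p = 0.60 + 0.40*0.30*p \implies p = \frac{0.60}{1-0.12} \approx 68.18\%\)

which is the same closed form the geometric series gave us.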
Thus the final probability of you winning this infinite game (you vs. the 70% wizard, given that the 70% wizard eliminated the 90% wizard) is 68.18%. But all of this holds only if the 70% wizard succeeds in eliminating the 90% wizard. If he/she fails, the 90% wizard will try to return the favor. If the 90% wizard also fails, everyone loses. If he/she succeeds, you enter another potentially infinite game against a more powerful wizard. The calculation is the same as before: you win that duel with probability \(0.60/(1-0.40*0.10) = 0.60/0.96 = 62.5\%\), and multiplying by the 90% chance that the 90% wizard actually eliminates the 70% wizard gives \(0.90*62.5\% = 56.25\%\).
Finally, we can combine these two numbers into your total chance of winning with this strategy (60% wand, intentionally miss) by weighting each one by the probability it occurs: \(56.25\%*0.30+68.18\%*0.70 \approx 64.6\%\)
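For completeness, the same combination as a couple of lines of arithmetic, using the two duel probabilities computed above:

```python
# Exact version of the combination above (no simulation needed).
duel_vs_70 = 0.60 / (1 - 0.40 * 0.30)                 # ~0.6818: duel with the 70% wizard
duel_vs_90 = 0.60 / (1 - 0.40 * 0.10)                 # 0.625: duel with the 90% wizard
print(0.70 * duel_vs_70 + 0.30 * 0.90 * duel_vs_90)   # ~0.646, the 64.6% from the video
```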
And in this way we have obtained the probability listed in the video. It is true that calculating this probability is not all that important for the riddle itself. Intuitively, you want to enter the potentially infinite game that involves the least powerful wizard. But in situations with money involved, finding the exact probability can be very important. With that in mind, let’s move on from wizards to reality.
Application Beyond Wizard Duels
You probably do not often find yourself in a magical duel to the death, but the two non-magical concepts involved in solving the riddle can be quite useful in everyday situations faced by local governments and non-profits. Here are a few examples:
Backwards Induction
Passing legislation at the city council level: City governments often pass bills that are very important to the everyday lives of the people who live in their city: zoning laws, road expansions, public works programs, park creation, and property tax rates. For the most part, citizens do not directly vote on these matters; they elect council members to represent them. But even these council members do not simply vote on an issue: bills typically have to be proposed, approved by a committee before reaching the floor, voted on by the full council, and then signed by the mayor. This sequential process is exactly the sort of situation where backwards induction can help predict future outcomes. Take, for example, the District of Columbia’s city council. Their process is outlined here. The outline is basically: Propose -> Committee Vote -> Full Council Vote -> Mayor Sign/Veto. Because this is a sequential process, actors at each step will keep in mind what the actors after them will do.
For example, let’s pretend a hot issue in the city is building bike lanes. Let’s assume the following:
- We can ignore the mayor for simplicity.
- There are 10 council members, and the median council member wants to spend $200,000, which is roughly the median amount that DC voters want. That is to say, the median council member represents the residents of DC pretty well.
- The committee has 3 members. The median committee member wants to spend $300,000.
Because the alternative to any bill is no bill (meaning $0 spent on bike lanes), the committee knows that the full council will pass any amount greater than $0 and less than $400,000: anything in that range is closer to the median council member’s ideal of $200,000 than spending nothing at all. Because the committee’s own optimal choice of $300,000 falls within that range, the committee will approve only bike lane proposals that spend exactly $300,000. This proposal will pass the full council and become law. If the committee did not exist, and instead each member of the council could submit a proposal and every proposal was pitted against every other, there is strong reason to believe that the amount spent on bike lanes would be exactly $200,000 (the median voter theorem at work). This means that the existence of the committee system contributes to an outcome that is farther from “the will of the people” than what other systems of voting would produce.
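Here is a minimal sketch of that backwards induction in code, assuming the standard single-peaked-preference setup described above; the function names and the $10,000 grid of possible spending levels are mine, purely for illustration:

```python
# The committee anticipates what the full council will pass and proposes the
# amount it likes best within that set.

def council_accepts(proposal, council_median=200_000, fallback=0):
    """The pivotal (median) council member votes yes when the proposal is
    closer to his/her ideal point than the no-bill fallback of $0."""
    return abs(proposal - council_median) < abs(fallback - council_median)

def committee_choice(committee_median=300_000, step=10_000, cap=1_000_000):
    """Backwards induction: the committee reports out only the proposal it
    likes best among those the full council would pass."""
    passable = [x for x in range(0, cap + 1, step) if council_accepts(x)]
    return min(passable, key=lambda x: abs(x - committee_median))

print(committee_choice())   # 300000: the committee gets its own ideal point
```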
Potentially Infinite Games
Deciding Whether to Protect a Building Against Earthquakes: Many California cities require new buildings to be built to withstand earthquakes to some degree. However, requirements are often set to preserve human life rather than protect property. Consider an organization that is building a new office. It faces a choice: build to the minimum code, or invest additional money to protect against property damage. Let’s say the probability of a major earthquake in a given year is 15%, and that once an earthquake occurs the protection would have to be rebuilt, so for our purposes the game ends the first time an earthquake happens. We will simplify the situation and say that the additional earthquake proofing costs $1 million up front, which includes the present value of future maintenance costs. If an earthquake occurs with the additional protection in place, the company saves $10 million in property. Is the system worth it? The probability that the first earthquake happens in year \(n\) is \(0.85^n*0.15\), so the probability that an earthquake ever happens is the series \(0.15+0.85*0.15+0.85^2*0.15+\dots = 0.15(1+0.85+0.85^2+\dots)\). Since \(|r|=|0.85|<1\), this geometric series converges, and we can calculate the probability of eventually facing an earthquake this way:
\(0.15(\frac{1}{1-0.85}) = \frac{0.15}{0.15} = 100\%\)
Over an infinite horizon, an earthquake eventually happens with certainty, so (ignoring discounting and the time value of money) the protection is expected to save the full $10,000,000 in property. That is well above the $1 million cost, so we can move forward with the additional earthquake protection! In practice you would discount savings that arrive far in the future, which shrinks each term of the series but leaves the same geometric structure, and the same annuity-style formula, in place.
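Here is a minimal sketch of that calculation. The 15% annual probability, the $10 million in avoided damage, and the $1 million cost are from the example above; the optional discount factor is an extra assumption of mine, added to show how the time value of money slots into the same geometric series:

```python
# Expected value of the extra earthquake protection as a geometric series.

def expected_savings(p_quake=0.15, damage_avoided=10_000_000, discount=1.0):
    """Expected (optionally discounted) value of the damage the protection avoids.

    The first earthquake happens in year n (n = 0, 1, 2, ...) with probability
    (1 - p_quake)**n * p_quake, so the expected savings form a geometric series
    with first term p_quake * damage_avoided and ratio (1 - p_quake) * discount.
    """
    a = p_quake * damage_avoided          # first term of the series
    r = (1 - p_quake) * discount          # common ratio, |r| < 1
    return a / (1 - r)                    # closed form for the infinite sum

print(expected_savings())                 # no discounting: the full 10,000,000
print(expected_savings(discount=0.95))    # ~7.8 million with a 5% discount rate
print(expected_savings(discount=0.95) > 1_000_000)   # still worth the $1M cost
```

Even a fairly aggressive discount rate leaves the expected savings well above the $1 million price tag.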
Some other situations where the potentially infinite game methodology can be used:
- Deciding whether to invest in additional resources when creating a proposal for an annual grant.
- Deciding whether to buy additional insurance for extreme situations.
I hope this exploration of a riddle was both fun and informative. Reach out to me if you have suggestions, questions, or a problem Intrepid Insight can help you with (for free if you are a local government or other non-profit).