In Approval, voters who would consider it strategically desirable to vote for someone other than their favorite in Plurality have a strategic reason to vote for that same candidate in Approval--and for everyone whom they like more. Vote for the lesser-evil compromise you'd vote for in Plurality, and for everyone you like more.

## Basis of Mathematical Strategy:

The first thing to point out is that, in Approval,
mathematical strategy isn't actually necessary,
just as it isn't necessary in Plurality. Just as
you vote in Plurality without mathematical strategy,
so can you do the same in Approval. As I said above,
you can simply vote for the same candidate for whom
you'd vote in Plurality, and also for everyone whom
you like better, including your favorite. What
really distinguishes Approval from Plurality is
the fact that everyone will always feel free to vote
for their favorite.

That statement of Approval strategy when frontrunner information is available is all that a voter needs for that situation, and all that's needed to say when talking to the public about strategy in Approval. Later in this article, I also talk about Approval's mathematical strategy.

What if no frontrunner information is available? In the absence of such information, vote for all the candidates who are above average--above the average, for you, of all the candidates.

Now this is unlikely, but say you didn't even feel inclined to guess where the average is, and all you want to go by is your ranking of the candidates. In that case, your best strategy is to vote for the best half of the candidates.

Of course if you know enough about the candidates to rank them, you probably have an opinion about their absolute merit for you (utility is the word usually used), and so you'd rather use the strategy of voting for all the above-average candidates.

Imagine an election with three candidates: one who is your favorite, one who is the lesser evil, and one who is the devil incarnate. If you think your favorite has no chance, you should vote for your favorite and the lesser evil. If you're sure that your favorite can beat the devil incarnate, but there's a risk that the lesser evil will defeat your favorite, you should vote only for your favorite. Things get tricky if you think all three could be fairly evenly matched. Vote for just your favorite, and you risk the devil incarnate beating both your favorite and the lesser evil. Vote for the lesser evil as well as your favorite, and it could be your vote that helps the lesser evil defeat your favorite. For many voters, whether they vote for just their favorite might well depend on the extent to which they're gamblers. But there's a mathematical way of resolving dilemmas like this...

These voting instructions are for people who want to maximize their utility expectation. Utility is just a word for how good a candidate would be for you. Your utility expectation for an event is the sum, over all of the event's possible outcomes, of each outcome's utility multiplied by the probability that it will happen. So your utility expectation, multiplied by a large number of trials, is the utility that you can typically expect to derive from those trials. This is an important idea: if you don't know for sure what will happen, you at least want to act so that, on average, over the long run, you can expect the best gain.

In fact, whether they know it or not, voters in Plurality tend to vote so as to maximize their utility expectation, in our political elections.

Expectation, in other kinds of problems, is often about money as well as utility. It could be about any quantity of interest in trials where the outcome isn't certain.

## Mathematical Strategies for Plurality & Approval:

I should add here that the elimination rank-count known in the U.S. as Instant Runoff, and in Britain as the Alternative Vote, has a mathematical strategy much more complicated than Approval's. You won't hear about that from Instant Runoff advocates, because they're in denial about Instant Runoff's need for strategy. Plurality's mathematical strategy is similar to Approval's. Approval doesn't need mathematical strategy any more than Plurality or Instant Runoff does.

But Approval has something for everyone. If you like mathematics, or are willing to read some first-year algebra, you'll find that the study of Approval's mathematical strategy best reveals the value and beauty of Approval.

This article describes the mathematical basis of Approval strategy. The Approval II article lays out the various mathematical Approval strategies, all of which are based on the mathematics in this Approval I article.

Let me briefly mention one example. It's clear that if you vote for all the candidates who, for you, are better than what you expect from the election, you thereby improve your expectation in the election. In the Approval II article it's shown that, with some reasonable approximations, that voting strategy maximizes your expectation in the election, as determined by the mathematics in this article.

The goal of voting strategy is to maximize your "utility expectation". Utility simply means value to you, an effort to assign a number to that value. Obviously that number assignment will often be approximate, or a guess. "Utility expectation" is the utility that you expect from the election. The expectation for an outcome is the probability of that outcome multiplied by its value (utility, money, etc.). The expectation for the overall event is the sum of the expectations for all of its possible outcomes.

Suppose someone says that they'll flip a coin, and give you $5 if it's heads, and $3 if it's tails. Your expectation is (1/2)5 + (1/2)3 = 2.5 + 1.5 = $4. The probabilities involved in calculating your expectation in an election are obviously guesses too, for the most part. Based on your estimates of the candidates' utilities, and your estimates of certain probabilities, you can easily calculate your utility expectation maximizing strategy in Approval or Plurality.
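The coin-flip calculation above can be written as a few lines of code. This is just an illustration; the function name and the list of (probability, value) pairs are my own:

```python
def expectation(outcomes):
    """Expectation of an uncertain event: the sum, over its possible
    outcomes, of each outcome's probability times its value."""
    return sum(prob * value for prob, value in outcomes)

# The coin flip above: heads pays $5, tails pays $3, each with probability 1/2.
coin_flip = [(0.5, 5.0), (0.5, 3.0)]
print(expectation(coin_flip))  # 4.0
```

The same function works for utilities in an election: replace the dollar payoffs with candidate utilities and the coin-flip probabilities with outcome probabilities.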

Your ballot can change an election result if, when we count all the ballots except for yours, there's a tie or a near-tie (the top 2 vote-getters have vote totals differing by one vote), and you vote for one of those top 2 but not the other. In that way, you can make or break a tie.

If you change the winner from candidate i to candidate j, your utility gain is Ui-Uj. But, if a tie between i & j is solved randomly, by flipping a coin, then the value for you of that tie is halfway between the utilities of i & j. So if you change a j win into an ij tie, or if you change an ij tie into an i win, then your utility gain is (Ui-Uj)/2. Half as much as if you changed it from a j win to an i win.

As I said the expectation for an outcome is its utility for you multiplied by its probability. What's the probability that you'll accomplish what's described in the previous paragraph?

In public elections, where there are lots of voters, it's possible to ignore ties & near-ties among more than 2 candidates, because they're so much more unlikely. Even in a small committee, a 3-way tie, though not out of the question, is still significantly less likely than a 2-way tie. So ignoring 3-way ties & near-ties is a reasonable approximation even in small committees, especially since the estimates of probabilities and utilities--the inputs for these methods--are such guesses that there isn't really much precision to be lost by ignoring 3-way ties & near-ties.

So consider the probabilities of ties and nearties:

Pij is the probability that, when we've counted all the ballots except yours, either i & j have the same vote total, or j has one more vote than i. In other words, Pij is the probability that you can make or break a tie between i & j by voting for i and not for j.

Pji is of course the same, except instead of "j has one more vote than i", it's "i has one more vote than j". The probability that by voting for j and not for i, you can make or break a tie between i & j. It's reasonable to assume that Pij & Pji are the same, and I make that assumption.

As I said, the expectation for an outcome is its probability multiplied by its utility value for you. And so, by voting for i and not for j, you improve your expectation by:

Pij(Ui-Uj)/2. The probability that you'll make or break an ij tie, in i's favor, multiplied by the utility gain of doing so.

(Of course that formula can evaluate to a negative number, if you prefer j to i.)

Actually, since the factor of 1/2 is present in every term of that type in this calculation, it makes no difference if we leave it off. Let's leave it off for simplicity.

Let's say that it's already decided that you aren't voting for j. Pij(Ui-Uj), then, is your utility expectation gain if you vote for i.

But what if it's been decided that you're voting for j? Then what's the utility expectation gain of voting also for i? Well, the gain from voting for j & not for i, by the above formula, is:

Pji(Uj-Ui). Since Pji = Pij, and since (Uj-Ui) = -(Ui-Uj), Pji(Uj-Ui) = -Pij(Ui-Uj)

So, when it's decided that you're voting for j, the gain from voting also for i is Pij(Ui-Uj), because you're getting rid of -Pij(Ui-Uj), by no longer voting for j and not for i.

In other words, whether you vote for j or not, the utility expectation gain from voting for i is Pij(Ui-Uj).

That means that we can calculate the expectation gain for voting for i, with regard to ij ties, without considering whether or not we're voting for j, since I've shown that it's the same either way.

Well then, to find out the total gain of voting for i, we evaluate Pij(Ui-Uj) repeatedly, letting each of the candidates other than i take their turn as j. We sum the results of those calculations. That gives the entire expectation gain from voting for i. That's called the sum, over all j, of Pij(Ui-Uj), where j is different from i.

That sum, for i, is called i's strategic value. If i's strategic value is greater than 0, then we should vote for i. If i's strategic value is less than 0, then we shouldn't vote for i.
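The decision rule above can be sketched in code, under the article's approximations. The names `strategic_value` and `approval_ballot`, and the example utilities and tie probabilities, are my own illustration:

```python
def strategic_value(i, utilities, tie_prob):
    """i's strategic value: the sum, over all j != i, of Pij * (Ui - Uj)."""
    return sum(tie_prob(i, j) * (utilities[i] - utilities[j])
               for j in utilities if j != i)

def approval_ballot(utilities, tie_prob):
    """Approval strategy: vote for every candidate whose strategic value
    is greater than 0."""
    return [i for i in utilities
            if strategic_value(i, utilities, tie_prob) > 0]

# Hypothetical example: three candidates, equal tie probabilities.
utilities = {"A": 10, "B": 6, "C": 0}
uniform = lambda i, j: 1.0
print(approval_ballot(utilities, uniform))  # ['A', 'B']
```

With equal tie probabilities, as the article shows later, this reduces to voting for the above-average candidates: here the mean utility is 16/3, so A and B get votes and C doesn't.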

That's in Approval. By the way, if the method is Plurality, then we should vote only for the candidate with the highest strategic value. That appears to be what voters are doing in our Plurality elections. They feel that P(Gore,Bush) is virtually 100%, and that P(Nader,Bush) and P(Nader,Gore) are essentially 0, giving Gore the highest strategic value.

So, because people believe that the only candidates who might be in a tie or near-tie are Gore & Bush, and that Nader has no chance of being in a tie or near-tie, P(Gore,Bush)(Ugore-Ubush) has a much larger value than, for instance, P(Nader,Gore)(Unader-Ugore). So they vote for Gore, believing that he therefore has the highest strategic value.
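That reasoning can be made concrete with a small sketch. The utilities and tie probabilities below are invented for illustration (a Nader-favoring voter who sees any tie involving Nader as negligible), not actual estimates:

```python
# Hypothetical utilities and tie/near-tie probabilities for one voter.
utilities = {"Nader": 10, "Gore": 6, "Bush": 0}
tie_prob = {
    frozenset(["Gore", "Bush"]): 0.9,     # "it's between Gore & Bush"
    frozenset(["Nader", "Gore"]): 0.001,  # Nader seen as having no chance
    frozenset(["Nader", "Bush"]): 0.001,
}

def strategic_value(i):
    """Sum over all j != i of Pij * (Ui - Uj)."""
    return sum(tie_prob[frozenset([i, j])] * (utilities[i] - utilities[j])
               for j in utilities if j != i)

# Plurality strategy: vote for the candidate with the highest strategic value.
print(max(utilities, key=strategic_value))  # Gore
```

Even though this voter prefers Nader, the near-certainty of a Gore-Bush contest makes Gore's strategic value the largest, which is exactly the lesser-evil vote described above.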

What that voter will say is "It's between Gore & Bush. We should make our votes count by helping Gore beat Bush".

Of course, with Approval in use, and everyone therefore able to vote for their favorite, the probability and winnability beliefs that voters now have might turn out to be inaccurate. We might find that the Republicans and Democrats aren't the only ones with a chance at a tie or neartie.

Of course it can be difficult to estimate the Pij. Especially if there are a lot of candidates.

Some of the strategies in the Approval II article are based on the following simplification:

Estimate the probability that, if there's a tie for 1st place, or a near tie for 1st place (2 top candidates differing by one vote), i will be one of those candidates.

Call that probability Pi. We can estimate Pij as Pi*Pj. For some of the strategy methods described in Approval II, the Pi are easier to use than the Pij.

The foregoing assumed that we can estimate the Pij or the Pi (by methods described in Approval II).

But say we don't have any information about the other voters' preferences. No information about how big the factions are, how popular or winnable the candidates are. No way to estimate the Pi or Pij. That's called a zero-info or 0-info election.

In that case, all the Pij are equal. Therefore we can leave them out. Replace Pij(Ui-Uj) with (Ui-Uj).

Let's calculate that sum again, with that simpler formula.

The sum, over all j, of (Ui-Uj) is the same as the sum of Ui over all j, minus the sum of Uj over all j.

What does it mean to say "The sum of Ui, over all j"? Once for each of the other candidates, while each of them takes a turn as j, we evaluate Ui. Obviously Ui doesn't change. Then we add up all those identical Ui terms. In other words, that sum is Ui(N-1), where N is the number of candidates.

What's the sum of Uj, over all j? It's the sum of the utilities of all the candidates other than i. I'll call it Uallj.

So Ui(N-1) - Uallj > 0 if we're to vote for i.

That can be rearranged to say:

Ui > Uallj/(N-1)

In other words, vote for i if i's utility is greater than the average of the other candidates' utilities.

It can be shown, and is intuitively clear, that that condition holds exactly when i's utility is greater than the average utility of all the candidates (including i). Say i's utility is greater than the average of the others'. The average utility of all the candidates must then be somewhere between that average and i's utility. Therefore i's utility is greater than the average of all the candidates' utilities.
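The same equivalence also falls out of one line of algebra: multiply both sides by N-1, then add Ui to both sides:

```latex
U_i > \frac{U_{\text{all}\,j}}{N-1}
\iff (N-1)\,U_i > U_{\text{all}\,j}
\iff N\,U_i > U_{\text{all}\,j} + U_i
\iff U_i > \frac{U_{\text{all}\,j} + U_i}{N}
```

and the right-hand side of the last inequality is the average utility of all N candidates, including i.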

Then we get the 0-info strategy: Vote for all the above-mean candidates.
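The 0-info rule amounts to a few lines of code (a sketch; the function name and example utilities are mine):

```python
def zero_info_ballot(utilities):
    """0-info Approval strategy: vote for every above-mean candidate."""
    mean = sum(utilities.values()) / len(utilities)
    return [c for c, u in utilities.items() if u > mean]

# Hypothetical utilities: the mean is 16/3, so A and B are above it.
print(zero_info_ballot({"A": 10, "B": 6, "C": 0}))  # ['A', 'B']
```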

As I said, the Approval II article describes the various Approval strategies that are based on the mathematics in this article.

Mike Ossipoff