
Monday, July 20, 2009

The Missing Variable

My esteemed colleague, the venerable ShadowBanker, has done a lot of work analyzing villains' choices, their cooperation, and the outcomes that arise from these choices.

And I think we can all agree that these analyses are well thought out and carefully considered, but one element is missing.

Let's consider a similar analysis. In a two-part story in Detective Comics in the mid-'90s, the Penguin hires an actuary to calculate risks for his jobs. The actuary sets to work analyzing a particular set-up for a heist, works out the cost-benefit ratios and probabilities, and provides the Penguin with a plan that is statistically most likely to succeed. The actuary even applies his skills to reducing the interference of Batman. He postulates that since Batman appears almost exclusively at night, a daylight heist of a rare orchid from a flower show could be pulled off without any interference from Batman. But Batman shows up, beats the living crap out of the thugs and foils the plan.

But why did the actuary's analysis fail? How did Batman triumph over statistics and reasoning? With the most important ingredient in any super-hero analysis: BAD-ASSITY. That's right, because Batman is bad-ass, probability starts to break down.


And it's not just Batman who benefits from this extra variable influencing probability. Nearly every major superhero consistently triumphs over impossible odds. You can find bad-ass in all of them. It's in every flying kick. Every splash page uppercut. Every pile of vanquished enemies. The true hero eats, sleeps, and maims with pure bad-ass. And as a result, this infused bad-assity affects all of their hero encounters.

Allow me to demonstrate some simple equations for how bad-ass can affect a situational analysis.
  • Superman + Brainiac's spaceship + Badass = smoldering metal floating in space

  • Daredevil + Kingpin + Badass = -12 teeth for Kingpin

  • Batman + Scarecrow + Badass = a floor covered in straw and bloody burlap

  • Wolverine + 3,000 ninjas + Badass = HOLY CRAP ALL THOSE NINJAS ARE DEAD!

But without Bad-ass you're left with very different results:
  • Blue Beetle + Maxwell Lord = Gaping Head Wound

  • Speedball + Civil War = bondage freak

  • Alpha Flight + Power Absorbing Mutant = Dead Canadians

So if you're going to analyze the outcomes of superheroic actions, you need to include Bad-Assity as a fundamental element of your equations. Bad-Ass is the deciding factor between a hero standing triumphantly on a balcony with the moonlight shining through their flowing cape and a hero locked in the trunk of a car with a plastic bag wrapped around their head. It's that important.


Thursday, July 9, 2009

Uneasy Alliances II: Why Two-Face Loses by Flipping a Coin

Reprinted from filmschoolrejects.com


Earlier we began a discussion of Harvey Dent and James Gordon's alliance to clean up the streets of Gotham. We concluded that cooperation did make sense in the scenario portrayed by the film The Dark Knight. The cooperation game, or the Stag Hunt game, produced two pure strategy Nash Equilibria -- both players would cooperate or both players would work on their own.

Now, suppose that we take the same situation but introduce randomization. That is, suppose Harvey decides he wants to randomize his actions so that Gordon could not predict what he would do. Note that this is extremely unlikely; people usually intentionally randomize when they are working against another player. But for the sake of argument, suppose that it applies here.

The idea of assigning a certain probability towards an action is known as a mixed strategy. In this game, we can actually find a third, mixed-strategy equilibrium in addition to the two pure strategy ones we had found in the previous post.

So, here is the matrix from last time:

Harvey Dent -->>
James Gordon ↓           Cooperate        Don't Cooperate
Cooperate                (4,4)            (1,3)
Don't Cooperate          (3,1)            (3,3)


Suppose that Harvey assigns a probability, p, to cooperating and (1-p) to not cooperating. Then we could perform an expected utility calculation to deduce Gordon's optimal strategy.

Recall that the Expected Utility (EU) of a given action is equal to the sum of the utility values (U) of the possible outcomes, each weighted by the probability (p) of receiving it. Therefore:

EU = p * U(Cooperate) + (1-p) * U(Don't Cooperate)

Then the expected utility if Gordon cooperates is:
EU(Gordon Cooperates) = p * 4 + (1-p) * 1
= 4p + 1 - p
= 3p + 1

The expected utility if Gordon does not cooperate is:
EU(Gordon Does Not Cooperate) = p * 3 + (1-p) * 3
= 3p + 3 - 3p
= 3

We know that Gordon will choose whichever action yields the greater expected utility. To find the probability at which he is indifferent between the two, we set the equations equal to each other:

EU(Gordon Cooperates) = EU(Gordon Does Not Cooperate)
3p + 1 = 3
3p = 2
p = 2/3

Therefore, Gordon will cooperate only if the probability that Harvey cooperates is greater than 2/3. Otherwise, he will not cooperate. We can perform the exact same analysis by assigning a probability, q, to Gordon's actions and calculating expected utilities for Harvey. It will yield the same answer, namely that q = 2/3.

So p = q = 2/3 and we have a new, mixed strategy equilibrium where each player chooses to cooperate 2/3 of the time and does not cooperate 1/3 of the time. If Harvey decides to randomize this way, then Gordon cannot benefit by deviating from this strategy alone.
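For anyone who wants to check the algebra, here is a small Python sketch (my own illustration, not part of the original analysis) that plugs the payoffs from the matrix above into the indifference condition and solves for p:

# Gordon's payoffs from the matrix above (first entry of each cell):
# cooperate:        4 if Harvey cooperates, 1 if he doesn't
# don't cooperate:  3 either way
u_cc, u_cd = 4, 1   # Gordon cooperates (vs. Harvey cooperating / not cooperating)
u_dc, u_dd = 3, 3   # Gordon works on his own

def eu_cooperate(p):
    # Gordon's expected utility from cooperating when Harvey cooperates with probability p
    return p * u_cc + (1 - p) * u_cd

def eu_work_alone(p):
    # Gordon's expected utility from working on his own
    return p * u_dc + (1 - p) * u_dd

# Indifference: p*u_cc + (1-p)*u_cd = p*u_dc + (1-p)*u_dd, solved for p:
p_star = (u_dd - u_cd) / ((u_cc - u_cd) - (u_dc - u_dd))
print(p_star)                                       # 0.666... = 2/3
print(eu_cooperate(p_star), eu_work_alone(p_star))  # both equal 3.0, as stated above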

This result is interesting for several reasons. First, each player's expected payoff under the mixed strategy is 3, so the mixed-strategy equilibrium outcome is no better than either of the pure strategy ones. Dent and Gordon would be just as well off choosing not to cooperate with each other 100% of the time, and they would each be strictly better off choosing to cooperate 100% of the time.

Second, I had mentioned before that we were supposing Harvey intentionally randomized his actions, but the truth is that this mixed strategy exists whether he wants it to or not. The reason is that these mixed strategies can be interpreted as reflecting one player's beliefs about the other's actions. In other words, Harvey choosing to cooperate 2/3 of the time and to work on his own 1/3 of the time can be seen as Gordon's belief about what Harvey will do, given his uncertainty in the matter. If Gordon believes Harvey will cooperate with probability 2/3, he is indifferent between his own two actions, and mixing 2/3 himself is a best response.

Now suppose that Harvey decides to flip a coin instead. And what's more, suppose that Gordon knows that Harvey will flip a coin. What will Gordon do? And will this be an equilibrium?

If Harvey flips a coin to decide, this means that he will cooperate 50% of the time and work on his own 50% of the time. So, Gordon's expected payoff will be:

EU(Gordon Cooperates) = (1/2 * 4) + (1/2 * 1) = 2.5
EU (Gordon Does Not Cooperate) = (1/2 * 3) + (1/2 * 3) = 3

Therefore, Gordon will derive a larger expected utility from not cooperating and will choose to work on his own all of the time.

This, however, is not an equilibrium. We already know that if Gordon chooses to work alone 100% of the time, then Harvey would be strictly better off by also choosing not to cooperate 100% of the time. By sticking to the coin strategy, Harvey is actually losing some utility: his expected payoff against a non-cooperating Gordon is (1/2 * 1) + (1/2 * 3) = 2, rather than the 3 he would get by simply working alone.

Of course, there are certain situations where flipping a coin could work. Suppose that Two-Face and the Penguin are facing off against each other by driving their cars towards one another in a bizarre game of chicken. Each can choose to go left or go right. The catch is that they have to make their decisions at the same time, so neither gains anything by waiting to see which way the other turns. All we know is that each wants to live. So, if they both turn left, they each receive a utility of 10 for being alive. If they each turn right, they will also receive a utility of 10. If one turns left and the other turns right, both will die in the car crash and receive a utility of 0. The matrix then looks like this:

Two-Face -->>
Penguin ↓                Left             Right
Left                     (10,10)          (0,0)
Right                    (0,0)            (10,10)


Here if we perform the same utility calculations as above, assigning a probability of p to Two-Face turning left, we will arrive at p=1/2. Therefore, if Two-Face chooses to flip a coin intentionally, the Penguin should do the same and this would be a mixed-strategy Nash equilibrium.

Now, this sort of situation does not happen often. And this is why Two-Face's gimmick of flipping a coin to make every decision is usually a costly one. First of all, he gives away his strategy, making it easy for his opponents to predict their best actions. Second, it is not always the case that choosing one action 50% of the time and another 50% of the time is a mixed-strategy equilibrium, as we saw above. If Two-Face continues to adhere strictly to this strategy, he will lose in the long run.

And this is why Batman will always win. He knows his economics.

Tuesday, July 7, 2009

Dent and Gordon's Uneasy Alliance: A Game Theoretic Analysis

Reprinted from screenrant.com


Watching the film The Dark Knight got me thinking more about trust and cooperation. In fact, trust seems to be a big theme in the movie (as it was in Batman Begins). For example, though we know that the future Batman and future Commissioner Gordon form a close bond predicated on absolute trust and respect for one another, the relationship was more uneasy at the dawn of Batman's career. Gordon knew that in order to aid a masked vigilante, he had to bend the rules of the law and risk his career. Batman knew that in order to gain help from the inside, he needed to accept someone into his operation who could at any moment turn on him and compromise his mission. In Frank Miller's Batman: Year One, we are actually given access to Gordon's mind as he struggles with the implications of partnering with Batman.

There are other examples of cooperation and trust presented in the Christopher Nolan films as well: Batman's gradual but cautious partnership with Harvey Dent, Lucius Fox's faith in Bruce's use of company technology, even mob loyalties towards one another. And the movie certainly does pose a few interesting questions. Namely, "Are these partnerships a good idea?" and "What are the potential consequences of such relationships?" We also see the short-term effects of some of these, particularly Batman's relationship with Gordon. SPOILER ALERT. Towards the end of the film, we in fact see their ties severed and the good that they had accomplished slowly wither away.

But let's take it a step back and think about a specific example in game theoretic terms to see whether cooperation makes sense. Earlier, we had considered the case of villains betraying each other after neutralizing Batman's threat, which we likened to the famous example of the Prisoner's Dilemma. Now, we consider a different scenario--one in which cooperation, rather than defecting, might be an optimal choice for each party involved.

Let's take the example of Gordon and Harvey Dent in The Dark Knight. Recall the scene in which Gordon meets with Dent at the DA's office following an unsuccessful trial in which the latter attempted to convict Boss Maroni for his connections to the mob. In this scene, Dent proclaimed that despite all of his best efforts, he was only able to put some mobsters away and was ultimately incapable of rooting out their money laundering schemes and halting the transfer of illegal money. Conversely, Gordon, with Batman's help, was able to craft a scheme to seize the mob's finances from five notable banks. However, he needed Dent to issue warrants to back the search and seizures on the banks. Without Dent's help, Gordon and Batman could only continue stopping pockets of crime here and there.

It is clear that the benefits of cooperation in this case would be a significant reduction in mob power and influence. However, both Gordon and Dent have some reservations about partnership. Gordon was skeptical about allowing a third party to meet Batman, fearing both that it would make the operation too large and that it would increase the chances of either sensitive information being leaked or the operation being compromised. Dent feared that he could not trust Gordon's Major Crimes Unit after having investigated several of its members in Internal Affairs.

Though they had met and ultimately agreed to cooperate, let us suppose that their mistrust was considerable enough to cause serious doubts. That is, neither Gordon nor Dent was 100% certain of the actions of the other player. What would their optimal choices be in this situation?

We need to assign some utilities here. Let's say if both players cooperate then each would receive a utility of 4 for the success of having rooted out the mob's finances. If neither chooses to cooperate, but rather continue with their separate means of crime-fighting, then each would receive a utility of 3. They would continue catching mobsters and putting some in jail, but of course this would not be as beneficial to either as "hitting them where it hurts: their wallets." Suppose that Gordon decides to cooperate, setting up the scheme to seize assets from the five banks and involve Batman, but Dent backs down at the last minute. In this scenario, Dent would still receive a utility of 3 for catching criminals on his own, but Gordon's utility would be reduced, say, to a utility of 1 for having planned the operation and expending all the resources of his unit. Conversely, if Dent decides to cooperate and drafts up the warrants, but Gordon decides not to include him after all, then Gordon would receive the utility of 3, while Dent would receive the utility of 1 for having wasted the effort.

This scenario is similar to the popular game known as the Stag Hunt. In it, two hunters decide whether to cooperate to catch a stag (from which they each derive greater utility) or go separately and each catch a hare. The following normal-form matrix represents these utilities.

Harvey Dent -->>
James Gordon ↓           Cooperate        Don't Cooperate
Cooperate                (4,4)            (1,3)
Don't Cooperate          (3,1)            (3,3)


Let's find the pure strategy Nash Equilibria of this game. Suppose that Dent decides to cooperate. Then it would be in Gordon's best interest to cooperate as well, for he would receive a utility of 4 instead of a utility of 3. However, if Dent decides not to cooperate, Gordon would be better off by also not cooperating, for he would only receive a utility of 1 for cooperating, rather than a utility of 3 for going on his own.

Similarly, if Gordon decides to cooperate, Dent should choose to cooperate as well, as he would also receive a utility of 4 as opposed to 3. Should Gordon choose not to cooperate, Dent would also be best served by not cooperating, as he would receive a utility of 3 rather than 1.

Notice that the results of this game are quite different from those of the Prisoner's Dilemma. Namely, this game actually has two pure strategy Nash equilibria (strategies in which neither player can benefit by deviating alone). Either both players will cooperate or both players will work alone. Recall that in the Prisoner's Dilemma (Batman villains betraying each other), there was only one pure strategy Nash Equilibrium: both villains betray each other. Although both players would have benefited by cooperating, it did not make sense for either player to do so individually. Rather, each should have always chosen to betray the other. In this case, however, Gordon and Dent choosing to work together could make sense.
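As a side note for the programmatically inclined, the two pure strategy equilibria can be confirmed with a brute-force check. The following Python sketch is my own illustration (the labels and structure are not from the post); it simply tests, for each cell of the matrix above, whether either player could gain by deviating alone:

# Payoffs are (Gordon, Dent); rows are Gordon's actions, columns are Dent's.
ACTIONS = ["Cooperate", "Don't Cooperate"]
PAYOFFS = {
    ("Cooperate", "Cooperate"): (4, 4),
    ("Cooperate", "Don't Cooperate"): (1, 3),
    ("Don't Cooperate", "Cooperate"): (3, 1),
    ("Don't Cooperate", "Don't Cooperate"): (3, 3),
}

def is_pure_nash(gordon_action, dent_action):
    # A cell is a Nash equilibrium if neither player can do better by deviating alone.
    gordon_payoff, dent_payoff = PAYOFFS[(gordon_action, dent_action)]
    gordon_best = max(PAYOFFS[(a, dent_action)][0] for a in ACTIONS)
    dent_best = max(PAYOFFS[(gordon_action, a)][1] for a in ACTIONS)
    return gordon_payoff == gordon_best and dent_payoff == dent_best

print([cell for cell in PAYOFFS if is_pure_nash(*cell)])
# [('Cooperate', 'Cooperate'), ("Don't Cooperate", "Don't Cooperate")]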

Note that there is a difference between the two Nash Equilibria in this game. The outcome in which both players cooperate is known as the payoff dominant equilibrium. This means that it is Pareto superior to all other outcomes in the game, i.e. should both players choose to cooperate, each would receive a higher payoff than in any other outcome. However, the other equilibrium, in which both players work on their own, is risk dominant. This means that as players become more uncertain of the actions of the other player, they become more likely to choose the strategy that leads to this outcome. The reason is that choosing not to cooperate guarantees a utility of 3, whereas an individual choosing to cooperate bears the risk of receiving a utility of 1.

In fact, one major implication of this game (aside from the fact that there are situations in which cooperation makes sense) is that an individual's beliefs about the other's actions matter. Suppose Gordon knew the probability that Dent would cooperate. How would this affect his actions? This is where mixed strategies and randomization come into play. As it turns out, this "Stag Hunt" game contains one mixed strategy Nash equilibrium. We shall consider these mixed strategies in an upcoming post.

Thursday, June 11, 2009

Follow-Up to Batman Villains and Cooperation Post

Thank you all for your comments on our post analyzing the Joker's decision of whether or not to cooperate with other villains. It seems as though many of you have taken issue with a few assumptions that I have made. I want to address those issues here.

1) The probability of killing Batman given that three villains attack him separately should be equivalent to adding the probabilities--or 6%.

I never actually said this--in fact, I did not mention the probability of them working separately at all. Suppose the Joker attacks Batman on Monday, Two-Face on Tuesday, and the Riddler on Wednesday. In effect, this means the three villains would be attacking Batman separately. The probability of killing Batman in this case would actually be 5.88%, not 6%. It follows a geometric series of probabilities. In other words, the probability that Batman is killed by the end of the third day (by the third villain) would be equal to:

P(Batman killed by day 3) = sum over k = 1 to 3 of (1-p)^(k-1) * p, where p = 0.02 is the probability of killing Batman on any given day and k is the day number
= (1-0.02)^0 * 0.02 + (1-0.02)^1 * 0.02 + (1-0.02)^2 * 0.02
= 0.0588, or 5.88%

(Equivalently, this is 1 - (0.98)^3: one minus the probability that Batman survives all three attacks.)
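If you would rather let a computer do the arithmetic, here is a quick Python sketch (mine, not the original commenters') that reproduces the 5.88% figure:

p = 0.02  # assumed probability that a single villain kills Batman on a given day

# Probability that Batman has been killed by the end of day 3, summed day by day:
killed_by_day_3 = sum((1 - p) ** (k - 1) * p for k in range(1, 4))
print(round(killed_by_day_3, 4))   # 0.0588, i.e. 5.88%

# Equivalent closed form: one minus the probability that he survives all three attacks
print(round(1 - (1 - p) ** 3, 4))  # 0.0588 again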

2) Diminishing returns does not apply in this situation.

Upon reevaluation, I concede that the probability equation I offered for cooperation does not work for this scenario. The probability of killing Batman given cooperation among Batman villains should exceed the probability of killing Batman if they were to attack him separately. So if instead of attacking on separate days, the Joker, Two-Face and the Riddler were to plan a coordinated, simultaneous attack, the probability should exceed 5.88%, as dictated by the concept of synergy (the whole is greater than the sum of its parts).

However, this should only be the case up to a point. I am surprised to see that so many commentators ardently denied the theory that as you add more villains, the marginal effectiveness diminishes. If adding more villains to the plot increased the probability exponentially (or even linearly), then eventually there would be some number of villains at which the probability of killing Batman reaches 100%, and his death becomes certain. This cannot be the case for Batman--who survived an attack by the OMACs, who lived through an attack by the Black Glove, and who survived Darkseid's Omega Sanction. Further, once the battle hits this point of absolutely insurmountable odds, adding more villains could not possibly increase the probability of death (it being already 100%).

Instead, the graph should be convex (increasing returns) up to a certain point and then switch to being concave (diminishing returns). That is, cooperating up to a certain number of villains should increase the marginal probability of killing Batman, but after that point the marginal probability should start decreasing. This would be an "S" curve, similar to a learning curve or a logistic function.
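To make that shape concrete, here is a small, purely illustrative Python sketch of such an "S" curve. The particular parameters (a midpoint of 10 villains, a steepness of 0.5, and a ceiling of 95%) are arbitrary choices of mine used only to produce the shape described above; they are not figures from the analysis:

import math

def p_kill(n_villains, cap=0.95, midpoint=10, steepness=0.5):
    # Logistic ("S"-shaped) curve: convex below the midpoint, concave above it.
    # The cap keeps the probability strictly below 100%, since Batman's death
    # never becomes certain no matter how many villains pile on.
    return cap / (1 + math.exp(-steepness * (n_villains - midpoint)))

for n in (1, 3, 5, 10, 20, 100, 101):
    print(n, round(p_kill(n), 3))
# The marginal gain grows at first, peaks around the midpoint, and is essentially
# zero by the time the 101st villain shows up.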

As an example, suppose that Batman is fighting the Joker and Two-Face. If the Scarecrow suddenly joined the party then Batman would have a significantly harder time fighting the three of them simultaneously. But now imagine Batman fighting 100 villains. If one more villain joins the party (making it 101), does this last villain induce the same marginal probability increase as the Scarecrow did? I certainly don't think so. In a battle with 100 villains, there are two outcomes. The first is that Batman withstands the 100 villains by himself, in which case adding one more would increase the probability of killing him, albeit not by much. The second is that Batman loses the fight against 100 villains, meaning that the 101st villain would have been ineffective.

Finally, this is, after all, the Batman universe we are discussing here. Cooperation means not only that the villains have to forgo their already significant hostility towards one another (which itself involves a cost), but that they must hatch a plan predicated on compromise. And as many commentators pointed out, compromise is not particularly easy for these villains. These are the sort of people who each want to play a prominent role in the demise of Batman. Yet they all have different talents and different means of achieving that goal, not all of which can be fulfilled in a cooperative plan. The Scarecrow, who prefers psychological means of destruction, could not poison Batman with fear gas and let him destroy himself while also letting Deadshot shoot him in the head from a distance. The group would have to sustain the interest of each individual member (many of whom have short attention spans) and keep close watch over these villains as their numbers increase. As the group surpasses a certain size, there is huge potential for villains to become contentious, get in each other's way, foil the plan, or weigh the group down. It is not unlike working on a school project with a group whose members, though each has an equally effective means of achieving the goal, cannot agree on a particular method.

All in all, eventually we should be seeing some diminishing marginal probability increases with respect to the probability of killing Batman.

3) This sort of analysis should not be applied to Batman villains since it assumes they are rational actors, when they are in fact, irrational.

First: I already mentioned this in the previous post.

One of the distinguishing features of most of the notable Batman villains is that they all have distinct neuroses and pathologies that render most of them utterly incapable of working together. It is not a rational decision; rather, most of these rogues have deep-rooted psychological afflictions, many of which mirror some aspect of the Batman. As such, they have different motivations and goals, different means to achieve those goals, and different reasons to kill Batman.

As such, the analysis is purely academic. We know that the Joker is not actually making utility calculations in his head when he decides. The post was designed for fun and to engage the readers in debate. In no way am I actually prescribing that writers start factoring these calculations into the books or start having the characters engage in mathematical debates.

Secondly, by extension, arguing that this sort of analysis should not be applied to Batman stories given their villains' irrationality also implies that economics should not be applied to real-world, human decisions. Human beings are also irrational. Our preferences do not always make sense and our decisions are not always exercised with rational caution. If every human being acted rationally, nobody would ever have won a game of tic-tac-toe in the history of human civilization. Yet we still apply economic theory, as we do political theory, social theory, psychological theory, etc., as guidance in an attempt to explain the world with the means and evidence available to us.

Tuesday, June 9, 2009

Batman Villains and Cooperation: A Utility Analysis

(This is the first part of a post that will include some very light and simple algebra and game theoretic concepts. Not to worry--it is pretty crude and easy to follow along with. Also, please note that the assumptions made are rudimentary and based on my own view of Batman and his villains. I welcome everyone to debate them with me.)

Jeph Loeb has a tendency to depict Batman villains in a strange way. In The Long Halloween and Dark Victory, we see the bulk of his rogues gallery actually working together to achieve a common goal. In fact, the latter has Two-Face conducting a sort of mock trial in his lair with all of the villains (including the Joker) in attendance, watching and participating as Two-Face prosecutes witnesses as part of his deranged scheme.

Of course, this is ludicrous. One of the distinguishing features of most of the notable Batman villains is that they all have distinct neuroses and pathologies that render most of them utterly incapable of working together. It is not a rational decision; rather, most of these rogues have deep-rooted psychological afflictions, many of which mirror some aspect of the Batman. As such, they have different motivations and goals, different means to achieve those goals, and different reasons to kill Batman. In fact, it's been argued ad nauseam that the Joker may not even want to kill Batman, for this act would extinguish his very nature of being. The one truth is that, barring certain less-insane villains like the Penguin, most Batman villains have a burning desire to find, identify and kill the Batman on their own.

Most people would intuitively argue that it would make more sense for these villains to pool their skills and cooperate in order to finally rid the world of Batman. However, the decision to work alone is not entirely irrational. In fact, we can use very basic tools of utility and game theory in order to work out such a decision for a Batman villain (let's say the Joker, being the most insane and distinguished) and show that cooperation is not necessarily the optimal choice.

First, we have to make certain assumptions. Specifically, we need to assign probabilities of capturing Batman and figure out how much these probabilities increase due to the addition of a new cooperating villain. We also need to assign utility values for the Joker for each scenario. Let's start with utilities.

For not killing Batman, we can obviously assign the Joker a utility of 0.
For killing Batman on his own, let's assign the Joker a utility of 10.
For killing Batman with the help of x other villains, the utility would be 10/(x+1).

The last one is sort of tricky. It means that if the Joker cooperates with one other villain (say Two-Face) and together they manage to kill Batman, then x = 1 and the utility for each would be 10/2 = 5. In effect, the villains "split" the utility of 10 evenly among everyone involved.

Many of you might be wondering why it is that the more villains there are, the less utility each one receives from killing Batman. Well, consider the pathology argument above. Obviously, if we factor in the Joker's pathology, his utility for killing Batman through cooperation would be less than that of killing Batman on his own, but greater than 0, as Batman would still be out of the picture. The thing is, as more and more villains enter the party, the Joker will feel less and less accomplished if they wind up killing Batman. Again, it is his very essence of being. He wants nothing more than to kill the Batman on his own, so it should make sense that the satisfaction he derives diminishes as more rogues are brought on board for the mission.

Aside from pathology, there are other reasons why the Joker's utility would diminish as such. One is that the villain who finally achieves victory would be held in the highest esteem among the criminal underworld and feared the most by the Gotham elite. Victory over Batman is largely a symbolic projection of status to the rest of Gotham City--and this is something the villains all desire. Therefore, if the Joker were to finally kill Batman, he would effectively "rule" Gotham.

Consider an even more tangible reason. The villain who kills Batman would gain access to his identity. He could therefore do with this identity anything he pleases. Assuming Nightwing and Robin won't be a problem to neutralize, the Joker could gain access to the Batcave: a goldmine of wealth and technology. He could further auction off Batman's identity to the highest bidder. Even though Batman would be dead, I have no doubt that most of the villains would want to purchase this information to exact revenge on Alfred, Dick, Tim, etc.

For all these reasons, then, it makes sense for the Joker's utility to diminish as more villains are added. The more rogues that take part in the death of Batman, the less satisfied the Joker will feel, the less influence he will have over Gotham City, and the fewer actual benefits he will reap after Batman's death. For simplicity's sake, I assume that the villains "split" the utility of 10.

Now, let's assign the probabilities. I'm going to assume that each Batman rogue has a 2% chance of killing Batman alone (and this is being very, very generous, as well as neglecting the individual skills of each rogue for simplicity). You would then think that adding villains to the scheme would increase the probability of killing Batman by 2% with each new rogue. Except this ignores the economic law of diminishing returns, which states that as you increase the factors of production, the marginal benefit of those factors decreases. Usually this applies to outcomes that are continuous (such as production of goods) rather than binary (to kill or not to kill Batman), but we can apply diminishing returns in this case to the probabilities. The theory is that as you add villains, working together will prove more difficult and planning more arduous. Therefore, the probability of getting Batman will increase, but by a marginally smaller amount with each villain added.

Thinking of probability as output, let's assume that in each state,
p = 2 * y^0.9, where
p = the probability of killing Batman (expressed as a percentage) and
y = the number of villains involved in the scheme.

Hence, we have a diminishing returns function. If there is only one villain involved in the scheme, the probability of killing Batman is 2%.

If there are 2 villains involved in the scheme, the probability becomes:
p = 2*(2)^0.9 = 3.73% (the probability increased by 1.73 percentage points)

If there are three villains involved, then:
p = 2*(3)^0.9 = 5.38% (probability increased by 1.65 percentage points)
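A few lines of Python (my own sketch, using the assumed formula above) tabulate how the marginal increase keeps shrinking as villains are added:

def p_kill(y):
    # Assumed probability (in percent) of killing Batman with y cooperating villains
    return 2 * y ** 0.9

previous = 0.0
for y in range(1, 6):
    current = round(p_kill(y), 2)
    print(y, current, "+" + str(round(current - previous, 2)))
    previous = current
# 1 2.0 +2.0
# 2 3.73 +1.73
# 3 5.38 +1.65
# 4 6.96 +1.58
# 5 8.51 +1.55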

And so on and so forth. Now armed with the knowledge of probabilities and utilities, let's conduct an analysis of whether it makes sense for the Joker to team up with Two-Face and the Scarecrow. We must analyze the expected utility of each scenario (teaming up and working alone).

First let's calculate the expected utility of working alone for the Joker. The equation is:
EU = p * (Uk) + (1-p)*(Unk) where
EU = expected utility
p = probability of killing Batman
Uk = utility of killing Batman.
Unk = utility of not killing Batman

We know that for the Joker, the utility of killing Batman alone is 10 and the probability of killing Batman by himself is 0.02. Hence:
EU = 0.02*(10) + (0.98)*0 = 0.2
Hence the expected utility of the Joker killing Batman on his own is 0.2.

Now, we analyze the expected utility of the team-up. We know that the probability of the Joker, Two-Face and the Scarecrow killing Batman is 0.0538. The utility would be 3.33 each. Hence:
EU = 0.0538*(3.33) + (0.9462)*0 = 0.179
Hence the expected utility for the Joker of the trio killing Batman is 0.179.
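Putting the pieces together, a short Python check (my own sketch of the calculation above, not anything from the books) confirms that working alone comes out ahead:

def expected_utility(p_kill, u_kill, u_no_kill=0):
    # EU = p * U(kill) + (1 - p) * U(no kill)
    return p_kill * u_kill + (1 - p_kill) * u_no_kill

eu_alone = expected_utility(0.02, 10)         # Joker goes it alone
eu_trio = expected_utility(0.0538, 10 / 3)    # Joker, Two-Face and Scarecrow split the 10
print(round(eu_alone, 3), round(eu_trio, 3))  # 0.2 0.179 -> working alone wins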

Since the expected utility of the trio killing Batman is less than the expected utility of the Joker doing it by himself, the Joker should prefer to work alone. Hence using simple economics, we have shown that it makes perfect sense for the Joker not to cooperate with other villains. Of course, this is incredibly simple and there are many other issues to consider. One of these issues is whether it would make sense for the Joker to cooperate, but then backstab the other villains. This issue will be considered in a subsequent post.