Written: Dec. 7, 2005.

Unlike many of my colleagues, I have the greatest respect for social science and sociology. I never liked, and strongly disagree with, the old "joke":

*
Q.: Why is it cheaper to hire a sociologist than to hire a mathematician?
A.: A mathematician only needs paper, pencil, and waste-paper basket.
A sociologist does not even need a waste-paper basket.
*

Unfortunately, many mathematicians do not use the recycling bin enough, and while what they publish is usually correct, a lot of it is either trivial or uninteresting. On the other hand, much of social science is full of wonderful insights.

Most social scientists do not pretend to know mathematics,
hence there is little danger that they abuse it. However,
one should be on one's guard
when reading works by those few who *think* that they know it.

A cursory reading of the opus of Columbia University sociologist
Jon Elster gives the impression that
he knows quite a bit of mathematics. For example, his
"axiomatic approach" to Marx's
theory of productive forces (Appendix 2 of "Explaining Technical Change"),
while unnecessarily pompous-sounding to a mathematician
(it almost reads like a parody of
the axiomatic method, taking trivial facts and dressing them up to look
like deep mathematics), is nevertheless correct (he was helped
with the mathematical part by a colleague). Also the "theorem" on p. 97
of "Explaining Technical Change" (Eqs. (2) and (3))
is correct, but I wouldn't call it a theorem.
It is a calculus exercise!

Indeed if f(aK,aL)=af(K,L)

then f(K,L)=f(L*(K/L),L*1)=Lf(K/L,1)=LF(K/L), where F(x):=f(x,1).

Now

diff(f,K) = L*F'(K/L)*(1/L) = F'(K/L)

and

diff(f,L) = F(K/L) + L*F'(K/L)*(-K/L^{2}) = F(K/L) - (K/L)*F'(K/L),

hence

diff(f,K)*K + diff(f,L)*L = K*F'(K/L) + L*F(K/L) - K*F'(K/L) = L*F(K/L) = f(K,L).

But whether you call it Theorem, Lemma, Proposition, or
Exercise, it is a *true statement*.

[Added April 4, 2009: it has just occurred to me that this is the n=1 special case of
Euler's Homogeneous function theorem.]
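Euler's relation is easy to corroborate numerically. Here is a quick Python check (my own illustration, not part of the original argument; the choice f(K,L) = sqrt(K*L), an arbitrary degree-1 homogeneous function, is mine):

```python
# Numerical sanity check of Euler's relation K*f_K + L*f_L = f
# for a degree-1 homogeneous f; here f(K,L) = sqrt(K*L).
import math

def f(K, L):
    return math.sqrt(K * L)

def euler_residual(K, L, h=1e-6):
    # approximate the partial derivatives by central finite differences
    fK = (f(K + h, L) - f(K - h, L)) / (2 * h)
    fL = (f(K, L + h) - f(K, L - h)) / (2 * h)
    return K * fK + L * fL - f(K, L)

print(abs(euler_residual(3.0, 5.0)) < 1e-6)  # -> True
```

Any other homogeneous f of degree 1 would do just as well, of course.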

In "Explaining Technical Change" (p. 34), Elster talks about "counterfactuals" and about the common fallacy that people try to prove "A implies B" by showing that "not A implies not B". He is right that this is a big blunder (and it is sad that he has to state something so obvious), but as every Logic 101 student knows

A implies B

is equivalent to

"not B and A" is false

and, as we all know,
a common way to prove that "not B and A" is false is by
*reductio ad absurdum*, in other words,
proving that it leads to a contradiction.
For example, by showing that "not B and A" implies "not A".
Using this method is as old as the proof that the square-root of 2
is irrational, and is still valid after all these years (I hope!).
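For the truly skeptical, both the equivalence and the soundness of the reductio schema can be checked mechanically by truth table. A small Python sketch (mine, not Elster's):

```python
# Truth-table checks of the two logical facts used above.
from itertools import product

def implies(p, q):
    # material implication: p -> q
    return (not p) or q

# (1) "A implies B" is equivalent to: "not B and A" is false.
for A, B in product([False, True], repeat=2):
    assert implies(A, B) == (not ((not B) and A))

# (2) Reductio schema: if "not B and A" implies "not A" in every
# assignment, then "A implies B" holds in every assignment.
for A, B in product([False, True], repeat=2):
    assert implies(implies((not B) and A, not A), implies(A, B))

print("both checks pass for all truth assignments")
```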

Hence I was very puzzled
when I read in Elster's 1993 paper "Some Unresolved Problems in the
Theory of Rational Behavior"
(Acta Sociologica 36 (3): 179-190, available on-line) that he
disapproves of backwards induction, and that his objection to it
applies equally to all proofs by contradiction.
It appears that he confuses a *rationality* paradox with
a *logical* paradox.

Let me first reproduce the scenario for the "paradox".
It is the lovely *centipede game* from Game Theory.

Player I and Player II are playing a game where Player I has the first move, and they later alternate moves until the end. In his first turn Player I has the option to quit the game assuring for himself 2 dollars and -1 dollars for Player II, or not quit. If he doesn't quit, then Player II has the option to quit right away, assuring 1 dollar for Player I and 2 dollars for herself, or to pass, giving Player I the next turn. Then Player I has the option to quit assuring 4 dollars for himself and 1 dollar for Player II or not quit, giving Player II the options of either giving 6 dollars to Player I and 3 dollars to herself or 3 dollars to Player I and 4 dollars to herself.

A simple backwards induction shows that under the assumption that
game-theorists euphemistically call *rationality*,
but should be more appropriately named "greedy paranoia",
and the assumption of perfect common knowledge,
Player I will choose to quit after the first round.

Sadly, this outcome is suboptimal, and if they had been nice people,
*both* would have been better off, but this is not a
*logical* paradox. It is what Quine calls a
veridical paradox, i.e. "strange but true", and, as
I will mention in more detail later, does indeed challenge
the foundations of Game Theory (or else such
luminaries as Reinhard Selten, Robert Aumann, and
Ariel Rubinstein would not have written about it), but don't
blame backwards induction!
Elster seems to think that it is a genuine logical paradox.
Here is what he says (ibid, p. 182):

*
"The reasoning leading up to the conclusion that the first player will
quit in the first round seems
compelling. But it harbors a paradox. The reasoning requires the first
player to go through
a thought experiment, including the assumption that the last round
has been reached.
But we know that this round will never be reached if the players are rational.
The assumption that it is reached can be entertained only if one or
both players are assumed to be irrational.
But in that case the backwards induction argument,
which requires both players to be rational,
cannot be carried out. This is the paradox of backwards induction.
It seems to suggest that a game like the one just described has no
rationally prescribed behavior. "
*

Jon Elster then goes on to suggest:

*
"The conclusion is hard to accept.
The logic of backwards induction is intuitively compelling.
It may not be the last intuition we give up, but it is more
compelling than most...
If, in the end, backwards induction has to go, large parts of economic
theory will also have to be thrown overboard."
*

I have news for you, Prof. Elster. If backwards induction has to go then not only "large parts of economic theory will have to go" but most of mathematics. Your "thought experiment" objection applies equally to all proofs by contradiction. We want to prove that A implies B. Let's assume that we have "A and not B", then blah, blah, blah ... and we get that this implies not A, a contradiction, QED. But, according to you, Prof. Elster, this reasoning is flawed, since it forced us to consider an `alternative world' where "`not A' is the case", while the premise of the theorem says clearly that "A is the case".

If we had to renounce proofs by contradiction, we would have to reject the proof that the square-root of 2 is irrational. Yeah, Prof. Elster, another triumph for your rationality obsession. Maybe the square-root of two is a rational number, or at least the "jury is still out" on this question? The Pythagoreans would give you an honorary membership in their order, and thanks to you they wouldn't feel so guilty for drowning poor Hippasus of Metapontum.

Appendix I

First Round: Player I may quit with outcome (2,-1) or pass the baton to Player II.

Second Round: Player II may quit with outcome (3,2) or end the game with outcome (1,5).

Theorem: If both players are rational and with Complete Common Knowledge, Player I will quit after the first round.

Proof: Suppose that Player I doesn't quit, then Player
II, being an ungrateful bitch,
would screw Player I, implying that he is a sucker, i.e. irrational.
This contradicts one of the premises, namely that Player I is rational,
so the assumption that Player I doesn't quit after the first round
leads to a contradiction, implying its negation, i.e. that
Player I *does* quit after the first round. QED.

Theorem: If a and b are relatively prime integers then a^{2} does not
equal 2b^{2}.

Proof: Suppose that a^{2} equals 2b^{2}.
Then a^{2} is even, hence a is even, say a=2c.
But then 4c^{2}=2b^{2}, i.e. b^{2}=2c^{2}, so b is even too,
which implies that a and b
have a common factor (namely 2), which contradicts the premise that
a and b are relatively prime. Hence the negation of the statement that
a^{2} equals 2b^{2} always holds. QED.
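A brute-force corroboration (my own addition, obviously no substitute for the proof): within any finite search range, no integer pair whatsoever, relatively prime or not, satisfies a^{2} = 2b^{2}.

```python
# Exhaustive search: no positive integers a, b with a^2 = 2*b^2
# in a modest range -- relatively prime or otherwise.
N = 2000
solutions = [(a, b) for a in range(1, N) for b in range(1, N)
             if a * a == 2 * b * b]
print(solutions)  # -> []
```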

Here is Jon Elster's objection, applied to the second proof.
The theorem talks about a pair of integers that are relatively
prime, the assumption that a^{2} =2b^{2} forces us
to consider integer pairs that are not relatively prime, and this
is a paradox (according to Elster),
since we are only supposed to be talking about
relatively prime integer-pairs.

Appendix II

Here is the game tree, with positions labeled (a)-(d):

              I(a)
             /    \
        (2,-1)    II(b)
                 /     \
            (1,2)      I(c)
                      /     \
                 (4,1)      II(d)
                           /     \
                      (6,3)       (3,4)

We have the following sequence of lemmas and corollaries.

*Lemma (d)*: In position (d), if Player II is rational
she would choose the (3,4) option.

*Proof of Lemma(d)*: If she would have chosen (6,3), she would
have gotten one dollar less, contradicting that she is
rational. QED.

*Corollary (d)*: If Player II plays at (d) then
Player I would get 3 dollars.

*Lemma (c)*: If Player I is rational, Player II is rational,
and Player I knows that Player II is rational then
if he is at position (c), he would quit.

*Proof*: Suppose that he doesn't quit, then
Player II would be at position (d).
Player I knows that player II is rational, hence
by Corollary (d) he would get 3 dollars, which is
less than 4 that he would get if he quits. This
contradicts the assumption that Player I is rational.
QED.

*Corollary (c)*: If Player I is at position (c), then
Player II would get 1 dollar.

*Lemma (b)*: If Player II is at position (b), and
she is rational, and Player II knows that Player I
is rational and vice versa and etc. then she would quit.

*Proof*: If she doesn't quit, she knows that Player I
will be in position (c), and by Corollary (c) she would
get 1 dollar which is less than 2 dollars that she would
get if she quits, contradicting that she is rational. QED.

*Corollary (b)*: If Player II is at position (b), then
Player I would get 1 dollar.

*Theorem (a)*: If Player I is at position (a), and if
both players are rational and under Complete Common
Knowledge, then Player I will quit.

*Proof*: If he doesn't quit, then by Corollary (b) he would
get 1 dollar, which is less than the 2 dollars that he would
get if he does quit, contradicting that he is rational. QED.
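The whole chain of lemmas can be mechanized. Here is a minimal backward-induction solver in Python (my own sketch; the ad-hoc tree encoding is mine) that recovers Theorem (a):

```python
# Minimal backward-induction solver (a sketch, not from the original).
# A node is either a terminal payoff pair (pI, pII), or a triple
# (player, quit_payoffs, rest), where player 0 is Player I, player 1
# is Player II, and each "rational" (greedily paranoid) player
# maximizes his or her own coordinate.

def solve(node):
    if len(node) == 2:          # terminal payoff pair
        return node
    player, quit_payoffs, rest = node
    cont_payoffs = solve(rest)  # backward induction on the subgame
    return quit_payoffs if quit_payoffs[player] >= cont_payoffs[player] else cont_payoffs

# The centipede game above, positions (a)-(d):
game = (0, (2, -1),                  # (a): Player I may quit with (2,-1)
        (1, (1, 2),                  # (b): Player II may quit with (1,2)
         (0, (4, 1),                 # (c): Player I may quit with (4,1)
          (1, (6, 3), (3, 4)))))     # (d): Player II picks (6,3) or (3,4)

print(solve(game))  # -> (2, -1): Player I quits immediately
```

Tracing the recursion reproduces Lemma (d) through Theorem (a) exactly: (3,4) at (d), (4,1) at (c), (1,2) at (b), and finally (2,-1) at (a).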

Now comes the big

STANDARD OBJECTION: If Player II is at position (b), then
it means that Player I is *not* rational, hence
one of the premises of Lemma(b) is false,
so we can't use it, and hence the whole argument is invalid.

MY OBJECTION to the Standard Objection: This objection can only be made because we ALREADY KNOW Theorem(a). This is Circular Reasoning! In the same way that you can't use the fact that a statement is true when you try to PROVE it, you are also not allowed to use it when you try to INVALIDATE it.

The structure of the two reductio proofs in Appendix I was:

Thm: A implies B

Proof: A and not B implies not A, a contradiction.

Here the structure is a bit more complex (but the extra complexity is just a red herring):

Thm: A and B Implies C

Proof: (A and B and not C) Implies (A and D) Implies (not A), a contradiction.

The objection seems to say that since the conclusion is
(not A) we are not allowed to use the second implication,
but this is nonsense, that's the whole point of
*reductio* proofs.
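In fact the "more complex" schema collapses to the simple one: the reductio premise "(A and B and not C) implies not A" is, assignment by assignment, equivalent to the theorem "(A and B) implies C", as a short Python enumeration (my own addition) confirms:

```python
# Check that the reductio premise and the theorem are logically
# equivalent, by enumerating all 8 truth assignments.
from itertools import product

def implies(p, q):
    # material implication: p -> q
    return (not p) or q

for A, B, C in product([False, True], repeat=3):
    assert implies(A and B and (not C), not A) == implies(A and B, C)

print("the reductio premise and the theorem agree on all 8 assignments")
```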

Conclusion: You picked on the wrong concept, Prof. Elster. Backwards Induction is OK. The true suspects are:

- The naive notion of rationality, which should be replaced by meta-rationality or bounded rationality (or both).
- The notion of Complete Common Knowledge, which is suspect for two reasons. First, it is infinite:
"I know that she knows that I know that she knows ..."
ad infinitum, and infinity begets paradox.
Second, it is self-referential, and that is an even
bigger paradoxical hazard.
