Arnold Kling  

Hansonism


Will Wilkinson and Robin Hanson have at it. When Tyler Cowen says "self-recommending," does that mean he recommends it without watching it?

Anyway, a few issues.

1. Hanson says that people have a propensity to disagree, just to be contrary. Do you agree? How do we explain conformity?

2. Hanson says that the expected return from being cryonically frozen is positive. If it works, the benefits are high, and the probability of it working is greater than zero. Yet few people sign up for it. I think that we are afraid of looking weird if we sign up for it.

3. Hanson and Wilkinson discuss instinctive status-seeking vs. conscious status-seeking. This brings up an issue. If you learn to recognize that certain behavior is designed to raise a man's status, should you engage in that behavior more (because you want status), or less (because you see that it serves no useful social purpose)?

4. A central point in the Mike Munger podcast is that having a city take over bus service does not eliminate individual motivation. Hanson and Wilkinson describe as idealists people who believe that individualistic motivation can be overcome. Are idealists themselves unusually unselfish, or are they just naive? Are cynics who believe in individualistic motivation particularly selfish, or are they just as public-spirited, but more realistic? The Hanson-Wilkinson discussion broaches the subject, but does not carry it very far.


COMMENTS (17 to date)
Robin Hanson writes:

You are right that we did not get into any particular topic very deeply. We have both pressures to conform and to diverge, which operate to different degrees in different contexts. So Arnold, are you conforming or diverging when you choose not to sign up for cryonics? :)

Kevin Dick writes:

"1. Hanson says that people have a propensity to disagree, just to be contrary. Do you agree? How do we explain conformity?"

I completely disagree.

(Sorry. Couldn't help myself, which I guess is the point.)

In all seriousness, I think people have two different tendencies, which individuals are "blessed" with in different amounts (presumably on a Gaussian distribution).

Tendency one: in interactions framed as "one-to-one", they tend to be contrary.
Tendency two: in interactions framed as "group", they tend to conform.

My "just so" explanation using evolutionary psychology (and therefore probably suspect a priori): one-on-one interactions are opportunities to signal reproductive fitness over the opponent while group interactions are opportunities to signal cooperativeness with common goals. Personally, I seem to be blessed with a couple extra standard deviations of contrariness and at least one standard deviation less of conformity.

nk writes:

On the issue of cryonics: Let's say someone is between 18 and 25 years old and has a discount rate that is very close to zero. Would it make sense to precipitate an early death and be cryonically frozen?

Does age at death affect the expected return from being cryonically frozen?

I am assuming that one's younger future self would be more adaptable to new technologies, quicker to acquire new knowledge, more athletic, and more attractive than one would be if he or she died a natural death.

Note: I am not considering this option...my discount rate is probably too high to give it credence.

Dan Weber writes:

"Hanson says that the expected return from being cryonically frozen is positive. If it works, the benefits are high, and the probability of it working is greater than zero."

If there is an afterlife which is better than current living, and cryogenic freezing keeps me out of it indefinitely, there is a negative return.
Mr. Econotarian writes:

The big problem with cryonics is that neurons undergo apoptosis (programmed cell death) when injured, and axons themselves rapidly degenerate when injured (perhaps to save energy in the brain - why "light up" potentially damaged axons?).

If you are able to freeze yourself (appropriately) at the exact time of death, that is one thing, but even 15 or 30 minutes between body and room temperature is enough for there to be massive information loss in the connectivity of the brain.

Here is a good quote on the problem of anoxia:

"The principal problem for an anoxic cell is to maintain its ATP levels. The stop in oxidative ATP production (giving up to 36 mol ATP/mol glucose) leaves the cell with glycolysis (2 mol ATP/mol glucose) as the only route for ATP production. As a result of the brain's high rate of ATP use, mainly associated with the ion pumping needed to sustain electrical activity, brain ATP levels fall drastically within minutes of anoxia in 'normal' anoxia-sensitive vertebrates. Consequently, the ATP-demanding Na+-K+ pump slows down or stops, initially leading to a net outflux of K+. Soon, extracellular K+ reaches a concentration high enough to depolarize the brain. At this point, Na+ and Ca2+ flood into the cells, a process also stimulated by a concomitant release of excitatory neurotransmitters like glutamate. Indeed, a major route for Ca2+ entry is the glutamate-activated N-methyl-D-aspartate receptors. Neuronal death appears largely to be initiated by the uncontrolled rise in intracellular Ca2+, which activates various degenerative and lytic processes."

Now we know that there are types of turtles and carp that have evolved ways to deal with prolonged anoxia - Carassius carp can survive days of anoxia at room temperature, or even longer when overwintering at near 0 degrees C...

Blackadder writes:

Towards the beginning of the diavlog, Prof. Hanson says something to the effect that if people were not biased they would never knowingly disagree. At the risk of revealing my ignorance on the subject, could someone explain why this is supposed to be so?

Chuck writes:

To the extent that our thought patterns are dictated by evolution, it would seem that there is evolutionary benefit, at least in the past, to contrarianism.

Since, at least in primitive contexts, we have very little reliable information (where are all the bears in the area, what will the weather be like next week, will my 'wife' get pregnant soon, etc.), it would be beneficial to the survival of the species to have two different hypotheses tested by different groups.

So, for a trait that evolved when there was not reliable information, the disagreement is in fact more important for the survival of the species than actual data or the actual analysis.

If we all reach the most logical conclusion we can based on the evidence we have and it is the wrong conclusion and the entire tribe perishes, we would, in a sense, suffer gambler's ruin.

It would be interesting to see how much human disagreement in groups matches Kelly's criterion overall - did evolution build our brains so that, as a group, we somehow allocate our resources in accordance with Kelly's criterion?
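For reference, the Kelly criterion for a binary bet with win probability p and net odds b (win b units per unit staked) sets the optimal stake at f* = (bp - (1-p))/b. A minimal sketch of the formula Chuck invokes (the function name and the tribal framing of the example are mine, not his):

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of resources to stake on a bet that
    wins with probability p at net odds b.
    A negative result means the bet should not be taken at all."""
    return (b * p - (1 - p)) / b

# A tribe 75% confident a strategy works, at even odds (b = 1),
# would "stake" only half its members on it rather than all of them.
print(kelly_fraction(0.75, 1.0))  # 0.5
```

On Chuck's reading, a population that hedges its collective bets this way avoids the gambler's-ruin outcome of everyone committing to one wrong conclusion.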

Arnold Kling writes:

Blackadder,
Suppose that you believe that the value of some variable X is 100, and I believe that it is 200. I should ask the "meta" question of which one of us knows best. If I think you know best, then I should switch to 100. If you think I know best, then you should switch to 200. If we think we each have equally valid opinions, we should both switch to 150.

If we don't converge to the same opinion and we know one another's opinion, then at least one of us is being stubborn. That reflects bias. If nothing else, it reflects a belief that the information I use to form my opinion is somehow "better" than the information you use. But if I think I have better information, and you think you have better information, we can't both be right!
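Kling's 100/200/150 arithmetic can be read as linear opinion pooling, where each party's estimate is weighted by the credence assigned to it. A minimal sketch under that reading (the function name is mine):

```python
def pooled_estimate(estimates, weights):
    """Linear opinion pool: a credence-weighted average of estimates."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Two equally credible parties believing X = 100 and X = 200
# should both move to 150; if one party is deemed to know best,
# both should adopt that party's estimate.
print(pooled_estimate([100, 200], [0.5, 0.5]))  # 150.0
print(pooled_estimate([100, 200], [1.0, 0.0]))  # 100.0
```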

Blackadder writes:

The 'meta' question is only helpful if we agree as to which of us is more likely to have the right answer, and there's no more reason to think that two people couldn't disagree about that without being stubborn than there is to think that two people couldn't disagree about the non-meta question without being stubborn.

Dr. T writes:

"Hanson says that the expected return from being cryonically frozen is positive. If it works, the benefits are high, and the probability of it working is greater than zero. Yet few people sign up for it. I think that we are afraid of looking weird if we sign up for it."

First, the probability of it working is zero. Not just close to zero, but zero.** Second, there is a positive benefit to not buying this service: you can spend the money on something that makes you happy, or you can give the money to your family, friends, or favorite charities when you die. Those positive benefits greatly exceed those of cryonic preservation (where the only beneficiaries are the people running and employed by this scam).

**Econotarian is correct, but my explanation of the zero value of cryonic preservation is simpler: there is no way to prevent your cells, including your brain cells, from rupturing during the freezing process. Even if you could heal each and every cell, the memories that had been contained by those brain cells are lost forever. So, even if your body and brain cells were restored to health, you would awaken as a drooling infant with a blank slate mind.

Blackadder writes:

One other thing. When people talk about wanting to know the truth, they often conflate two related but distinct goals: (1) believing what is true; and (2) not believing what is false. Not only are these goals distinct, they are in tension with each other. If all we cared about was believing the truth, then we would believe everything. If all we cared about was not believing falsehoods, we would be complete skeptics.

Since pretty much everyone wants both to believe what's true and not to believe what's false (I don't say these are our only desires, only that we desire this to some degree), we have to make trade-offs between the two goals. The more credulous we are, the higher the risk we run of believing error. The more skeptical we are, the greater the danger that we will miss out on believing what is true.

Just as some people are more risk averse than others when it comes to, say, gambling or investing, so some people are more risk averse than others when it comes to belief. Since it's not clear to me that rationality demands we give any particular relative weight to believing the truth vs. avoiding error, I would think that, if nothing else, the different trade-offs people choose to make regarding their beliefs could be a source of legitimate disagreement.

Kurbla writes:

(1) Disagreement. If a person has a tendency to become a leader or a follower in some context, he also tends to disagree or agree in that context, respectively.

(2) Cryonics. I think it is mostly procrastination of an unpleasant activity. People do not go in for medical exams, although they know it can save them. On the same basis, they do not freeze themselves - all the more so because they do not believe it works. As cryonics progresses, they'll do it more - the number of Alcor "patients" has grown fast in the last decade.

(3) Status. Personally, the older and possibly wiser I am, the less I seek status. My reason changes my emotions, but it happens very slowly.

(4) Selfishness. Selfish motivation cannot be eliminated, but it can be reduced to the satisfaction of some very basic needs. If you served in the army, you know what I'm talking about. You have food, clothes, a place to sleep, some time for a few kinds of cheap recreation - and you are satisfied. It works.

BGC writes:

"Hanson and Wilkinson describe as idealists people who believe that individualistic motivation can be overcome."

This is true and important.

'Idealists' absolutely refuse to plan on the basis of individual incentives. Clever idealists even refuse to analyze on the basis of incentives, because they know that if this analysis becomes an accepted practice, their cherished schemes will not be allowed to happen.

Not being an idealist, I am continually amazed at the extent to which this happens.

In the UK, the national examinations system was changed so that it is possible to cheat (because examined work is done unsupervised, where other people's help can be obtained - or other people could be doing the work); there are very powerful incentives to cheat (on the part of the examinee, the teacher, and the school); and there is evidence of cheating - but it is regarded as nastily cynical to assume that cheating is widespread or even universal.

Of course, this applies in universities and colleges too, where it is possible and for many students advantageous to cheat on coursework.

(As recently as thirty years ago, it was very difficult indeed to cheat, since all examined work was done under supervised exam conditions. So easy-cheat systems have been deliberately created.)

My point is that systems are made which embody perverse incentives, and these incentives are obviously effective; yet a climate can be created in which even to discuss such incentives is regarded as crass to the point of exemplifying a personal problem (evidence of a worrying degree of cynicism and distrust) on the part of the critic.

The advocate of easy-cheat examination systems is, in effect, saying: 'well, I know it is *possible* to cheat, but I wouldn't cheat and neither would any decent person - how about you?'

To draw attention to perverse incentives - such as incentives to cheat, or lie, or for greed, or status-seeking - is then to single oneself out as being the kind of person who is swayed by such incentives.

Robin Hanson writes:

Econotarian and T, my hopes are mainly on the whole brain emulation scenario, which doesn't require rebooting existing biochemical processes.

Max M writes:

Econotarian - Yes, you're right; the smart money would be on those people who plan appropriately when they realize that death may be imminent: making sure that, when there's a fairly high probability of death, the Alcor people are standing beside the bed with their devices at the ready.

T - My gosh! This whole thing is a scam! Of course, why didn't anyone think of that problem! ... [sarcasm over] The whole science of cryonics has been about how to solve the problem of cells rupturing from the freezing process. In essence, they inject the body with a form of antifreeze (which is toxic, yes) before cooling it to the right temperature. It's not near perfect yet, but it does take us quite a way toward solving the problem. One problem is getting the liquid into every last crevice; the other is how to undo the toxic effects of the antifreeze when reviving the body.

~Max M

Alex J. writes:

Robin, perhaps that would simulate a person a lot like me, perhaps indistinguishable from me over the phone. This might be pleasing to my friends and family, but it doesn't help me. I would be dead, even if my twin lives on in simulation.

Mr. Econotarian writes:

Dr. T is wrong - embryos are regularly frozen and recovered alive after many years. You have to engage in a very complex procedure using cryoprotectant and slow cooling to avoid the formation of large damaging intracellular ice crystals.

The question is whether your neurons can handle the long period of anoxia during the slow cooling process. It can take an hour to slowly freeze down an embryo, but that's just a few cells. A whole body (or whole brain) may take much longer because of thermal inertia.

But the brain's negative response to anoxia is slowed by cooling...so I think you might just be able to pull it off if you started cryoprotection and cooling immediately after "death" (the beginning of anoxia to the brain). A heart-lung machine may help.

This also assumes a blood clot or other circulatory problem in the brain doesn't stop the cryoprotectant from circulating throughout properly.

But all it takes is 10-15 minutes of anoxia at room temperature and the brain is gone.

Now we could start genetically engineering ourselves to survive longer periods of brain anoxia, like Carassius carp and other overwintering animals...

Comments for this entry have been closed