Arnold Kling  

My Talk on Financial Regulation

It's tonight, to a private audience. I only have ten minutes, which is good, because many of the guests are more knowledgeable and distinguished than I am. What I plan to say:

1. If you look at the sequence of housing finance crises (the Depression, the S&L crisis, and the current mess), the response to each crisis became the basis of the next crisis.

2. Regulation is a chess mid-game, not a math problem. With a math problem, once you solve the problem, it stays solved. In a chess mid-game, new opportunities and threats arise constantly. You try to plan ahead, but your plans inevitably degrade over time.

3. We tend to think of the task of regulation as one of making systems hard to break. An alternative to consider is making systems easy to fix. Think of a computer. You can try to use firewalls and anti-virus software to make your computer hard to break. But it still pays to back up your data to make it easy to fix. (A sketch of the backup idea in code follows this list.)

4. The Group of Thirty report (a focal point for the discussion this evening) is mostly focused on "hard to break," not "easy to fix." In particular, it seems to contemplate a European type of financial system, which is less heterogeneous than the U.S. system. The report contemplates consolidation in the banking sector, a single regulator, etc. That might help make the system harder to break, but it will make for a system that is much harder to fix. The current U.S. system, although it can be improved, is better than the system that is envisioned in the Group of Thirty report. I would focus much more on reforms that make the system easy to fix, rather than count on creating a system that is hard to break.
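To make the computer analogy in point 3 concrete, here is a minimal sketch in Python. It is purely illustrative, and the file names are hypothetical: the point is that a backup is cheap insurance prepared in advance, and that restoring from it is a single step that does not depend on knowing how the data broke.

```python
import shutil
from pathlib import Path

DATA = Path("ledger.db")        # hypothetical data file
BACKUP = Path("ledger.db.bak")  # the cheap, prepared-in-advance fix

def back_up() -> None:
    """Make the system easy to fix: copy the data before anything breaks."""
    shutil.copy2(DATA, BACKUP)

def restore() -> None:
    """Recovery is one step, regardless of how the data was broken."""
    shutil.copy2(BACKUP, DATA)

if __name__ == "__main__":
    DATA.write_text("opening balances")  # set up the hypothetical data
    back_up()
    DATA.write_text("corrupted!")        # simulate an unanticipated failure
    restore()                            # the fix doesn't depend on the cause
    assert DATA.read_text() == "opening balances"
```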


COMMENTS (18 to date)
Gamut writes:

I think a better analogy to computers would be this: you can put up firewalls and so on, or you can encrypt and protect each individual resource with a different password. It might seem more complicated to remember them all, but the ability of one person to guess a single password and steal everything is severely limited, since they can only take down one part at a time.

Gamut writes:

So, while your analogy suggests that there's a single 'fix' in case of failure, I think mine points people's minds in the right direction: toward built-in resiliency.

Gamut writes:

Actually, there is a bigger irony there: I used to administer IT systems, and although what I suggest really is best practice, nobody, including me, ever follows through all the way. Generally we tend to use the same password for everything anyway, because we're afraid that we might forget one of them and lose access to our own resources. In a way, we're afraid that our system will be too robust to consistently allow even us unfettered access. Regulators might learn from IT admins in this regard.
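Gamut's per-resource suggestion can be sketched in a few lines of Python. This is a toy, assuming the third-party `cryptography` package and made-up resource names; the point is only that with one key per resource, a stolen key exposes one compartment rather than the whole system.

```python
# A toy sketch of the per-resource idea, assuming the third-party
# `cryptography` package (pip install cryptography). One key per
# resource means a stolen key opens one compartment, not everything.
from cryptography.fernet import Fernet, InvalidToken

resources = {"savings": b"deposit records", "mortgage": b"loan records"}

# Compartmentalized: a separate key protects every resource.
keys = {name: Fernet.generate_key() for name in resources}
vault = {name: Fernet(keys[name]).encrypt(data)
         for name, data in resources.items()}

# An attacker who steals the "savings" key can read only "savings".
stolen = keys["savings"]
print(Fernet(stolen).decrypt(vault["savings"]))  # b'deposit records'

try:
    Fernet(stolen).decrypt(vault["mortgage"])
except InvalidToken:
    print("mortgage stays sealed")  # the break is contained to one part
```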

Lucas Reis writes:

Mr. Kling, your comments remind me of Paul Ormerod, Benoit Mandelbrot and the econophysicists...

Something like "we can't predict the future, so we must be prepared for the unknown".

floccina writes:

"We tend to think of the task of regulation as one of making systems hard to break. An alternative to consider is making systems easy to fix."

What about the idea that you make the system easy to break but make the breaks less damaging? Then you get breaks all the time, and people get very careful about using the system. Nothing like dead bodies always lying around to make people act carefully. Recent recessions were so mild that people took more and more risks.

shayne writes:

I concur with your thesis here, Dr. Kling. I would be much more impressed with a regulatory framework that is designed to compartmentalize failures, rather than attempt to regulate against them in a consolidated manner. Your 'computer/data backup' analogy is just such a mechanism for loss/failure containment while recognizing failures (both risk and uncertainty based) will occur.

On that note, the former separation of commercial and investment banking contained in the Glass-Steagall legislation that was repealed in the 1990s served as just such a compartmentalization of loss/failure. I've never quite fully understood the expected benefit of having repealed it. The costs of the lack of separation seem obvious: failures of the riskier, more uncertain financial 'innovations' were allowed to taint relatively 'safe' financial products (business models) and, in so doing, the larger financial system. What exactly was the expected systemic benefit of the Glass-Steagall repeal?

Alex J. writes:

Gamut, your analogy is well taken. I would extend it to point out that "you" with your many passwords and resources are like a central regulator. In an economy, the many resources each have their own interested owners. The fear that we might lose track of the individual bits corresponds with our intuitions when we think of ourselves as the King of the Economy.

Dan Weber writes:

I like where floccina is going. Remember when elevators were dangerous and required operators? No one would try to fling themselves through a closing door unless they wanted to lose an arm.

As you make the risks smaller, people push the system more.

We need lots of dead bodies lying around. If everything looks safe, that just means that the risk has been moved off somewhere where you can't see it.

tjames writes:

I am not sure that the "easy to fix" approach is correct. I think you have to consider the system at hand before you pick an approach. There are many systems in operation where "hard-to-break" is a requirement for the system to have any use. Think of bridges, or financial transaction software. The users of these systems are unlikely to accept them if the builders just put out something that can be patched later, once there are 'dead bodies'. They expect best effort at hard-to-break.

At the level of systems engineering, I think both approaches can, and should, be considered simultaneously, and you also seem to recognize this in point 3). They are both approaches to the problem of system robustness versus failure: decrease the probability of failure as much as possible AND increase the probability of being able to fix failures as they occur. I see nothing wrong with having a financial regulatory structure that explicitly avoids known failure modes while retaining sufficient flexibility to respond to new crises.

I am not sure how you tie this into your talk. Perhaps it is a feature of the fixes being proposed that they cannot be made to serve both goals at once, which is what you seem to say in point 4). I'd rather have a system that doesn't make me choose one approach over the other.

Arnold Kling writes:

re: Glass-Steagall. I'm not convinced that compartmentalizing failure is such a good idea. Instead, I think that having a lot of messiness in the system--overlaps, differences in regulations, and so on--actually makes it easier to fix. If an institution of type A is not allowed to do anything that an institution of type B can do, then if the type B's fail you have no backup plan. If their functions overlap, you do have a backup plan.

re: some systems, such as bridges, should be hard to break. That certainly applies to the payment system--check clearing, wire transfers, etc. But it need not apply to any one approach to funding housing or corporate investment.
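For what the overlap point might look like in miniature, here is a toy Python sketch with hypothetical institution types: when two types of institutions can each perform a function, the failure of one type leaves a working fallback, whereas strict compartmentalization would leave none.

```python
# A toy rendering of "overlap as backup plan." The institution types
# and the failure are hypothetical; only the structure matters.
def type_b_funding(amount: int) -> str:
    raise RuntimeError("type-B institutions have failed")

def type_a_funding(amount: int) -> str:
    return f"funded {amount} through a type-A institution"

def fund(amount: int, channels) -> str:
    """Try each overlapping channel in turn; fail only if all fail."""
    for channel in channels:
        try:
            return channel(amount)
        except RuntimeError:
            continue  # this channel is down; the overlap supplies another
    raise RuntimeError("no overlapping channel survived")

# Because the functions overlap, the failure of type B is survivable.
print(fund(100, [type_b_funding, type_a_funding]))
```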

hacs writes:

If there is an inevitable trade-off between easier-to-fix regulations and harder-to-break regulations, then that is an excellent reason to think about what the optimal regulation is in that sense, isn't it?

tjames writes:

"Instead, I think that having a lot of messiness in the system--overlaps, differences in regulations, and so on--actually makes it easier to fix."

It is decidedly not my experience as an engineer that messiness improves systems. Redundancy yes - perhaps this is what you mean by overlap - but actual messiness, particularly in regulation, no. I want regulations to be clear, not messy.

I am concerned that having a lot of messy, overlapping regulations would create more opportunity for what you label "regulatory arbitrage". A messy regulatory structure creates opportunities to be exploited and that seems like a danger. If our messy tax code is any example to go by, it also creates rent-seeking and politicians willing to grant the rent for re-election, and further distortion of that same tax code.

I can't say I'm sold yet.

Chris Kramedjian writes:

Arnold,

What sort of easier-to-fix regulations do you recommend? I agree with your premises, but I'm having a hard time thinking of concrete examples.

Perhaps an incredibly well funded bankruptcy court?

Paludicola writes:

I'm also intrigued by the distinction of "easy to fix" versus "hard to break," but unable to work out what the real implications of those premises are. I'm also curious whether you have an opinion on the need for some systemic observer, if not regulator, which many articles have promoted as a desirable reform.

George writes:

"We tend to think of the task of regulation as one of making systems hard to break. An alternative to consider is making systems easy to fix."

This has got to be the best quote from this blog in the past month (and definitely in the running for all of 2009). It sounds like something straight out of a Roger von Oech book.

(I look forward to a liveblogger quoting you as saying, "I think regulation should break the system," and various semi-deranged Nobel-prize-winning print media columnists reprinting it.)

I vaguely recall you writing something similar, but less pithy, in the past. Is anybody else beating the easy-to-fix drum?

George writes:

tjames,

You wrote:

It is decidedly not my experience as an engineer that messiness improves systems. Redundancy yes - perhaps this is what you mean by overlap - but actual messiness, particularly in regulation, no. I want regulations to be clear, not messy.

Yeah, in the software systems I work with, I'm a fanatic about keeping things clean. But that's an engineered system, one simple enough for a single person (or a handful of people taken together) to understand.

Turn your attention to the system between your chair and your keyboard: very messy, very redundant, very wasteful -- but at the same time, very flexible, very resilient, self-repairing, and wildly better at general information processing than any system humans have come up with.

Plus it would take most of a medical school faculty to get a comprehensive explanation of even the non-brain part of it.

It may be that we should think about changing regulatory systems as "growing" them, rather than as "engineering" or "rationally planning" them (as we seem to want to do), or "accreting" them (as we actually do in practice). (cf. Pretty much everything Alan Kay has ever written.)

wc0350 writes:

Our financial system seems to have always been built around being "hard to break." We build the system up with great "firewalls," but firewalls are not unbreakable. Therefore, I would prefer the "easy to fix" approach.

Why put so much time into building up a system that is hard to break? It is still breakable, and chances are that at some point it will crash. Something made easy to fix may not have a complex "firewall," but if it does break, fixing it isn't rocket science. I believe the U.S. should reform the current financial system to add a component that makes it "easy to fix"; the current system, although not based as exclusively on being hard to break as other nations' financial systems, is still built around "hard to break."

Dezakin writes:

Easy to fix is fine only if you ignore politics. Otherwise it implies easy to alter for the political purposes of the day, paving the way for another round of system alteration.

Really, fighting off systemic risk and moral hazard at the same time just isn't an easy task even if ideology says otherwise.
