Tuesday, February 19, 2013

In memoriam: Armen Alchian

Armen Alchian, one of the greatest non-Nobel economists, has died. He was noted for his work on the contingency of property rights, but this story (from "Principles of Professional Advancement," in the July 1996 edition of Economic Inquiry, vol. 34, issue 3) has always stuck in my memory:
RAND was not sure what an economist would do. I certainly didn't know either. But I learned a lot about "big real world problems"--too big to comprehend, usually. Since it wasn't clear at first what an economist could do that was pertinent, the task was to snoop around, look at the problems being analyzed (defense problems, usually) and try to see how economics could help.
 What we economists did first was detect how economics was being ignored, in particular how costs and interest rates were ignored in making military-strategy decisions. Another "complicated, surprising" proposition was that for assigning nuclear material to the Air Force versus the Navy, it was not deemed necessary to know whether it was more important for the Navy or the Air Force to have more fissile material. But of course, that would be very desirable to know. With the idea of indifference curves between nuclear material and labor (as inputs), marginal rates of substitution between the two in the Navy and also in the Air Force would indicate directions in which to revise the allocations. That "revelation" gave the economics group some extra clout.
I cite these as two examples of how the simplest concepts and propositions in economics have mega-ton power. In that vein, I like to brag that I did the first "event study" in corporate finance, back in the 1950s and 1960s. The year before the H-bomb was successfully created, we in the economics division at RAND were curious as to what the essential metal was--lithium, beryllium, thorium, or some other. The engineers and physicists wouldn't tell us economists, quite properly, given the security restrictions. So I told them I would find out. I read the U.S. Department of Commerce Year Book to see which firms made which of the possible ingredients. For the last six months of the year prior to the successful test of the bomb, I traced the stock prices of those firms. I used no inside information. Lo and behold! one firm's stock prices rose, as best I can recall, from about $2 or $3 per share in August to about $13 per share in December. It was the Lithium Corp. of America. In January, I wrote and circulated within RAND a memorandum titled "The Stock Market Speaks." Two days later I was told to withdraw it. The bomb was tested successfully in February, and thereafter the stock price stabilized.

Wednesday, February 13, 2013

Feasibility of the Guaranteed Minimum Income

A guaranteed minimum income (GMI) is a tax scheme designed to replace welfare, social security, and other poverty-support systems with a single unified system that ... guarantees a minimum income to everyone. In a future post, I'll explain why such a system is manifestly a great idea. For now, I want to examine its feasibility. This calculator takes as inputs a proposed guaranteed minimum income and a marginal tax rate schedule, and uses census data to estimate total expenditures, total receipts, and net tax revenue.

A couple of quick notes. First, units are percentages of GDP per capita. Second, the census data are for individuals age 15 or higher; this should impart a significant positive bias to estimated net revenue, since the distribution does not account for children. Third, I make no estimates of the impact of tax changes on revenues; one should bear the concavity of the Laffer curve in mind.
Update: According to the CIA world factbook, roughly 20% of the US population is less than 15 years old. I therefore added 22% to the lowest income bracket in order to account for this.
Graphs: pre-tax/post-tax curve; income distribution; net receipts in each tax bracket.
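For concreteness, here is a minimal Python sketch of the kind of computation the calculator performs. The bracket schedule and income distribution below are made-up illustrations, not the census data the calculator actually uses; units follow the post's convention of percent of GDP per capita.

```python
# Sketch of the calculator's core logic. All numbers are in units of
# percent of GDP per capita; the schedule and distribution are invented.

def net_tax(income, gmi, brackets):
    """Tax owed minus the GMI transfer, for one person.
    brackets: list of (lower threshold, marginal rate), ascending."""
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            tax += rate * (min(income, hi) - lo)
    return tax - gmi

def net_revenue(distribution, gmi, brackets):
    """Aggregate net receipts over (income, population share) pairs."""
    return sum(share * net_tax(income, gmi, brackets)
               for income, share in distribution)

# A 30% flat marginal rate with a GMI of 25 (percent of GDP per capita):
schedule = [(0.0, 0.3)]
dist = [(0.0, 0.2), (50.0, 0.5), (200.0, 0.3)]  # includes the zero-income mass
balance = net_revenue(dist, 25.0, schedule)
```

With these made-up numbers the scheme roughly breaks even; the real calculator sweeps the GMI level and rate schedule against the empirical income distribution.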

Wednesday, February 6, 2013

What's your rationale for progressive taxes?

What's your rationale for a progressive tax? I see two general categories:

  • People should pay back to society in measure of their benefit from society.
  • People should pay back to society in measure of their ability to do so.
Both call for people to pay more taxes as they earn more money, but I'd argue that they imply two very different tax structures.

The first, people should pay back to society in measure of their benefit from society, implies a regressive tax. The tax paid on each additional dollar should be lower than the previous. Why? Because marginal utility from income falls. To wit: Warren Buffett has not benefited proportionally more from society than I have, so he should pay proportionally lower taxes than I do.

Perhaps more suggestively, people benefit from income increases in two basic ways: higher consumption and greater security. The greater component by far is increasing consumption, so paying back to society proportional to your benefit from society implies a consumption tax. Consumption taxes are widely held to be regressive.

The second, people should pay back to society in measure of their ability to do so, implies a progressive tax. The tax paid on each additional dollar should be higher than the previous. Why? Because, again, marginal utility from income falls. Warren Buffett loses proportionally less than I do when his after-tax income falls, so he should pay proportionally more in taxes.
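As a toy check on this logic, assume (purely for illustration; the post doesn't commit to a utility function) log utility \(u(y) = \log y\). Taxing in proportion to total utility -- the "benefit" rule -- yields average rates that fall with income, while an equal-proportional-sacrifice rule -- the "ability" rule -- yields average rates that rise with income:

```python
import math

# Toy illustration under an assumed log utility u(y) = log(y).
# The functional forms and parameters here are my own choices.

def benefit_tax(y, a=1.0):
    """Tax proportional to total utility: 'pay for your benefit.'"""
    return a * math.log(y)

def ability_tax(y, k=0.1):
    """Tax leaving everyone the same fraction of utility:
    'equal proportional sacrifice.' Solves (1-k)log(y) = log(y - t)."""
    return y - y ** (1 - k)

low, high = 10.0, 1000.0
# Average rates fall with income under the benefit rule (regressive)...
assert benefit_tax(low) / low > benefit_tax(high) / high
# ...and rise with income under the ability rule (progressive).
assert ability_tax(low) / low < ability_tax(high) / high
```

The point is only directional: under diminishing marginal utility, the two rationales pull the rate schedule in opposite directions.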

Note that this does not consider other benefits and costs of taxes. The progressivity/regressivity of the tax code should depend on effective marginal rates, marginal incentives, the importance of investment, deadweight loss estimates, and so on.

The moral of the story: Be careful of your justification for your preferred tax structure. If you want people to pay back because they've benefited, you're implicitly arguing that you want poor people to pay proportionally more than rich people, because poor people benefit more relative to their income than rich people.

Saturday, January 5, 2013

LaTeXed WCI post

From here, parsed with LaTeX to help me see what's going on.

I wrote this post. Then I realised it was wrong. I really wish my math were better. So I'm turning it into a sort of bleg. I should have written the technology in implicit form as \(F(C,I,K,L)=0\) rather than \(H(C,I)=F(K,L)\). Because the way I wrote it makes \(P_k\) depend only on \(I/C\), when it should depend on \(K/L\) as well. I can't think of any plausible underlying story that would make \(H(C,I)=F(K,L)\) legitimate and reasonably general. But \(F(C,I,K,L)=0\) is ugly and unintuitive and unteachable, even though it works fine theoretically and is just a little bit more complicated.
Maybe someone has some ideas?
Here's what I originally wrote:
Macroeconomists like to aggregate things. To keep it simple. Especially for teaching. But we don't want it too simple, so we have to wave our hands when we want to talk about things that can't happen in the model.
Here is the simple aggregate technology macroeconomists often assume:
$$C + I = F(K,L)  \mbox{ where } I = \frac{dK}{dt} \mbox{  (I have ignored depreciation for simplicity).}$$
Some economists object to the right hand side of that equation. They complain that it aggregates all labour into one type of labour \(L\). And they complain that it aggregates all capital goods into one type of capital good \(K\).
But I object more to the left hand side of that equation.
It aggregates newly-produced consumption goods \(C\) with newly-produced capital goods \(I\). It assumes they are perfect substitutes in production. It assumes the Production Possibilities Frontier between \(C\) and \(I\) is a straight line with a slope of minus one. It assumes the opportunity cost of producing one more capital good is always and everywhere one less consumption good. It means that the price of the capital good will be always one consumption good. And that means that the (real) rate of interest will always equal the marginal product of capital.
We don't assume a straight line PPF between two different consumption goods. Why should we assume a straight line PPF between consumption goods and capital goods?
Let's relax that left hand side assumption. Let's instead assume:
$$H(C,I) = F(K,L)$$
Where \(H( )\) is some convex function, so the PPF between \(C\) and \(I\) is bowed out. That means that the marginal cost of investment (in terms of foregone consumption) will be an increasing function of investment. So the price of the capital good \(P_k\) (in terms of the consumption good) will also be an increasing function of investment.
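To make this concrete, here is a small numerical check with a technology of my own choosing (an assumption, not from the post): \(H(C,I)=\sqrt{C^2+I^2}\), with resources fixed so that \(F(K,L)=1\). Computing \(P_k\) as the (negative) slope of the PPF shows it rising with investment:

```python
import math

# Hypothetical CRS technology: H(C, I) = sqrt(C^2 + I^2) = F(K, L).
# Fixing resources so F(K, L) = 1, the PPF is C = sqrt(1 - I^2),
# which is bowed out as the post assumes.

def C_of(I):
    return math.sqrt(1 - I * I)

def P_k(I, h=1e-6):
    """Price of the capital good: marginal cost of I in forgone C,
    i.e. -dC/dI, computed by central difference."""
    return -(C_of(I + h) - C_of(I - h)) / (2 * h)

# The slope at I = 0.6 (so C = 0.8) is 0.6/0.8 = 0.75...
assert abs(P_k(0.6) - 0.75) < 1e-6
# ...and P_k rises with I, i.e. with I/C, exactly as claimed.
assert P_k(0.2) < P_k(0.5) < P_k(0.8)
```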
Let's continue to assume, as macroeconomists usually do, constant returns to scale. We assume that for both \(H(\cdot,\cdot)\) and \(F(\cdot,\cdot)\). So if we double both \(K\) and \(L\) we can also double both \(C\) and \(I\). So the derivatives of \(F\) with respect to \(K\) and \(L\) depend only on the \(K/L\) ratio. And the derivatives of \(H\) with respect to \(C\) and \(I\) depend only on the \(I/C\) ratio.
The price of the capital good \(P_k\) (in terms of the consumption good) will equal the marginal cost of producing one more capital good (in terms of consumption goods foregone):
$$P_k = -\frac{dC}{dI} = \frac{H_I}{H_C},$$ which is an increasing function of \(I/C\).
The real wage \(W\) (in terms of the consumption good) will equal the marginal product of labour (the extra consumption goods produced):
$$W = \frac{dC}{dL} = \frac{F_L}{H_C},$$ which is an increasing function of \(I/C\) and an increasing function of \(K/L\).
The real capital rental \(R\) (in terms of the consumption good) will equal the marginal product of capital (the extra consumption goods produced):
$$R = \frac{dC}{dK} = \frac{F_K}{H_C},$$ which is an increasing function of \(I/C\) and a decreasing function of \(K/L\).
In equilibrium, the real rate of interest \(r\) (in terms of the consumption good) must equal the rate of return from owning one unit of the capital good. That rate of return will equal \(R/P_k\), plus the annual percentage rate at which \(P_k\) is rising. (If you pay $100 to buy the machine, rent it out for $5 per year, and the price of machines rises by 2% per year, your rate of return will be 5%+2%=7%, and if the rate of interest is also 7% you will be just indifferent between buying and not buying that machine.)
$$r = \frac{R}{P_k} + \bigg(\frac{dP_k}{dt}\bigg)\frac{1}{P_k}$$
Substituting for \(R\) and \(P_k\) we get:
$$r = \frac{F_K}{H_I} + \bigg(\frac{d(H_I/H_C)}{dt}\bigg)\frac{H_C}{H_I}$$
So that \(r\) will be a decreasing function of \(K/L\), a decreasing function of \(I/C\), and an increasing function of the rate at which \(I/C\) is rising over time. (In steady state the \(C/I\) ratio will be constant over time, so that second term will be zero.)
In the standard model, \(r\) is a decreasing function of \(K/L\) only.
In the standard model we get a perfectly elastic investment demand curve. An increase in desired saving and hence investment has no immediate effect on the rate of interest; it reduces the rate of interest slowly over time as the capital stock grows over time. \(K\) cannot jump, so \(r\) cannot jump (unless \(L\) jumps).
In the revised model we get a downward-sloping investment demand curve. An increase in desired saving and hence investment causes \(P_k\) to increase immediately and \(r\) to fall immediately.
I think that's a lot cleaner than the "adjustment costs" approach to getting a downward-sloping investment demand curve.
And it lets us talk about how changes in desired savings and the rate of interest will affect the price of capital goods.
It also shows what's wrong with "\(r = MPK\)", in a simple model.
You could add in a second capital good if you like. Just add \(K_2\) to \(F(\cdots)\), and \(I_2\) to \(H(\cdots )\), then you get a second equation for \(P_{k2}\), for \(R_2\), and for \(r\) as a function of \(P_{k2}\) and \(R_2\). But I don't think it makes as much difference. The problem is not aggregating capital goods. The problem is aggregating the capital good with the consumption good.
To complete the model we need to add a labour supply function and a savings function. One simple savings function would be a consumption-Euler equation where \(r\) is an increasing function of the growth rate in consumption, and so is an increasing function of \(I/C\).
But is it simple enough to teach? I need to think up some diagrams, and a good name for the \(H(\cdots)\) function, so students can understand it.
I don't know if anyone else has done it like this before. They may have.
I don't know if I got any of the math wrong. I may have. By the way, what am I implicitly assuming when I write \(H(C,I)=F(K,L)\) instead of \(G(C,I,K,L)=0\)? I originally planned to write the technology that second way, but thought the first way was a bit more intuitive.
(I thank Bob Murphy for sending me a copy of one of his papers, which inspired me to do this. (Got a link, Bob?). I think Bob and I are saying at least roughly the same thing. I'm just leaving out all the "what Samuelson said wrong" and "what Böhm-Bawerk said right" stuff that Bob goes into. I'm trying to keep it simple.)

Thursday, January 3, 2013

The loanable funds market

I've been thinking about the Austrian model, as described in Garrison's book, and its relationship with monetary economics.  Here is the Austrian model in a nutshell:

The economy is somewhere along the production possibilities frontier in the upper right: \(Y = C+I\).  The level of investment is by definition equal to the level of saving in the economy - that is, it is determined by the loanable funds market, where the saving (= supply) and investment (= demand) curves interact.  When the loanable funds market is in equilibrium, \(S = I\) at some equilibrium interest rate \(i_{eq}\).  Consumption, meanwhile, is the value of the final consumption goods and services produced over some period of time.
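As a minimal sketch of the loanable funds equilibrium (with made-up linear curves; the Austrian model of course doesn't specify these), we can solve \(S(i) = I(i)\) numerically:

```python
# Loanable funds market with hypothetical linear curves:
# saving S(i) slopes up, investment D(i) slopes down; bisect to find i_eq.

def saving(i):      # made-up supply of funds
    return 100 + 400 * i

def investment(i):  # made-up demand for funds
    return 180 - 600 * i

def equilibrium(lo=0.0, hi=1.0, tol=1e-10):
    """Bisection: raise the rate while saving falls short of investment."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if saving(mid) < investment(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

i_eq = equilibrium()   # 100 + 400 i = 180 - 600 i  ->  i = 0.08
```

Anything that shifts either curve - more patience, better investment prospects - moves \(i_{eq}\) and the equilibrium quantity of funds.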

The interest rate pressures the intertemporal structure of production toward a particular slope.  That is, the rate at which value is added to goods as they move through stages of production (remember Macro 101, where you discuss why \(Y\) is the sum of final goods and services?) is pushed toward the interest rate.  If a good's value-added relative to its production time exceeds the interest rate, firms will take advantage of the arbitrage and invest in producing more of the good, increasing the demand for loanable funds and decreasing the price of the good.  If a good's value-added relative to its production time is less than the interest rate, investors will discontinue investment in firms producing the good and push the relevant funds into other investments, decreasing the interest rate and increasing the price of the good as marginal firms go out of business.

(There is an interesting discussion to be had at some other time as to the impact of interest rates on the internal structure of firms which encompass multiple steps of the production process.  The interest rate should "flex" such firms.)

While the chief contribution of the Austrian model is in disaggregating the capital structure of the economy and thus providing a mechanism by which the long-run can arise over many periods, here I want to think about the loanable funds market and the insight its particular abstraction gives into the broader structure of the economy.

Returning to the circular flow of Macro 101, firms purchase labor from households and households purchase final goods and services from firms.  Of course, this is a lie - in fact, the economy is a giant tangled mess of firms purchasing goods and services from each other, households selling their labor all over the place, firms purchasing labor from other firms, households selling goods and services to firms and other households, and we haven't even thrown the financial industry or the government into the mix - firms and households saving money through the financial industry, which fishes for investment opportunities, and of course the government with its sticky fingers and regulators in most transactions in the economy.

What's the function of the financial industry?  Steve Waldman opines that the financial industry is a morass of opaque risk-taking. Households don't want to know what sort of risks they're taking - they just want some return on their money.  Investments are repackaged, sliced, diced -- risk is packaged and maneuvered and spread out.

Let's run with the image of finance as a morass - literally.  Think of finance as a swamp through which water drains, from a lake on one side to a lake on the other.  On the one end there's a steady current into the marsh, and at the other end a steady current out of it.  But in the marsh, the water slowly wends this way and that, following aimless little streams and pooling in stagnant ponds, before it eventually empties into the lower lake.

But as far as the macroeconomy is concerned, the financial system is a black box.  Water goes into the swamp from one side and comes out the other.  Investors put money into one side, firms bid for investments on the other side, and the financial system equilibrates the two.  But, much like hot dogs and Project Mayhem, the first rule of finance is you do not ask questions about how it works.

Perhaps more to the point, you don't need to know how it works to know that it functions as a clearinghouse between investors and firms.  That's the beauty of the loanable funds model: it abstracts away the financial system so that the details of financial interactions don't distract from the larger picture of the macroeconomy.  As far as the macroeconomy is concerned, the details of how money moves from savers to investing firms are not relevant.  What matters is the capital stock accumulated via investment, the level of consumption deferred, and the impact of these decisions on growth and employment.

In order to place the financial system in context, observe that the loanable funds market presents a savings (supply) curve and an investment (demand) curve relating dollars to a (single, risk-adjusted) interest rate.  We can interpret the savings curve as the relationship between the interest rate offered by the financial system and the willingness of savers to inject funds into the financial system.  Likewise, the investment curve is the relationship between the interest rate offered by the financial system and the willingness of firms to borrow funds from the financial system.

These curves and their interaction contain the macroeconomically relevant information in the financial system. Any macroeconomically relevant event in the financial system will manifest itself as a movement of curves in the loanable funds model.  Any macroeconomically irrelevant event in the financial system will be completely invisible in the loanable funds model - as it should be!

Tuesday, November 27, 2012

Problematicity of minimum wages, part 2

Some time ago, I wrote about why I consider minimum wages unempowering and harmful to the least privileged members of society.  I contended that minimum wages shut out from the labor market - a fundamental social institution - those people whose labor is least valuable.  They prevent these people from exercising what little power they have and perpetuate their disenfranchisement by blocking the accumulation of social and human capital through employment.

I'd like to revisit that topic briefly to better explain why a minimum wage effectively bars the least valuable, least privileged, and least powerful members of society from employment.

The process of establishing wages is effectively a negotiation between employer and employee.  In some cases - when the employee's power is checked by the presence of many other potential employees - the employer can set the wage.  In other cases - when the employer's power is checked by the presence of many other potential employers - the employee can set the wage.  The freer the market, the less power both employee and employer have.  All the rest of the time, employee and employer negotiate and arrive at a compromise.

Negotiations end when either the parties reach an agreement, or when one party walks away from the negotiation.  The threat of walking away without a deal is a bargaining tactic.

This illuminates the brutal consequences of a minimum wage.  In the presence of a minimum wage, the employer cannot offer a wage low enough to make hiring the potential employee worthwhile.  So the employer walks away from the negotiation, leaving the employee out in the cold.  The minimum wage effectively bars the least valued members of society from participating in the labor market not by outright exercise of force, but by the conjunction of the freedom to walk away from negotiation and conditions on the price of labor.

In a modern, wealthy welfare state, the people most affected by the minimum wage and shut out from labor force participation move from job to job, work for cash, consume very little, and rely on friends, family, charity, and welfare.  They're not starving to death, but they are shut out of permanent participation in the labor force.  Thus they have difficulty accumulating social and human capital - they're stuck in an effective poverty trap.

So here are two general classes of solutions to this disenfranchisement.  One type of solution is to remove the ability of employers to step away from the negotiating table.  Mandate that employers take any job applicant and pay them a set wage.

The other type of solution - the type I favor - is to do away with the minimum wage and replace it with an anti-poverty measure that genuinely empowers the underprivileged, instead of one that merely looks nice to us upper-middle-class folks while silently disenfranchising the poorest, least valuable members of our society.

Saturday, November 17, 2012

332 straight months of above-average temperatures

I ran across this on Facebook today.  It turns out that I have never, in my whole life, experienced a month with below-average temperatures:

The average temperature across land and ocean surfaces during October was 14.63°C (58.23°F). This is 0.63°C (1.13°F) above the 20th century average and ties with 2008 as the fifth warmest October on record. The record warmest October occurred in 2003 and the record coldest October occurred in 1912. This is the 332nd consecutive month with an above-average temperature.
In other words, for 332 straight months, the average temperature has been above the 20th century average temperature.

This got me thinking about just what the chances of that are.  I want to make one simple assumption and then test the hypothesis that global warming is not occurring.

Temperature Deviations

Let's look at temperature deviations.  Call the average temperature over the century \(T\).  We should think of this as the long-haul temperature.  If global warming isn't happening, then this is the "baseline temperature" of the Earth.

Each month, the average temperature doesn't necessarily need to be the same as the long-haul temperature.  It might be above or below, depending on whether there's something like El Nino going on, or whether the sun is extra-bright, and so on.  In any given month, the temperature is either above or below the long-run temperature \(T\).  That binary outcome -- above or below -- is what we'll focus on.

Here's our one simplifying assumption.  The probability of monthly temperature being above or below \(T\) in one month does not depend on whether it was above or below \(T\) in previous months.

This lets us treat monthly temperature as a Bernoulli process.  This is a standard piece of finite mathematics which I'll proceed to explain.

Bernoulli Processes

(If you want to get to the meat, skip this section and go to the next one.)

A Bernoulli process is an experiment with the same two outcomes ("success" and "failure") and the same two probabilities (of success, of failure) repeated over and over again.  The probabilities don't change from experiment to experiment.

The standard example of a Bernoulli process is flipping a coin over and over again.  There are two outcomes each time (heads and tails), and each time the probabilities are the same (\(0.5\) each).

The usual question in a Bernoulli process is, "What's the probability of getting this many successes in that many trials?"  For example, we might ask, "what's the probability of getting \(2\) heads if we flip a coin \(3\) times?"  Here's a quick explanation of where the answer comes from.  It requires two "black box facts."

We did not specify the order of the heads.  So, for example, the outcomes HHT, HTH, THH all qualify as "two heads."  Each of these outcomes has the same probability: since the probabilities don't change from experiment to experiment, the experiments are independent.  Black-box fact one: the probability of a string of independent outcomes is the product of each outcome's probability.  That is, the probability of HHT is the probability of H times the probability of H times the probability of T.

Now to get the probability of two heads (instead of HHT, say) we just have to add up the probabilities of all the different ways we can have two heads in three flips.  Each has the same probability, so we just need to multiply that probability by the number of different ways we can get two heads in three flips.  Black-box fact two: the number of ways of choosing two of the three flips to be heads (order doesn't matter) is the number \(C(3,2) = 3\).

So the probability of flipping two heads in three tosses is \(3(0.5)^3\).

More generally, the probability of getting \(k\) successes in \(n\) trials (if \(p\) is the probability of success and \(q\) is the probability of failure) is
$$C(n,k)\,p^k q^{n-k}.$$
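In code, this general formula (with \(q = 1 - p\)) is a one-liner, and it reproduces the worked coin example:

```python
from math import comb

def bernoulli_prob(n, k, p):
    """Probability of exactly k successes in n independent Bernoulli trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Two heads in three fair flips, matching the worked example: 3 * (0.5)^3
assert bernoulli_prob(3, 2, 0.5) == 0.375
```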

Climate as Bernoulli Process

Our climate model is basically a series of coin flips.  If the long-haul temperature average isn't changing, then each month, the probability of above-average temperatures is just the same as the probability of below-average temperatures.  That is, there's a \(50\%\) chance that either occurs.  We want to know the probability of \(332\) above-average temperatures in as many trials.

This is just like treating each month as a coin flip.  If you flip a coin \(332\) times, what's the chance that you get \(332\) heads in a row?

$$C(332,332)\bigg(\frac{1}{2}\bigg)^{332}\bigg(\frac{1}{2}\bigg)^0 = \frac{1}{2^{332}}.$$

That is an incredibly small, tiny, vanishing, eensy-weensy, little number.  It's on the order of \(10^{-100}\), that is, one in \(10^{100}\).  By comparison, there are roughly \(10^{80}\) atoms in the observable universe.  So if we gathered all of those atoms into one box, colored one blue and all the rest white, and grabbed one of the atoms while blindfolded, picking the blue atom would still be about \(10^{20}\) times more likely than seeing \(332\) straight months of above-average temperatures.
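The order of magnitude is easy to verify directly:

```python
from math import log10

# Probability of 332 straight above-average months under the 50/50 null.
p = 0.5 ** 332
assert -101 < log10(p) < -99   # i.e., on the order of 10**-100
```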

The tl;dr?  If global warming is not happening, and if the temperature each month is independent of the temperature the previous month, then \(332\) straight positive deviations is roughly \(10^{20}\) times less likely than reaching into a bin of all the atoms in the observable universe and randomly picking the one blue atom.  So, global warming is almost certainly happening.