Friday, July 24, 2020

Once Again the New York Times Proves it’s Clueless About the Gold Standard

8 MIN READ - The Cautious Optimism Correspondent for Economic Affairs and Other Egghead Stuff dispels yet another fallacious New York Times anti-gold standard column with factual historical corrections.

Federal Reserve Board of Governors nominee Judy Shelton

This week the Times ran yet another piece attacking Trump Federal Reserve Board of Governors nominee Judy Shelton, mostly aimed at her past statements supporting the dollar’s return to some form of gold standard.

Although Shelton has walked back her pro-gold stance to better her odds of confirmation, her revised position suggesting the Fed should follow some form of rules-based policy—akin to the well known Taylor Rule developed by Stanford economist John B. Taylor—has nevertheless drawn the ire of central bank apologists everywhere.

This time the paper has wheeled out former Obama administration Treasury official and SEC fraud lawsuit defendant Steven Rattner, who predictably recycles the same fallacies and myths about gold that informed proponents of the gold standard have dispelled countless times before. The Economics Correspondent does so once again below.

Rattner begins his anti-gold crusade by blaming it for the Great Depression:

1) “there’s the gold standard, a significant culprit in deepening the Great Depression and abandoned decades ago by every country in the world.”

Rattner leaves out that during the 1920’s and early 30’s the world was not operating under the genuine classical gold standard of the 1879-1914 period but rather a paper-gold hybrid “gold exchange standard” engineered by Great Britain.

This interwar pseudo-gold standard largely substituted central bank paper notes as “equal to gold” which allowed Europe in particular to print far more currency than it could ever possibly hope to convert from its finite gold reserves—a destabilizing inflation that the honest classical gold standard would never have permitted.

When recession struck in 1930 the result was a predictable deflation and suspension of gold convertibility since world central banks had circumvented the traditional discipline of gold and printed far more redemption promises than they could keep.

It is noteworthy that the USA was an exception, still operating on a purely gold coin standard but with an activist central bank. The Fed engineered its own QE inflation to help Britain with its chronic trade deficits in the late 1920’s and in doing so created the stock market and real estate bubbles of 1927-1929.

Furthermore, when the USA entered recession in 1929 a European-style deflation was not inevitable as the Fed still had plenty of gold to support the dollar (the U.S. held by far the largest gold reserves in the world at the time) but it chose to sit on its hands and do nothing as the money supply contracted back to its pre-1929 levels.

Even worse, as small unit banks began experiencing depositor runs across rural America, the Fed refused to carry out the very role it was created to perform: lender of last resort, providing short term loans to prevent liquidity failures just as larger banks had done in the pre-Fed era—a responsibility the Fed nationalized in 1914 and then neglected.

Far from gold being the culprit during the Great Depression, it was the “managed” gold standard of the Fed itself which overrode what normally would have been natural market adjustments to the economic downturn, an observation that contributed to Milton Friedman’s Nobel Prize.

Put more simply, had the world been on a genuine gold standard in the 1920’s and 1930’s instead of a paper-gold hybrid in Europe and the Fed suppressing the normal market movements of gold in America, there would never have been a Great Depression in the first place.

Or as George Mason University monetary economist Lawrence H. White has noted:

“The interwar period shows us a case where central banks—not the gold standard—ran the show.”


“Several authors identify genuine historical problems that they blame on the gold standard, when they should instead blame central banks for having contravened the gold standard.”

But of course Rattner leads his readers to blame “the gold standard” as primary culprit.

For a more detailed reading on the Fed’s role in starting and worsening the Great Depression, you can read the Economics Correspondent’s articles on 1920’s and 1930’s monetary policy at:

2) Rattner then goes back to the late 19th century to blame the gold standard for repeated banking panics in the USA. The suggestion is that a return to gold means a return to more frequent financial crises.

“Between 1880 and 1933, the United States experienced at least five full-fledged banking crises.”

The first problem with this “analysis” is that three of the “full-fledged banking crises” were during the Great Depression itself (1930, 1931, and 1933), a period of gross central bank malfeasance that we’ve already discussed. That leaves two bona fide crises under the more genuine classical gold standard: 1893 and 1907.

More importantly, Rattner seems to be unaware that many other countries were on the gold standard at the same time but suffered no banking crises. Great Britain experienced one very brief panic in 1890 (the Barings Crisis) even with its problematic monopoly central bank, and Canada experienced none.

In fact Canada was on gold and had no central bank from 1817 until 1935, when the Bank of Canada was established, and in those 118 years experienced no banking panics at all. Even in the early 1930’s, when 10,000 U.S. banks closed their doors for good, not a single bank failed in Canada, even though the country was hurt at least as badly by the Great Depression.

The same story of stability is true for Scotland which from 1716 to 1845 (129 years) was on a gold or silver standard with no central bank and experienced not a single banking panic.

Over a century free of financial crises: the gold-based, decentralized systems produced a far superior track record to that of Rattner’s vaunted fiat-era Fed.

3) More on the 19th century. While it’s true the United States had the most unstable banking system in the industrialized world during the classical gold standard period, gold was not to blame, as the stable experiences of Canada, Scotland, Great Britain, and others attest. The culprit was America’s peculiar but deadly mix of horrendous bank regulations.

During the 19th and early 20th centuries most American states restricted bank branching, forcing banks to operate as “unit banks” with only one office in one location. At the founding of the Fed in 1914, the U.S. had over 25,000 banks, but over 95% of them had no branches.

Unit banking was a scourge for most of America’s history since a tiny unit bank could not diversify its loan portfolio or even its depositor base. A unit bank in a farming region, for example, was completely tied to the fate of the local crop or a local industry. So if the price of corn plummeted, most of the banks in the corn belt failed.

Also, small unit banks were highly dependent on a handful of wealthy local depositors. During an economic downturn, if one large depositor withdrew his funds the bank could fail overnight.

There are other hugely destabilizing consequences of America’s perverse unit banking design, but suffice it to say Canada and Scotland allowed nationwide branch banking from their industries’ inceptions. And England dropped unit banking in the early 1800’s, a decision that contributed to greater stability in the latter half of the century.

When depressions hit countries with nationwide branch banking, no one or two hard-hit industries or wealthy depositors could bring a bank down, since each bank had such a wide range of nationwide loans and depositors to draw from.

Incidentally, unit banking was still law in over two-thirds of U.S. states during the Great Depression which explains why so many small banks failed. In states that at least allowed unrestricted branching within state lines (interstate branching was still completely illegal during this period) banking was much more stable. In California, for example, even though banks were temporarily ordered to close their doors during FDR’s bank holiday, not a single bank actually failed.

The 1880-1914 experience in the U.S. was made even more volatile by backwards Civil War era legislation known as the National Banking Acts. These laws, designed to finance a war, required banks to back any notes they issued 110% with U.S. Treasury bonds. If banks didn’t hold U.S. bonds, they weren’t allowed to issue currency. And after the war, as the federal government retired “greenback” notes, private banks were once again the nation’s primary source of paper currency (as was the case in Canada and Scotland).

When the Civil War ended, the government consistently paid down the national debt and Treasury bonds began to disappear. But the regulations remained, tying banks’ hands and leaving them unable to issue currency since the bonds they needed no longer existed. So during harvest seasons, when farmers came to withdraw cash to pay their hired hands, regulations prohibited banks from issuing it.

Since most laborers had no checking accounts in those days, farmers, needing something to pay their workers with, demanded gold coin instead, which drained reserves from the banking system and in turn precipitated a contraction of credit and often a recession. Combined with the instability of unit banking, these perverse regulations repeatedly created full-blown banking panics, including major ones in 1873, 1893, and 1907 along with smaller incipient crises in 1884, 1890, and 1900. It’s no coincidence that most of the era’s crises began in the autumn.

While banking panics raged in America, across the border the Canadian banking system—also on gold—continued to operate smoothly.

The demonstrated stability of the unregulated, decentralized, gold-standard systems of Canada and Scotland is even more impressive when considering all the contemporary crises that occurred in the USA and England. The instability of such large economies could easily have spread to their smaller neighbors to the north, but the Canadian and Scottish banking systems were so strong and diversified that they remained solvent throughout.

So Rattner deceives his readers when blaming gold for the U.S. banking panics of the 19th century. The true blame lies with lousy bank regulations.

4) Rattner then contrasts the post-gold standard era and boasts that the U.S. has experienced only two banking crises since:

“in the past 87 years, we’ve had two [panics].”

Rattner doesn’t mention the U.S. was still on the international gold standard from 1933-1973, nearly half of those 87 years. If correlation equals causation then he must give gold half the credit.

More importantly, knowledgeable economists understand that it was the introduction of federal deposit insurance in 1933 that mitigated bank runs afterwards, not delinking the dollar from gold. As Milton Friedman and Anna Schwartz pointed out in their classic 1963 book “A Monetary History of the United States”…

“Federal insurance of bank deposits was the most important structural change in the banking system to result from the 1933 panic, and, indeed in our view the structural change most conducive to monetary stability since state bank notes were taxed out of existence immediately after the Civil War.”

In fact delinking the dollar from gold domestically doesn’t even pass the common sense test. Of the two policies, which is more likely to prevent depositors from withdrawing their money in a panic: Telling them they can’t redeem their funds for gold coins? Or that their funds are safe and insured up to $250,000 by the federal government?

5) Rattner moves on from gold and criticizes Shelton’s modified view that the Fed should at least be disciplined by some set rules.

But his contention that “[Shelton’s] view that interest rates should be ‘rules based’ would have prevented the central bank’s emergency cuts” is simply false.

Rules-based policies for central banks process inputs such as inflation, unemployment, and GDP growth and, given a slow enough economy, can guide policy toward lower rates in response. In fact the Taylor Rule itself produced a policy recommendation of slightly negative interest rates for a brief period in early 2009, right after the financial crisis (see Taylor Rule chart).

While the Economics Correspondent is no fan of negative interest rates, it’s absurd to argue a regime that can guide the central bank towards a negative rate policy under emergency conditions “prevents” a central bank from making emergency rate cuts.
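To make this concrete, here is a minimal Python sketch of the original Taylor (1993) rule. The early-2009 inputs below (mild deflation, a deep output gap) are illustrative assumptions rather than official figures, but they show how the rule can recommend a negative rate in a crisis:

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993) rule: recommended nominal policy rate, in percent.

    inflation and output_gap are in percent; r_star is the assumed
    equilibrium real rate and pi_star the inflation target (both 2.0
    in Taylor's original formulation).
    """
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# On-target economy: 2% inflation, zero output gap -> 4% nominal rate
print(taylor_rule(2.0, 0.0))    # 4.0

# Hypothetical crisis inputs: -1% inflation, -6% output gap -> negative rate
print(taylor_rule(-1.0, -6.0))  # -3.5
```

Far from “preventing” emergency cuts, the rule mechanically pushes the recommended rate down as inflation and output collapse.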

6) Finally Rattner decides to take a swipe at President Trump himself, accusing “Mr. Trump… [of] doing his best to politicize this remarkable institution.”

The criticism of Trump is unremarkable in that it’s completely expected from a paper that would attack Trump even for curing cancer or achieving cold fusion.

But what’s laughable is calling the Federal Reserve a “remarkable institution.” Since opening its doors the Federal Reserve has merely started a dozen-plus recessions, turned its own recession in 1929 into the Great Depression, raised the price level by 2,500%, and presided over higher overall unemployment and slower economic growth during its 106-year tenure than in the 106 years that preceded it.

For a more detailed comparison of the Fed’s economic performance against the pre-Fed era, see George Selgin’s excellent analysis at:

Monday, July 6, 2020

Inflation and Deflation Fallacies Part 1: "An Overheating Economy Stokes Inflation"

Click here to read the original Cautious Optimism Facebook post with comments

5 MIN READ - The Cautious Optimism Correspondent for Economic Affairs and Other Egghead Stuff has already set an analysis of inflation in motion with his recent series on the Fed’s 2020 coronavirus easing programs. Hence the decision to strike while the iron is hot and continue with an examination of longstanding inflation fallacies.

In previous articles we’ve employed the century-old Equation of Exchange to establish the primary factors that raise prices: money supply, output, and velocity.

For those not familiar with the Equation of Exchange (mv = py), the Economics Correspondent recommends his April summary of the subject at:

British economist David Ricardo (1817) and American Irving Fisher (1911) asserted higher prices are induced by three factors:

-First and foremost, an increase in the money supply as measured by M1 or the broader aggregate of M2.

-Second, falling output or the same number of dollars chasing a declining quantity of goods and services.

-And third, rising velocity—a more rapid turnover of dollars spurred usually by more bank lending or a greater propensity to spend and reduced propensity to hoard by businesses and consumers.
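The three factors can be illustrated with a toy calculation that solves the equation of exchange for p (all numbers here are made up purely for illustration):

```python
def price_level(m, v, y):
    """Equation of exchange, mv = py, solved for the price level: p = mv / y."""
    return m * v / y

# Baseline: $100 money supply turning over twice a year, 200 units of output
base = price_level(m=100, v=2.0, y=200)            # p = 1.00

# Each factor in isolation, holding the others fixed:
more_money    = price_level(m=110, v=2.0, y=200)   # money supply up 10% -> p ~1.10
less_output   = price_level(m=100, v=2.0, y=180)   # output falls 10% -> p ~1.11
more_velocity = price_level(m=100, v=2.2, y=200)   # velocity up 10% -> p ~1.10
```

Each of the three changes raises p on its own, exactly as Ricardo and Fisher asserted.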

However during a normal economic expansion, output doesn’t decline. It rises. So blaming inflation on higher economic output is a blind alley.

And during expansions it’s possible that economic actors may be more willing to spend and banks more willing to lend, but changes in velocity tend to be moderate during normal times. Velocity typically plunges only during economic crises and soars only during hyperinflations, when actors try to spend their money quickly before it loses purchasing power.

So during normal expansions, the culprit driving higher prices is nearly always an expansion of the money supply.

As the Economics Correspondent has previously written, the money supply has grown by leaps and bounds over the last four months, but the last four months have been anything but a normal expansion. Monetary velocity has contracted, as is typical during a major crisis or recession.

So during more normal economic times, politicians and central bankers may try to blame rising prices on an “overheating economy,” or greedy businessmen, labor unions, the weather, a tight job market, “cost push” or “demand pull” inflation pressures, but the true source is nearly always the central bank and the commercial banks it influences through the manipulation of interest rates.

Hence, Milton Friedman’s famous quote from 1970:

“Inflation is always and everywhere a monetary phenomenon in the sense that it is and can be produced only by a more rapid increase in the quantity of money than in output.”

Let’s see how understanding the monetary roots of inflation during economic expansions helps deconstruct the many inflationary fallacies circulated by government officials, the press, and central bankers. Starting with…

Fallacy: “An Overheating Economy Stokes Inflation”

This fallacy is repeated on a near daily basis in the press during periods of growth: “If the economy grows too rapidly inflation will spiral out of control.”

The contention is contradicted by basic theory. The more rapidly y (output) grows, the faster p (prices) will fall, all other things being equal.

As the Equation of Exchange tells us, prices move inversely to output. So if an economy with a money supply of $100 produces 100 widgets each year, and consumers want to buy all 100 widgets, the average price of widgets will tend towards $1. However if the following year the economy produces 110 widgets but the money supply remains $100, the price of widgets will fall to 91 cents: the same number of dollars chasing more goods and services.
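The widget arithmetic checks out directly (a quick sketch, with velocity implicitly held constant):

```python
money_supply = 100.0

# Year 1: $100 chasing 100 widgets -> average price $1.00
year1_price = money_supply / 100

# Year 2: $100 chasing 110 widgets -> average price ~91 cents
year2_price = money_supply / 110

print(f"${year1_price:.2f}  ${year2_price:.2f}")  # $1.00  $0.91
```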

But not only is the allegation of the “overheating economy” contradicted by monetary theory, American history is rich with evidence of the precise opposite. Although Americans of the last 90 years have known only rising prices, the pre-Great Depression era offers a century-plus empirical record of rapid economic growth alongside *falling* prices.

The golden age of America’s economic history is widely considered to be the Gilded Age, roughly 1865-1914, the half century from the end of the Civil War to the beginning of World War I (1914 coinciding, incidentally, with the establishment of the Federal Reserve). That fifty-year period contains two of America’s greatest decades of economic growth, and the United States emerged as the world’s largest economy in the 1880’s.

From 1866 to 1913 American real GDP rose 588% ($466 million to $3.21 billion in constant 1913 dollars; source: Johnston, Williamson), or at a compounded growth rate of 4.2% per year.

By contrast, in the 47 years leading up to 2019, which include two pretty good decades in the 1980’s and 1990’s, GDP has risen by 254% ($6.05 trillion to $21.4 trillion in constant 2019 dollars; source: BEA), or 2.7% per year.
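Both compounded rates can be verified from the endpoint figures quoted above:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values, in percent."""
    return ((end / start) ** (1 / years) - 1) * 100

# 1866-1913: real GDP $466 million -> $3.21 billion over 47 years
print(round(cagr(466e6, 3.21e9, 47), 1))     # 4.2

# 1972-2019: real GDP $6.05 trillion -> $21.4 trillion over 47 years
print(round(cagr(6.05e12, 21.4e12, 47), 1))  # 2.7
```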

Given that the U.S. economy grew to seven times its size from 1866-1913 vs. only 3.5 times its size from 1972-2019, one would expect the Gilded Age to have been a period of runaway hyperinflation, at least if we listen to the business news, government policymakers, and liberal economists. After all, every time GDP growth touches 3% we’re warned that unless the Fed acts uncontrollable inflation is right around the corner, so a near half-century of 4.2% growth with no Fed must have been a hyperinflationary disaster, right?

Of course not. From 1866 to 1913 the purchasing power of the U.S. dollar actually rose, not fell, from $1 to roughly $1.48.

Put another way, prices fell by 32.5%. What had cost $1 in 1866 cost only 67.5 cents by 1913. The United States actually experienced a half-century *deflation* averaging roughly 0.8% per year.
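Starting from a price index falling from 1.000 in 1866 to 0.675 in 1913 (the 32.5% decline cited above), the implied gain in the dollar’s purchasing power and the average annual rate of deflation work out as follows:

```python
p_1866, p_1913 = 1.000, 0.675   # price index: 1913 prices are 67.5% of 1866's
years = 47

# A dollar's purchasing power in 1913 relative to 1866
purchasing_power = p_1866 / p_1913                               # ~1.48

# Geometric average rate of price decline per year, in percent
annual_deflation = (1 - (p_1913 / p_1866) ** (1 / years)) * 100  # ~0.8%

print(round(purchasing_power, 2), round(annual_deflation, 1))
```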

The historical record is again consistent with the Equation of Exchange. Under the classical gold standard, and with no Fed to push monetary expansion to its limits, U.S. money supply growth was limited roughly to the pace of new gold mining discoveries plus the balance of international payments. The growth of goods and services outstripped the growth of money: y (output) outgrew m (money supply), and the result was gently falling prices.

That’s nearly fifty years of evidence debunking the presumption that “too much economic growth sparks inflation.”

In fact, the evidence goes back even further. If we measure economic growth and prices from 1800 to the eve of the creation of the Fed (113 years), American GDP grew 9,800%, or roughly 4% per year, for over a century. Again, government officials and Fed economists would have us believe the entire 19th century must have been a quagmire of hyperinflation since the economy must have been “overheating” virtually the entire time.

Instead, the dollar gained roughly 72% in value, and what cost $1 in 1800 sold for only 58 cents in 1913.

The same trends can be seen throughout the Industrial Revolution and Second Industrial Revolution in Great Britain: rapidly rising output and living standards with gently falling prices, as the expansion of gold- and silver-based money was unable to keep pace with the growth in goods and services. The rest of Europe and Canada (effectively the entire industrialized world) reflect the same experience.

The only interruptions were periods when governments suspended the gold standard, usually during wars such as the War of 1812, the American Civil War, or the Napoleonic Wars. But when peacetime and the gold standard returned, prices resumed their decline, and across the entire 150-year period from the dawn of the Industrial Revolution to World War I prices fell across the board.

It was only when central banks were established, and particularly when western governments ended the domestic gold standard (1931-1935), that the era of uninterrupted price inflation that people are so used to today began. Beginning in the depths of the Great Depression, central banks launched a deliberate policy of producing money faster than the real economy grows, driving prices relentlessly and permanently upwards. In a dramatic reversal of the price level, what cost $1 in 1933 costs a little over $20 today.

The regime of consistent, never-ending inflation (usually moderate, sometimes extreme, always needless) remains a staple of monetary policy today.

We’ll continue examining numerous other inflation fallacies in Part 2.