Monday, February 19, 2018

Lessons from the Great Depression: Wages (Part 2 of 2)

Click here to read the original Cautious Optimism Facebook post with comments

11 MIN READ - Here is a dispatch from the Cautious Optimism Correspondent for Economic Affairs and other Egghead stuff in his continuing series on lessons learned, unlearned, and never learned from the Great Depression.

CO is sure that you will find it as interesting, informative and provocative as he has.


With the collapse and overturning of the NIRA the Roosevelt administration became more combative with its anti-business rhetoric and the New Deal Brain Trust was replaced with more progressive, more radical, and more interventionist bureaucrats. One of FDR’s first moves to replace the failed NIRA was the passage of the National Labor Relations Act (NLRA, aka. the Wagner Act) in the summer of 1935.

Widely considered the single most important piece of labor legislation of the 20th century, the NLRA created the National Labor Relations Board (NLRB) to enforce new bargaining powers granted to labor unions. Among its provisions: unions were guaranteed the right to bargain collectively with management; businesses were prohibited from refusing to bargain collectively; businesses were prohibited from firing, demoting, or transferring workers who attempted to unionize or bargain collectively; businesses were compelled to rehire striking workers under most conditions; and businesses were prohibited from making a promise not to unionize a condition of employment, even as unions remained free to bargain for “closed shops” where employees could not be hired without joining the union.

The NLRA was also challenged in the courts, but after FDR threatened to pack the Supreme Court with extra pro-New Deal justices in the fallout of the NIRA legal battle, the high court became more compliant for the remainder of his presidency. In fact, according to the Tenth Amendment Center, from 1937 to 1995 not a single piece of federal legislation was declared unconstitutional by the Supreme Court. A new era of acquiescence to executive power had begun. With the NLRB v. Jones & Laughlin Steel Corp. case of early 1937, the Supreme Court upheld the NLRA’s constitutionality, and it has been enshrined in federal law ever since.

The NLRA, while not explicitly mandating wage increases, vastly strengthened the hand of labor not only to bargain collectively, but also to strike against “unfair labor practices” without fear of losing employment once the strike was resolved, since the NLRA guaranteed reemployment. The result was a dramatic jump in strikes in 1937. BLS statistics from 1936 and 1937 reflect this:

Number of strikes, 1936 to 1937: 2,172 → 4,740 (+118%)
Number of striking workers, 1936 to 1937: 789K → 1.86M (+136%)
Number of work-days lost to strikes, 1936 to 1937: 13.9M → 28.4M (+104%)
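The year-over-year percentage increases in the BLS figures above can be reproduced with a quick calculation:

```python
# Reproducing the 1936-to-1937 strike increases cited above (BLS figures)
stats = {
    "strikes": (2_172, 4_740),
    "striking workers": (789_000, 1_860_000),
    "work-days lost": (13_900_000, 28_400_000),
}

for name, (y1936, y1937) in stats.items():
    pct = round((y1937 - y1936) / y1936 * 100)
    print(f"{name}: {y1936:,} -> {y1937:,} (+{pct}%)")
```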


Furthermore, federal enforcement of labor law was skewed in labor’s favor under the Roosevelt administration. For example, unions quickly launched “sit-down strikes” in which employees not only refused to work, they also refused to leave the factory floor, physically impeding replacements and management from using company machines and equipment, all while the NLRA’s enforcement arm looked the other way.

Unable to conduct business while striking workers physically blocked their workfloors and factories, businesses were forced into the intended, and successful, consequence of the legislation: hefty raises for union labor over 1936 levels, yet again above the market-clearing equilibrium wage.

Combined with FDR’s hike of the top marginal income tax rate from 63% to 79% and the Federal Reserve’s tightening policy that doubled commercial banks’ reserve requirements (both in 1936), the Wagner Act’s new artificial wage floor ushered in the famous Depression of 1937-1938, the so-called “Depression within the Depression” and the third worst downturn of the 20th century behind the Depression of 1920-21 and the Great Depression slump of 1929-1933. Unemployment predictably ballooned from an intra-year low of 11% in early 1937 to an astonishing 20% in the summer of 1938.

And the Depression of 1937-38 is undoubtedly why unemployment at the beginning of 1940 was still higher than at the end of 1930 (16% vs 11%). For the third time in less than a decade, forcing wages above the market equilibrium level not only failed to deliver the promised recovery via greater spending power, it reversed the previously improving employment trend and set the economy back into its third technical recession since 1929.

Although the Wagner Act is still with us today, the extent of the NLRA’s reach changed quickly during the postwar Truman administration. Harry Truman, a moderate with no interest in upholding the most radical elements of FDR’s New Deal, was unwilling to look the other way at violent union abuses or “sit-down strikes.” The practices, which had been halted during WWII industrial production, were not allowed to resume in peacetime.

And in 1947 Congress passed the Labor Management Relations Act (aka Taft-Hartley), a law designed to restore the balance of power between management and labor. Among its many provisions, Taft-Hartley prohibited so-called “closed union shops,” permitted states to enact “right-to-work” laws, strengthened provisions preventing strikers from physically blocking a workplace against entry by customers, management, and replacement workers, and required a minimum number of days’ notice before unions could strike.

Although simply repealing the Wagner Act (along with the Davis-Bacon and Norris-LaGuardia Acts) and never passing Taft-Hartley at all would have been a cleaner way to level the playing field, Taft-Hartley did untie employers’ hands significantly given that the NLRA was still in effect.


The Great Depression was easily the worst economic downturn in American history. Compared to the next worst slump, the Depression of 1893, it took nearly three times as long to return to full private employment (seventeen years vs six), the peak unemployment rate was over double (25.9% vs 12.4%: Romer), and the loss of GDP at the slump's trough was more than double (-26.3% vs -10.3%).

Not coincidentally, the Great Depression was, at that time, the only depression in American history in which government prevented wages and prices from freely adjusting. In every prior downturn, when demand fell, prices and wages were allowed to fall too. Nominal wages and prices dropped, but in real terms they were generally unchanged, since all other prices fell together.
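The nominal-versus-real distinction above can be sketched in a few lines. The figures are invented for illustration: a 20% fall in both wages and the general price level leaves the real wage, and thus purchasing power, unchanged.

```python
# Illustration (invented figures): when nominal wages and all other
# prices fall together, the real (price-adjusted) wage is unchanged.
nominal_wage_before, price_level_before = 1.00, 1.00
nominal_wage_after,  price_level_after  = 0.80, 0.80  # both fall 20%

real_wage_before = nominal_wage_before / price_level_before
real_wage_after  = nominal_wage_after / price_level_after
print(real_wage_before == real_wage_after)  # True: purchasing power unchanged
```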

As elementary microeconomics teaches us, whenever any price is forced above the free market equilibrium level, supply outpaces demand and surpluses form. In the case of the Great Depression, the surpluses were not only unsold goods made too expensive by NIRA cartels, but also workers who became too expensive to fully employ. The newer demand-side arguments that “higher wages will create greater consumer demand, compensate for the labor surplus, and lead to faster recovery than under free market conditions” proved to be empty promises as the nation descended into unprecedented contraction and joblessness.

The historical lessons of forcing wages and prices upwards were painful but also resoundingly clear: wage and price controls don’t work. The market must be allowed to adjust and clear, and the sooner it happens the faster the recession will end.

Which brings us to today. Even though it was 80 years ago, the failure of the demand-side policies of the 1930’s provides a valuable guide to similar debates taking place even eight years in the wake of the Great Recession: for example, the arguments that a $15 minimum wage, raising taxes on the rich to combat “income inequality,” and boosting the purchasing power of middle class and working Americans through higher government stimulus spending will all lead to a faster growing economy.

The $15 minimum wage is most closely related: many of the same economic stimulus arguments from the Great Depression have been rekindled by progressive forces. Predictably, conservative and free market opponents have maintained that forcibly raising the price of labor lowers the demand for labor and reduces employment, i.e. the classical argument. But just as in the 1930’s, progressives have countered that raising the minimum wage above the impersonal, heartless “free market” level (actually not even the free market level, just a lower minimum wage of $7.25) won’t induce job losses or reduced hours, because the increased income will stimulate economic activity through greater spending power…which in turn will lead to more hiring. So once again the theoretical battle lines have been drawn. But more telling than theoretical argument is empirical evidence, and the Great Depression has plenty to guide us.

VI. MINIMUM WAGE ON THE GRAND STAGE (not just Seattle or San Francisco)

The present day laboratory for the $15/hr minimum wage experiment has been a handful of US cities—most notably Seattle—upon which both sides of the debate have descended to find evidence favorable to their case. And while the results in nearly all studies so far have suggested that hours have been reduced, jobs have been lost (or the rate of job creation has noticeably slowed), and benefits have been cut, the metrics are far from overwhelming.

One reason is that public support for minimum wage hikes tends to coalesce only when the economy is perceived as “sufficiently recovered” to withstand its impact. Therefore minimum wage hikes have been enacted in job markets that are already improving.

Also, a higher minimum wage unemploys only a small segment of the workforce: the least productive workers, whose hourly marginal revenue productivity falls between the old minimum wage and the new one. By contrast, no doctor, lawyer, computer programmer, airline pilot, or professional athlete gets thrown out of work simply because the minimum wage has risen from $7.25 to $15/hour; those professions’ hourly marginal revenue productivity was well over $15 to start with. Likewise, workers with so few skills that their hourly marginal revenue productivity was already lower than the previous $7.25 minimum wage were largely jobless to start with, having long been priced out of the market, so they won’t be counted as a “job loss” after a minimum wage hike either.
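The threshold logic above can be sketched with a toy example. All the occupations and marginal revenue productivity (MRP) figures below are invented for illustration; the point is only that a hike from $7.25 to $15 newly prices out just the workers whose MRP falls between the two levels:

```python
# Hypothetical illustration: only workers whose hourly marginal revenue
# productivity (MRP) falls between the old and new minimum wage are
# NEWLY priced out of the market by the hike. Figures are invented.
OLD_MIN, NEW_MIN = 7.25, 15.00

workers_mrp = {
    "dishwasher":   6.50,  # already priced out under the old $7.25 minimum
    "cashier":      9.00,  # newly priced out by the hike
    "line cook":   13.50,  # newly priced out by the hike
    "electrician": 38.00,  # unaffected: MRP well above the new minimum
}

newly_priced_out = [name for name, mrp in workers_mrp.items()
                    if OLD_MIN <= mrp < NEW_MIN]
print(newly_priced_out)  # ['cashier', 'line cook']
```

Note that the dishwasher never shows up as a post-hike "job loss" (already jobless) and the electrician is untouched, which is why the overall unemployment rate is a poor yardstick here.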

(It’s worth mentioning that these two reasons expose a common error made by both sides of the debate: judging the effect of the minimum wage by the overall unemployment rate. Rather, one must look at the change in unemployment only among low-skilled workers whose pre-hike wages fell between the old minimum wage and the new higher one.)

Given the small segment of workers affected, and the countereffect of job gains in the higher-wage segment due to the already improving economy, the results have been statistically small enough to sway few people, particularly the $15/hr crowd. So retorts of “flaws in the study” or “the overall economy is still going well” or “ten new restaurants have opened” (never mind that 15 closed in the same period) or “the employment numbers for the metro area are good” (even though the minimum wage was law only in the anchor city, such as Seattle, and many jobs were transferred to the unaffected suburbs) begin to cloud the conversation. Few people’s minds are changed because the experiment is simply not large enough in scale and the laboratory is too small.

But unknown to most people there already has been an experiment, a much larger experiment, in raising wages to help workers and stimulate economic recovery. It was conducted by Herbert Hoover and Franklin Roosevelt—on the grandest stage possible: not just Seattle and San Francisco, but the entire American economy.

From 1929 to 1939 three different nationwide campaigns were launched to raise not just the minimum wage but virtually all worker wages, thereby affecting the lion’s share of America’s workforce rather than just the lowest-productivity workers on the minimum wage scale. Three times in the 1930’s the federal government pushed wages up for nearly all Americans: Herbert Hoover’s “high wage policy,” then FDR’s National Industrial Recovery Act price and wage floors, and then the Wagner Act’s ensuing higher union wages, the latter two coming at a time of fragile recovery when unemployment was still in double digits.

All three high-wage campaigns coincided with the only three major episodes of rising unemployment (see the chart below noting rising joblessness in 1929 to March 1933, winter 1933 to summer 1934, and fall 1937 to summer 1938). And there was no meaningful relief from the 1937-38 job slump until the European outbreak of World War II, when the US military began its rapid hiring campaign in anticipation of a coming major conflict.

Furthermore, demand-boosting policies weren’t limited to forcing wages higher. The other two prescriptions of the trifecta—taxing the rich and redistributing the money to the working classes via increased government deficit spending—were also launched. Herbert Hoover raised the top income tax rate from 25% to 63% in 1932, and Franklin Roosevelt raised it again to 79% in 1936 and yet again to 84% in 1940. Herbert Hoover doubled real federal government spending in just four years, and Franklin Roosevelt added the entire New Deal atop it.

In this grandiose and novel witches’ brew of economic interventions, with all its demand-boosting policies launched simultaneously and unequivocally, the Hoover and Roosevelt administrations gave American history its greatest-ever experiment in countercyclical economic policy. The result of the experiment and its conclusions couldn’t be clearer. Was it, as progressives have argued for the last ten years, a rapid recovery from recession with unprecedented strength? No, it was America’s worst and longest economic catastrophe. With the aftermath of the demand-side policies of the 1930’s so clear, can we learn from our history’s blunders and avoid repeating them?
