The growth of the finance sector has probably gone too far

There is a lot to say about the finance sector, so I want to share some research, but first a few quick thoughts of my own. Finance has done a lot of good and could do a lot more: the sector has found ways to expand access to credit and has improved the lives of millions of people. Unfortunately, there is also a lot wrong with it. Because of financial innovation (which, again, has done a lot of good), finance has expanded rapidly over the past 30 years. But rapid expansion and innovation also mean that financial markets were never deliberately designed to function well and maximize societal welfare; the development of shadow banking, securitization, and various forms of asset management come to mind. Instead, it seems to me, the economic agents who got there first built the system that best made money for them, and only now, with Dodd-Frank, is the federal government trying to catch up. This story makes sense to me, and the evidence seems to support something like it.

Jobs in finance pay huge premiums. Why? I think it has to do with market power and rent-seeking. This study by Profs Lindley and McIntosh seems to find little other explanation.

The measure of wages used from the New Earnings Survey is annual earnings, which therefore includes annual bonus payments – an important consideration when analysing the finance sector. Holding constant gender, age, and region of residence, finance sector workers are found to earn 48% more on average than non-finance sector workers. Part of this difference will be due to the characteristics of workers who tend to work in finance, be they more motivated, driven etc. Because the New Earnings Survey is a longitudinal dataset that observes the same individuals over time, we can control for any such characteristics even when they are not measured – as long as they remain constant over time – by looking at the change in wages for individuals who move into, or out of, the finance sector, and whose fixed unobserved characteristics will not have changed. The results suggest a 37% change in wages, on average, when individuals move between the non-finance and finance sectors. Thus, much of the finance sector premium remains even after controlling for such unobserved differences across workers.
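To make that identification strategy concrete, here is a minimal sketch of the kind of worker fixed-effects wage regression the excerpt describes. This is my own illustration on synthetic data, not the authors' code; the premium, sample sizes, and sorting behavior are all invented.

```python
# Toy version of the identification above (my illustration, not the authors'
# code): each worker has a fixed unobserved trait ("ability"), high-ability
# workers sort into finance, and a few workers switch sectors. Worker fixed
# effects absorb the constant traits, so the finance premium is identified
# from the switchers alone.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_workers, n_years, true_premium = 400, 8, 0.37

rows = []
for i in range(n_workers):
    ability = rng.normal(0.0, 0.3)
    in_finance = ability + rng.normal(0.0, 0.2) > 0.35  # sorting on ability
    switch_year = int(rng.integers(1, n_years)) if rng.random() < 0.10 else None
    for t in range(n_years):
        fin = in_finance if switch_year is None or t < switch_year else not in_finance
        log_wage = 3.0 + ability + true_premium * fin + rng.normal(0.0, 0.10)
        rows.append({"worker": i, "finance": int(fin), "log_wage": log_wage})
df = pd.DataFrame(rows)

# Pooled OLS conflates the premium with ability-based sorting; the
# fixed-effects specification recovers the within-worker premium.
pooled = smf.ols("log_wage ~ finance", data=df).fit()
fixed = smf.ols("log_wage ~ finance + C(worker)", data=df).fit()
print(f"pooled estimate:        {pooled.params['finance']:.3f}")  # biased upward
print(f"fixed-effects estimate: {fixed.params['finance']:.3f}")   # close to 0.37
```

The gap between the two estimates is the part of the raw premium explained by who selects into finance; the remainder, as in the study, survives the fixed effects.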

In summary, the available evidence is most consistent with the rent sharing explanation for the finance sector pay premium. For this explanation to work, however, we also need it to explain why the premium is rising. This could be due to a rising opportunity to engage in rent sharing, due to financial deregulation, implicit insurance against risk through bank bailouts, and increasing complexity of financial products creating more asymmetric information, as well as increased incentives to aim for a larger share of rents due to falling top-end marginal tax rates. Whether governments want to enact policies to try to reduce the premium depends on whether they view it as a private sector matter with benign effects on the economy as a whole, or as having a distorting effect on the labour market, attracting the best workers away from potentially more socially useful jobs.

They dismissed a variety of alternative hypotheses (ones consistent with my personal observations): finance is not necessarily more skill-intensive, does not seem to require different or more valuable individual cognitive skills, and does not rely on distinctive technologies.

To me the growth of finance does not sound benign. Researchers at the Kauffman Foundation agree, and hypothesize about the wider economic consequences of growing finance, focusing specifically on employment/wages and entrepreneurship. At its core, they find some evidence that large numbers of extremely intelligent people who studied STEM in secondary and post-secondary education, and who might otherwise have entered STEM-related jobs, end up in finance because it pays more (a lot more).

What are the consequences of capital misallocation? Fundamentally, it means that capital—both human and financial—is being inefficiently allocated in the economy, with the result being that some sectors and opportunities are being starved, relatively speaking, while other sectors see a flood of capital, potentially producing a positive feedback cycle that exacerbates one or both of the preceding effects. In particular, capital misallocation can lead to inflated (deflated) asset prices, lower productivity, less innovation, less entrepreneurship, and, thereby, lowered job creation and overall economic growth. The mechanism that creates each of these effects is, of course, the flow of capital in the economy as exacerbated and distorted by financialization.

These hypotheses seem to be validated by Prof Ugo Panizza’s research. Thinking about these issues is nothing new, either: Minsky, Kindleberger, and Tobin all theorized about outcomes that we now see borne out in the data.

The financial system acts like the central nervous system of modern market economies. Without a functioning banking and payment system, it would be impossible to manage the complex web of economic relationships that are necessary for a modern decentralized economy. Finance facilitates the exchange of goods and services, allows diversifying and managing risk, and improves capital allocation through the production of information about investment opportunities.

However, there is also a dark side of finance. Hyman Minsky and Charles Kindleberger emphasized the relationship between finance and macroeconomic volatility and wrote extensively about financial instability and financial manias. James Tobin suggested that a large financial sector can lead to a misallocation of resources and that “we are throwing more and more of our resources, including the cream of our youth, into financial activities remote from the production of goods and services, into activities that generate high private rewards disproportionate to their social productivity.”

A large financial sector could also capture the political process and push for policies that may bring benefits to the sector but not to society at large. This process of political capture is partly driven by campaign contributions but also by the sector’s ability to promote a worldview in which what is good for finance is also good for the country. In an influential article on the lobbying power of the U.S. financial industry, former IMF chief economist Simon Johnson suggested that:

The banking-and-securities industry has become one of the top contributors to political campaigns, but at the peak of its influence, it did not have to buy favors the way, for example, the tobacco companies or military contractors might have to. Instead, it benefited from the fact that Washington insiders already believed that large financial institutions and free-flowing capital markets were crucial to America’s position in the world.

The objective of financial regulation is to strike the optimal balance between the risks and opportunities of financial deepening. After the collapse of Lehman Brothers, many observers and policymakers concluded that the process of financial deregulation that started in the 1980s went too far. It is in fact striking that, after 50 years of relative stability, deregulation was accompanied by a wave of banking, stock market, and financial crises. Calls for tighter financial regulation were eventually followed by the Dodd-Frank Wall Street Reform and Consumer Protection Act and by tighter capital standards in the Basel III international regulatory framework for banks.

Not surprisingly, the financial industry was not happy about this rather mild tightening in financial regulation. The Institute of International Finance argued that tighter capital regulation will have a negative effect on bank profits and lead to a contraction of lending with negative consequences for future GDP growth. Along similar lines, the former chairman of the Federal Reserve, Alan Greenspan, wrote an op-ed in the Financial Times titled “Regulators must risk more, and intervene less,” stating that tighter regulation will lead to the accumulation of “idle resources that are not otherwise engaged in the production of goods and services” and are instead devoted “to fending off once-in-50 or 100-year crises,” resulting in an “excess of buffers at the expense of our standards of living.”

Greenspan’s op-ed was followed by a debate on whether capital buffers are indeed idle resources or whether, as postulated by the Modigliani-Miller theorem, they have no effect on firms’ valuation. To the best of my knowledge, there was no discussion of Greenspan’s implicit assumption that larger financial sectors are always good for economic growth and that a reduction in total lending may have a negative effect on future standards of living.

In a new Working Paper titled “Too Much Finance?” and published by the International Monetary Fund, Jean Louis Arcand, Enrico Berkes, and I use various econometric techniques to test whether it is true that limiting the size of the financial sector has a negative effect on economic growth. We reproduce one standard result: at intermediate levels of financial depth, there is a positive relationship between the size of the financial system and economic growth. However, we also show that, at high levels of financial depth, a larger financial sector is associated with less growth. Our findings show that there can be “too much” finance. While Greenspan argued that less credit may hurt our future standard of living, our results indicate that, in countries with very large financial sectors, regulatory policies that reduce the size of the financial sector may have a positive effect on economic growth.
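As a rough illustration of what “too much finance” looks like econometrically, here is a minimal sketch of an inverted-U growth regression on synthetic data. It is only a stylized stand-in: the actual paper uses cross-country panel data and more robust estimators, and my coefficients and turning point are invented.

```python
# Stylized sketch of an inverted-U ("too much finance") growth regression on
# synthetic data -- not the paper's estimation. The idea: include financial
# depth and its square, then solve for the depth at which the marginal
# effect of more finance turns negative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
credit_gdp = rng.uniform(0.1, 2.0, n)  # private credit / GDP (hypothetical)
growth = (0.02 + 0.040 * credit_gdp - 0.022 * credit_gdp**2
          + rng.normal(0.0, 0.005, n))

X = sm.add_constant(np.column_stack([credit_gdp, credit_gdp**2]))
fit = sm.OLS(growth, X).fit()
b1, b2 = fit.params[1], fit.params[2]
turning_point = -b1 / (2.0 * b2)
print(f"growth-maximizing credit/GDP: {turning_point:.2f}")
# Below the turning point, deeper finance predicts faster growth; above it,
# the association flips sign -- the paper's "too much finance" result.
```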

Countries with large financial sectors (the data are for the year 2006):

Source: Arcand, Berkes, and Panizza.

 


C’mon, seriously? (Credit Ratings Agencies edition)

The SEC recently released rules for credit rating agencies (CRAs), and the responses from experts are interesting. The short version is that the credit rating system was bad before the financial crisis and, I think, was a huge source of systemic risk. CRAs got paid by banks to give excellent ratings to miserable bonds backed by subprime and other awful loans. This is a perfect example of a market designed with perverse incentives; Dodd-Frank has done a lot to fix the financial system, but apparently not enough to fix the credit rating system. Profs Cecchetti and Schoenholtz show how bad the CRAs were before the crisis:

Without the complicity of CRAs, it is hard to see how the lending that fed the housing boom could have been sustained. Their high ratings of mortgage-backed securities (MBS) were seen almost immediately as one of the villains in the drama. (For an overview of credit rating agencies, see here.)

During the years before the crisis, vast numbers of MBS pools were constructed, rated, and sold. These typically included several thousand subprime mortgages. Each pool was cut up into pieces (tranches), with the cash flow from the underlying mortgages allocated to the highest-grade tranche first, and – if there was any left – to each successive tranche. This “waterfall” pattern – of filling up the top vessel first – was supposed to make the top tranche virtually risk free. The alchemy of transforming low-grade, high-risk mortgages into high-quality debt vastly increased the supply of housing credit and propelled housing prices upward, as MBS investors paid little attention to the quality of the underlying mortgages (information that was difficult and costly to obtain in any case).
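The “waterfall” is simple enough to sketch in a few lines of code. This is a toy example with invented tranche sizes, not any actual deal structure:

```python
# Toy cash-flow waterfall (hypothetical tranche sizes, not a real deal):
# pool cash fills the most senior tranche first, then each junior tranche
# in turn, so shortfalls hit the bottom of the structure first.
def waterfall(pool_cash, tranches):
    """Allocate pool cash to (name, amount_due) tranches, senior first."""
    paid = {}
    for name, due in tranches:
        payment = min(pool_cash, due)
        paid[name] = payment
        pool_cash -= payment
    return paid

tranches = [("super-senior AAA", 70.0), ("mezzanine", 20.0), ("equity", 10.0)]

print(waterfall(100.0, tranches))  # no defaults: every tranche paid in full
print(waterfall(80.0, tranches))   # 20% shortfall: equity wiped out,
                                   # mezzanine cut in half, AAA untouched
```

The top tranche is only as safe as the assumption that the underlying mortgages don’t default together, which is exactly what went wrong.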

What did the CRAs do? They blessed the alchemists, rewarding their MBS pools with extraordinarily high ratings, including (typically) a “super-senior AAA rating” for the top tranche. They did this based on statistical models calibrated using recent data from a period when nationwide housing prices had never fallen. (Compare that to the post-1890 history available here.)

Once housing prices tipped lower in 2007 and mortgages started to default in unison, CRAs slashed their ratings, fueling the bust just as high ratings had propelled the bubble. Taking all of the asset-backed securities (and collateralized debt obligations) rated AAA between 2005 and 2007, only 10% retained their original rating by June 2009 (see chart). In fact, less than 20% were still investment grade!

Source:  IMF, Global Financial Stability Report, October 2009, Chapter 2, Figure 2.12.


By itself, the failure of expectations to materialize is not sufficient to demonstrate poor CRA behavior. But CRAs had strong incentives to pump up ratings. The most obvious was the concentration of their paying clients: in the half-dozen years before 2007, the top five MBS issuers accounted for 40% of the market, resulting in a large volume of repeat business. The resulting conflict of interest led to a documented bias toward high ratings.

They then share their thoughts on the new SEC rules and regs:

This brings us to the two new SEC regulations intended to address the problems with bond ratings. The first one requires CRAs to establish various internal controls and provide certifications aimed at the conflict of interest arising from the “issuer pays” arrangements. The second is about transparency, and compels the CRAs to publish reams of information about the pools they are rating (anonymized to protect the individual mortgage borrowers).

Will this help? Probably not much. Take the transparency problem first. In an earlier post, we noted that the market for U.S. MBS that are not government insured has virtually collapsed. It seems unlikely that additional information – without a long data history – will improve the estimates of correlations within the mortgages in a pool. Unless investors can judge the diversification of the underlying mortgages, they won’t know how to price the private-label MBS. And they are likely to remain suspicious of CRA models, especially if they result in high ratings.

What about conflicts of interest? Here, we have several reactions. First, however much the people constructing the ratings are removed from the revenue gathering in a business, they will always be aware of the connection. They know that customers shop for ratings. And, they can judge if a customer is happy with their work.

Second, and more troubling, the desire for inflated ratings is not limited to issuers; many buyers want inflated ratings, too! Much less has been written about this, but the incentive problems are the same on both sides of this transaction.

Two examples make the point: asset managers and bankers. Asset managers’ performance is measured relative to a benchmark. When that benchmark includes bonds, it will be based on standard indexes of averages in particular ratings categories. If a manager selects bonds within the ratings category that are riskier than the rating suggests, this will provide an excess expected return relative to the benchmark (albeit, at a greater risk). Because funds that perform poorly are often simply shut down, the survival of the outperformers makes it look like their managers have superior skills, even when they are simply choosing riskier portfolios.
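A tiny Monte Carlo makes the survivorship point. In this sketch (mine, with made-up numbers) no manager has any skill; every fund just holds bonds riskier than its ratings benchmark, and underperformers get shut down each year:

```python
# Survivorship sketch (invented numbers, no skill anywhere): every fund
# earns the benchmark plus a risk premium plus noise; the bottom decile of
# surviving funds is shut down each year. The survivors' track records end
# up looking like manager skill.
import numpy as np

rng = np.random.default_rng(2)
n_funds, n_years, benchmark = 1000, 10, 0.05
returns = benchmark + 0.01 + rng.normal(0.0, 0.06, (n_funds, n_years))

alive = np.ones(n_funds, dtype=bool)
for t in range(n_years):
    cutoff = np.quantile(returns[alive, t], 0.10)  # bottom decile this year
    alive &= returns[:, t] >= cutoff

print(f"benchmark:                   {benchmark:.3f}")
print(f"all funds, mean return:      {returns.mean():.3f}")
print(f"survivors only, mean return: {returns[alive].mean():.3f}")
# The survivors beat both the benchmark and the full universe purely by
# construction: extra risk plus selective memory, no skill required.
```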

For a banker, the riskier the assets, the bigger the regulatory capital buffer required. Large banks are supposed to use their own internal risk models, but small banks employ a standardized approach based on credit ratings. Again, if a bond is riskier than its rating makes it appear, it will have a higher expected return. That allows the banker to take greater risk without adding capital.
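Here is the arithmetic of that incentive, with hypothetical risk weights loosely patterned on a ratings-based standardized schedule (illustrative numbers, not actual regulatory figures):

```python
# Hypothetical ratings-based capital charges (illustrative weights, not
# actual Basel numbers). An inflated rating shrinks the capital a bank must
# hold against the same underlying risk.
RISK_WEIGHT = {"AAA": 0.20, "A": 0.50, "BBB": 1.00, "BB": 1.50}
MIN_CAPITAL_RATIO = 0.08  # capital required per unit of risk-weighted assets

def required_capital(exposure, rating):
    return exposure * RISK_WEIGHT[rating] * MIN_CAPITAL_RATIO

exposure = 100.0
print(required_capital(exposure, "AAA"))  # 1.6
print(required_capital(exposure, "BBB"))  # 8.0
# If a BBB-risk bond carries a AAA label, the bank earns the BBB yield while
# holding one fifth of the capital -- a higher expected return per unit of
# capital, with the extra risk hidden by the rating.
```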

Addressing all of the issues inherent in risk assessment – problems involving not only faulty models and insufficient data, but poor incentives and free riding – seems beyond anyone’s current capacity. But regulators can do better than the SEC has. For example, proposals to reduce ratings shopping existed before Dodd-Frank, but the new rules do not address this issue. More important, the rules do not eliminate the centrality of ratings in capital regulation. While various regulators have made some moves to reduce the reliance on CRAs, they have not gone far enough. In particular, systemic intermediaries should not be allowed to outsource credit evaluation. For banks, we see a straightforward solution: internal models that are calibrated using hypothetical portfolios. We wrote about this here, and it still strikes us as something regulators should try. Perhaps a comparable approach could work for large nonbanks.

To conclude the story, we hope that the SEC will bark louder in the future. And we will be watching (with considerable doubt) to see if their efforts to date have much bite.

Ouch. Dean Baker actually had a different, fairly elegant solution that would fundamentally change the ratings market; unfortunately, good, innovative ideas don’t get much play in Congress.

Senator Al Franken proposed an amendment to Dodd-Frank that would have gone exactly this route. (I worked with his staff on the amendment.) The amendment would have had the Securities and Exchange Commission pick the rating agency. This common sense proposal passed the Senate overwhelmingly with bi-partisan support.

Naturally something this simple and easy couldn’t be allowed to pass into law. The amendment was taken out in conference committee and replaced with a requirement for the SEC to study the issue. After being inundated with comments from the industry, the SEC said Franken’s proposal would not work because it wouldn’t be able to do a good job assigning rating agencies. They might assign a rating agency that wasn’t competent to rate an issue. (Think about that one for a moment. What would it mean about the structure of an MBS if professional analysts at Moody’s or one of the other agencies didn’t understand it?)

All this is a little disappointing, but the “c’mon, seriously?” really hits you below. From Matt O’Brien at Wonkblog reporting on the current state of the credit rating market:

[A]s dim as the credit rating agencies might be, they aren’t so dim that they can’t perceive their own self-interest. And that’s getting paid to rate bonds. Here’s why that’s a problem. There are three major credit rating agencies, but Wall Street only needs one of them to rate a bond. So a bank can ask all of them what rating they would give a bond, and then go with the one that rates it highest. This “ratings shopping,” of course, gives credit rating agencies good reason—i.e.,  their bottom lines—to give banks the ratings they want. …

Dodd-Frank didn’t fix this, and now it’s coming back. Tracy Alloway of the Financial Times reports that banks are once again asking around to get AAA ratings on dubious bonds. One way to tell is that Fitch has only “been hired for four of the 29 subprime auto ABS deals this year, after telling issuers that the vast majority of bonds did not deserve AAA ratings.” Now, the good news is that the subprime auto loan market isn’t nearly as big, or systemically important, as the subprime mortgage market was before the crash. But the bad news is that we haven’t gotten rid of the credit rating agencies’ perverse incentives to rate bonds better than they deserve just to drum up business.

It was dumb enough to create a system that encourages the credit rating agencies to take a Panglossian view of the bonds they’re supposedly rating. It’d be even dumber to leave it in place after we’ve seen what a disaster it is.

[My vocabulary expanded today: “Panglossian” means naively or unreasonably optimistic, after a character in Voltaire’s Candide.]

C’mon, seriously?


“The effects of tax cuts on growth are completely uncertain.”

Prof Dietz Vollrath reviews a Brookings paper by William Gale and Andy Samwick that analyzes the relationship between taxes and economic growth in the U.S.

Conclusion from the paper first:

The argument that income tax cuts raise growth is repeated so often that it is sometimes taken as gospel. However, theory, evidence, and simulation studies tell a different and more complicated story. Tax cuts offer the potential to raise economic growth by improving incentives to work, save, and invest. But they also create income effects that reduce the need to engage in productive economic activity, and they may subsidize old capital, which provides windfall gains to asset holders that undermine incentives for new activity.

And a bit of the evidence:

They do not identify any change in the trend growth rate of real GDP per capita with changes in marginal income tax rates, capital gains tax rates, or any changes in federal tax rules.

[Gale and Samwick, Figure 1: taxes as a percent of GDP and growth of GDP per capita]

One of the first pieces of evidence they show is from a paper by Stokey and Rebelo (1995). This plots taxes as a percent of GDP in the top panel, and the growth rate of GDP per capita in the lower one. You can see that the introduction of very high tax rates during WWII, which effectively became permanent features of the economy after that, did not change the trend growth rate of GDP per capita in the slightest. The only difference after 1940 in the lower panel is that the fluctuations in the economy are less severe than in the prior period. Taxes as a percent of GDP don’t appear to have any relevant relationship to growth rates.

[Gale and Samwick, Figure 2: top marginal tax rates and growth]

The next piece of evidence is from a paper by Hungerford (2012), who looks only at the post-war period and asks whether fluctuations in top marginal tax rates (on either income or capital gains) are related to growth rates. You can see in the figure that they are not. If anything, higher capital gains rates are associated with faster growth.

The upshot is that there is no evidence that you can change the growth rate of the economy – up or down – by changing tax rates – up or down.

 


Why didn’t the Fed care more about the housing boom leading up to the financial crisis?

In some interesting new research, an economist (Prof. Golub), a political scientist (Prof. Kaya), and a sociologist (Prof. Reay) at Swarthmore are examining how and why the Federal Reserve failed to identify and stop the financial crisis.

Financial crises are caused by imprudent borrowing and lending, but as former Federal Reserve chairman William McChesney Martin noted, it is ultimately up to regulators to ‘take away the punch bowl’ when the larger economy is at risk. Indeed, many have criticised regulators for failing to anticipate and prevent the 2008 crash (Buiter 2012, Gorton 2012, Johnson and Kwak 2010, Roubini and Mihm 2010). Little work has been done, however, on why regulatory agencies failed to act despite warnings from prominent commentators (Borio and White 2004, Buffett 2003, Rajan 2005). While Barth et al. (2012) is a notable exception, their analysis leaves room for a closer study of specific institutions.

Our research (Golub et al. 2014) focuses on the Federal Reserve (Fed) – arguably the most powerful economic agency in the world. Although the Fed shares regulatory oversight of the financial sector with other agencies, in the pre-crisis period it had authority over bank holding companies, and one of its longstanding core mandates is “maintaining the stability of the financial system and containing systemic risk” (Federal Reserve 2005). The Gramm–Leach–Bliley Act of 1999 recognised the Fed as the ‘umbrella regulator’ of the financial system (Mayer 2001). Also, the Fed was well placed to assess potential problems, given its unique access to information from the US financial sector via 2,500 supervisory staff, top officials with multiple contacts, and approximately 500 professional economists.

Their conclusions are a bit disappointing but not altogether surprising. As one of the most powerful economic institutions in the world, the Fed saw the growth of the new securities and investment instruments, but was not particularly concerned.

In a June 2005 meeting that discussed housing prices and finance in depth, concerns were raised, but the overall mood of the meeting was largely optimistic. While some staff members argued “housing prices might be overvalued by as much as 20 percent”, others claimed “increasing home equity…has supported mortgage credit quality” (7, 8). FOMC members’ views were similarly divided. Governor Bies emphasised: “…[W]e need to figure out where to go on some of these practices that are on the fringes. But we haven’t done a sterling job…[S]ome of the risky practices of the past are starting to be repeated” (46). Governor Olson expressed similar views, and Boston President Minehan also wondered about “the complications of some of the newer, more intricate, and untested credit default instruments” that might lead to system-level turmoil (123). However, San Francisco President Yellen, in remarks several others praised, suggested that “financial innovations affecting housing could have improved the view of households regarding the desirability of housing as an asset to be held in portfolios…” (35). Overall, Chicago Fed President Moscow’s comment that he “found the information comforting” (47) reflected the general mood. Even Governor Bies was reassured at the end of the second day of discussions: “I’m not overly concerned. Especially with the record profits and capital in banks. I think there’s a huge cushion” (151).

The FOMC rarely discussed historical precedents, notably the near collapse of the hedge fund Long Term Capital Management (LTCM) in 1998, which featured high leverage, complex financial derivatives, and a rescue brokered by the Fed. Following an FOMC meeting and a conference call in 1998, between 1999 and 2006, LTCM was mentioned in meetings only twice in passing, disregarding this prescient warning by Governor Meyer at the September 1998 meeting: “[Th]is is an important episode for us to study…We are trying to decide what is systemic risk … I was getting telephone calls from reporters who knew more about LTCM than I did” (110).

Their analysis and discussion of the potential causes are interesting.

Most explanations of policymakers’ failure to anticipate the crisis have limited validity for the Fed, including: 1) regulatory capture by special interest groups; 2) free-market ideology; 3) overuse of abstract academic models; and 4) narrow focus on inflation targeting. On 1), the Fed may have suffered from ‘cognitive capture’ (Buiter 2012), but there is no evidence of bribery or corruption. Indeed, Fed policymakers and staff are highly respected professionals (Barth et al. 2012). Regarding 2), notwithstanding Greenspan’s well-known free-market views, former colleagues praise “his flexibility, his unwillingness to get stuck in a doctrinal straitjacket” (Blinder and Reis 2005:7). Furthermore, FOMC members expressed a diversity of views. And, on 3) and 4), while the Fed increasingly prioritised state-of-the-art academic-style research, FOMC discussions were highly pragmatic and inflation targeting was flexible, involving ‘constrained discretion’ (Bernanke 2003, Friedman 2006).

Instead, we emphasise two aspects of the Fed’s functioning. First, both Greenspan and Bernanke subscribed to Bernanke and Gertler’s (2001) view that identifying bubbles is very difficult, pre-emptive bursting may be harmful, and that central banks could limit the fallout from systemic financial disturbances through ex post interventions. The successful response to the 2001 dot-com bubble boosted the Fed’s confidence in this strategy. On this basis, Blinder and Reis (2005: 73) conclude “[Greenspan’s] legacy … is the strategy of mopping up after bubbles rather than trying to pop them”. The 2001 crisis, however, did not feature leverage and securitisation, unlike in 2008.

Second, the literature in political science and sociology on institutional dysfunctions illuminates the Fed’s lack of concern in the pre-crisis period (e.g. Barnett and Finnemore 1999, Vaughan 1999). Several of the Fed’s institutional routines likely reinforced its complacency.

One such feature is the scripted nature of FOMC meetings. As former Governor Meyer puts it, the “FOMC meetings are more about structured presentations than discussions and exchanges…Each member spoke for about five minutes, then gave way to the next speaker” (Meyer 2004: 39). Moreover, the priority on reaching consensus on interest-rate policy limits scope for sustained consideration of broader economic concerns. Further, FOMC staff briefings and FOMC discussions centre on the staff’s ‘Greenbook’ economic analyses and projections, which reinforces the tendency for consensus. Former Governor Meyer jokingly refers to the Greenbook as “the thirteenth member of the FOMC” (2004: 34). Figure 4 shows that over 60% of references to the ‘Greenbook’ in the 2005–2007 FOMC transcripts are supportive of its analysis.

Figure 4. FOMC comments on the Greenbook, 2005–2008

Source: Authors’ calculations from FOMC transcripts.

Additionally, the Greenbook focuses on the real economy with projections based on the FRB-US model that at the time had a limited financial sector, in line with contemporary macroeconomic models. At the September 2007 FOMC meeting, as the crisis was worsening, research director Stockton observed: “much of what has occurred [in the financial markets] doesn’t even directly feed into our models” (20).

Finally, ‘silo’ mentality appears to have isolated policymaking, research, and regulatory divisions. The Fed’s Division of Banking Supervision and Regulation (S&R) staff were rarely present at FOMC meetings, and the S&R division was mentioned just eight times at the meetings in 1996–2007. Indeed, Boston Fed President Rosengren identified this issue at a March 2008 FOMC meeting: “It is great to see some bank supervision people at this table… it might be useful to think …whether there are ways to do a better job of getting people in bank supervision to understand some of the financial stability issues we think about, and then vice versa. Maybe having some bank supervision people come to FOMC meetings might be one way to actually promote some of this” (189). Also, the Fed research staff’s increased priority on publication in academic journals over policy analysis likely reinforced the FOMC’s distance from emerging financial risks.

Our findings have important policy implications. The US Dodd–Frank Act has strengthened the Fed’s monitoring of systemically important financial institutions. Our research suggests that reforms to the Fed’s institutional structure – including collaboration among its different components (research, S&R, and the FOMC) and the nature of FOMC meetings – are also important.

 


Skills gaps: two sides of an argument talking past each other

Last month, I highlighted some thoughts about skills gaps in the US. Recently there has been more talk about skills gaps, especially as they relate to the Beveridge curve (Paul Krugman pointed to this Cleveland Fed piece; also interesting, though not pertinent for this post, is the new research Peter Diamond discusses in this interview). The Cleveland Fed piece refutes arguments that unemployment is being caused by unemployed workers lacking the skills necessary to fill open jobs.

I came across an interesting piece that challenges the idea that there is little to no skills gap. James Bessen challenged my beliefs, but his arguments did not sway me to believe that the skills gap is an essential driver of high unemployment.

Every year, the Manpower Group, a human resources consultancy, conducts a worldwide “Talent Shortage Survey.” Last year, 35% of 38,000 employers reported difficulty filling jobs due to lack of available talent; in the U.S., 39% of employers did. But the idea of a “skills gap” as identified in this and other surveys has been widely criticized. Peter Cappelli asks whether these studies are just a sign of “employer whining;” Paul Krugman calls the skills gap a “zombie idea” that “should have been killed by evidence, but refuses to die.” The New York Times asserts that it is “mostly a corporate fiction, based in part on self-interest and a misreading of government data.” According to the Times, the survey responses are an effort by executives to get “the government to take on more of the costs of training workers.”

Really? A worldwide scheme by thousands of business managers to manipulate public opinion seems far-fetched. Perhaps the simpler explanation is the better one: many employers might actually have difficulty hiring skilled workers. The critics cite economic evidence to argue that there are no major shortages of skilled workers. But a closer look shows that their evidence is mostly irrelevant. The issue is confusing because the skills required to work with new technologies are hard to measure. They are even harder to manage. Understanding this controversy sheds some light on what employers and government need to do to deal with a very real problem.

Trying to criticize Prof. Krugman and building a straw-man argument are not the ways to my heart, but Prof. Bessen does make an interesting distinction about what a “skills gap” is. He is essentially talking past the people he tries to refute by changing the definition of “skills gap,” which, in academic economic research, usually refers to a “skills mismatch.”

This issue has become controversial because people mean different things by “skills gap.” Some public officials have sought to blame persistent unemployment on skill shortages. I am not suggesting any major link between the supply of skilled workers and today’s unemployment; there is little evidence to support such an interpretation. Indeed, employers reported difficulty hiring skilled workers before the recession. This illustrates one source of confusion in the debate over the existence of a skills gap: distinguishing between the short and long term. Today’s unemployment is largely a cyclical matter, caused by the recession and best addressed by macroeconomic policy. Yet although skills are not a major contributor to today’s unemployment, the longer-term issue of worker skills is important both for managers and for policy.

Nor is the skills gap primarily a problem of schooling. Peter Cappelli reviews the evidence to conclude that there are not major shortages of workers with basic reading and math skills or of workers with engineering and technical training; if anything, too many workers may be overeducated. Nevertheless, employers still have real difficulties hiring workers with the skills to deal with new technologies.

Why are skills sometimes hard to measure and to manage? Because new technologies frequently require specific new skills that schools don’t teach and that labor markets don’t supply. Since information technologies have radically changed much work over the last couple of decades, employers have had persistent difficulty finding workers who can make the most of these new technologies.

This is interesting, but it is essentially saying “I’m right, but I can’t prove it,” which is frustrating but understandable in a good-faith discussion.

Consider, for example, graphic designers. Until recently, almost all graphic designers designed for print. Then came the Internet and demand grew for web designers. Then came smartphones and demand grew for mobile designers. Designers had to keep up with new technologies and new standards that are still changing rapidly. A few years ago they needed to know Flash; now they need to know HTML5 instead. New specialties emerged such as user-interaction specialists and information architects. At the same time, business models in publishing have changed rapidly.

Graphic arts schools have had difficulty keeping up. Much of what they teach becomes obsolete quickly and most are still oriented to print design in any case. Instead, designers have to learn on the job, so experience matters. But employers can’t easily evaluate prospective new hires just based on years of experience. Not every designer can learn well on the job and often what they learn might be specific to their particular employer.

The labor market for web and mobile designers faces a kind of Catch-22: without certified standard skills, learning on the job matters but employers have a hard time knowing whom to hire and whose experience is valuable; and employees have limited incentives to put time and effort into learning on the job if they are uncertain about the future prospects of the particular version of technology their employer uses. Workers will more likely invest when standardized skills promise them a secure career path with reliably good wages in the future.

Under these conditions, employers do have a hard time finding workers with the latest design skills. When new technologies come into play, simple textbook notions about skills can be misleading for both managers and economists.

For one thing, education does not measure technical skills. A graphic designer with a bachelor’s degree does not necessarily have the skills to work on a web development team. Some economists argue that there is no shortage of employees with the basic skills in reading, writing and math to meet the requirements of today’s jobs. But those aren’t the skills in short supply.

Other critics look at wages for evidence. Times editors tell us “If a business really needed workers, it would pay up.” Gary Burtless at the Brookings Institution puts it more bluntly: “Unless managers have forgotten everything they learned in Econ 101, they should recognize that one way to fill a vacancy is to offer qualified job seekers a compelling reason to take the job” by offering better pay or benefits. Since Burtless finds that the median wage is not increasing, he concludes that there is no shortage of skilled workers.

But that’s not quite right. The wages of the median worker tell us only that the skills of the median worker aren’t in short supply; other workers could still have skills in high demand. Technology doesn’t make all workers’ skills more valuable; some skills become valuable, but others go obsolete. Wages should only go up for those particular groups of workers who have highly demanded skills. Some economists observe wages in major occupational groups or by state or metropolitan area to conclude that there are no major skill shortages. But these broad categories don’t correspond to worker skills either, so this evidence is also not compelling.

To the contrary, there is evidence that select groups of workers have had sustained wage growth, implying persistent skill shortages. Some specific occupations such as nursing do show sustained wage growth and employment growth over a couple decades. And there is more general evidence of rising pay for skills within many occupations. Because many new skills are learned on the job, not all workers within an occupation acquire them. For example, the average designer, who typically does print design, does not have good web and mobile platform skills. Not surprisingly, the wages of the average designer have not gone up. However, those designers who have acquired the critical skills, often by teaching themselves on the job, command six-figure salaries or $90 to $100 per hour rates as freelancers. The wages of the top 10% of designers have risen strongly; the wages of the average designer have not. There is a shortage of skilled designers, but it can only be seen in the wages of those designers who have managed to master new technologies.

This trend is more general. We see it in the high pay that software developers in Silicon Valley receive for their specialized skills. And we see it throughout the workforce. Research shows that since the 1980s, the wages of the top 10% of workers have risen sharply relative to the median wage earner after controlling for observable characteristics such as education and experience. Some workers have indeed benefited from skills that are apparently in short supply; it’s just that these skills are not captured by the crude statistical categories that economists have at hand.

And these skills appear to be related to new technology, in particular, to information technologies. The chart shows how the wages of the 90th percentile increased relative to the wages of the 50th percentile in different groups of occupations. The occupational groups are organized in order of declining computer use and the changes are measured from 1982 to 2012. Occupations affected by office computing and the Internet (69% of these workers use computers) and healthcare (55% of these workers use computers) show the greatest relative wage growth for the 90th percentile. Millions of workers within these occupations appear to have valuable specialized skills that are in short supply and have seen their wages grow dramatically.

[Chart: wage growth of the 90th percentile relative to the 50th percentile, by occupational group, 1982–2012]

This evidence shows that we should not be too quick to discard employer claims about hiring skilled talent. Most managers don’t need remedial Econ 101; the overly simple models of Econ 101 just don’t tell us much about real world skills and technology. The evidence highlights instead just how difficult it is to measure worker skills, especially those relating to new technology.

What is hard to measure is often hard to manage. Employers using new technologies need to base hiring decisions not just on education, but also on the non-cognitive skills that allow some people to excel at learning on the job; they need to design pay structures to retain workers who do learn, yet not to encumber employee mobility and knowledge sharing, which are often key to informal learning; and they need to design business models that enable workers to learn effectively on the job (see this example). Policy makers also need to think differently about skills, encouraging, for example, industry certification programs for new skills and partnerships between community colleges and local employers.

Although it is difficult for workers and employers to develop these new skills, this difficulty creates opportunity. Those workers who acquire the latest skills earn good pay; those employers who hire the right workers and train them well can realize the competitive advantages that come with new technologies.

 

That’s the end of his article. I found it extremely interesting, but not necessarily compelling for the point he wanted to make. I’m not convinced that the wage trends highlighted here are driven entirely by a within-industry “skills gap.” Arguing that wages rose faster at the 90th percentile than at the 50th because the 90th percentile has more skills ignores other possibilities (market power in the labor market at the top of the income distribution, policies that redistribute income to the top, luck, etc.), and it does not have a clear implication for policy. It sounds, in fact, like there are too many well-educated workers, resulting in de-skilling trends(!!), so one important problem is the lack of well-paying, high-skilled jobs, with job growth instead concentrated in low-paying, low-skill jobs, hollowing out the US economy in the process. This implies that, while education and training are really, really important, they are not the main problem in the US economy today.

[[I know of one employer that creates lots of well-paying, high-skill jobs and does a great job at training workers, but, unfortunately, it has been stuck in a near-freeze in hiring since about 2009 (I'm talking about the federal government.)]]

 


Fact: Economy does better under Democrats; Why: Not sure yet

Since the end of WWII, the economy has done far better under Democratic Presidents than under Republicans. The empirical evidence is overwhelming.

Princeton economists Alan Blinder and Mark Watson have undertaken the task of figuring out what exactly is going on here (emphasis added):

This partisan gap has barely been noticed by researchers, but it is wide. Since the end of World War II, there have been 16 complete four-year presidential terms – seven Democratic and nine Republican. Growth of real GDP averaged 4.35% per annum under the Democratic presidents but only 2.54% under the Republicans. That partisan growth gap of 1.8 percentage points is large by any standard – it implies that real GDP grew by 18.6% during a typical Democratic four-year term, but only by 10.6% during a typical Republican term – and it is statistically significant despite the relative paucity of data. In fact, as Figure 1 shows, growth has always slowed down when a Republican president replaced a Democrat and always sped up when a Democrat replaced a Republican. There are no exceptions.

Figure 1. Average annualised GDP growth, by presidential term
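The per-term figures are just the annual growth rates compounded over four years, which is easy to verify:

```python
# Compounding the quoted annual growth rates over a four-year term.
for rate in (0.0435, 0.0254):
    print(f"{rate:.2%}/year -> {(1 + rate) ** 4 - 1:.1%} per four-year term")
# 4.35%/year -> 18.6% per four-year term
# 2.54%/year -> 10.6% per four-year term
```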

The data hold more surprises. Here are a few:

  1. Even though the US Constitution assigns power over the budget (and most other economic powers) to Congress, not to the president, there is no difference in growth rates depending on which party controls Congress. It’s the presidency that matters.
  2. The Democratic growth advantage is concentrated in the first two years of a presidency, especially the first, even though Republicans bequeath much slower-growing economies to Democrats and US GDP growth is positively serially correlated (ρ ≈ 0.40 in quarterly data).
  3. As indicated both by time series models and by genuine ex ante forecasts, Democrats do not inherit economies that are poised for more rapid growth. Granger-causality runs from party-to-growth not from growth-to-party. 

Economists and political scientists – not to mention the political commentariat – have devoted a huge amount of attention to the well-established fact that faster economic growth helps re-elect the incumbent party (see, for example, Fair 2011 for the US). But what about causation in the opposite direction – from election outcomes to economic performance? It turns out that the US economy grows faster – indeed, performs better by almost every metric – when a Democratic president occupies the White House.

Confronted with such stark partisan differences, a macroeconomist naturally wonders whether the explanation could be that fiscal policy was, on average, more expansionary under Democrats. We assess this possibility in a variety of ways and come up with the same answer: no. What about monetary policy, despite the Federal Reserve’s vaunted independence from politics? The answer here is that, if anything, monetary policy was more pro-growth under Republican presidents.

If the partisan gap cannot be explained by differential monetary and fiscal policy, what does explain it? And do these explanatory factors suggest it was good luck or good policy? We searched over a wide variety of factors, mostly entered in the form of econometric ‘shocks’, that is, as residuals from regressions that include the variable’s own lags and the current and lagged values of GDP growth. Four showed econometric promise:

  1. Oil price shocks;
  2. Total factor productivity (TFP) shocks, adjusted to remove cyclical influences;
  3. Foreign (that is, European) growth shocks;
  4. Shocks to consumer expectations of future economic conditions.

In addition, defence spending shocks mattered in samples that include the Korean War, but not much in samples that do not. Using all five of these variables enables us to explain about half of the partisan gap in GDP growth rates since 1947.

As we peruse the list of explanatory variables, the first (oil shocks) looks to be mainly good luck, although US foreign policy (rather than economic policy) certainly played a role. (Think about George W Bush’s invasion of Iraq, for example.) The second variable (TFP) should in principle measure improvements in technology – and so be mostly driven by luck. But a wide variety of economic policies, ranging from R&D spending to regulation and much else, might influence TFP in multiple, subtle ways. And TFP shocks affect the economy with long lags, so that a portion of the TFP-induced strong growth for Democrats was inherited from previous administrations. The third (real growth in Europe) should not have much to do with US economic policies. And when you couple the fourth variable (consumer expectations) with the observed fact that spending on consumer durables grows much faster under Democrats, you get a tantalising suggestion of a self-fulfilling prophecy – consumers, expecting faster growth under Democratic presidents, buy more durable goods on that belief, which makes the economy grow faster. Did they know something economists didn’t?
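For readers curious what “entered in the form of econometric shocks” means in practice, here is a minimal sketch of the residual construction on synthetic data. It is my illustration, not Blinder and Watson’s code; the variable names and lag lengths are invented.

```python
# Sketch of building a "shock" series (my illustration, not Blinder-Watson's
# code): regress a variable on its own lag and on current and lagged GDP
# growth, and keep the residual -- the movement not explained by the
# variable's own history or by the state of the economy.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
T = 200
df = pd.DataFrame({
    "gdp_growth": rng.normal(0.03, 0.02, T),
    "oil_change": rng.normal(0.00, 0.10, T),  # hypothetical oil price changes
})
df["oil_lag"] = df["oil_change"].shift(1)
df["gdp_lag"] = df["gdp_growth"].shift(1)

fit = smf.ols("oil_change ~ oil_lag + gdp_growth + gdp_lag",
              data=df.dropna()).fit()
oil_shock = fit.resid  # the "oil shock" series that enters the growth analysis
print(oil_shock.describe())
```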

Reading this summary of their research, my conclusion is that they don’t really know why this phenomenon occurs. Intuitively, none of these stories jumps out at me as a strong explanation for why the economy would grow faster under Democrats.
For me, reading this, the scary part for policy makers and policy-oriented researchers is that the actual policies enacted under Democratic Presidents may not have a whole lot to do with the faster economic growth. If all we have to do is get someone elected, then how much do the policies really matter? (I suspect they still matter a lot, and are just particularly hard to test when looking solely at GDP. I think the explanation may have something to do with the difference between a President that believes in “better government” versus one who believes in “smaller government.” Maybe an active and semi-competent executive branch that actually tries to do its job could cause the economy to function better.)
The other worry for me is that latching onto these facts will lead to a sort of confirmation bias. Despite my anxiety about figuring out why the economy grows faster under Democrats, I think that people (voters) definitely need to know about Blinder and Watson’s work.


“Skills gap” evidence or something (or The U.S. economy looks weird cont.)

Dean Baker makes a really insightful point using Job Openings and Labor Turnover Survey (JOLTS) data. (I mention the source because this data series has become really important since Janet Yellen started citing employment statistics beyond the unemployment rate alone when explaining monetary policy.)

Politicians love to talk about a “skills gap,” where U.S. workers are unemployed and businesses have jobs unfilled because workers just don’t have the requisite skills/education to do the jobs. (This seems to be an attempt to frame the Great Recession as a supply-side problem, when it really has to do with a shortfall in demand. The distinction matters because the supply-side story implies policies focused on education and training workers, which is nice, but the demand-side story calls for fiscal and/or monetary stimulus to make up the shortfall. The supply-side story distracts from the real problems and the correct policies.) The skills gap does not appear to be an important part of unemployment in today’s economy, but to the extent it exists at all, Dean Baker sees evidence of it only in the retail and restaurant businesses:

Floyd Norris has an interesting column comparing the numbers of job openings, hirings, and quits from 2007 with the most recent three months in 2014. The most striking part of the story is that reported openings are up by 2.1 percent from 2007, while hirings are still down by 7.5 percent.

While Norris doesn’t make this point, some readers may see this disparity as evidence of a skills gap, where workers simply don’t have the skills for the jobs that are available. If this is really a skills gap story then it seems that it is showing up most sharply in the retail and restaurant sectors. (Data are available here.) Job openings in the retail sector are up by 14.6 percent from their 2007 level, but hires are down by 0.7 percent. Job openings in the leisure and hospitality sector are up by 17.0 percent, while hiring is down by 7.4 percent.

If the disparity between patterns in job openings and hires is really evidence that workers lack the skills for available jobs then perhaps we need to train more people to be clerks at convenience stores and to wait tables.

I think he hits the exact right note with that last sentence.

I actually posted about some research a while back that touches on a de-skilling trend, where higher-skilled workers increasingly perform jobs traditionally done by lower-skilled workers.

I’m not sure what all this means, but the U.S. economy sure looks weird.
