Since the Great Depression and World War II (WWII), the federal government's role in promoting certain economic goals has become increasingly formalized through legislative mandates. The Employment Act of 1946 stated that it was the "continuing policy and responsibility" of the federal government to use its powers "to promote maximum employment, production, and purchasing power." This was a logical extension of the trend toward more government responsibility for reining in the harsher consequences of an unfettered free-enterprise economy. The results have been longer expansions, fewer recessions, and more muted economic cycles. It is not a coincidence that the three longest expansions of the past two centuries have all occurred since 1960. On the other hand, long-term growth has averaged a somewhat slower pace since the 1940s.
After the stagflationary decade of the 1970s, which delivered neither full employment nor price stability, the 1946 act was replaced in 1978 by the even more specific Full Employment and Balanced Growth Act, which became known as the Humphrey-Hawkins Act after its legislative sponsors. Among other things, this act formalized the role of the Board of Governors of the Federal Reserve "to establish a monetary policy that maintains long-run growth, minimizes inflation, and promotes price stability." It also created the requirement that the Fed transmit a "Monetary Policy Report to the Congress" twice a year.
In addition to this increasingly formalized guidance from Congress, the Fed was learning how a fiat-money system—a government-issued currency that is not backed by a physical commodity, such as gold or silver—works. Up until the 1930s, the issue of price stability was determined by the international gold standard, which committed central-bank participants to adjust monetary policy to maintain their currencies at a fixed price to gold. While this created long-term price stability, it also resulted in rising social tensions after World War I, when countries like Britain underwent harsh deflationary adjustments to bring their currencies back to pre-war gold values that were abandoned during inflationary war-time policies.
After World War II, as the gold standard was abandoned in favor of fiat-money systems, central banks experimented with different operating systems that eventually created the stagflationary instability of the 1970s, when the last vestiges of the gold anchor were abandoned.
Paul Volcker was called in by President Carter in 1979 to break the inflationary spiral and promptly began to rein in money-supply growth by hiking real interest rates to unprecedented levels. The last four decades have been characterized by declining inflation and interest rates, with lower lows and lower highs in the successive business cycles since 1982 (Exhibit 1). The key problem for central banks over the past two decades has been stopping this disinflationary trend from becoming a debt-deflation bust like the 1930s depression. Rather than worrying about inflation exceeding reasonable levels, central banks have been struggling to keep inflation positive. Throughout the post-war period, the Fed has gradually formalized its communication and operating goals to foster transparency and public acceptance of its extraordinary power over the economy. At the same time, it has learned from its experiment with fiat money and from economic research that its best policy prescription is to anchor long-run inflation expectations at a low level if it is to successfully achieve its other objectives of maximum growth and employment.
The adoption of a specific 2% inflation target is the end result of decades of learning from this experience with fiat-money. The economics community and central banks around the world have embraced the goal of low, well-anchored inflation expectations.
Exhibit 1: Ten-Year Moving Averages of Inflation.
At his June 19 post-Federal Open Market Committee (FOMC) press conference, Jerome Powell was asked whether the Fed would consider raising its inflation target to 4%, as some economists have suggested. His answer was "no," on the grounds that other major central banks share the 2% target and a unilateral change would disrupt this global anchoring of inflation around 2%. One reason major-currency exchange rates have been relatively stable in recent years is this common 2% inflation goal. If inflation is similar across countries, it tends to anchor exchange rates through the long-run competitive economic forces that naturally work to maintain purchasing power parity.
While this is a good reason to maintain the 2% target, it raises the question of why 2% is the magic number. In theory, the global order could be anchored around 4% instead of 2%. The basic lesson of the monetary-policy experience of the past 75 years is the merit of stable, well-anchored inflation expectations for achieving the best long-run performance of the economy: maximum growth, production and employment, the goals cited in the legislative mandates for federal government economic policies.
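The purchasing-power-parity mechanism behind this exchange-rate anchoring can be sketched with illustrative numbers (the inflation rates below are assumptions for illustration, not forecasts): under relative PPP, the expected drift in an exchange rate roughly equals the inflation differential between the two countries, so a common 2% target implies no systematic drift.

```python
# Minimal relative purchasing-power-parity (PPP) sketch.
# Assumption: annual exchange-rate drift ~= home inflation minus foreign inflation.

def ppp_drift(home_inflation: float, foreign_inflation: float) -> float:
    """Approximate annual depreciation rate of the home currency under relative PPP."""
    return home_inflation - foreign_inflation

# Both countries at the common 2% target: no systematic drift.
print(ppp_drift(0.02, 0.02))  # 0.0

# Home at 4%, foreign at 2%: home currency tends to depreciate ~2% per year.
print(round(ppp_drift(0.04, 0.02), 4))
```

This is why a country that unilaterally moved to a 4% target would, all else equal, see its currency depreciate persistently against the 2% bloc rather than stay anchored.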
While stable inflation expectations are by now a well-documented basis for economic stability and realizing economic potential, the choice of 2% as the target is more open to debate. Generally, it's agreed that a negative target, or deflation, is to be avoided. That's why it's been so rare since the 1930s. On the other hand, some economists argue a zero target for inflation is a better goal because it adheres to the literal meaning of price stability.
However, there is a general consensus among economists that a zero-inflation environment implies extended periods of deflation that are prone to trigger the kind of debt-liquidation that was associated with depressions before World War II. Variations in inflation around a zero target imply deflation about half the time, which is judged unacceptable based on this historical experience and the fact that modern economies are much more levered and laden with debt than in the past. When borrowers knew deflation was a real possibility, they were more likely to avoid excessive leverage. Today, they have levered up more on the assumption that deflation is off the table. Thus, the economy is much more vulnerable to a deflationary shock than in the past.
With zero inflation ruled out, the question becomes: what is the optimal amount of inflation? Modern central banks have settled on a low, positive amount, namely 2%. This allows for some fluctuation above and below 2% without persistent deflation, while avoiding the instability that tends to develop when inflation rises into the mid-single digits.
It is not a coincidence, for example, that the best long-run equity returns tend to occur when inflation averages low and stable in the 1%–3% range. If real gross domestic product (GDP) growth averages 2% or 3%, then nominal GDP growth with a 2% inflation rate would likely average 4% or 5%. This also implies that personal incomes, retail sales and corporate revenue growth would likely be anchored in this 4% to 5% range. Based on recent historical experience, this seems to be about the minimum growth rate in cash flows through the economy to service existing debt and allow for growth at potential. Low real interest rates also are a necessary part of this new economic mix of high debt with low, stable inflation.
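The back-of-the-envelope arithmetic above can be checked directly. The growth rates are the ones cited in the text; the only refinement here is compounding the rates exactly rather than simply adding them:

```python
# Nominal growth is real growth compounded with inflation:
# (1 + real) * (1 + inflation) - 1, slightly above the simple sum.

def nominal_growth(real: float, inflation: float) -> float:
    return (1 + real) * (1 + inflation) - 1

# 2% real GDP growth + 2% inflation -> ~4.04% nominal (the "4%" in the text).
print(round(nominal_growth(0.02, 0.02), 4))  # 0.0404

# 3% real GDP growth + 2% inflation -> ~5.06% nominal (the "5%" in the text).
print(round(nominal_growth(0.03, 0.02), 4))  # 0.0506
```

The cross term (real times inflation) is small at these levels, which is why the simple "real plus inflation" shorthand in the text is a fair approximation.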
Clearly, a higher inflation target would allow for faster cash flow growth and higher interest rates. It would require a transition period to re-anchor inflation expectations at a higher level. It would also require global agreement or else it may create more exchange-rate volatility. The Germans have resisted the 2% target, regarding it as a top, not a middle. The European Central Bank and the Bank of Japan have averaged closer to 1% and zero inflation, respectively. The U.S. has averaged about 1.5% since 2000. A reasonable first step would be for central banks to first meet their current targets before seriously considering raising them.
In order to boost inflation to meet their long-run inflation mandates, central banks need to increase accommodation for the foreseeable future. The Fed's quantitative tightening and 2018 rate hikes have caused a global deflationary shock. As a result, inflation worldwide is falling further below the 2% target, and inflation expectations are in danger of becoming unanchored to the downside.
To remedy this situation, an extended period of reflationary policy will be needed, with inflation running above 2% to compensate for the shortfall of the past two decades if expectations are to return to the Fed's target. We believe this global reflation effort should create a synchronized world expansion in 2020. The longer the Fed delays this reflationary effort, the greater the effort will eventually be, in our opinion.
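The scale of such a makeup effort can be illustrated with hypothetical numbers (the 1.5% realized rate, 20-year shortfall and five-year catch-up window below are assumptions for illustration, not the Fed's actual framework): inflation running at 1.5% instead of 2% for two decades leaves the price level roughly 10% below its target path, and closing that gap over five years would require roughly 4% average inflation.

```python
# Hypothetical makeup-inflation arithmetic (illustrative assumptions only).
target, realized = 0.02, 0.015
years_short, catchup_years = 20, 5

# Price-level gap versus the 2% target path after the shortfall period.
gap = (1 + target) ** years_short / (1 + realized) ** years_short - 1
print(f"price-level gap: {gap:.1%}")  # ~10.3%

# Average inflation needed over the catch-up window to rejoin the target path.
required = ((1 + target) ** (years_short + catchup_years)
            / (1 + realized) ** years_short) ** (1 / catchup_years) - 1
print(f"required catch-up inflation: {required:.2%}")  # ~4.02%
```

The arithmetic also illustrates the closing point of the paragraph: the longer the shortfall runs, the larger the gap compounds, and the higher the eventual catch-up inflation rate must be for any given window.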
On the eve of the 50th anniversary of the Apollo Mission, and the unfolding tech cold war between the U.S. and China, we thought it an opportune time to review the long arc of U.S. research and development (R&D), a key ingredient of American economic growth and prosperity. Below we underscore the importance and "spillover" effects of U.S. government-funded research; the limitations of relying on private sector R&D to drive innovation; and the rising challenge from China.
For investors with a long time horizon, we remain long-term bulls on technology, believing the dawn of the technology race between the U.S. and China will likely only accelerate the level and pace of global tech spending. We believe investment opportunities lie in both U.S. and Chinese tech leaders. We would split the difference, since at this juncture there is no clear winner: China faces significant technological hurdles, while the U.S. needs to rethink and rebalance the roles of public and private sector R&D spending. Read on.
A half-century ago this month, America achieved what no other nation had done before: It landed a man on the moon. Nothing better epitomized the scientific talent and resources of the United States at the time—and the importance of government-funded research in driving innovation and economic growth.
Government-funded R&D soared in the aftermath of WWII, rising 20-fold between 1940 and 1964, when federal R&D spending reached a peak of nearly 2% of GDP (Exhibit 2). Then, the engine of American innovation was the U.S. government, with public sector agencies like the Advanced Research Projects Agency (ARPA, later DARPA), the Atomic Energy Commission and, of course, NASA—the National Aeronautics and Space Administration—spawning the technological capabilities that would drive U.S. economic growth for decades.
Exhibit 2: Decline in Federally Funded R&D Offset By Rise in Business Funding.
As Jonathan Gruber and Simon Johnson note in their book, Jump Starting America, "It is hard to find an area of technology development that has not been affected by the NASA enterprise in some fashion."
According to the authors, NASA has spawned hundreds of commercial spin-offs, including digital camera sensors, precision Global Positioning System (GPS) technology, advanced water filtration and airplane wing designs, among many other goods and services. In addition, integrated circuits, semiconductors, computer hardware and software, satellites, flat-screen panels, drones, the internet—all of these wealth-enhancing products were hatched by federally funded R&D over the decades, creating numerous positive "spillover" effects on real growth.
In her book, The Entrepreneurial State, author Mariana Mazzucato notes:
"From the development of aviation, nuclear energy, computers, the Internet, biotechnology, and today's development in green technology, it is, and has been, the State—not the private sector—that has kick-started and developed the engine of growth, because of its willingness to take risks in areas where the private sector has been too risk averse."
Speaking of risk, nothing was riskier than landing a man on the moon, but on July 20, 1969, the United States did just that. It was the crowning moment for U.S.-government funded research.
Even before Neil Armstrong became the first man to walk on the moon, publicly funded R&D was in structural decline and has continued to fade over the decades.
The mounting cost of the Vietnam War, rising public sector expenditures associated with the Great Society programs like Medicare, and expanding federal budget deficits—all of these factors converged to downgrade publicly funded R&D starting in the mid-1960s. Since the start of this century, ballooning federal deficits and the cost of wars have continued to weigh on R&D expenditures, with the public sector share of total R&D outlays falling to roughly 20% in 2017, versus a high of roughly 70% in the mid-1960s, according to data from the National Science Foundation.
More sobering is Exhibit 3, which depicts federal spending on debt versus research and development. Note that in 2018, the U.S. government shelled out some $325 billion on interest payments on its debt, nearly three times federal outlays on R&D ($114 billion).
Exhibit 3: Paying for the Past vs. Funding the Future.
The good news is that the private sector has stepped into the breach, with business-funded R&D rising twelvefold from 1980 to 2017. Owing to expanding outlays from the business sector, the U.S. remains the world's number one spender on global R&D, accounting for $543 billion of R&D in purchasing-power parity dollars, ahead of China's $496 billion in 2017. Leading the way in the U.S. have been such key sectors as computing and electronics, software and internet, healthcare, automobiles and industrials. With these sectors at the vanguard and owing to the risk-taking DNA of the U.S. economy, America remains a technological superpower.
That said, there are some important caveats to private sector-led R&D growth. First, research that is undertaken by private firms is for the benefit of the firm—not for society in general—which limits the "spillover" effects. Firms tend to underinvest in ground-breaking research or pull the plug early if results are unfavorable. Second, since most private sector research is proprietary, there is less incentive among firms to share or divulge information with competitors on why an invention or product failed, resulting in duplicated efforts and costs. Third, in the pharmaceutical industry, the combination of prolonged development periods (or lags to commercialization) and insufficient patent protection results in underinvestment in many promising drugs. And finally, as noted in Jump Starting America, "private R&D is increasingly turning away from basic exploratory scientific research toward more commercially oriented development." Whereas research made up roughly one-third of private R&D in 1987, the percentage has slipped to one-fifth, which means the private sector is spending less and less money on the ground-breaking moon shots of the future.
R&D spending in China is unequivocally and unabashedly driven by the state. For decades, the government has accelerated research outlays, pouring funds into emerging industries such as artificial intelligence, robotics and electric vehicles. Guided by the country's state-led industrial program—"Made in China 2025"—China seeks to modernize its economy by moving up the manufacturing value chain and investing in the key industries of the future to become a "world powerhouse of scientific and technological innovation" by midcentury. The government has even set a target to increase R&D spending as a percentage of GDP to 2.5% by 2020, up from 2.13% in 2017.
By prioritizing innovation-led growth, China has emerged as a world leader of R&D spending. The country's share of global R&D expenditures has risen from just 5% in 2000 to 25% in 2017, while America's share has declined over the years (Exhibit 4). Both public and private sources have contributed toward China's R&D growth, and given the significant presence of state-owned enterprises in China's economy, the two sectors often work in conjunction with one another.
Exhibit 4: Rise of China as an Innovation Superpower.
R&D share calculated in terms of current purchasing-power parity dollars. Global R&D is a sum of the OECD countries plus Argentina, China, Romania, Russia, Singapore, South Africa, and Taiwan. Source: Organisation for Economic Co-operation and Development. Data as of June 2019.
In addition to supportive government policies, a number of factors have contributed to China's rise as an innovation superpower. These include a massive consumer market; a large pool of skilled labor; already established manufacturing capabilities and supply chains; and a supportive innovation ecosystem.
In the end, we believe China is well on its way to becoming a technological superpower—and is already a leader in global e-commerce transactions, industrial robots, artificial intelligence and 5G capabilities. As incontrovertible evidence: China stunned the world this year when the Chang'e 4 spacecraft landed on the far side of the moon, a first in space exploration.
The landing was an outsized demonstration to the world that China's technological and scientific capabilities are for real. So are America's. The great tech contest of the 21st century is on.
As of June 30, gold's second-quarter gain of 9.1% handily outperformed the total returns of the S&P 500 (+4.3%) and the broader Russell 3000 (+4.1%), the latter capturing 98% of the investable U.S. equity market. From a technical analysis perspective, the yellow metal's more-than-six-year high versus the U.S. dollar suggests a potential upside breakout from the pattern of sideways price movement in place since mid-2013, which, if sustained, may herald longer-term appreciation (see Exhibit 5). Confirming the move, gold has also broken similar technical patterns versus the euro, the Australian and Canadian dollars, and the Japanese yen, according to 13D Global Strategy and Research.
Exhibit 5: After Forming a Multi-year Base, Is Gold's Breakout (shaded green) the Start of a New Uptrend?
Source: Chief Investment Office; data as of June 30, 2019. Past performance does not guarantee future results. Performance would differ if a different time period was displayed. Short-term performance shown to illustrate more recent trend.
While gold appreciated in the second quarter, the U.S. dollar fell 1.2%. The greenback's move partly reflects the market's view of a 75% chance that the Fed, at its July 31 meeting, will cut its policy interest rate by 0.25%, and just over 20% odds of a 0.50% reduction, according to Bloomberg. Moreover, expectations for further cuts this year have risen. Also adding uncertainty to the U.S. dollar's longer-term outlook, in our view, was the fallout from European Central Bank President Mario Draghi's pledge of monetary stimulus should weakness in European economic activity persist, which initially sent the euro lower against the dollar. Shortly after, President Trump tweeted that the currency's depreciation was "making it unfairly easier for them to compete against the USA," suggesting a desire for a weaker dollar.
We also view gold as an appealing diversifier in light of continued geopolitical risks in the Middle East and prolonged Sino-U.S. tensions linked to technological dominance and national security concerns. Its 30-year correlations with the S&P 500 and the U.S. 10-year Treasury bond stand at -0.03 and 0.08, respectively, implying little performance relationship. Near term, however, the Chief Investment Office's (CIO's) base case calls for President Trump and Chinese President Xi to build on the truce agreed to at the G-20 meeting, which may dent gold's recent outperformance. A comprehensive trade deal between the two countries, by reducing uncertainty, would likely prove a more significant headwind for gold, in our view.
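The diversification value of a near-zero correlation can be sketched with standard two-asset portfolio math. Only the -0.03 equity-gold correlation comes from the text; the 15% volatilities and 90/10 weights below are illustrative assumptions:

```python
import math

# Two-asset portfolio volatility:
# sqrt(w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2)
def portfolio_vol(w1: float, s1: float, w2: float, s2: float, rho: float) -> float:
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# Illustrative assumptions: 15% annual volatility for both equities and gold,
# 90/10 weights, and the text's -0.03 equity-gold correlation.
equities_only = portfolio_vol(1.0, 0.15, 0.0, 0.15, -0.03)
with_gold = portfolio_vol(0.9, 0.15, 0.1, 0.15, -0.03)
print(f"{equities_only:.2%} vs {with_gold:.2%}")  # 15.00% vs 13.54%
```

Under these assumptions, shifting 10% of an all-equity portfolio into gold trims portfolio volatility by roughly 1.5 percentage points, which is the sense in which a near-zero correlation makes gold "an appealing diversifier" even if its expected return is modest.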