Measurement of the U.S. Economy is a Job that Never Stops; Here’s Why GDP Numbers Get Revised

Like fireworks and baseball, BEA’s annual revision of GDP is a summer tradition. Toward the end of every July, the U.S. Bureau of Economic Analysis incorporates previously unavailable sources of data for the past three years into its estimates of the U.S. economy’s performance.

This year we will release the results of the 2014 annual revision of GDP and its components on July 30, bringing in new and revised source data covering the period from 2011 through the first quarter of 2014. As part of a “flexible annual revision,” we will also incorporate the results of the comprehensive restructuring of BEA’s international transactions accounts, resulting in revisions to GDP and select components back to the first quarter of 1999.

BEA also revises its quarterly GDP numbers, producing three estimates for a given quarter. Each incorporates updated, more complete, and more accurate information as it becomes available. The first, called the “advance” estimate, typically receives the most attention and is released roughly four weeks after the end of a quarter. For example, the first estimate of GDP for this year’s January-to-March quarter came out near the end of April. The first estimate for the second quarter will come out July 30.

Once every five years, BEA produces a “comprehensive” revision to its GDP statistics, incorporating changes to how the U.S. economy is measured as well as more complete source data all the way back to 1929. Last year was one of those years. New data, new methodologies, changes in definitions and classifications, and changes in presentations were all incorporated into 2013’s comprehensive GDP revision.

When we revise a major economic indicator, it’s not unusual for some to ask us, “Why didn’t you get it right the first time?”

It’s not that the earlier estimate was wrong. Rather, it’s the result of a delicate balancing act BEA performs to simultaneously achieve the two most important qualities of its estimates—accuracy and timeliness.

The public wants accurate data and wants it as soon as possible. To meet that need, BEA publishes early estimates that are based on partial data. In most quarters, these early estimates capture the direction and trends of various components of the economy, thereby providing an “early read” that is timely enough to be relevant to business and government decision makers.

The advance quarterly estimate of GDP offers the first comprehensive picture of how the economy is performing in a given quarter: whether the economy is slowing down or speeding up and which components of spending are responsible for those changes. It tells us the pace at which shoppers are shopping and what they are buying; what businesses are producing and investing in; what government is spending; and how much we are buying and selling abroad. It also tells us about trends in key variables, like saving and inflation.

When BEA calculates the advance estimate, we don’t yet have complete source data, with the largest gaps in data for the third month of the quarter. In particular, the advance estimate lacks complete source data on inventories, trade, and consumer spending on services. Therefore, we must make assumptions for these missing pieces based in part on past trends. As part of this process, we publish a detailed technical note that lays out the assumptions we made for a particular estimate. We also publish detailed materials on the standardized procedures and methods used in the various vintages of the GDP estimates.

As new and more complete data become available, we incorporate that information into the second and third GDP estimates. About 45 percent of the advance estimate is based on initial or early estimates from various monthly and quarterly surveys that are subject to revision for various reasons, including late respondents that are eventually incorporated into the survey results. Another roughly 14 percent of the advance estimate is based on historical trends.

By the second GDP estimate, we have new data for the third month and revised data for earlier months. By the third estimate, much more data are available, so that only 17 percent of the GDP estimate is based on information from the first set of monthly and quarterly surveys.
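As a purely illustrative sketch (the components, weights, and dollar figures below are hypothetical, not BEA’s actual source data or methodology), the vintage process described above can be thought of as summing reported components plus trend-based assumptions, then replacing those assumptions as actual source data arrive:

```python
# Illustrative only: hypothetical numbers, not BEA data or methodology.
# An early estimate blends reported components with trend-based
# assumptions for the missing third month; a later vintage replaces the
# assumptions with actual source data, producing a revision.

def estimate(components):
    """Sum component contributions (hypothetical billions of dollars)."""
    return sum(components.values())

# Advance estimate: trade and inventories for month 3 are not yet
# reported, so they are filled in by extrapolating recent trends.
advance = {
    "consumer_spending": 11_900.0,   # largely reported
    "investment": 2_900.0,           # partly reported
    "trade_month3": -120.0,          # assumption: extrapolated trend
    "inventories_month3": 80.0,      # assumption: extrapolated trend
}

# Second estimate: actual month-3 trade and inventory data arrive
# and replace the trend-based assumptions.
second = dict(advance, trade_month3=-100.0, inventories_month3=95.0)

revision = estimate(second) - estimate(advance)
print(f"advance estimate: {estimate(advance):,.0f}")   # 14,760
print(f"second estimate:  {estimate(second):,.0f}")    # 14,795
print(f"revision:         {revision:+,.0f}")           # +35
```

The point of the sketch is simply that a revision reflects better inputs replacing assumptions, not an error in the earlier estimate.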

Even though GDP estimates are revised over time as more complete and accurate data become available, studies show that the general picture of economic activity does not change:

  • The overall pattern of change in GDP over business cycles is little changed by revisions.
  • Revisions to long-term growth rates are small, averaging less than 0.1 percentage point for average growth rates between 1985 and 2009.
  • There are no substantial revisions—as measured by shares of GDP—in key measures such as investment, government spending, or the national saving rate.

Measuring GDP for the U.S. economy is always a work in progress. Because BEA faces so many challenges in measuring GDP, our estimates are informative, but never really final. Our advance estimates strike a good balance between accuracy and timeliness, given the data available at the time. Successive revisions reflect BEA’s commitment to incorporate both more complete source data when they become available and improved methods for measuring a rapidly changing economy.

More information on the 2014 Annual Revision can be found on BEA’s website.

New Commerce Department report explores huge benefits, low cost of government data

Today we are pleased to roll out an important new Commerce Department report on government data. “Fostering Innovation, Creating Jobs, Driving Better Decisions: The Value of Government Data” arrives as our society increasingly focuses on how the intelligent use of data can make our businesses more competitive, our governments smarter, and our citizens better informed.

And when it comes to data, as the Under Secretary for Economic Affairs, I have a special appreciation for the Commerce Department’s two preeminent statistical agencies, the Census Bureau and the Bureau of Economic Analysis. These agencies inform us on how our $17 trillion economy is evolving and how our population (318 million and counting) is changing, data critical to our country. Although “Big Data” is all the rage these days, the government has been in this business for a long time: the first Decennial Census was in 1790, gathering information on close to four million people, a huge dataset for its day, and not too shabby by today’s standards as well.

Just how valuable is the data we provide? Our report seeks to answer this question by exploring the range of federal statistics and how they are applied in decision-making. Examples of our data include gross domestic product, employment, consumer prices, corporate profits, retail sales, agricultural supply and demand, population, international trade, and much more.

Clearly, as shown in the report, the value of this information to our society far exceeds its cost – and not just because the price tag is shockingly low: three cents, per person, per day. Federal statistics guide trillions of dollars in annual investments at an average annual cost of $3.7 billion: just 0.02 percent of our $17 trillion economy covers the massive amount of data collection, processing, and dissemination. With a statistical system that is comprehensive, consistent, confidential, relevant, and accessible, the federal government is uniquely positioned to provide a wide range of statistics that complement the vast and growing sources of private sector data.

Our federally collected information is frequently “invisible,” because attribution is not required. But it flows daily into myriad commercial products and services. Today’s report identifies the industries that intensively use our data and provides a rough estimate of the size of this sector. The lower-bound estimate suggests government statistics help private firms generate revenues of at least $24 billion annually – more than six times what we spend for the data. The upper-bound estimate suggests annual revenues of $221 billion!
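The headline figures above are consistent with back-of-the-envelope arithmetic using the numbers cited in this post ($3.7 billion annual cost, a population of 318 million, a $17 trillion economy, and the $24 billion lower-bound revenue estimate):

```python
# Back-of-the-envelope check of the figures cited above.
cost = 3.7e9           # average annual cost of federal statistics ($)
population = 318e6     # U.S. population
gdp = 17e12            # U.S. GDP ($)
lower_revenue = 24e9   # lower-bound annual revenue built on the data ($)

cents_per_person_per_day = cost / population / 365 * 100
share_of_gdp_pct = cost / gdp * 100
revenue_multiple = lower_revenue / cost

print(f"{cents_per_person_per_day:.1f} cents per person per day")  # 3.2
print(f"{share_of_gdp_pct:.3f} percent of GDP")                    # 0.022
print(f"{revenue_multiple:.1f}x the annual cost")                  # 6.5
```

Rounded, these reproduce the report’s “three cents per person per day,” “0.02 percent of GDP,” and “more than six times the cost” claims.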

This report takes a first crack at putting an actual dollars-and-cents value on government data. We’ve learned a lot from this initial study, and look forward to homing in even further on that figure in our next report.

Mark Doms, Under Secretary for Economic Affairs

5 Q’s for U.S. Department of Commerce’s Under Secretary of Economic Affairs Mark Doms

The Center for Data Innovation spoke with Mark Doms, Under Secretary of Economic Affairs at the U.S. Department of Commerce, in Washington, DC. Under Secretary Doms discussed the current efforts at the Commerce Department to increase the availability and timeliness of high-quality data, as well as promote data-driven innovation in the government and economy.

This interview has been lightly edited.

Daniel Castro: The Department of Commerce’s 2014-2018 Strategic Plan laid out a number of objectives related to data. Can you highlight some of these objectives and talk about your approach so far?

Mark Doms: Commerce’s strategic data plan has three main parts. We’re close to a major announcement on the first objective, which I’ll discuss a little more in a moment, and we’re formulating our strategy on parts two and three, including holding listening sessions around the country with our customers.

Goal 1: Better data delivery. How can we better make available all the data we collect? Whether it’s Census data, weather data, trade data, patent and trademark data, etc., what are the right open source standards and architectures to best unlock our diverse datasets? How should we be inventorying and prioritizing them? I don’t want to give too much away, but Secretary Pritzker will be addressing our strategy directly on Monday at a speech she will be giving at the Esri Users Conference in San Diego, which I will also be attending.

Goal 2: Better data-driven government. What data or combination of datasets should we be using to better evaluate the effectiveness of our own programs and initiatives? What access do we need across different government agencies to optimize our operations?

Goal 3: Promoting the 21st century data economy. What data do our customers, the private sector and local, state, federal governments, need that we aren’t currently collecting? What are the skills needed to drive a data economy workforce, and how do we promote efforts for American students and workers to obtain them?

I will also note that while our focus at Commerce is on making more data available because of the positive benefits for our economy, government, and society, we hold protecting and safeguarding the public’s data as our first and foremost responsibility.

Castro: What are some interesting insights you’ve learned so far during your listening tour about how different groups are using Department of Commerce data?

Doms: We’ve heard several common themes. One is that U.S. government data has to be easier to find and use. Another is that businesses and governments want more localized, granular data for small geographic areas (metropolitan areas and neighborhoods, not state-level data, which they already have), and they need it in a timelier fashion. As a result, we’re formulating a plan to better meet our customers’ needs, and we’ll be announcing that plan soon.

Castro: How is the Census Bureau changing its data collection practice, and what will be better under the new data collection scheme?

Doms: I like to say that producing bad data is easy; producing good, useful data is hard. Census has been in the business of producing good data since 1790. Counting the four million people living on American soil was probably our nation’s first big dataset. We’ve been at this a long time, and we’re good at it.

With that said, we have to look at ways to reduce respondent burden on surveys. With the bombardment of emails and other surveys confronting Americans these days, it’s getting harder to get people to respond. So if a person or family has already provided a piece of information to the government in another way (such as how many kids a couple has), then what do we have to do to get that information from other government agencies instead of requesting the information AGAIN from the family?

There’s also a ton of money that can be saved through innovation in the Census. For instance, to classify a house as unoccupied, we currently have to send someone to knock on the door several times. But if we used administrative records, like Medicare and tax records, we could more easily ascertain whether anyone lived there. That innovation alone could save over a billion dollars by eliminating the need to send someone to personally, repeatedly, knock on those doors. That’s not chump change.

Castro: What are some of your key takeaways from the recent Open Data 500 Roundtable, and what are your next actions with regard to it?

Doms: The Open Data Roundtable was a great experience. For the first time, it brought together all the bureaus at Commerce at once, with many of their customers in the private sector. Sitting down with sophisticated data users forces an exchange of ideas, ideas that happen to align closely with our strategic data plan. For instance, we heard again and again that finding government data is challenging. We also heard from a wide variety of users about what data they need, how they take our data and combine it with other data to make new data products, and create new data businesses. Really exciting stuff and I look forward to participating in more of these events. We’re hosting a Data Jam with the White House’s Office of Science and Technology Policy, Friday the 25th in Baltimore if you’d like to join and jam out.

Castro: The Department of Commerce has made a big push to get NOAA data out to more users. What was keeping it from releasing more data before, and what do you expect to change with the additional available information?

Doms: NOAA faces several challenges. One is the sheer volume of information produced by satellites, weather stations, buoys…20 terabytes a day. Mind-boggling. But maybe 10 percent is currently available to the public. Another is how that data should be stored so that it can be accessed more easily. Given the data’s size, should that platform allow manipulation and analysis of the data, or should it more simply be a platform from which copies can be made?

What’s really exciting about the NOAA weather data is how the private sector is coming up with new data products and services all the time, services that target farmers, helping them plan for the season ahead using predictive modeling. This is really cool stuff.

As you may know, NOAA released a Request for Interest (RFI) this past spring, essentially testing whether the private sector had an interest in trying to unlock their data for them, given the government’s strained resources. The response, I’m told, was overwhelming. More than 50 companies replied with interest in assisting. I’m excited to see where it leads.