
Navigating the New data.census.gov

June 2019 is the final month that the Census Bureau will post new data in the American Factfinder (AFF). From this point forward, all new datasets will be published via the new data dissemination platform data.census.gov. The second chapter of my book (now available for pre-order!) is devoted to navigating this new interface. In this post I’ll provide a preview / brief tutorial of the advanced search functions.

The new interface is search-driven, so you can type the names of topics and geographies or table ID numbers to find and explore data tables. There are spiffy data profiles for several geographies, and you have the ability to make basic thematic maps. The search interface makes it much easier to casually browse and discover data, so go ahead and explore.

I’d still recommend having a search strategy to find precisely what you need. Keyword searching alone isn’t going to cut it, because you’re searching across tens of thousands of tables in dozens of datasets. The good news is that the same strategy I’ve used for the AFF can be applied to data.census.gov: use the advanced search to filter by survey, year, geography, and topic to narrow down the list of possible tables to a manageable number, and then search or browse through those results to find what you need.

Let’s say we want to download the most recent data on home values for all the counties in Pennsylvania (or a state of your choosing). On data.census.gov click on the advanced search link under the search box. On the advanced search page scroll to the bottom to the filters. We’ll address them one by one:

Surveys. These represent all the different census datasets. Select ACS 5-Year Estimates Detailed Tables. Detailed socio-economic characteristics of the population are primarily published in the ACS. The 1-year estimates are published only for geographies that have at least 65k people. Since most states have rural counties that fall below this threshold, we'll have to use the 5-year estimates to get all the counties. The detailed tables are narrow, focusing on estimates for a single variable. The other options include profiles (lots of different data for one place) and subject tables (narrower in scope than profiles, but broader than the detailed tables).

filter by survey

Years. At the moment 2017 is the latest year for the ACS, so let’s select that. This quickly eliminates a lot of tables that we’re not interested in.

Geography. Choose 050 – County, then scroll down and choose Pennsylvania in the County (State) list, then All counties in Pennsylvania in the final list.

filter by geography

Topics. For this example choose Housing, then Financial Characteristics, then Housing Value and Purchase Price. Of all the filter options, this one is the most open-ended and may require some experimentation based on what you're looking for.

filter by topic

Codes. We don’t need to filter by codes in this example, but if we were searching for labor or business-related data we’d use this filter to limit results to specific sectors or industries by NAICS codes.

Underneath the filter menu, click the View All Results button. This brings us to the first results page, which provides a list of tables, maps, and pages related to our search. Click the button to View All Tables under the tables section.

This brings us to the table results page; the list of tables is displayed on the left, and the currently selected table is displayed on the right; in this case Value of owner-occupied housing units is shown, with counts of units by value brackets. At this stage, we can scroll through the list and browse to find tables with data that we’re interested in. We can also access the filters at the top of the list, if we want to modify our search parameters.

table results

A little further down the results list is a table for Median Value. Selecting that table will preview it on the right. Hit the Customize Table button. This opens the table in its own dedicated view. Hit the blue drop-down arrow to the right of the table name, and you can modify the geography, year, or time period on the left. On the right is a Download option. Hit Download and you'll be prompted to save a CSV package; it contains three text files with the data, metadata, and descriptive information about the download.

customize table

Back on the customize table page, you can navigate back to the table results by clicking on “Tables” in the breadcrumb links that appear in the top left-hand corner. Then you can browse and choose additional tables.

That’s it! Not bad, right? Well, there are always caveats. At the moment, the biggest one is that you can’t easily download most geographies that are contained within other geographies. With one click we can filter to select all counties within a state, or all states within the nation. But if we wanted all census tracts in a county or all county subdivisions in a state, there aren’t any “All geographies in…” options for these geographies. We’d have to select each and every tract within a county, one at a time…

While data.census.gov is now relatively stable, it's still under development, and features like these should (hopefully) be implemented between now and the 2020 census. This is one reason why the American Factfinder will survive for another year, as we'll still need to lean on it to accomplish certain tasks. Of course, there are other options within the Census Bureau (the API, the FTP site) and without (NHGIS, MCDC, Census Reporter) for accessing data.
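
For instance, if you're comfortable with a bit of scripting, the API can pull the same table we just downloaded. Here's a minimal sketch in Python using the requests library, assuming that B25077 is the median home value detailed table we found and that 42 is Pennsylvania's state FIPS code (both worth double-checking against the API documentation):

# Sketch: fetch 2017 ACS 5-year median home value (B25077) for all PA counties.
import requests

url = "https://api.census.gov/data/2017/acs/acs5"
params = {
    "get": "NAME,B25077_001E,B25077_001M",  # estimate and margin of error
    "for": "county:*",                      # all counties...
    "in": "state:42",                       # ...in Pennsylvania
}
rows = requests.get(url, params=params).json()
for name, est, moe, state, county in rows[1:]:  # first row is the header
    print(f"{name}: ${int(est):,} +/- ${int(moe):,}")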

The new platform currently provides access to several datasets from 2010 to the present: the decennial census, the ACS, population estimates, and several of the business datasets. The first new datasets to be published in data.census.gov (and NOT in the AFF) are the 2017 Economic Census this summer and the 1-year 2018 ACS in September.

View the Release Notes and FAQs for more details about the platform: general documentation, recent developments, bugs, and planned enhancements. The Census Bureau also has an archived webinar with slides that discuss the transition.

Calculating Mean Income for Groups of Geographies with Census ACS Data

When you're working with American Community Survey (ACS) data and aggregating small census geographies into larger ones (census tracts into neighborhoods, for example), you need to sum the estimates and calculate new margins of error. This is straightforward for most estimates: you simply sum them, and take the square root of the sum of squares of the margins of error (MOEs) for each estimate that you're aggregating. But what if you need to group and summarize derived estimates like means or medians? In this post, I'll demonstrate how to calculate mean household income by aggregating ZCTAs to United Hospital Fund (UHF) neighborhoods, a type of public health area in NYC created by aggregating ZIP Codes.

I’m occasionally asked how to summarize median household income from tracts to neighborhood-like areas. You can’t simply add up the medians and divide them, the result would be completely erroneous. Calculating a new median requires us to sort individual household-level records and choose the middle-value, which we cannot do as those records are confidential and not public. There are a few statistical interpolation methods that we can use with interval data (number of households summarized by income brackets) to estimate a new median and MOE, but the calculations are rather complex. The State Data Center in California provides an excellent tutorial that demonstrates the process, and in my new book I’ll walk through these steps in the supplemental material.

While a mean isn't as desirable as a median (it can be skewed by outliers), it's much easier to calculate. The ACS includes tables on aggregate income, including the sum of all income earned by households and other population groups (like families or the total population). If we sum aggregate household income and the number of households for our small geographic areas, we can divide total income by total households to get mean income for the larger area, and we can use the ACS formula for computing the MOE of a ratio to generate a new MOE for the mean value. The Census Bureau publishes all the ACS formulas in a detailed guidebook for data users, and I'll cover many of them in the ACS chapter of my book (to be published by the end of 2019).
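
Expressed as code, the two formulas we'll need are short. A minimal sketch in Python; the function names are mine, and the formulas follow the Census Bureau's guidebook as described above:

import math

def moe_sum(*moes):
    # MOE for a sum of estimates: square root of the sum of squared MOEs.
    return math.sqrt(sum(m ** 2 for m in moes))

def moe_ratio(ratio, moe_numerator, moe_denominator, denominator):
    # MOE for a derived ratio, e.g. mean income = aggregate income / households.
    return math.sqrt(moe_numerator ** 2 + ratio ** 2 * moe_denominator ** 2) / denominator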

Calculating a Derived Mean in Excel

Let’s illustrate this with a simple example. I’ve gathered 5-year 2017 ACS data on number of households (table B11001) and aggregate household income (table B19025) by ZCTA, and constructed a sheet to correlate individual ZCTAs to the UHF neighborhoods they belong to. UHF 101 Kingsbridge-Riverdale in the Bronx is composed of just two ZCTAs, 10463 and 10471. We sum the households and aggregate income to get totals for the neighborhood. To calculate a new MOE, we take the square root of the sum of squares for each of the estimate’s MOEs:

Calculate margin of error for new sum

To calculate mean income, we simply divide the total aggregate household income by total households. Calculating the MOE is more involved. We use the ACS formula for derived ratios, where aggregate income is the numerator of the ratio and households is the denominator. We multiply the square of the ratio (mean income) by the square of the MOE of the denominator (households MOE), add that product to the square of the MOE of the numerator (aggregate income MOE), take the square root, and divide the result by the denominator (households):

=(SQRT((moe_ratio_numerator^2)+(ratio^2*moe_ratio_denominator^2))/ratio_denominator)
Calculate margin of error for ratio (mean income)

The 2013-2017 mean household income for UHF 101 is $88,040, +/- $4,223. I always check my math using the Cornell Program on Applied Demographics' ACS Calculator to make sure I didn't make a mistake.

This is how it works in principle, but life is more complicated. When I downloaded this data I had number of households by ZCTA and aggregate household income by ZCTA in two different sheets, and the relationship between ZCTAs and UHFs in a third sheet. There are 42 UHF neighborhoods and 211 ZCTAs in the city, of which 182 are actually assigned to UHFs; the others have no household population. I won’t go into the difference between ZIP Codes and ZCTAs here, as it isn’t a problem in this particular example.

Tying them all together would require using the ZCTA in the third sheet in a VLOOKUP formula to carry over the data from the other two sheets. Then I’d have to aggregate the data to UHF using a pivot table. That would easily give me sum of households and aggregate income by UHF, but getting the MOEs would be trickier. I’d have to square them all first, take the sum of these squares when pivoting, and take the square root after the pivot to get the MOEs. Then I could go about calculating the means one neighborhood at a time.
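
Scripted, that whole workflow shrinks to a few lines. A sketch in Python with pandas, using hypothetical file and column names that mirror the three sheets:

import numpy as np
import pandas as pd

# Hypothetical file and column names; adjust to match your own downloads.
hh = pd.read_csv("households_by_zcta.csv")    # zcta, households, households_me
inc = pd.read_csv("agg_income_by_zcta.csv")   # zcta, agg_income, agg_income_me
xwalk = pd.read_csv("zcta_to_uhf.csv")        # zcta, uhf42_code

df = xwalk.merge(hh, on="zcta").merge(inc, on="zcta")
df = df[df["households"] != 0]                # omit ZCTAs with no households

# Square the MOEs first, so summing by group yields a sum of squares.
df["households_me_sq"] = df["households_me"] ** 2
df["agg_income_me_sq"] = df["agg_income_me"] ** 2

uhf = df.groupby("uhf42_code")[["households", "agg_income",
                                "households_me_sq", "agg_income_me_sq"]].sum()
uhf["households_me"] = np.sqrt(uhf["households_me_sq"])
uhf["agg_income_me"] = np.sqrt(uhf["agg_income_me_sq"])
uhf["mean_income"] = uhf["agg_income"] / uhf["households"]
uhf["mean_income_me"] = (np.sqrt(uhf["agg_income_me"] ** 2
    + uhf["mean_income"] ** 2 * uhf["households_me"] ** 2) / uhf["households"])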

Spreadsheet-wise there might be a better way of doing this, but I figured why do that when I can simply use a database? PostgreSQL to the rescue!

Calculating a Derived Mean in PostgreSQL

In PostgreSQL I created three empty tables for households, aggregate income, and the ZCTA-to-UHF relational data, and used pgAdmin to import the ZCTA-level data from CSVs into those tables. (Alternatively, you could use SQLite instead of PostgreSQL, but you would need the optional math module installed, as SQLite can't compute square roots out of the box.)
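
If you script SQLite from Python, there's a third option: register a square root function yourself. A minimal sketch, reusing the table and column names from the queries below (the database file name is mine):

import math
import sqlite3

con = sqlite3.connect("census.db")         # hypothetical database file
con.create_function("sqrt", 1, math.sqrt)  # expose Python's sqrt to SQL

# SQLite has no ^ operator, so square the MOEs by self-multiplication.
cur = con.execute("""
    SELECT z.uhf42_code, SUM(h.households) AS hholds,
           ROUND(sqrt(SUM(h.households_me * h.households_me))) AS hholds_me
    FROM zcta_uhf42 z JOIN hsholds h ON z.zcta = h.gid2
    WHERE h.households != 0
    GROUP BY z.uhf42_code
    ORDER BY z.uhf42_code""")
for row in cur.fetchall():
    print(row)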

Portion of households table. A separate aggregate household income table is structured the same way, with income stored as bigint type.

Portion of the ZCTA to UHF relational table.

In my first run-through I simply joined the tables together using the 5-digit ZCTA to get the sums of households and aggregate income. I SUM the values for both and use GROUP BY to do the aggregation to UHF. In PostgreSQL, |/ (pipe, forward slash) is the square root operator. I sum the squares of each ZCTA's MOE and take the root of the total to get the UHF MOEs. I omit ZCTAs that have zero households so they're not factored into the formulas:

SELECT z.uhf42_code, z.uhf42_name, z.borough,
    SUM(h.households) AS hholds,
    ROUND(|/(SUM(h.households_me^2))) AS hholds_me,          -- sqrt of the sum of squared MOEs
    SUM(a.agg_hhold_income) AS agghholds_inc,
    ROUND(|/(SUM(a.agg_hhold_income_me^2))) AS agghholds_inc_me
FROM zcta_uhf42 z, hsholds h, agg_income a
WHERE z.zcta=h.gid2 AND z.zcta=a.gid2 AND h.households !=0   -- join on ZCTA; skip zero-household ZCTAs
GROUP BY z.uhf42_code, z.uhf42_name, z.borough
ORDER BY uhf42_code;
Portion of query result, households and income aggregated from ZCTA to UHF district.

Once that was working, I modified the statement to calculate mean income. Calculating the MOE for the mean looks pretty rough, but only because we have to repeat the calculation of the ratio within the formula. This could be avoided by turning the above query into a temporary table, adding two columns, and populating them with the formulas in an UPDATE ... SET statement. Instead I decided to do everything in one go, and just spent time fiddling around to make sure I got all the parentheses in the right place. Once I managed that, I added the ROUND function to each calculation:

SELECT z.uhf42_code, z.uhf42_name, z.borough,
    SUM(h.households) AS hholds,
    ROUND(|/(SUM(h.households_me^2))) AS hholds_me,
    SUM(a.agg_hhold_income) AS agghholds_inc,
    ROUND(|/(SUM(a.agg_hhold_income_me^2))) AS agghholds_inc_me,
    -- mean income = total aggregate income / total households
    ROUND(SUM(a.agg_hhold_income) / SUM(h.households)) AS hhold_mean_income,
    -- ACS ratio MOE: sqrt(moe_num^2 + ratio^2 * moe_den^2) / denominator
    ROUND((|/ (SUM(a.agg_hhold_income_me^2) + ((SUM(a.agg_hhold_income)/SUM(h.households))^2 * SUM(h.households_me^2)))) / SUM(h.households)) AS hhold_meaninc_me
FROM zcta_uhf42 z, hsholds h, agg_income a
WHERE z.zcta=h.gid2 AND z.zcta=a.gid2 AND h.households !=0
GROUP BY z.uhf42_code, z.uhf42_name, z.borough
ORDER BY uhf42_code;
Query in pgAdmin and portion of result for calculating mean household income

I chose a couple of examples, a UHF that contains only one ZCTA and another that contains two, and tested them in the Cornell ACS calculator to ensure the results were correct. Once I got it right, I added:

CREATE VIEW household_sums AS

to the top of the statement and executed it again to save it as a view. Mission accomplished! To make doubly sure that the values were correct, I connected my database to QGIS and joined this view to a UHF shapefile to visually verify that the results made sense (I could also have imported the shapefile into the database as a spatial table and incorporated it into the query).

Mean household income by UHF neighborhood in QGIS

Conclusion

While it would be preferable to have a median, calculating a new mean for an aggregated area is a fair alternative if you simply need a summary value for the variable and don't have the time to do statistical interpolation. Besides income, the Census Bureau also publishes aggregate tables for other variables like travel time to work, hours worked, number of vehicles, rooms, rent, home value, and various subsets of income (earnings, wages or salary, interest and dividends, social security, public assistance, etc.), which make it possible to calculate new means for aggregated areas. Just make sure you use the appropriate denominator, whether it's total population, households, owner- or renter-occupied housing units, etc.

Census Workshop Recap

I’ve been swamped these past few months, revising my census book, teaching a spatial database course, and keeping the GIS Lab running. Thus, this will be a shorter post!

Last week I taught a workshop on understanding, finding, and accessing US Census Data at the Metropolitan Library Council of New York. If you couldn’t make it, here are the presentation slides and the group exercise questions.

Most of the participants were librarians who were interested in learning how to help patrons find and understand census data, but there were also some data analysts in the crowd. We began with an overview of how the census is structured by dataset, geography, and subject categories. I always cover the differences between the decennial census and the ACS, with a focus on how to interpret ACS estimates and gauge their reliability.

For workshops I think it’s best to start with searching for profiles (lots of different data for one place). This gives new users a good overview of the breadth and depth of the types of variables that are available in the census. Since this was a New York City-centric crowd we looked at the City’s excellent NYC Population Factfinder first. The participants formed small groups and searched through the application to answer a series of fact-finding questions that I typically receive. Beyond familiarizing themselves with the applications and data, the exercises also helped to spark additional questions about how the census is structured and organized.

Then we switched over to the Missouri Census Data Center's profile and trends applications (listed on the right-hand side of their homepage) to look up data for other parts of the country, and in doing so we were able to discuss the different census geographies that are available for different places. Everyone appreciated the simple, easy-to-use interface and the accessible tables and graphics. The MCDC doesn't have a map-based search, so I did a brief demo of TIGERweb for viewing census geography across the country.

Once everyone had this basic exposure, we hopped into the American Factfinder to search for comparison tables (a few pieces of data for many places). We discussed how census data is structured in tables and how the profile, summary, and detailed tables differ. We used the advanced search, and I introduced my tried and true method of filtering by dataset, geography, and topic to find what we need. I mentioned the Census Reporter as a good place to go for ACS documentation, and as an alternate source of data. Part of my theme was that there are many tools that are suitable for different needs and skill levels, and you can pick your favorite or determine what's suitable for a particular purpose.

We took a follow-the-leader approach for the AFF, where I stepped through the website and the process for downloading two tables and importing them into a spreadsheet, highlighting gotchas along the way. We wrote some basic formulas for aggregating ACS estimates to create new margins of error, and a VLOOKUP for tying data from two tables together.

We wrapped up the morning with a foreshadowing of what's to come with the new data.census.gov (which will replace the AFF) and the 2020 census. While there's still much uncertainty around the citizenship question and fears of an undercount, the structure of the dataset won't be too different from 2010, and the timeline for release should be similar.

Using the ACS to Calculate Daytime Population

I’m in the home stretch for getting the last chapter of the first draft of my census book completed. The next to last chapter of the book provides an overview of a number of derivatives that you can create from census data, and one of them is the daytime population.

There are countless examples of using census data for site selection analysis and for comparing and ranking places when locating new businesses, providing new public services, and generally measuring potential activity or population in a given area. People tend to forget that the census measures people where they live. If you're trying to measure service or business potential for residents, the census is a good source.

Counts of residents are less meaningful if you want to gauge how crowded or busy a place is during the day. The population of an area changes during the day as people leave their homes to go to work or school, shop, or participate in social activities. Given the sharp divisions in the US between residential, commercial, and industrial uses created by zoning, residential areas empty out during weekdays as people travel into the other two zones, and fill up again at night when people return. Some places function as job centers, others serve as bedroom communities, and still others are a mixture of the two.

The Census Bureau provides recommendations for calculating daytime population using a few tables from the American Community Survey (ACS). These tables capture where workers live and work, which is the largest component of the daytime population.

Using these tables from the ACS:

Total resident population
B01003: Total Population
Total workers living in area and Workers who lived and worked in same area
B08007: Sex of Workers by Place of Work–State and County Level (‘Total:’ line and ‘Worked in county of residence’ line)
B08008: Sex of Workers by Place of Work–Place Level (‘Total:’ line and ‘Worked in place of residence’ line)
B08009: Sex of Workers by Place of Work–Minor Civil Division Level (‘Total:’ line and ‘Worked in MCD of residence’ line)
Total workers working in area
B08604: Total Workers for Workplace Geography

The Bureau proposes two different approaches that lead to the same outcome. The simplest: add the total resident population to the total number of workers who work in the area, and then subtract the total resident workforce (workers who live in the area but may work inside or outside of it):

Daytime Population = Total Residents + Total Workers in Area - Total Resident Workers

For example, according to the 2017 ACS, Washington DC had an estimated 693,972 residents (table B01003), 844,345 (+/- 11,107) people who worked in the city (table B08604), and 375,380 (+/- 6,102) workers who lived in the city. We add the total residents and total workers, and subtract the total workers who live in the city. The subtraction avoids double counting the residents who work in the city (they are already included in the total resident population) while removing the residents who work outside the city (who are included in the total resident workers). The result:

693,972 + 844,345 - 375,380 = 1,162,937

And to get the new margin of error:

SQRT(0^2 + 11,107^2 + 6,102^2) = 12,673
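
The same arithmetic in a few lines of Python, using the figures above (the resident population's MOE is treated as zero here, as in the worked example):

import math

residents = 693_972          # total population (B01003)
workers_in_area = 844_345    # total workers working in DC (B08604)
resident_workers = 375_380   # workers living in DC

daytime_pop = residents + workers_in_area - resident_workers
daytime_moe = math.sqrt(0 ** 2 + 11_107 ** 2 + 6_102 ** 2)

print(daytime_pop)           # 1162937
print(round(daytime_moe))    # 12673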

So the daytime population of DC is approximately 468,965 people (68%) higher than its resident population. The district has a high number of jobs in the government, non-profit, and education sectors, but a limited amount of expensive residential real estate. In contrast, I did the calculation for Philadelphia, and its daytime population is only 7% higher than its resident population. Philadelphia has a much higher proportion of resident workers relative to total workers. Geographically the city is larger than DC and has more affordable real estate, and it faces stiffer suburban competition for private sector jobs.

The variables in the tables mentioned above are also cross-tabulated in other tables by age, sex, race, Hispanic origin, citizenship status, language, poverty, and tenure, so it's possible to estimate some characteristics of the daytime population. Margins of error will limit the usefulness of estimates for small population groups, and overall the 5-year period estimates are a better choice for all but the largest areas. Data on workers who lived and worked in the same area is reported for states, counties, places (incorporated cities and towns), and minor civil divisions (MCDs) for the states that have them.

Data for the total resident workforce is available for other, smaller geographies, but place of work is still reported relative to those larger areas; i.e., we know how many people in a census tract live and work in their county or place of residence, but not how many live and work in their tract of residence. Meanwhile, the count of workers by workplace in B08604 is not available for smaller geographies, which limits the application of this method to larger areas.

Download or explore these ACS tables from your favorite source: the American Factfinder, the Census Reporter, or the Missouri Census Data Center.

Lying with Maps and Census Data

I was recently working on some examples for my book where I discuss how census geography and maps can be used to intentionally skew research findings. I suddenly remembered Mark Monmonier’s classic How To Lie with Maps. I have the 2nd edition from 1996, and as I was adding it to my bibliography I wondered if there was a revised edition.

To my surprise, a 3rd edition was just published in 2018! This is an excellent book: it’s a fun and easy read that provides excellent insight into cartography and the representation of data with maps. There are concise and understandable explanations of classification, generalization, map projections and more with lots of great examples intended for map readers and creators alike. If you’ve never read it, I’d highly recommend it.

If you have read the previous edition and are thinking about getting the new one… I think the back cover’s tagline about being “fully updated for the digital age” is a little embellished. I found another reviewer who concurs that much of the content is similar to the previous edition. The last three chapters (about thirty pages) are new. One is devoted to web mapping and there is a nice explanation of tiling and the impact of scale and paid results on Google Maps. While the subject matter is pretty timeless, some more updated examples would have been welcome.

There are many to choose from. One of the examples I'm using in my book comes from a story the Washington Post uncovered in June 2017. Jared Kushner's real estate company was proposing a new luxury tower development in downtown Jersey City, NJ, across the Hudson River from Manhattan. The company applied for a program that offers low-interest federal financing for developments built in areas where unemployment is higher than the national average. NJ State officials assisted with creating a map of the development area, using American Community Survey (ACS) unemployment data at the census tract level to prove that the development qualified for the program.

The creation of this development area defies all logical and reasonable criteria. This affluent part of the city consists of high-rise office buildings, residential towers, and historic brownstones that have been refurbished. The census tract where the development is located is not combined with adjacent tracts to form a compact and contiguous area that functions as a unit, nor does it include surrounding tracts that have similar socio-economic characteristics. The development area does not conform to any local conventions about what Jersey City's neighborhoods are, whether defined by architecture, land use, demographics, or physical boundaries like major roadways and green space.

Census tracts that represent the “area” around a proposed real estate development were selected to concentrate the unemployed population, so the project could qualify for low interest federal loans.

Instead, the area was drawn with the specific purpose of concentrating the city's unemployed population in order to qualify for the financing. The tract where the development is located has low unemployment, just like the tracts around it (which are excluded). It is connected to areas of high unemployment not by a boundary, but by a single point where it touches another tract diagonally across a busy intersection. The rest of the tracts included in this area have the highest concentrations of unemployment and poverty in the city, and consist primarily of low-rise residential buildings, many of which are in poor condition. The area stretches over four miles away from the development site and cuts across several hard physical boundaries, such as an interstate highway that effectively separates neighborhoods from each other.

The differences between this development area and the adjacent areas excluded from it couldn't be more stark. Gerrymandering usually refers to the manipulation of political and voting district boundaries, but the term can be used in other contexts. This is a perfect example of non-political gerrymandering, where areas are created based on limited criteria in order to satisfy a predefined outcome. These areas have no real meaning beyond this purpose, as they don't function as real places that have shared characteristics, compact and contiguous boundaries, or a social structure that would bind them together.

The maps in the Post article highlighted the tracts that defined the proposal area and displayed their unemployment rates. In my example I illustrate the rate for all the tracts in the city, so you can clearly see the contrast between the areas that are included and excluded. What goes unmentioned here is that these ACS estimates have moderate to high margins of error that muddy the picture even further. Indeed, there are countless ways to lie with maps!

Business and Labor Force Data: The Census and the BLS

I’m still cranking away on my book, which will be published by SAGE Publications and is tentatively titled Exploring the US Census: Your Guide to America’s Data. I’m putting the finishing touches on the chapter devoted to business datasets.

Most of the chapter is dedicated to the Census Bureau's (CB) Business Patterns and the Economic Census. In a final section I provide an overview of labor force data produced by the Bureau of Labor Statistics (BLS). At first glance these datasets appear to cover a lot of the same ground, but they vary in terms of methodology, geographic detail, number of variables, and currency / frequency of release. I'll provide a summary of the options in this post.

The Basics

Most of these datasets provide data for business establishments, which are individual physical locations where business is conducted or where services or industrial operations are performed, and are summarized by industries, which are groups of businesses that produce similar products or provide similar services. The US federal government uses the North American Industry Classification System (NAICS), a hierarchical series of codes that classifies businesses and the labor force into divisions and subdivisions at varying levels of detail; for example, sector 44-45 Retail Trade contains subsector 445 Food and Beverage Stores, which in turn contains industry group 4451 Grocery Stores.

Since most of these datasets are generated from counts, surveys, or administrative records for business establishments they summarize business activity and the labor force based on where people work, i.e. where the businesses are. The Current Population Survey (CPS) and American Community Survey (ACS) are exceptions, as they summarize the labor force based on residency, i.e. where people live. The Census Bureau datasets tend to be more geographically detailed and present data at one point in time, while the BLS datasets tend to be more timely and are focused on providing data in time series. The BLS gives you the option to look at employment data that is seasonally adjusted; this data has been statistically “smoothed” to remove fluctuations in employment due to normal cyclical patterns in the economy related to summer and winter holidays, the start and end of school years, and general weather patterns.

Many of the datasets are subject to data suppression or non-disclosure to protect the confidentiality of businesses; if a given geography or industrial category has few establishments, or if a small number of establishments constitutes an overwhelming majority of employees or wages, data is either generalized or withheld. Most of these datasets exclude agricultural workers, government employees, and individuals who are self-employed. Data for these industries and workers is available through the USDA's Census of Agriculture and the CB's Census of Governments and Nonemployer Statistics.

The CB datasets are published on the Census Bureau's website via the American Factfinder, the new data.census.gov, the FTP site and API, and individual pages dedicated to specific programs. The BLS datasets are accessible through a variety of applications via the BLS Data Tools. For each of the datasets discussed below I link to its program page, so you can see fuller descriptions of how the data is collected and what's included.

The Census Bureau’s Business Data

Business Patterns (BP)
Typically referred to as the County and ZIP Code Business Patterns, this Census Bureau dataset is also published for states, metropolitan areas, and Congressional Districts. Generated annually from administrative records, it reports the number of employees, establishments, and wages (annual and first quarter) by NAICS, along with a summary of business establishments by employee-size categories.
Economic Census
Released every five years in years ending in 2 and 7, this dataset is less timely than the BP but includes more variables: in addition to employment, establishments, and wages, data is published on production and sales for various industries, summarized both geographically and in subject series that cover entire industries. The Economic Census employs a mix of enumerations (100% counts) and sample surveying. It's available for the same geographies as the BP with two exceptions: data isn't published for Congressional Districts but is available for cities and towns.

Bureau of Labor Statistics Data

Current Employment Statistics (CES)
This is a monthly sample survey of approximately 150k businesses and government agencies that represent over 650k physical locations. It measures the number of workers, hours worked, and average hourly wages. Data is published for broad industrial categories for states and metropolitan areas.
Quarterly Census of Employment and Wages (QCEW)
An actual count of business establishments that’s conducted four times a year, it captures the same data that’s in the CES but also includes the number of establishments, total wages, and average annual pay (wages and salaries). Data is tabulated for states, metropolitan areas, and counties at detailed NAICS levels.
Occupational Employment Statistics (OES)
A twice-yearly survey of 200k business establishments that measures the number of employees by occupation, as opposed to industry (the specific job people do rather than the overall focus of the business). Data on the number of workers and wages is published for over 800 occupations for states and metro areas using the Standard Occupational Classification (SOC) system.

Labor Force Data by Residency

Current Population Survey (CPS)
Conducted jointly by the CB and BLS, this monthly survey of 60k households captures a broad range of demographic and socio-economic information about the population, but was specifically designed for measuring employment, unemployment, and labor force participation. Since it’s a survey of households it measures the labor force based on where people live and is able to capture people who are not working (which is something a survey of business establishments can’t achieve). Monthly data is only published for the nation, but sample microdata is available for researchers who want to create their own tabulations.
Local Area Unemployment Statistics (LAUS)
This dataset is generated using a series of statistical models to provide the employment and unemployment data published in the CPS for states, metro areas, counties, cities and towns. Over 7,000 different areas are included.
American Community Survey (ACS)
A rolling sample survey of 3.5 million addresses, this dataset is published annually as 1-year and 5-year period estimates. This is the Census Bureau’s primary program for collecting detailed socio-economic characteristics of the population on an on-going basis and includes labor force status and occupation. Data is published for all large geographies and small ones including census tracts, ZCTAs, and PUMAs. Each estimate is published with a margin of error at a 90% confidence interval. Labor force data from the ACS is best used when you’re OK with generally characterizing an area rather than getting a precise and timely measurement, or when you’re working with an array of ACS variables and want labor force data generated from the same source using the same methodology.

Wrap Up

In the book I'll spend a good deal of time navigating the NAICS codes, explaining the impact of data suppression and how to cope with it, and covering the basics of using this data from an economic geography approach. I've written some exercises where we calculate location quotients for advanced industries and aggregate ZIP Code-based Business Patterns data to the ZCTA level. This is still a draft, so we'll have to wait and see what stays and goes.
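
If location quotients are new to you, the formula is just a ratio of shares: an industry's share of local employment divided by its share of national employment. A quick sketch in Python, with made-up numbers:

def location_quotient(local_ind, local_total, nat_ind, nat_total):
    # LQ > 1: the industry is more concentrated locally than nationally.
    return (local_ind / local_total) / (nat_ind / nat_total)

# Hypothetical: 5% of local jobs vs. 2% of national jobs in the industry.
print(location_quotient(5_000, 100_000, 2_000_000, 100_000_000))  # 2.5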

In the meantime, if you’re looking for summaries of additional data sources in any and every field I highly recommend Julia Bauder’s excellent Reference Guide to Data Sources. Even though it was published back in 2014 I find that the descriptions and links are still spot on – it primarily covers public and free US federal and international government sources.
