Author: Frank

Geospatial Data Librarian at Baruch CUNY

Measuring Polygon Overlap in QGIS and PostGIS

I was helping someone with a project this semester where we wanted to calculate the overlap between two different polygon layers (postal code areas and grid cells) for over forty countries throughout the world. The process involved calculating the area of overlap and percentage of total overlap between each postal area and grid cell. We began our experiment in QGIS and perfected the process there, but it ultimately failed because the software was not able to handle the large number of polygons: almost 2 million postal codes and over 60k grid cells. In the end we employed PostGIS, which was more efficient and able to do the job.

In this post I’ll outline the steps for calculating area and polygon overlap in both QGIS (as an example of desktop GIS software) and PostGIS (as an example of a spatial database); I’ll assume you have some familiarity with both. For this example I’ll use two layers from the Census Bureau’s TIGER Line Shapefiles: Congressional Districts (CDs) and ZIP Code Tabulation Areas (ZCTAs). We’ll calculate how ZCTAs overlap with CD boundaries.

Before we begin, I should say that overlap is a technical term for a specific type of spatial selection. Overlapping features must share some interior space, and the geometry of one feature is not entirely enclosed within the geometry of another. I am NOT using the term overlap in this technical sense here – I’m using it more generally to refer to features that share any interior space with another, including areas that are entirely enclosed within another (i.e. 100% overlap).

QGIS

Since we’re measuring areas, the first step is to reproject our layers to a projected coordinate system that preserves area (an equal area projection). If we were working in a local area we could use a UTM or (in the US) a State Plane zone. For continents and large countries like the US we could use Albers Equal Area Conic. If we were working globally we could use Mollweide or a Cylindrical Equal Area projection. The US Census layers are in the geographic coordinate system NAD 83. To reproject them, we select each one in the layers panel, right click, and choose Save As. Browse and save them as new files, hit the CRS button, search for North America Albers Equal Area (AEA), select it, and save the new layers in that system. Then we select one of the new layers in the layers panel, right click, and choose Set Project CRS from Layer to apply the new system to the map window.

Map of CDs and ZCTAs in NAD 83

Congressional Districts (red) and ZCTAs (orange) in NAD 83

Map of CDs and ZCTAs in AEA

Congressional Districts (red) and ZCTAs (orange) in North America Albers Equal Area Conic

Next, we need to create a new field where we calculate the area for the ZCTAs. The census layers already come with pre-calculated area attributes, but we’ll need to calculate our own. Open the attribute table for the ZCTAs and hit the field calculator button (looks like an abacus). In the menu we create a new field called totalarea and populate it with the expression:

$area * 0.00000038610

$area is a geometry function that calculates the area of each polygon. Since the AEA projection uses square meters as its unit, the area will be in square meters. Multiplying by this fraction gives us square miles (or if you prefer, divide by 1000000 to get square kilometers). It’s important that we set the field type to a real / decimal number and specify a meaningful length (total number of digits) and precision (number of digits right of the decimal place). A length of 20 and a precision of 5 gives us 15 places to the left of the decimal point and 5 to the right, which should be plenty. Hit Calculate, exit out of the edit mode, and save changes.

QGIS Field Calculator

Calculating area in the QGIS Field Calculator

Before calculating the overlap it’s a good idea to check the geometry of each layer to make sure all of the polygons are valid (i.e. properly constructed), otherwise we will run into errors. Use Vector – Geometry Tools – Check Validity to check geometry, and if anything is broken open the Processing toolbox and search for the Fix Geometries tool. In this example both layers have valid geometry.

Use Vector – Geoprocessing Tools – Union to meld the ZCTA and CD layers together. This will create unique polygons that consist of the geometry occupied by a unique ZCTA and CD combination. So in instances where there is overlap between the layers, a polygon will be split into two (or more) pieces. See the image below, which illustrates CDs and ZCTAs before and after unioning in the Philadelphia area.

Congressional Districts and ZCTAs in Philly

CDs and ZCTAs in Philly

ZCTAs in Philly after union with Congressional Districts

Split ZCTAs after union with Congressional Districts

Processing time will vary based on the number of features, their level of detail (nodes per polygon), the number of overlaps, and the number of attributes (columns) per layer. There are 444 CD features and about 33k ZCTAs. While these numbers aren’t huge, the polygons are very detailed and there is a fair amount of overlap: it took me approximately 1.5 hours to run. To minimize processing time you could create copies of these layers, delete unnecessary attribute columns from the copies, and run the process on them. You should strip everything out except some unique identifiers and the totalarea field; you can always join the results back to the larger body of attributes later if you need them.

Once the process is complete, open the attribute table for the unioned layer and create a new calculated field called piecearea, where you calculate the area for these smaller pieces. At this stage you have what you need to calculate overlap: for these pieces you have columns with the total area of the original ZCTA and the area of this ZCTA piece that overlaps with a particular CD. You can add an additional calculated field called pct_in (length 5 precision 2) where you divide one by the other to get a percentage:

( “piecearea” / “totalarea” ) * 100

If a ZCTA record appears once in the table, that means it’s fully inside one CD and it should have a percentage of 100%. Otherwise it will appear multiple times, which means there is overlap, and this will be reflected in the percentages. The output below is for ZCTAs 19138 through 19141 in Philadelphia, PA. Compare this to the maps above (these ZCTAs are located towards the center of the map). 19138 and 19139 are wholly within one CD, while 19140 and 19141 are split across two CDs. Unfortunately, QGIS doesn’t provide a simple way to hide columns, so I can’t clearly represent the result in the image below; you’ll see a clearer picture from the PostGIS process. But you’ll end up with the attributes from both layers, so you can see which CD each ZCTA falls in.

QGIS Attribute Table

Attribute table with areas and percentages

PostGIS

The QGIS method is fine if you don’t have many polygons to calculate, but if you have a large number of features the process will either take a long time or crash (incidentally, ArcGIS would be no different).

PostGIS to the rescue. For this approach, first you create a spatial database and activate the PostGIS extension with the command CREATE EXTENSION postgis. Then you load the shapefiles into PostGIS using the shapefile loader that is bundled with PostGIS, or with the QGIS DB Manager. During the import process you need to indicate that the layers are in NAD 83 by supplying the correct EPSG code, changing the SRID from 0 to 4269.
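
A minimal sketch of those two steps, assuming a database named census and the 2018 TIGER/Line file names (the shp2pgsql command runs in the shell, not in psql):

-- Inside the new database: enable the PostGIS extension (run once).
CREATE EXTENSION postgis;

-- Shell command to load a shapefile, declaring NAD 83 (EPSG 4269) and
-- building a spatial index (-I); the database name is an example:
--   shp2pgsql -s 4269 -I tl_2018_us_zcta510.shp tl_2018_us_zcta510 | psql -d census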

PostGIS doesn’t have many global or continental projected coordinate system definitions, so we’ll have to add one for North America Albers Equal Area to its spatial reference table. A quick visit to Spatial Reference and a search for this system yields the definition, and we can get a PostGIS INSERT statement that we can copy and paste into a SQL query window in our database. Before executing it, we have to change the SRID number in the statement from 9102008 to 102008 to avoid violating a check constraint that prevents IDs from being larger than 6 digits.
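
The statement looks roughly like this sketch (the WKT in srtext is abbreviated here; copy the complete statement from Spatial Reference, with the SRID already changed to 102008):

-- Abbreviated sketch; use the full statement from spatialreference.org.
INSERT INTO spatial_ref_sys (srid, auth_name, auth_srid, proj4text, srtext)
VALUES (102008, 'esri', 102008,
    '+proj=aea +lat_1=20 +lat_2=60 +lat_0=40 +lon_0=-96 +x_0=0 +y_0=0 +datum=NAD83 +units=m +no_defs',
    'PROJCS["North_America_Albers_Equal_Area_Conic", ... ]');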

With the definition in place, we create a series of blank tables that will hold our two layers, and then run an insert statement where we take columns we want from the original tables and bring them into the new tables. In the course of doing this, we also transform the geometry from NAD 83 to Albers. At the end it’s important to create a spatial index on the geometry, as it will really speed up spatial selections.

BEGIN;

-- ZCTA layer: keep just the ZCTA code and the geometry,
-- reprojected to North America Albers Equal Area (SRID 102008)
CREATE TABLE zctas_aea (
    zcta5 varchar(5) PRIMARY KEY,
    geom geometry (Multipolygon, 102008)
);

INSERT INTO zctas_aea (zcta5, geom)
SELECT zcta5ce10, ST_Transform(geom, 102008)
FROM tl_2018_us_zcta510;

-- Spatial index to speed up the intersection queries
CREATE INDEX zctas_aea_geom_gist
ON zctas_aea
USING gist (geom);

COMMIT;

BEGIN;

-- CD layer: identifiers, names, and reprojected geometry
CREATE TABLE cds_aea (
    geoid varchar(4) PRIMARY KEY,
    statefp varchar(2),
    name text,
    session varchar(3),
    geom geometry (Multipolygon, 102008)
);

INSERT INTO cds_aea (geoid, statefp, name, session, geom)
SELECT geoid, statefp, namelsad, cdsessn, ST_Transform(geom, 102008)
FROM tl_2018_us_cd116;

CREATE INDEX cds_aea_geom_gist
ON cds_aea
USING gist (geom);

COMMIT;

Once the data is inserted we can check the geometry validity with ST_IsValid, and if there is bad geometry we can fix it with another statement using ST_MakeValid, where the IN list contains identifiers for the bad geometry discovered by the previous statement.

SELECT geoid, ST_IsValid(geom) AS valid,
    ST_IsValidReason(geom) AS reason
FROM cds_aea
WHERE NOT ST_IsValid(geom);

UPDATE cds_aea
SET geom = ST_MakeValid(geom)
WHERE geoid IN (INSERT LIST OF IDS HERE);
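
If you’d rather not copy and paste IDs, a small variation fixes everything invalid in one pass:

-- Variation: repair every invalid geometry in a single statement. Note
-- that ST_MakeValid can return geometry collections; if the update
-- violates the Multipolygon type, use
-- ST_CollectionExtract(ST_MakeValid(geom), 3) to keep just the polygons.
UPDATE cds_aea
SET geom = ST_MakeValid(geom)
WHERE NOT ST_IsValid(geom);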

We can execute the overlap operation with a single statement. PostGIS allows you to calculate area on the fly with the ST_Area function, and there are two functions for overlap: ST_Intersects acts as a spatial join that relates one layer to the other by selecting all features that intersect, while ST_Intersection returns the actual pieces of each feature’s geometry that intersect. This example is just for Pennsylvania, which we select using the state FIPS code ’42’ from the CD layer. It’s a good idea to get the statement right on a sample of records before executing it on the entire set. The double colons are a PostgreSQL shortcut for casting data types from one type to another. This is necessary when using the ROUND function to produce a non-integer result, as ROUND can’t round the double precision numbers produced by ST_Area to a fixed number of decimal places.

SELECT z.zcta5 AS zcta, c.geoid AS cd, c.name AS cdname,
    ROUND((ST_Area(ST_Intersection(z.geom, c.geom)) * 0.00000038610)::numeric, 2) AS area_piece,
    ROUND((ST_Area(ST_Intersection(z.geom, c.geom)) / ST_Area(z.geom) * 100)::numeric, 1) AS pct_in
FROM zctas_aea z, cds_aea c
WHERE ST_Intersects(z.geom, c.geom) AND c.statefp = '42'
ORDER BY z.zcta5, c.geoid, pct_in DESC;

This statement took me about 20 seconds to run. The results (see below) include several records that QGIS didn’t return, where the area and overlap are 0, either because an infinitesimally small area of overlap rounds to zero or because of the strict interpretation of intersects (which includes features that merely touch as well as those that overlap). While there is an ST_Overlaps function, it will not return geometries where one geometry is completely contained within another, so we can’t use it here. For example, ZCTAs 19138 and 19139 each fall within one district, but there are two records for each of them: one with a 100% value and another with a 0% value.

Query results in PostgreSQL

Result of intersect operations and area calculations in pgAdmin / PostGIS

We can toss these records either by deleting them from the final result when the process is finished, or by adding another condition to our WHERE clause to filter them out:

AND ROUND((ST_Area(ST_Intersection(z.geom, c.geom)) * 0.00000038610)::numeric, 2) > 0

This lengthened the execution time to 30 seconds and dropped the number of records from 2,523 to 2,061.

Once the statement looks good, we can drop the Pennsylvania filter (c.statefp = ’42’) and generate a result for the entire country. Using pgAdmin 4 we can write the result directly out as a CSV. Or, you can preface the statement with CREATE VIEW overlap AS to save it as a query that you can call up any time. Or, you can preface the statement with CREATE TABLE overlap AS and the result of the query will be saved in a new table. This takes longer than the other two options, but gives you the ability to query and modify the resulting table; exporting that table as a CSV is quick, giving you the best of the first and third options. The final code and result are shown below.

CREATE TABLE zcta_cd_overlap AS
SELECT z.zcta5 AS zcta, c.geoid AS cdistrict, c.name AS cdname,
    ROUND((ST_Area(ST_Intersection(z.geom, c.geom)) * 0.00000038610)::numeric, 2) AS area_piece,
    ROUND((ST_Area(ST_Intersection(z.geom, c.geom)) / ST_Area(z.geom) * 100)::numeric, 1) AS pct_in
FROM zctas_aea z, cds_aea c
WHERE ST_Intersects(z.geom, c.geom) AND
    ROUND((ST_Area(ST_Intersection(z.geom, c.geom)) * 0.00000038610)::numeric, 2) > 0
ORDER BY z.zcta5, c.geoid, pct_in DESC;

Final PostGIS Result

Final Result in PostGIS / pgAdmin
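
If you go the CSV route for the finished table, one quick option (the output path here is just an example; server-side COPY needs filesystem access on the database server, so from a remote client use psql’s \copy instead):

-- Export the finished table to a CSV file with a header row.
COPY zcta_cd_overlap TO '/tmp/zcta_cd_overlap.csv' WITH (FORMAT csv, HEADER);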

Conclusion – which is best?

I’m using a 64-bit Lenovo ThinkPad laptop that has 4 Intel processors at 2.3GHz and 8 gigs of memory. I’m running Xubuntu 18.04 and am using QGIS 3.4 Madeira, PostgreSQL 10, PostGIS 2.4, and pgAdmin 4. With 444 CDs and 33k ZCTAs it took me over 1.5 hours to run the union operation in QGIS, and that’s without altering the attribute tables to delete unnecessary columns. Executing the PostGIS statement, simply writing the output to the screen with the condition to exclude areas of 0, took only 12 minutes. Writing the result to a new table took 22 minutes.

For the larger project that I mentioned at the beginning of this post, neither QGIS nor ArcGIS was able to complete the union process between 2 million polygons and 60k grid areas without crashing, even when we reduced the number of attribute columns to a bare minimum. It took PostGIS about 50 minutes to execute the overlap query and print the output to the screen or directly to a CSV, and about 3 hours to write the results to a new table.

I think the PostGIS approach is more straightforward and gives you more control over the process. There’s no need to calculate area in advance or to delete attribute columns, as you can simply choose to include or exclude the ones you want. Finding and fixing invalid geometry in PostGIS is simpler, the process is faster to execute, and you can rest assured that it can handle layers with large numbers of features. I’ve wondered if the problems with QGIS and ArcGIS might be mitigated by using something other than a shapefile, like the newer GeoPackage format, which is built on SQLite. I have no idea, but it would be worth trying if you really wanted or needed to go the desktop GIS route for large files.

Factorio forest landscape

Exploring New Worlds in Factorio

The first draft is finished and I sent my book off for review earlier this month, and I’ve been back to work full-time for two months now. It’s been a difficult transition, so I thought I’d write a more lighthearted post this month about imaginary geographic worlds (as luck would have it, the Geo NYC Meetup group is discussing fictional mapping next week).

I’ve always enjoyed top-down simulation games; I still have my original copy of SimCity from 1989, in the box with the diskettes. More recently, I started playing a top-down, world-exploration, operations management, logistical simulator game called Factorio. The premise is that you are the sole survivor of a team of scientists and engineers who have crash-landed on an unexplored world. Using the scrap metal of your ship, a few simple tools, and the abundant resources on the planet, your goal is to build a rocket to launch a satellite into space to alert the crew of a subsequent spaceship to your presence. Scattered across the planet are concentrations of resources: water, trees (for wood), stone, iron ore, copper ore, oil, coal, and uranium. With an ax and a few scavenged plates from the ship, you begin by building a stone furnace for smelting metals. You use your ax to mine some stone to build the furnace, some iron for smelting, and some coal for fuel. Once you’ve smelted some metal you can construct a drill to mine the materials and insert them into the smelter automatically.

Smelting the ore converts it into refined material: stone to bricks, iron ore to iron plates, and copper ore to copper plates. Initially you can take these materials and manually craft them to make products: iron plates become iron gear wheels, copper plates become copper wire, which in turn can be crafted to create higher order parts like electronic circuits and finished machine products. Ultimately you’ll construct assembly plants that take the necessary materials and build the products for you, and the outputs can be used as inputs for other products.

Factorio mining drills

Mining ore

Factorio smelters

Smelting ore to plates

Factorio assemblers

Assembling products
The game becomes a logistical puzzle, where you mine ores from various deposits and move them to be smelted, and then move the refined materials to different assembly plants to create higher-order products. You transport everything using conveyor belts and inserters, which grab materials from belts and insert them into the smelters, assemblers, and other structures. You construct pumps, boilers, and steam engines powered by coal or wood to generate electricity to power the entire factory, and in order to keep developing higher-order goods you combine certain materials to produce “science”: little colorful beakers of liquid that you move on belts to laboratories to keep research humming.

As the game and your research progresses you develop technology that allows you to better explore the world and access additional resources, as you’ll eventually deplete the original deposits near the factory. You can develop solar and nuclear power as cleaner electricity alternatives, drill and refine oil to create fuels and plastics, build cars to explore the landscape, and construct railroads to transport more distant materials to your base. As your factory expands you have to grapple with the logistical hurdles of moving products created at disparate ends of the plant together in order to create new products, forcing you to either plan ahead or reconfigure your layout as time passes (or build some drones to fly the materials around). The clock is always ticking as the game is played in real time (it’s not turn-based).

Factorio factory

Main Factorio screen showing portion of a factory and map layout

At some point you face a new problem: you are not alone on this planet. There are some large, scary-looking insect creatures living there that don’t like all the pollution that’s coming from your factory, and they don’t particularly like you. Once they become irritated enough they attack and chew up your factory, and you along with it. Sadly there is no negotiating with them (they’re not sentient), so some of your attention and resources must be spent on weapons. You can take a purely defensive approach, building walls and gun turrets to protect your base as well as armor and shields to protect yourself. Or you can barrel out in a tank or use artillery to destroy them as they encroach on your operations. You can also develop cleaner, less polluting technologies to irritate them less.

An additional challenge is that the game keeps changing. Even though it’s been out for several years Factorio is still in a beta phase, but given its maturity and update cycle it’s super stable. The developers are a small company in the Czech Republic that focuses primarily on this game. Factorio is available for purchase via their website and via Steam for all operating systems, and has been downloaded over a million times. The fanatical fan base appreciates the ability to mod practically every aspect of the game, and they form a community that’s crucial to the game’s development through testing and feedback. Factorio is definitely a member of a new generation of games where part of the challenge is learning how to play it. I’ve crawled through the extensive wiki, scoured Reddit for advice, and watched several YouTube series to figure out how it works.

Regardless of how many times you’ve played it, there is always something new to tinker with. Many players enjoy the engineering and mathematical side of the game. Their goal is to build the most efficient system, perfectly balance inputs and outputs, and create the best ratios for production. Others go for scale, building the largest possible factory with the most throughput. There are railroaders who enjoy building the trains, and warriors who focus on combat with the voracious bugs. Beyond building the rocket, the game has a number of challenges that players attempt to master, and it can be played solo or multiplayer for gamers who want to work together or simply explore each other’s layouts and solutions.

As a geographer, I enjoy the actual worlds themselves and the unique challenge each environment presents. While you can create blueprints and use the same design for a railroad station or solar power generator over and over again, you’re forced to change your overall factory layout based on the location of resources and configuration of the terrain. Prior to launching a new game, you specify the general size, frequency, and richness of resources, trees, water, and enemy bugs, and you can keep generating maps until you find one you like. While many of the efficiency aficionados want flat playing surfaces, I enjoy the complexity of fitting your factory in around the oceans and forests, and the challenge of exploring and shipping in materials from far flung places.

The world itself is quite beautiful. The developers provide extensive details about the development and inner workings of the game, including the processes for generating logical and realistic looking landscapes. There are lush deciduous forests in vibrant autumn colors, desert wastelands strewn with rocks, and clusters of baobab-like trees on the dry plains. Even though they’re just bits and bytes I limit what I harvest, because I hate chopping them down. Unlike the real world, mining ore is much less destructive and the material is simply scraped off the surface, leaving unblemished soil behind. A finite portion of the world is visible on the map when you begin the game, and the surrounding area is cloaked in darkness. You can reveal more of the terrain by building radar stations at your base, and can explore on foot or go further afield once you’ve constructed vehicles. The world has no end, and stretches into infinity.

Factorio forest landscape

Factorio has sparked my curiosity in unexpected ways. As I’m mining ores and moving them into smelters to produce metals, I started to wonder: what is smelting anyway? How do you actually extract metals from rocks? My exposure to chemistry was limited to my junior year of high school, where I struggled with balancing formulas and memorizing the periodic table. Fortunately I discovered some fascinating books and videos that made the subject engaging. Material scientist Mark Miodownik’s Stuff Matters: Exploring the Marvelous Materials That Shape Our Man-Made World is an accessible, informative, and often hilarious exploration of the materials we use every day. You’ll learn the basic chemistry behind paper, iron, ceramics, even chocolate! Periodic Tales: A Cultural History of the Elements, from Arsenic to Zinc by Hugh Aldersey-Williams is perfect if you want to learn some basics about chemistry and material science from a historical perspective. NOVA aired a solid three-part series a few years back called Treasures of the Earth that revealed the secrets behind gems, metals, and power sources.

I resisted the temptation to play for the year I was on sabbatical, as it’s too easy to get sucked in. A few hours here and there throughout a month, and by the time I launch that rocket into space 30 hours have gone by! Initially I feel a bit guilty sinking so much time into a game. But when you consider how much time the average person spends watching TV or looking at stupid stuff on their phone (4 hours and 2.5 hours respectively, EVERY DAY!), enjoying the occasional game that challenges your mind and sparks your imagination is a good alternative. Similar to the Minecraft phenomenon, I think it has great potential as an educational tool for learning about logistics, planning, geology and materials science, and engineering. And for the geographers out there, there are infinite worlds to explore.

Factorio desert landscape

Washington DC street

Using the ACS to Calculate Daytime Population

I’m in the home stretch for getting the last chapter of the first draft of my census book completed. The next to last chapter of the book provides an overview of a number of derivatives that you can create from census data, and one of them is the daytime population.

There are countless examples of using census data for site selection analysis and for comparing and ranking places for locating new businesses, providing new public services, and generally measuring potential activity or population in a given area. People tend to forget that census data measures people where they live. If you’re trying to measure service or business potential for residents, the census is a good source.

Counts of residents are less meaningful if you want to gauge how crowded or busy a place is during the day. The population of an area changes during the day as people leave their homes to go to work or school, or go shopping or participate in social activities. Given the sharp divisions in the US between residential, commercial, and industrial uses created by zoning, residential areas empty out during weekdays as people travel into the other two zones, and then fill up again at night when people return. Some places function as job centers, others serve as bedroom communities, and still others are a mixture of the two.

The Census Bureau provides recommendations for calculating daytime population using a few tables from the American Community Survey (ACS). These tables capture where workers live and work, which is the largest component of the daytime population.

Using these tables from the ACS:

Total resident population
B01003: Total Population
Total workers living in area and Workers who lived and worked in same area
B08007: Sex of Workers by Place of Work–State and County Level (‘Total:’ line and ‘Worked in county of residence’ line)
B08008: Sex of Workers by Place of Work–Place Level (‘Total:’ line and ‘Worked in place of residence’ line)
B08009: Sex of Workers by Place of Work–Minor Civil Division Level (‘Total:’ line and ‘Worked in MCD of residence’ line)
Total workers working in area
B08604: Total Workers for Workplace Geography

They propose two different approaches that lead to the same outcome. The simplest approach: add the total resident population to the total number of workers who work in the area, and then subtract the total resident workforce (workers who live in the area but may work inside or outside the area):

Daytime Population = Total Residents + Total Workers in Area - Total Resident Workers

For example, according to the 2017 ACS Washington DC had an estimated 693,972 residents (from table B01003), 844,345 (+/- 11,107) people who worked in the city (table B08604), and 375,380 (+/- 6,102) workers who lived in the city. We add the total residents and total workers, and subtract the total workers who live in the city. The subtraction allows us to avoid double counting the residents who work in the city (as they are already included in the total resident population) while omitting the residents who work outside the city (who are included in the total resident workers). The result:

693,972 + 844,345 - 375,380 = 1,162,937

And to get the new margin of error:

SQRT(0^2 + 11,107^2 + 6,102^2) = 12,673
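
If you need to do this for many areas at once, the same arithmetic is easy to script. A minimal sketch in SQL, assuming a table of ACS estimates with hypothetical column names (the total population estimate is treated as a control with a margin of error of zero, as in the example above):

-- Hypothetical input table: acs_daytime_inputs(geoid, total_pop,
-- workers_in_area, workers_in_area_moe, resident_workers,
-- resident_workers_moe)
SELECT geoid,
    total_pop + workers_in_area - resident_workers AS daytime_pop,
    ROUND(sqrt(power(workers_in_area_moe, 2)
        + power(resident_workers_moe, 2))::numeric) AS daytime_pop_moe
FROM acs_daytime_inputs;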

So the daytime population of DC is approx 468,965 people (68%) higher than its resident population. The district has a high number of jobs in the government, non-profit, and education sectors, but has a limited amount of expensive real estate where people can live. In contrast, I did the calculation for Philadelphia and its daytime population is only 7% higher than its resident population. Philadelphia has a much higher proportion of resident workers relative to total workers. Geographically the city is larger than DC and has more affordable real estate, and faces stiffer suburban competition for private sector jobs.

The variables in the tables mentioned above are also cross-tabulated in other tables by age, sex, race, Hispanic origin, citizenship status, language, poverty, and tenure, so it’s possible to estimate some characteristics of the daytime population. Margins of error will limit the usefulness of estimates for small population groups, and overall the 5-year period estimates are a better choice for all but the largest areas. Data for workers living in an area who lived and worked in the same area is reported for states, counties, places (incorporated cities and towns), and minor civil divisions (MCDs) for the states that have them.

Data for the total resident workforce is available for smaller geographies, but the live-and-work data is reported relative to those larger areas, i.e. we know how many people in a census tract live and work in their county or place of residence, but not how many live and work in their tract of residence. In contrast, data on the number of workers from B08604 is not available for smaller geographies, which limits the application of this method to larger areas.

Download or explore these ACS tables from your favorite source: the American Factfinder, the Census Reporter, or the Missouri Census Data Center.

Lying with Maps and Census Data

I was recently working on some examples for my book where I discuss how census geography and maps can be used to intentionally skew research findings. I suddenly remembered Mark Monmonier’s classic How To Lie with Maps. I have the 2nd edition from 1996, and as I was adding it to my bibliography I wondered if there was a revised edition.

To my surprise, a 3rd edition was just published in 2018! This is an excellent book: it’s a fun and easy read that provides excellent insight into cartography and the representation of data with maps. There are concise and understandable explanations of classification, generalization, map projections and more with lots of great examples intended for map readers and creators alike. If you’ve never read it, I’d highly recommend it.

If you have read the previous edition and are thinking about getting the new one… I think the back cover’s tagline about being “fully updated for the digital age” is a little embellished. I found another reviewer who concurs that much of the content is similar to the previous edition. The last three chapters (about thirty pages) are new. One is devoted to web mapping and there is a nice explanation of tiling and the impact of scale and paid results on Google Maps. While the subject matter is pretty timeless, some more updated examples would have been welcome.

There are many to choose from. One of the examples I’m using in my book comes from a story the Washington Post uncovered in June 2017. Jared Kushner’s real estate company was proposing a new luxury tower development in downtown Jersey City, NJ, across the Hudson River from Manhattan. They applied for a program where they could obtain low-interest federal financing if they built their development in an area where unemployment was higher than the national average. NJ state officials assisted them with creating a map of the development area, using American Community Survey (ACS) unemployment data at the census tract level to prove that the development qualified for the program.

The creation of this development area defies all logical and reasonable criteria. This affluent part of the city consists of high-rise office buildings, residential towers, and historic brownstones that have been refurbished. The census tract where the development is located is not combined with adjacent tracts to form a compact and contiguous area that functions as a unit, nor does it include surrounding tracts that have similar socio-economic characteristics. The development area does not conform to any local conventions as to what the neighborhoods in Jersey City are, whether based on architecture, land use, demographics, or physical boundaries like major roadways and green space.

Jersey City Real Estate Gerrymandering Map

Census tracts that represent the “area” around a proposed real estate development were selected to concentrate the unemployed population, so the project could qualify for low interest federal loans.

Instead, the area was drawn with the specific purpose of concentrating the city’s unemployed population in order to qualify for the financing. The tract where the development is located has low unemployment, just like the tracts around it (which are excluded). It is connected to areas of high unemployment not by a boundary, but by a single point where it touches another tract diagonally across a busy intersection. The rest of the tracts included in this area have the highest concentrations of unemployment and poverty in the city, and consist primarily of low-rise residential buildings, many of which are in poor condition. This area stretches over four miles away from the development site and cuts across several hard physical boundaries, such as an interstate highway that effectively separates neighborhoods from each other.

The differences between this development area and the actual area adjacent but excluded from the project couldn’t be more stark. Gerrymandering usually refers to the manipulation of political and voting district boundaries, but can also be used in other contexts. This is a perfect example of non-political gerrymandering, where areas are created based on limited criteria in order to satisfy a predefined outcome. These areas have no real meaning beyond this purpose, as they don’t function as real places that have shared characteristics, compact and contiguous boundaries, or a social structure that would bind them together.

The maps in the Post article highlighted the tracts that defined the proposal area and displayed their unemployment rates. In my example I illustrate the rate for all the tracts in the city, so you can clearly see the contrast between the areas that are included and excluded. What goes unmentioned here is that these ACS estimates have moderate to high margins of error that muddy the picture even further. Indeed, there are countless ways to lie with maps!

Business and Labor Force Data: The Census and the BLS

I’m still cranking away on my book, which will be published by SAGE Publications and is tentatively titled Exploring the US Census: Your Guide to America’s Data. I’m putting the finishing touches on the chapter devoted to business datasets.

Most of the chapter is dedicated to the Census Bureau’s (CB) Business Patterns and the Economic Census. In a final section I provide an overview of labor force data produced by the Bureau of Labor Statistics (BLS). At first glance these datasets appear to cover a lot of the same ground, but they vary in terms of methodology, geographic detail, number of variables, and currency / frequency of release. I’ll provide a summary of the options in this post.

The Basics

Most of these datasets provide data for business establishments, which are individual physical locations where business is conducted or where services or industrial operations are performed, and are summarized by industries, which are groups of businesses that produce similar products or provide similar services. The US federal government uses the North American Industry Classification System (NAICS), a hierarchical series of codes used to classify businesses and the labor force into divisions and subdivisions at varying levels of detail.

Since most of these datasets are generated from counts, surveys, or administrative records for business establishments they summarize business activity and the labor force based on where people work, i.e. where the businesses are. The Current Population Survey (CPS) and American Community Survey (ACS) are exceptions, as they summarize the labor force based on residency, i.e. where people live. The Census Bureau datasets tend to be more geographically detailed and present data at one point in time, while the BLS datasets tend to be more timely and are focused on providing data in time series. The BLS gives you the option to look at employment data that is seasonally adjusted; this data has been statistically “smoothed” to remove fluctuations in employment due to normal cyclical patterns in the economy related to summer and winter holidays, the start and end of school years, and general weather patterns.

Many of the datasets are subject to data suppression or non-disclosure to protect the confidentiality of businesses; if a given geography or industrial category has few establishments, or if a small number of establishments constitutes an overwhelming majority of employees or wages, data is either generalized or withheld. Most of these datasets exclude agricultural workers, government employees, and individuals who are self-employed. Data for these industries and workers is available through the USDA’s Census of Agriculture and the CB’s Census of Governments and Nonemployer Statistics.

The CB datasets are published on the Census Bureau’s website via the American Factfinder, the new data.census.gov, the FTP site and API, and via individual pages dedicated to specific programs. The BLS datasets are accessible through a variety of applications via the BLS Data Tools. For each of the datasets discussed below I link to their program page, so you can see fuller descriptions of how the data is collected and what’s included.

The Census Bureau’s Business Data

Business Patterns (BP)
Typically referred to as the County and ZIP Code Business Patterns, this Census Bureau dataset is also published for states, metropolitan areas, and Congressional Districts. Published on an annual basis from administrative records, it reports the number of employees, establishments, and wages (annual and first quarter) by NAICS, along with a summary of business establishments by employee size categories.
Economic Census
Released every five years in years ending in 2 and 7, this dataset is less timely than the BP but includes more variables: in addition to employment, establishments, and wages, data is published on production and sales for various industries, and is summarized both geographically and in subject series that cover entire industries. The Economic Census employs a mix of enumerations (100% counts) and sample surveying. It’s available for the same geographies as the BP with two exceptions: data isn’t published for Congressional Districts but is available for cities and towns.

Bureau of Labor Statistics Data

Current Employment Statistics (CES)
This is a monthly sample survey of approximately 150k businesses and government agencies that represent over 650k physical locations. It measures the number of workers, hours worked, and average hourly wages. Data is published for broad industrial categories for states and metropolitan areas.
Quarterly Census of Employment and Wages (QCEW)
An actual count of business establishments that’s conducted four times a year, it captures the same data that’s in the CES but also includes the number of establishments, total wages, and average annual pay (wages and salaries). Data is tabulated for states, metropolitan areas, and counties at detailed NAICS levels.
Occupational Employment Statistics (OES)
A semi-annual survey of 200k business establishments that measures the number of employees by occupation as opposed to industry (the specific job people do rather than the overall focus of the business). Data on the number of workers and wages is published for over 800 occupations for states and metro areas using the Standard Occupational Classification (SOC) system.

Labor Force Data by Residency

Current Population Survey (CPS)
Conducted jointly by the CB and BLS, this monthly survey of 60k households captures a broad range of demographic and socio-economic information about the population, but was specifically designed for measuring employment, unemployment, and labor force participation. Since it’s a survey of households it measures the labor force based on where people live and is able to capture people who are not working (which is something a survey of business establishments can’t achieve). Monthly data is only published for the nation, but sample microdata is available for researchers who want to create their own tabulations.
Local Area Unemployment Statistics (LAUS)
This dataset is generated using a series of statistical models to provide the employment and unemployment data published in the CPS for states, metro areas, counties, cities and towns. Over 7,000 different areas are included.
American Community Survey (ACS)
A rolling sample survey of 3.5 million addresses, this dataset is published annually as 1-year and 5-year period estimates. This is the Census Bureau’s primary program for collecting detailed socio-economic characteristics of the population on an on-going basis and includes labor force status and occupation. Data is published for all large geographies and small ones including census tracts, ZCTAs, and PUMAs. Each estimate is published with a margin of error at a 90% confidence interval. Labor force data from the ACS is best used when you’re OK with generally characterizing an area rather than getting a precise and timely measurement, or when you’re working with an array of ACS variables and want labor force data generated from the same source using the same methodology.

Wrap Up

In the book I’ll spend a good deal of time navigating the NAICS codes, explaining the impact of data suppression and how to cope with it, and covering the basics of using this data from an economic geography approach. I’ve written some exercises where we calculate location quotients for advanced industries and aggregate ZIP-Code based Business Patterns data to the ZCTA-level. This is still a draft, so we’ll have to wait and see what stays and goes.
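
Since location quotients come up here: an LQ compares an industry’s share of local employment to its share of national employment, with values above 1 indicating local concentration. A minimal sketch in SQL, using an entirely hypothetical emp(area_id, naics, employees) table:

-- Hypothetical sketch: location quotients from an assumed table
-- emp(area_id, naics, employees); values > 1 mean the industry is
-- more concentrated locally than it is nationally.
SELECT a.area_id, a.naics,
    (a.employees::numeric / area_tot.employees)
    / (nat.employees::numeric / nat_tot.employees) AS lq
FROM emp a
JOIN (SELECT area_id, SUM(employees) AS employees FROM emp GROUP BY area_id) area_tot
    ON a.area_id = area_tot.area_id
JOIN (SELECT naics, SUM(employees) AS employees FROM emp GROUP BY naics) nat
    ON a.naics = nat.naics
CROSS JOIN (SELECT SUM(employees) AS employees FROM emp) nat_tot;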

In the meantime, if you’re looking for summaries of additional data sources in any and every field I highly recommend Julia Bauder’s excellent Reference Guide to Data Sources. Even though it was published back in 2014 I find that the descriptions and links are still spot on – it primarily covers public and free US federal and international government sources.

BLS Data Portal

Bureau of Labor Statistics Data Tools

Sedona Hike

XYZ Tiles and WMS Layers in QGIS 3

I did a lot of hiking around Sedona, Arizona a few weeks ago, and wanted to map my GPS way points and tracks in QGIS over some WMS (web mapping service) base map layers. I recently switched to QGIS 3 since I need to use that in my book (by the time it comes out 2.18 will be old news), and had to spend time starting from scratch since the plugin I always used was no longer available (ahhh the pitfalls of relying on 3rd party plugins – see my last post on SQLite). I thought I’d share what I learned here.

I was using the OpenLayers plugin in QGIS 2.x as an easy resource for adding base maps to my projects. You could pull in layers from OSM, Google, Bing, and others. It turns out that plugin is no longer available for QGIS 3.x. So I searched around and found some suggestions for a different plugin called QuickMapServices, which was a great replacement. But alas, that worked in QGIS 3.0 and is not compatible (as of now) with QGIS 3.2.

So I’m back to adding WMS layers manually. There is a new feature in QGIS for adding XYZ Tiles; this is a little better than WMS because the base map renders a bit quicker. I found a tip on Stack Exchange that you can add an OSM tile layer with this URL:

http://tile.openstreetmap.org/{z}/{x}/{y}.png 

Select XYZ Tiles in the Browser, right click, New connection, give it a name, add the URL. You can modify the X Y Z coordinates where the map centers and zooms by default. Once you’ve created the connection, you can simply drag the OSM layer into the map window to render it.

Adding the OSM XYZ Tiles in QGIS

One problem always crops up: when you add other layers and adjust the zoom, sometimes the rendering of the base map looks poor, i.e. the features and labels look blurry or blocky. When you’re pulling data from a web map layer, as you zoom in it swaps out the tiles for more detailed ones appropriate for that scale. But when you’re zooming in QGIS things can get out of sync, as your map window zoom may not be enough to trigger the switch in the map tiles, or those map tiles are just not meant to be rendered at that scale. If you right click on a blank area of the toolbar, you can activate the Tile Scale panel and use the slider to adjust the window zoom in sync with the tiles, so you operate at the scales that are appropriate for the tiles. The way points and track for our hike alongside Schnebly Hill Road are shown below; the labels for the points represent our elevation in feet.

OSM Tile Layer with Tile Scale Panel

If the slider is grayed out, select the OSM layer in the layers panel, right click, and select Set CRS – Set Project CRS From Layer. Web mapping services typically use EPSG 3857 Pseudo Mercator as the coordinate reference system / map projection by default. If your other vector layers aren’t in that system, you can have the base map draw to their system or vice versa by selecting the layer, right clicking, and choosing Set CRS. But for the tile scale to work properly, EPSG 3857 must be the project CRS.

Lastly, I’ve always liked the USGS WMS layers, which are never included in the plugins that I’ve seen. The USGS provides layers for imagery, imagery with topographic features, shaded relief, and topographic maps:

https://basemap.nationalmap.gov/arcgis/rest/services

USGS Link for WMS Layer for Topographic Maps

You can click on one of the services, and at the top (in small print) are URLs for the services in WMS and WMTS. The latter is a Web Map Tile Service, which is a bit faster than WMS. Click on the WMTS link and copy the URL from the address bar in the browser. Then in QGIS select WMS / WMTS layers, right click, add a new connection, give it a name, and paste the URL. This is the URL for the topographic map:

https://basemap.nationalmap.gov/arcgis/rest/services/USGSTopo/MapServer/WMTS/1.0.0/WMTSCapabilities.xml

Once again, you can drag the layer into the map window to render it, and you can use the Tile Scale panel to adjust the zoom. Here’s our hike with the topo map as the base:

USGS Topographic Map in QGIS