
Noise Complaint Kernels and Contours

Kernel Density and Contours in QGIS: Noisy NYC

In spatial analysis, kernel density estimation (colloquially referred to as a type of “hot spot analysis”) is used to explore the intensity or clustering of point-based events. Crimes, parking tickets, traffic accidents, bird sightings, forest fires, incidents of infectious disease: anything that you can plot as a point at a specific place and time can be studied using KDE. Instead of looking at these features as a distribution of discrete points, you generate a raster that represents a continuous surface of values. You can either measure the density of the incidents themselves, or the concentration of a specific attribute that is tied to those incidents (like the dollar amount of parking tickets or the number of injuries in traffic accidents).

In this post I’ll demonstrate how to do a KDE analysis in QGIS, but you can easily implement KDE in other software like ArcGIS Pro or R. Understanding the inputs you have to provide to produce a meaningful result is more important than the specific tool. This YouTube video produced by the SEER Lab at the University of Florida helped me understand what these inputs are. They used the SAGA kernel tool within QGIS, but I’ll discuss the regular QGIS tool and will cover some basic data preparation steps when working with coordinate data. The video illustrates a KDE based on a weight, where there were single points that had a count-based attribute they wanted to interpolate (number of flies in a trap). In this post I’ll cover simple density based on the number of incidents (individual noise complaints), and will conclude by demonstrating how to generate contour lines from the KDE raster.

For a summary of how KDE works, take a look at the entry for “Kernel” in the Encyclopedia of Geographic Information Science (2007) p 247-248. For a fuller treatment, I always recommend Christopher Lloyd’s Spatial Data Analysis: An Introduction for GIS Users (Oxford University Press, 2010) p 93-97. There’s also an explanation in the ArcGIS Pro documentation.

Data Preparation

I visited the NYC Open Data page and pulled up the entry for 311 Service Requests. When previewing the data I used the filter option to narrow the records down to a small subset; I chose complaints that were created between June 1st and 30th 2022, where the complaint type began with “Noise”, which gave me about 75,000 records (it’s a noisy town). Then I hit the Export button and chose one of the CSV formats. CSV is a common export option from open data portals; as long as you have columns that contain latitude and longitude coordinates, you will be able to plot the records. The NYC portal allows you to filter up front; other data portals like the ones in Philly and DC package data into sets of CSV files for each year, so if you wanted to apply filters you’d use the GIS or stats package to do that post-download. If shapefiles or GeoJSON are provided, that will save you the step of having to plot coordinates from a CSV.

NYC Open Data 311 Service Requests

With the CSV, I launched QGIS, went to the Data Source Manager, and selected Delimited Text. Browsed for the file I downloaded, gave the layer a common sense name, and under geometry specified Point coordinates, and confirmed that the X field was my longitude column and the Y field was latitude. Ran the tool, and the points were plotted in the basic WGS 84 longitude / latitude system in degrees, which is the system the coordinates in the data file were in (generally a safe bet for modern coordinate data, but not always the case).

QGIS Add Delimited Text and Plot Coordinates

The next step was to save these plotted points in a file format that stores geometry and allows us to do spatial analysis. In doing that step, I recommend taking two additional ones. First, verify that all of the plotted data have coordinates – if there are any records where lat and long are missing, those records will be carried along into the spatial file but there will be no geometry for them, which will cause problems. I used the Select Features by Expression tool, and in the expression window typed “Latitude” is not null to select all the features that have coordinates.

QGIS Select by Expression
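For anyone who prefers scripting these steps, here’s a minimal sketch of the same load-and-select workflow for the QGIS Python console; the file path and the Longitude / Latitude field names are placeholders for whatever your 311 extract actually contains.

```python
# Plot a CSV of coordinates as a delimited text layer, then select
# only the records that actually have a latitude value.
from qgis.core import QgsVectorLayer, QgsProject

# placeholder path and column names - adjust to match your download
uri = ("file:///C:/data/noise_complaints_jun2022.csv"
       "?delimiter=,&xField=Longitude&yField=Latitude&crs=EPSG:4326")
layer = QgsVectorLayer(uri, "noise_complaints", "delimitedtext")
QgsProject.instance().addMapLayer(layer)

# same expression as the Select Features by Expression step above
layer.selectByExpression('"Latitude" IS NOT NULL')
print(layer.selectedFeatureCount(), "features have coordinates")
```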

Second, transform the coordinate reference system (CRS) of the layer to a projected system that uses meters or feet. When we run the kernel tool, it will ask us to specify a radius for defining the density, as well as the size of the pixels for the output raster. Using degrees doesn’t make sense, as it’s hard for us to conceptualize distances in degrees, and they are not a constant unit of measurement. If you’ve googled around and read Stack Exchange posts or watched videos where a person says “You just have to experiment and adjust these numbers until your map looks Ok”, they were working with units in fractions of degrees. This is not smart. Transform the system of your layers!

I selected the layer, right clicked, Export, Save Selected Features As. The default output is a geopackage, which is fine. Otherwise you could select ESRI shapefile; both are vector formats that store geometry. For the file name I browse … and save the file in a specific folder. Beside CRS I hit the globe button, and in the CRS Selector window typed NAD83 Long Island in the filter at the top, and at the bottom I selected the NAD83 / New York Long Island (ftUS) EPSG 2263 option in the list. Every state in the US has one or more state plane zones that you can select for making optimal maps for that area, in feet or meters. Throughout the world, you could choose an appropriate UTM zone that covers your area in meters. For countries or continents, look for an equidistant projection (meters again).

QGIS Export – Save As
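The export-and-transform step can also be run through the Processing framework; this is a rough sketch rather than a recipe, and the output path is a placeholder. EPSG:2263 is the NAD83 / New York Long Island (ftUS) system chosen above.

```python
# Reproject the selected noise points to EPSG 2263 (feet) and write
# them to a GeoPackage. The output path is a placeholder.
import processing
from qgis.core import QgsProcessingFeatureSourceDefinition
from qgis.utils import iface

points = iface.activeLayer()   # the plotted noise layer, with its selection
result = processing.run("native:reprojectlayer", {
    "INPUT": QgsProcessingFeatureSourceDefinition(points.id(), True),  # selected features only
    "TARGET_CRS": "EPSG:2263",
    "OUTPUT": "C:/data/noise_complaints_2263.gpkg",
})
print(result["OUTPUT"])
```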

Clicked a series of Oks to create the new file. To reset my map window to match the CRS of the new file, I selected that file, right clicked, Layer CRS, Set Project CRS from Layer. Removed my original CSV to avoid confusion, and saved my project.

QGIS Noise Complaints in Projected CRS

Kernel Density Estimation

Now our data is ready. Under the Processing menu I opened the toolbox and searched for kernel to find Heatmap (Kernel Density Estimation) under the Interpolation tools. The tool asks for an input point layer, and then a radius. The radius is used to define an area for calculating a local density estimate around each point. We can use a formula to determine an ideal radius; the hopt method seems to be commonly employed for this purpose.

To use the hopt formula, we need to know the standard distance for our layer, which measures the degree to which features are dispersed around the spatial mean or center of the distribution. A nice 3rd party plugin was created for calculating this. I went to the plugins menu, searched for the Standard Distance plugin, and added it. Searched for it in the Processing toolbox and launched it. I provided my point layer for input, and specified an output file. The other fields are optional (if we were measuring an attribute of the points instead of the density of the points, we could specify the attribute as a weight column). The output layer consists of a circle whose center is the mean center of the distribution and whose radius represents the standard distance. The attribute table contains one record, with the standard distance attribute of 36,046.18 feet (if no feature was created, the likely problem is you have records in the point file that don’t have geometry – delete them and try again).

Output from the Standard Distance Plugin

Knowing this, I used the hopt formula:

hopt = ((2 / (3N))^0.25) * SD

Where N is the number of features and SD is the standard distance. I used Excel to plug in these values and do the calculation.

((2 / (3 * 74526))^0.25) * 36046.18 = 1971.33
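The same arithmetic in Python, if you’d rather not use Excel (the N and SD values are the ones from above):

```python
# hopt search radius: ((2 / (3N)) ^ 0.25) * SD
n = 74526        # number of noise complaint points with geometry
sd = 36046.18    # standard distance in feet, from the plugin output
hopt = ((2 / (3 * n)) ** 0.25) * sd
print(round(hopt, 2))   # ~1971.33 feet
```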

Finally, I launched the heatmap kernel tool, specified my noise points as input, and the radius as 1,971 feet. The output raster size does take some experimentation. The larger the pixel size, the coarser or more general the resolution will be. You want to choose something that makes sense based on the size of the area, the number of points, and / or some other contextual information. Just like the radius, the units are based on the map units of your layer. If I type in 100 feet for Pixel X, I see I’ll have a raster with 1,545 rows and 1,565 columns. Change it to 200 feet, and I get 773 by 783. I’ll go with 200 feet (the distance between a “standard” numbered street block in midtown Manhattan). I kept the defaults for the other options.

QGIS Heatmap Kernel Density Estimation Window
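For reference, here’s what the equivalent Processing call looks like from the Python console. The algorithm id and parameter names below match what I see in recent QGIS 3.x releases, but check your Processing history panel for the exact form your version expects; the paths are placeholders.

```python
# Heatmap (Kernel Density Estimation): 1,971 ft search radius, 200 ft pixels.
import processing

processing.run("qgis:heatmapkerneldensityestimation", {
    "INPUT": "C:/data/noise_complaints_2263.gpkg",
    "RADIUS": 1971,       # search radius in layer units (feet)
    "PIXEL_SIZE": 200,    # output cell size in feet
    "KERNEL": 0,          # 0 = Quartic (the default)
    "OUTPUT_VALUE": 0,    # 0 = raw density values
    "OUTPUT": "C:/data/noise_kde.tif",
})
```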

The resulting raster was initially displayed in black and white. I opened the properties and symbology menu and changed the render type from Singleband gray to Singleband pseudocolor, and kept the default yellow to red scheme. Voila!

Kernel Density Estimate of NYC Noise Complaints June 2022

In June 2022 there were high clusters of noise complaints in north central Brooklyn, northern Manhattan, and the southwest portion of the Bronx. There’s a giant red hot spot in the north central Bronx that looks like the storm on planet Jupiter. What on earth is going on there? I flipped back to the noise point layer and selected points in that area, and discovered a single address where over 2,700 noise complaints about a loud party were filed on June 18 and 19! There’s also an address on the adjacent block that registered over 900 complaints. And yet the records do not appear to be duplicates, as they have different time stamps and closing dates. A mistake in coding this address, multiple times? A vengeful person spamming the 311 system? Or just one helluva loud party? It’s hard to say, but beware of garbage in, garbage out. Beyond this demo, I would spend more time investigating, would try omitting these complaints as outliers and run the heatmap tool again, and compare this output to different months. It’s also worth experimenting with the color classification scheme, and some different pixel sizes.

Kernel Results Zoomed In

Contour Lines

Another interesting way to visualize this data would be to generate contour lines based on the kernel output. I did a search for contour in the processing toolbox, and in the contour tool I provided the kernel noise raster as the input. For the interval between contour lines I tried 20 (the raster’s values represent complaint density, not elevation in feet), and changed the attribute name to reflect what the contour represents: COMPLAINT instead of ELEV. Generated the new file, overlaid it on top of the kernel, and now you can see how it represents the “elevation” of complaints.

Noise Complaint Kernel Density with Contour Lines
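And the comparable Processing call for the contour step, which wraps GDAL’s contour utility; the interval and field name mirror the choices above, and the paths are placeholders.

```python
# Generate contour lines from the KDE raster at an interval of 20,
# writing the contour value to a COMPLAINT field instead of ELEV.
import processing

processing.run("gdal:contour", {
    "INPUT": "C:/data/noise_kde.tif",
    "BAND": 1,
    "INTERVAL": 20,
    "FIELD_NAME": "COMPLAINT",
    "OUTPUT": "C:/data/noise_contours.gpkg",
})
```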

Switch the kernel off, symbolize the contours and add some labels, and throw the OpenStreetMap underneath, and now you can explore New York’s hills and valleys of noise. Or more precisely, the hills and valleys of noise complainers! In looking at these contours, it’s important to remember that they’re generated from the kernel raster’s grid cells and not from the original point layer. The raster is a generalization of the point layer, so it’s possible that if you look within the center of some of the denser circles you may not find, say, 340 or 420 actual point complaints. To generate a more precise set of contours, you would need to decrease the pixel size in the kernel tool (from say 200 feet to 100).

Noise Complaint Contours in Lower Manhattan, Northwest Brooklyn, and Long Island City

It’s interesting what you can create with just one set of points as input. Happy mapping!

UN ICSC Retail Price Index Map

UN Retail Price Index Time Series

We recently launched our fledgling geodata portal on GitHub for the open datasets we’ll create in our new lab. In the spring we carved out a space on the 11th floor of the Sciences Library at Brown which we’ve christened GeoData@SciLi, a GIS and data consultation and work space. We’ll be doing renovations on both the webspace and workspace over the summer.

Our inaugural dataset was created by Ethan McIntosh, a senior (now graduate) who began working with me this spring. The dataset is the United Nations International Civil Service Commission’s (UN ICSC) Retail Price Indices with Details (RPID). The index measures the cost of living based on several categories of goods and services in duty stations around the world. It’s used to adjust the salaries of the UN’s international staff relative to UN headquarters in New York City (index value of 100 = cost of living in New York). The data is updated six times a year and published in an Excel spreadsheet that contains a macro for looking up the value of each duty station via a dropdown menu. The UN ICSC makes the data public by request; you register and are granted access to download the data in PDF and Excel format, in files packaged one month / year at a time.

We were working with a PhD student in economics who wanted to construct a time series of this data. Ethan wrote a Python script to aggregate all of the files from 2004 to present into a single CSV; the actual values for each country / duty station were stored in hidden cells that the macro pulled from, and he was able to extract the values directly from those cells. He parsed the data into logical divisions, and added the standard 3-letter ISO 3166 country code to each duty station so that each record now has a unique place identifier. His script generates three outputs: a basic CSV of the data in separate month / year files, a “long” (aka flat) time series file where each record represents a specific duty station and retail index category or weight for a given month and year, and a “wide” time series file where the category / weight has been pivoted to a column, so each record represents all values for a duty station for a given month / year. He’s written the program to process and incorporate additional files as they’re published.
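To illustrate the difference between the long and wide layouts, here’s a rough pandas sketch of the pivot; the file and column names are hypothetical stand-ins, not the actual names used in the published outputs.

```python
# Pivot a "long" RPID file (one row per duty station, category, and
# month / year) into a "wide" layout (one row per duty station and
# month / year, one column per category). All names are hypothetical.
import pandas as pd

long_df = pd.read_csv("rpid_long.csv")
wide_df = long_df.pivot_table(
    index=["iso_code", "duty_station", "year", "month"],
    columns="category",
    values="index_value",
).reset_index()
wide_df.to_csv("rpid_wide.csv", index=False)
```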

While the primary intention was to study this data as a time series in a statistical analysis, it can also be used for geospatial analysis and mapping. Using the wide file, I created the map in the header of this post, which depicts the total retail index for February 2022 for each country, where the value represents the duty station within the country (usually the capital city). I grabbed some boundaries from Natural Earth and joined the data to it using the ISO code. I classified the data using natural breaks, but manually adjusted the top level category to include all countries with a value greater than or equal to the base value of 100.
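A sketch of that join-and-classify workflow in GeoPandas, for anyone scripting rather than clicking: ISO_A3 is a real Natural Earth field, the RPID column names are placeholders, and the natural breaks scheme requires the mapclassify package.

```python
# Join a wide RPID table to Natural Earth country boundaries by ISO
# code and map the total index with natural breaks classification.
import geopandas as gpd
import pandas as pd

countries = gpd.read_file("ne_110m_admin_0_countries.shp")
rpid = pd.read_csv("rpid_wide.csv")   # placeholder file and column names
feb22 = rpid[(rpid["year"] == 2022) & (rpid["month"] == 2)]

joined = countries.merge(feb22, left_on="ISO_A3", right_on="iso_code")
joined.plot(column="total_index", scheme="NaturalBreaks", k=5, legend=True)
```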

There were only five duty stations that were more expensive than New York, with values between 102 and 124: Tokyo, Ashkhabad (Turkmenistan), Singapore, Beirut, and Hong Kong. Beijing and Geneva were equivalent in price at 100. The least expensive stations with values between 52 and 69 were: Caracas (Venezuela), Tripoli, Damascus, Ankara (Turkey), Bucharest (Romania), Mbabane (Eswatini – formerly Swaziland), and Sofia (Bulgaria). There appears to be regional clustering of like values, although I didn’t run any tests. The station in the US that’s measured relative to NYC is Washington DC (index value of 89).

The final datasets and code used to generate them are available on GitHub, and we’ll update it at least once, if not a couple times, a year. We are not providing the original month / year macro spreadsheets; if you want those you should register with the UN ICSC and access them there. If you’re using our data files, you should still register with them, as they would like to be aware of how their data is being used.

We will post additional projects, datasets, and code in individual repos as we create them, linked to from our main page. I’m working on creating a basic metadata profile for our lab, so we’ll provide structured metadata for each of our datasets in the near future.

Hurricanes 2021

GIS Data for US Coastal Storms and Floods

Over the course of this academic year I’ve helped many students find GIS data related to coastal storms and flooding in the US. There’s a ton of data available, particularly from NOAA, but there are so many projects and initiatives that it can be tough to find what you’re looking for. So I’ll share a few key resources here.

NOAA’s DigitalCoast is a good place to start; it’s a catalog of federal, state, and US territory projects and websites that provide both spatial and non-spatial datasets related to coastal storms and flooding. You can filter by place and data type; there are even a few global sources. Most of the projects I mention below are cataloged there.

Given the size of many of these datasets, the ArcGIS File Geodatabase is often used for packaging and distribution. Once you’ve downloaded and unzipped one, it looks like a folder with lots of subfolders and files. If you’re an ArcGIS user, use the Catalog pane to browse your file system and add a connection to the database / folder to access its contents. If you’re a QGIS user, use the Data Source Manager, and on the Vector tab change the Source Type from File to Directory. In the format (Type) dropdown you can choose OpenFileGDB, then browse and select the database, which appears as a folder. Once you hit the Add button, you’ll be prompted to choose the features in the DB that you wish to add to the project.

Adding a File Geodatabase in QGIS
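If you want to poke around inside a file geodatabase outside of ArcGIS or QGIS, GDAL’s OpenFileGDB driver (which GeoPandas uses under the hood) can read it directly; a small sketch, with a placeholder path and layer name.

```python
# List the feature classes inside a file geodatabase, then read one.
import fiona
import geopandas as gpd

gdb = "NFHL_download.gdb"            # placeholder path to the unzipped .gdb folder
print(fiona.listlayers(gdb))         # names of the feature classes inside
flood = gpd.read_file(gdb, layer="flood_hazard_areas")  # placeholder layer name
print(flood.head())
```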

FEMA Flood Hazards and Disasters

The FEMA flood maps are usually the first thing that comes to mind when folks set out to find data on flooding, but good luck finding their GIS data. I’ve searched through their main program site for the National Flood Hazard Layer and followed every link, but can’t for the life of me find the connection to the page that has actual GIS data; there are map viewer tools, scanned paper maps, web mapping services, and everything else under the sun.

If you want FEMA flood data in a GIS format: GO HERE! This is the record in data.gov for the National Flood Hazard Layer. The links at the bottom include this one: Download Seamless Nationwide NFHL GIS data. The data is packaged in an ArcGIS File Geodatabase, with one polygon feature class for flood zones. They’re categorized into 100 and 500 year zones, open water bodies, areas outside of flood zones, and areas outside flood zones protected by levees. The pic below illustrates 100 and 500 year zones overlaid on the OpenTopoMap.

FEMA Flood Hazard Layer, 100 year zones in dark blue, 500 year in light blue

FEMA also has a GIS data feed for current and historical emergencies and disasters, available in a variety of formats, both spatial and non-spatial. These are county-level layers that indicate where disaster areas were declared and what kind of funding or assistance is / was available.

NOAA Sea Level Rise

The FEMA maps assess both past events and current conditions to model the likelihood of flooding in a 100 or 500 year period for a major storm event. A different way of looking at flooding is to consider sea level rise due to climate change, where the impact is measured in different increments. Instead of the impact of a one-shot event, this illustrates potential long term change. NOAA’s Sea Level Rise (SLR) viewer allows you to easily visualize the impact of sea level rise in 1 foot increments, between 1 and 10 feet. You can download the data by US state or territory for coastal areas. There are separate downloads for sea level rise, rise depth, the confidence intervals for the models, as well as DEMs and flood frequency. The sea level rise data is packaged in an ArcGIS file geodatabase, with two sets of files (a low estimate and high estimate) in one foot increments. An example of 6 feet in sea level rise is shown below.

NOAA Sea Level Rise. Areas in pink illustrate sea level 6 feet higher than present

NOAA National Hurricane Center

Beyond showing the general impact of flooding or sea level rise, you can also look at the track of individual hurricanes and tropical storms. The National Hurricane Center’s GIS data page provides historical forecasts – the projected path and cone of storms, wind speeds, storm surges, etc. You choose your year, then a storm, and then a particular day. You can use this data to see how the forecasts evolved as the storm moved. During hurricane season, you can also track new storms and their forecasts day by day.

If you want to see what actually happened (as opposed to a forecast), you can dig through the data page and browse the different options. There’s the Tropical Cyclone Report (TCR) which provides “information on each tropical cyclone, including synoptic history, meteorological statistics, casualties and damages, and the post-analysis best track (six-hourly positions and intensities). Tropical cyclones include depressions, storms and hurricanes.” The default page shows you the Atlantic, but you can swap to Eastern or Central Pacific using the link at the top. Storms are listed alphabetically (and thus by date) and your format options are shapefile or KML. There’s a map at the bottom that depicts and labels all the storms for that season. You actually get four shapefiles in a download; a point file that contains a number of measurements, a line file for the storm track, a polygon file for the radius of the storm, and another polygon with the wind swath. The layers for 2021’s Tropical Storm Henri are illustrated below.

Layers from NOAA’s NHC Tropical Cyclone Report, Tropical Storm Henri 2021

GIS data for the storms begins in 2010 with KMZ files (which you’ll need to convert in ArcGIS or QGIS to make them useful beyond display purposes), and shapefiles appear in 2015. Further back in time are just PDF reports and map scans.

If you really want to go back in time and get all the tracks at once, there’s the HURDAT2 database; one for the Atlantic (1851 to present) and another for the Pacific (1949 to present). It’s a CSV file that contains coordinates for the track of every storm, which you can process to create a geospatial file using a points to line tool. Or – you can grab a version where that’s already been created! The International Best Track Archive for Climate Stewardship (IBTrACS) keeps a running CSV and shapefile of all global storms. Scroll down and choose shapefile (CSV is another option). The download page is just a list of files – you can choose points or lines, storms by ocean (East Pacific, North Atlantic, North Indian, South Atlantic, South Indian, South Pacific, West Pacific), or grab everything in lists that are: active, everything (ALL), last 3 years, or since 1980. Below is an example of all storms in the North Atlantic – there are quite a lot! You get storm speed and direction, wind speed and direction, coordinates, and identifiers associated with the storm as points and lines. A subset of this data for the 2021 season is displayed in the feature image at the top of this post.

Historical hurricane / storm tracks from 1851 to 2021 in the North Atlantic from IBTrACS
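If you start from the raw HURDAT2 or IBTrACS CSV rather than the pre-built shapefiles, the points-to-line step looks roughly like this in GeoPandas; the column names (storm id, observation time, lat, lon) are assumptions you’d adapt to the actual file.

```python
# Build one track line per storm from a CSV of storm positions.
# Column names are assumptions - match them to your HURDAT2 / IBTrACS file.
import pandas as pd
import geopandas as gpd
from shapely.geometry import LineString

df = pd.read_csv("storm_positions.csv")
df = df.sort_values(["storm_id", "obs_time"])

tracks = (
    df.groupby("storm_id")
      .apply(lambda g: LineString(list(zip(g["lon"], g["lat"]))) if len(g) > 1 else None)
      .dropna()
)
gdf = gpd.GeoDataFrame({"storm_id": tracks.index}, geometry=list(tracks.values), crs="EPSG:4326")
gdf.to_file("storm_tracks.shp")
```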

How About the Weather?

There are many places you can go for this and the best source depends on the use case. More often than not, I end up using the Local Climatological Data (LCD). Choose a geographic type, then a specific area, and you’ll see all the weather stations in that area. Add them to the cart, and then view the cart once you have all the stations you want. On the next screen choose an output format (CSV or TXT fixed width) and a date range. You submit an order and wait a bit for it to be compiled, and are notified by email when it’s ready for download. Mixed in this CSV are records that are monthly, daily, and hourly, so after downloading you’ll want to extract just the period you’re interested in. Data includes temperature, precipitation, dew point, wind speed and direction, humidity, barometric pressure, and cloud cover.

Map Tool search interface for NOAA Local Climatological Data

Some processing is required to make these files GIS ready. Each record represents an observation at a station at a given point in time, so if you plot them “as is” you’re presumably making an illustrated time series of some sort, as you’ll have tons of observations stacked on a few spots (where the stations are). If this isn’t desirable, then you’ll filter records to create extracts for just a given point in time, maybe separate features for each time period. For monthly summaries you can pivot time to columns, to create a column for each month and indicator. This would be impractical for daily or hourly summaries, unless you’re focusing on a single month for the former or day / week for the latter (otherwise you’ll have a bazillion columns).

Annoyingly, the CSV option doesn’t include any of the station information in the download (like the standard WBAN ID, name, longitude, latitude, and elevation) except for one unique identifier. I know that this information was all included in the past, and am not sure why it was dropped. The TXT version includes the station info, but fixed-width files are a pain to work with. If you are working with a small number of stations, you can pull the station info individually by previewing the station on the download screen (click on the station title or little eye symbol). The five digit WBAN number is included as the last 5 digits of the identifier in the CSV, so you can identify and relate each one. If you don’t want to mess with copying and pasting, you can generate a second extract for all the stations for just a single day and download that in the TXT format, and then parse just the station columns and associate them with your main table.
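A rough pandas sketch of those two chores: splitting the mixed CSV into monthly, daily, and hourly extracts, and deriving the WBAN from the station identifier. The REPORT_TYPE codes and column names are what I recall seeing in recent LCD downloads, so verify them against your own file.

```python
# Separate an LCD download (hourly, daily, and monthly records mixed in
# one CSV) and pull the 5-digit WBAN out of the station identifier.
# Column names and report type codes are assumptions - check your file.
import pandas as pd

lcd = pd.read_csv("lcd_download.csv", low_memory=False)
rtype = lcd["REPORT_TYPE"].astype(str).str.strip()

daily = lcd[rtype == "SOD"]               # summary of day records
monthly = lcd[rtype == "SOM"]             # summary of month records
hourly = lcd[rtype.str.startswith("FM")]  # hourly observations

# last five digits of the station id = WBAN, for joining to a station table
lcd["WBAN"] = lcd["STATION"].astype(str).str[-5:]
```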

There are multiple ways that you can create extracts for this data beyond the example I just provided, available from the main data tools page. For a more refined search you can select the summary period (yearly, monthly, daily, hourly) and targeted variables in advance. There are also FTP options for bulk downloads.

One thing that surprises folks who are new to working with this data is that there aren’t many weather stations. For the LCD, my home state of Delaware only has three, one in each county. The entire City of New York only has three as well: one at each of the two airports and one in Central Park. If you’re not interested in points and want areas, then you would need to gather a significant number of stations and do interpolation. Or – use data that’s already modeled. I mentioned PRISM at Oregon State in a previous post, as a nice source for national US rasters of temperature and precipitation that you can generate for dailies, monthlies, and normals.

QGIS Example

QGIS 3.16 Tutorial Workbook

I just released a new edition of my introductory QGIS manual for QGIS 3.16 Hannover (the current long term release), and as always I’m providing it under Creative Commons for classroom use and self-directed learning. I’ve also updated my QGIS FAQs handout, which is useful for new folks as a quick reference. This material will eventually move to a Brown University website, but when that happens I’ll still hold on to my page and will link to the new spot. I’m also leaving the previous version of the tutorial written for QGIS 3.10 A Coruna up alongside it, but will pull that down when the fall semester begins.

The new edition has a new title. When I first wrote Introduction to GIS Using Open Source Software, free and open source (FOSS) GIS was a novelty in higher ed. QGIS was a lot simpler, and I had to pull in several different tools to accomplish basic tasks like CRS transformations and calculating natural breaks. Ten years later, many university libraries and labs with GIS services either reference or support QGIS, and the package is infinitely more robust. So a name change to simply Introduction to GIS with QGIS seemed overdue.

My move from Baruch CUNY to Brown prompted me to make several revisions in this version. The biggest change was swapping the NYC-based business site selection example with a Rhode Island-based public policy one in chapters 2 and 3. The goal of the new hypothetical example is to identify public libraries in RI that meet certain criteria that would qualify them to receive funding for after school programs for K-12 public school students (replacing the example of finding an optimal location for a new coffee shop in NYC). In rethinking the examples I endeavored to introduce the same core concepts: attribute table joins, plotting coordinates, and geoprocessing. In this version I do a better job of illustrating and differentiating between creating subsets of features by: selecting by attributes and location, filtering (a new addition), and deleting features. I also managed to add spatial joins and calculated fields to the mix.

Changes to chapter 4 (coordinate reference systems and thematic mapping) were modest; I swapped out the 2016 voter participation data with 2020 data. I slimmed down Chapter 5 on data sources and tutorials, but added an appendix that lists web mapping services that you can add as base maps. Some material was shuffled between chapters, and all in all I cut seven pages from the document to slim it down a bit.

As always, there were modifications to be made due to changes between software versions, and this time two were significant. First, QGIS no longer supports 32 bit operating systems for Windows; it’s 64 bit or nothing, but that seems to be fairly common these days. Second, the Windows installer file is much bigger (and thus slower to download), but it helps ensure that all dependencies are there. Otherwise, the differences between 3.16 and 3.10 are not that great, at least for the basic material I cover. In the past there was occasionally a lack of consistency regarding basic features and terminology that you’d think would be well settled, but thankfully things are pretty stable this time around.

If you have any feedback or spot errors feel free to let me know. I imagine I’ll be treading this ground again after the next long term release takes 3.16’s place in Feb / Mar 2022. For the sake of stability I always stick with the long term release and forgo the latest ones; if you’re going to use this tutorial I’d recommend downloading the LTR version and not the latest one.

Philadelphia Redlining Map

Redlining Maps for GIS

I received several questions during the spring semester about redlining maps; where to find them, and how many were made. Known officially as Residential Security Maps, they were created by the Home Owners Loan Corporation in the 1930s to grade the level of security or risk for making home loans in residential portions of urban areas throughout the US. This New Deal program was intended to help people refinance mortgages and prevent foreclosures, while increasing buying opportunities to expand home ownership.

Areas were evaluated by lenders, developers, and appraisers and graded from A to D to indicate their desirability or risk level. Grade A was best (green), B still desirable (blue), C definitely declining (yellow), and D hazardous (red). The yellow and red areas were primarily populated by minorities, immigrants, and low income groups, and current research suggests that this program had a long reaching negative impact by enforcing and cementing segregation, disinvestment, and poverty in these areas.

The definitive digital source for these maps is the Mapping Inequality : Redlining in New Deal America project created at the University of Richmond’s Digital Scholarship Lab. They provide a solid history and summary of these maps and a good bibliography. The main portal is an interactive map of the US that allows you to zoom in and preview maps in different cities. You can click on individually zoned areas and get the original assessor or evaluator’s notes (when available). If you switch to the Downloads page you get a list of maps sorted alphabetically by state and city that you can download as: a jpeg of the original scanned map, a georeferenced image that can be added to GIS software as a raster, and a GIS vector polygon file (shapefile or geojson). In many cases there is also a scanned copy of the evaluators description and notes. You also have the option for downloading a unified vector file for the entire US as a shapefile or geojson. All of the data is provided under a Creative Commons Attribution Sharealike License.

Redlining Map of Providence, RI with graded areas, from the Mapping Inequality Project
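The unified GeoJSON loads straight into GIS software or GeoPandas; here’s a small sketch for pulling out one city’s graded areas. The city and holc_grade field names are my assumption about the file’s schema, so inspect the attribute table of your download to confirm them.

```python
# Read the Mapping Inequality national GeoJSON and subset one city.
# Field names (city, holc_grade) are assumptions - verify after loading.
import geopandas as gpd

holc = gpd.read_file("mapping_inequality.geojson")   # placeholder file name
providence = holc[holc["city"] == "Providence"]
print(providence["holc_grade"].value_counts())       # counts of A-D graded areas
```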

There are a few other sources to choose from, but none of them are as complete. I originally turned to the National Archives, which seemed the likely holder of the original paper maps, but only a fraction have been digitized. The PolicyMap database has most (but not all) of the maps available as a feature you can overlay in their platform. If you’re doing a basic web search this Slate article is among the first resources you’ll encounter, but most of the links are broken (which says something about the ephemeral nature of these kinds of digital projects).

How many maps were made? Amy Hillier’s work was among the earlier studies that examined these maps, and her case study of Philadelphia includes a detailed summary of the history of the HOLC program with references to primary source material. According to her research, 239 of these maps were made, and she provides a list of the cities in the appendix. I was trying to discover how many maps were available in Rhode Island and found this list wasn’t complete; it only included Providence, while the Mapping Inequality project has maps for Providence, Pawtucket & Central Falls, and Woonsocket. I counted 202 maps based on unique names on Mapping Inequality, though several individual maps include multiple cities.

She mentions that a population of 40,000 people was used as a cut-off for deciding which places to map, but noted that there were exceptions; Washington DC was omitted entirely, while there are several maps for urban counties in New Jersey as opposed to cities. In some cases, cities below the 40k threshold that were located beside larger ones were included. I checked the 1930 census against the three cities in Rhode Island that had maps, and indeed they were the only RI cities at that time that had more than 40k people (Central Falls had less than 40k but was included with Pawtucket as they’re adjacent). So this seemed to provide reasonable assurance that these were the only ones in existence for RI.

Finding the population data for the cities was another surprise. I had assumed this data was available in the NHGIS, but it wasn’t. The NHGIS includes data for places (Census Places) back to the 1970 census, which was the beginning of the period where a formal, bounded census place geography existed. Prior to this time, the Census Bureau published population count data for cities using other means, and the NHGIS is still working to include this information. It does exist (as you can find it in Wikipedia articles for most major cities) but is buried in old PDF reports on the Census Bureau’s website.

If you’re interested in learning more about the redlining maps beyond the documentation provided by Mapping Inequality, these articles provide detailed overviews of the HOLC and the residential security maps program, as well as their implications to the present day. You’ll need to access them through a library database:

Hillier, A.E. (2005). “Residential Security Maps and Neighborhood Appraisals: The Home Owners’ Loan Corporation and the Case of Philadelphia.” Social Science History, 29(2): 207-233.

Greer, J. (2012). “The Home Owners’ Loan Corporation and the Development of the Residential Security Maps.” Journal of Urban History, 39(2): 275-296.

OSM Web Feature Service

OpenStreetMap Data with ArcGIS Pro and QGIS

A couple years ago I wrote a post that demonstrated how to use the QuickOSM plugin for QGIS to easily extract features from the OpenStreetMap (OSM). The OSM is a great source for free and open GIS data, especially for types of features that are not captured in government sources, and for parts of the world that don’t possess a free or robust GIS data infrastructure. I’ve been using ArcGIS Pro more extensively in my new job and was wondering how I could do the same thing: query features from the OSM based on keys and values (denoting feature type) and geographic area and extract them as a vector layer. I’m looking for straightforward solutions that I could use for answering questions from students (so no command line tricks or database stuff). In this post I’ll cover three approaches for achieving this in ArcGIS Pro, with references to QGIS.

File Approach

The most straightforward method would be to export data directly from the main OSM page by zooming into an area and hitting the Export button. This is a pretty blunt approach, as you have to be zoomed in pretty close and you grab every possible feature in the view. The “native” file format of OSM is the osm / pbf format; .osm is an XML file while .pbf is a compressed binary version of the osm. QGIS is able to handle these files directly; you just add them as a vector layer. ArcGIS Pro cannot. You have to download and install a special Data Interoperability extension, which is an esoteric thing that’s not part of the standard package and requires a special license from your site license coordinator.

A better and more targeted approach is to download pre-created extracts that are provided by a number of organizations listed in the OSM wiki. I started with Geofabrik in Germany, as it was a source I recognized. They package OSM data by geographic area and feature type. On their main page they list files that contain all features for each of the continents. These are enormous files, and as such they are only provided in the osm pbf format as shapefiles can’t effectively handle data that size. Even if you downloaded the osm pbf files and added them to QGIS, the software will struggle to render something that big.

But all is not lost; Geofabrik and many other providers package data in a shapefile format for smaller areas, provided that the size and number of features is not too great. For instance, on Geofabrik’s download page if you click on North America you’re presented with country extracts for that continent (see images below). You can get shapefiles for Greenland and Mexico, but not Canada or the US as the files are still too big. Click on US, and you’re presented with files for each of the states. No luck for California (too big), but the rest of states are small enough that you can get shapefiles for all of them.

Default Geofabrik OSM download page for continents. Click on a continent name…
…to access files for countries. Click on a country name…
…to access files for states / provinces / admin divisions

I downloaded and unzipped the file for Rhode Island. It contains a number of individual shapefiles classified by type of feature: buildings, land use, natural, places, places of worship (pofw), points of interest (pois), railways, roads, traffic, transport, water, and waterways. Many of the files appear twice: files with an “a” suffix represent polygons (areas) while files without that suffix are points or lines. Some OSM features are stored as polygons when such detail is available, while others are represented as points.

For example, if I add the two places of worship files to a map, for some features you have the outline of the actual building, while for most you simply have a point. After adding the layers to the map, you’ll probably want to use Select by Attribute to select the features you want based on OSM tags with keys and values, and Select by Location in conjunction with a separate boundary file to pull data out for a smaller area. The Geofabrik OSM attribute table is limited to basic attributes: an OSM ID, feature code and class, and name. It’s also likely that you’ll want to unify the point and polygon features of the same type into one layer, as they’re usually mutually exclusive. Use the Centroid (Polygon) tool in the toolbox to turn the polygons into points, and the Merge tool to meld the two point layers together. In QGIS the comparable tools under the Vector menu are Centroids and Merge Vector Layers. WGS 84 is the default CRS for the layers.

OSM Places of Worship. Some features are stored as points while others are polygons
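For QGIS users, the centroid-and-merge step described above can be chained in the Python console with the same two tools; a sketch, where the Geofabrik-style file names and output path are placeholders.

```python
# Turn the OSM polygon features into points, then merge them with the
# point layer of the same feature type. File names are placeholders.
import processing

centroids = processing.run("native:centroids", {
    "INPUT": "gis_osm_pofw_a_free_1.shp",   # polygon places of worship
    "ALL_PARTS": False,
    "OUTPUT": "memory:",
})["OUTPUT"]

processing.run("native:mergevectorlayers", {
    "LAYERS": [centroids, "gis_osm_pofw_free_1.shp"],  # point places of worship
    "CRS": "EPSG:4326",
    "OUTPUT": "C:/data/osm_pofw_merged.gpkg",
})
```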

Geofabrik is just one option. There are several others and they take different approaches for structuring their extracts. For example, BBBike.org organizes their layers by city for over 200 cities around the world, and they provide a number of additional formats beyond OSM PBF and shapefiles, such as Garmin GPS, GeoJSON, and CSV. They divide the data into fewer files, and if they don’t compile data for the area you’re interested in you can use a web-based tool to create a custom extract.

Plugin Approach

It would be nice to use a plugin, as that would allow you to specify a custom geographic area and retrieve just the specific features you want. QuickOSM works quite nicely for QGIS. Fortunately there is a good ArcGIS Pro solution called OSMquery. It works for both Pro and Desktop, tested for Pro 2.2 and Desktop 10.6. I’m using Pro 2.7 and the basic tool worked fine. It’s well documented, with good instructions for installation and use.

The plugin is written in Python and you add it as a tool to your ArcToolbox. Download the repo from the OSMquery GitHub as a ZIP file (click the green code button and choose Download ZIP). Save it in or near your ArcGIS project folders, and unzip it. In Pro, go into a project and open a Catalog Pane in the View ribbon. Right click on Toolbox to add a new one, and browse to the folder you unzipped to add the tool. There are two scripts in the box, a basic and an advanced version. The basic tool functioned without trouble for me. The advanced tool threw an error, probably some Python dependency issue (I didn’t investigate as the basic tool met my needs).

In the basic tool you choose the key and value for the features you want to extract; the dropdown menu is automatically populated with these options. For the geographic extent you can enter a place name, or you can use the extent of the current map window or of a layer in the project, or you can manually type in bounding box coordinates. Another nice option is you can transform the CRS of the extracted features from WGS 84 to another system, so it matches the CRS of layers in your existing project. Run the tool, and the features are extracted. If the features exist as both points and polygons, you get two separate files for each. If you choose, you can merge them together as described in the previous section; this is a bit tougher as the plugin approach yields a much wider selection of fields in the attribute table, and not all of the point and polygon attributes align. With the Merge tool in Pro you can select which attributes you want to hold on to, and common ones will be merged. QGIS is a bit messier in this regard, but in my earlier post I outlined a work-around using a spatial database.

The basic OSMquery tool in an ArcGIS Pro toolbox

Web Feature Service

This initially seemed to be the most promising route, but it turned out to be a dud. Like QGIS, Pro allows you to add OSM as a tiled base map. But ESRI also offers OSM as a web feature service: by hitting Add Data on the Map ribbon and searching the Living Atlas for “OpenStreetMap” you can select from a number of OSM web feature services, organized by continent and feature type. Once you add them to a map, you can select and click on individual features to see their name and feature type. The big problem is that you are not allowed to extract features from these layers, which leaves you with an enormous and heterogeneous mix of features for an entire continent. You can interact with the features, selecting by attribute and location in reference to other spatial layers, but that’s about it.

OSM web feature service in ArcGIS Pro

In Summary

I would recommend taking the step of downloading the OSMquery plugin for ArcGIS Pro if you want to take a highly targeted approach to OSM feature extraction (for QGIS users, enable the QuickOSM plugin). This approach is also best if you can’t download a pre-existing extract for your area because it’s too large or has too many features, and if you want to access the fullest possible range of attribute values. Otherwise, you can simply download one of the pre-created extracts, and use your software to winnow it down to what you need (or if you do need everything, the file approach makes more sense). Since the file-based option includes fewer attributes, converting polygon features to points and merging them with the other point features is a bit simpler.