Python API Code

Geocoding with the NYC Geoclient API and Python

Even though I’ve left New York, there are still occasions when I refer back to NYC resources to help students and faculty here with NYC-based research. Most recently I revisited NYC DOITT’s Geoclient API for geocoding addresses and discovered that a number of things have changed since I last used it a few years ago. I’ll walk through my latest geocoding script in this post.

First and foremost: if you landed on this page because you’re trying to figure out how to get your Geoclient API key to work, the answer is:

&subscription-key=YOURKEYHERE

This replaces the old format that required you to pass an app id and key. I searched through two websites and scanned through hundreds of pages of documentation, only to find this solution in a cached Google search result, as the new docs don’t mention this change and the old docs still have the previous information and examples of the application ID and key. So – hopefully this should save you some hours of frustration.

I was working with someone who needed to geocode a subset of the city’s traffic violation data from the open data portal, as the data lacks coordinates. It’s also missing postal city names and ZIP Codes, which precludes using most geocoders that rely on this information. Even if we had these fields, I’ve found that many geocoders struggle with the hyphenated addresses used throughout Queens, and some workaround is needed to get matches. NYC’s Geoclient is naturally able to handle those Queens addresses, and can use the borough name or code for locating addresses in lieu of ZIP Codes. The traffic data uses pseudo-county codes, but it’s easy to replace those with the corresponding borough codes.
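For illustration, here is a minimal sketch of that substitution; the county codes shown are hypothetical, so check the values that actually appear in your copy of the data:

# Hypothetical mapping from county codes in the violations data to the borough
# names the Geoclient API accepts; verify the codes in your own file
county_to_borough = {
    'NY': 'MANHATTAN',
    'BX': 'BRONX',
    'K': 'BROOKLYN',
    'Q': 'QUEENS',
    'R': 'STATEN ISLAND',
}

raw_code = 'K'  # value as it might appear in a traffic violation record
borough = county_to_borough.get(raw_code.strip().upper(), '')
print(borough)  # BROOKLYN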

The older documentation is still solid for illustrating the different APIs and the variables that are returned; you can search for a parsed or non-parsed street address, street intersections, places of interest or landmarks, parcel blocks and lots, and a few others.

I wrote some Python code that I’ve pasted below for geocoding addresses that have house numbers, street, and borough stored in separate fields using the address API, and if the house number is missing we try again by doing an intersection search, as an intersecting street is stored in a separate field in the traffic data. In the past I used a thin client for accessing the API, but I’m skipping that as it’s simpler to just build the URLs directly with the requests module.

The top of the script has the standard stuff: the name of the input file, the column locations (counting from zero) in the input file that contain each of the four address components, the base URL for the API, a time function for progress updates, reading the API key in from a file, and looping through the input CSV with the addresses to save the header row in one list and the records in a nested list. I created a list of the fields returned by the API that I want to hold on to and added them to the header row, along with a final variable that records the result of the match. In addition to longitude and latitude you can also get xCoordinate and yCoordinate, which are in the NY State Plane Long Island (ft-US) map projection. I added a counts dictionary to keep track of the result of each match attempt.

Then we begin a long loop – this is a bit messy and if I had more time I’d collapse much of this into a series of functions, as there is repetitive code. I loop through the index and value of each record beginning with the first one. The loop is in a try / except block, so in the event that something goes awry it should exit cleanly and write out the data that was captured. We take the base url and append the address request, slicing the record to get the values for house, street, and borough into the URL. An example of a URL after passing address components in:

https://api.nyc.gov/geo/geoclient/v1/address.json?houseNumber=12345&street=CONEY ISLAND AVE&borough=BROOKLYN&subscription-key=KEYGOESHERE

Pass that URL to the requests module and get a response back. If an address is returned, the JSON resembles a Python dictionary, with ‘address’ as the key and the value as another dictionary with key value pairs of several variables. Otherwise, we get an error message that something was wrong with the request.

An address dictionary with sub-dictionaries returned by the NYC Geoclient
A successful address match returns an address dictionary, with a sub-dictionary of keys and values
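As a minimal sketch of that request-and-check step (using the same example values as the URL above and a placeholder key):

import requests

base_url = 'https://api.nyc.gov/geo/geoclient/v1/'
params = {
    'houseNumber': '12345',
    'street': 'CONEY ISLAND AVE',
    'borough': 'BROOKLYN',
    'subscription-key': 'YOURKEYHERE',  # placeholder; use your own key
}
package = requests.get(base_url + 'address.json', params=params).json()

if 'address' in package:
    result = package['address']
    if 'longitude' in result:
        print(result['longitude'], result['latitude'])
    else:
        print(result.get('message', ''))   # match failed; the reason is returned here
else:
    print(package.get('message', ''))      # something was wrong with the request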

The loop logic:

  • If the package contains an ‘address’ key, flatten to get the sub-dictionary
    • If ‘longitude’ is present as a key, a match is returned, get the relevant fields and append to the record
    • Else if the dictionary contains a ‘message’ key with a value that the house number was missing, do an intersection match
      • If the package contains an ‘intersection’ key, flatten to get the sub-dictionary
        • If ‘longitude’ is present as a key, a match is returned, get the relevant fields and append to the record
        • If not, there was no intersection match, just get the messages and append blanks for each value to the record
      • If not, an error was returned, capture the error and append blanks for each value to the record, and continue
    • If not, there was no address match, just get the messages and append blanks for each value to the record
  • If not, an error was returned, capture the error and append blanks for each value to the record, and continue

The API has limits of 2,500 matches per minute and 500k per day, so after every 2,000 records I built in a 15-second pause. Once the process finishes, successfully or not, the records are written out to a CSV file, header row first followed by the records. If the process bailed prematurely, the last record and its index are printed to the screen. This allows you to rerun the script where you left off, by changing the start index in the variables list at the top of the script from 0 to the last record that was read. When it comes time to write output, the previous file is appended to rather than overwritten and the header row isn’t written again.

It took about 90 minutes to match a file of 25,000 records. I’d occasionally get an error message that the API key was bad for a given record; the error would be recorded and the script continued. It’s likely that there are illegal characters in the input fields for the address that end up creating a URL where the key parameter can’t be properly interpreted. I thought the results were pretty good; beyond streets it was able to recognize landmarks like large parks and return matched coordinates with relevant error messages (example below). Most of the flops were, not surprisingly, due to missing borough codes or house numbers.

Output from the NYC Geoclient
Output fields from the NYC Geoclient written to CSV

To use this code you’ll need to sign up for an NYC Developer API account, and then you can request a key for the NYC Geoclient service. Store the key in a text file in the same folder as the script. I’m also storing inputs and outputs in the same folder, but with a few functions from the os module you can manipulate paths and change directories. If I get time over the winter break I may try rewriting to incorporate this, plus functions to simplify the loops. An alternative to the API would be to download the LION street network geodatabase and set up a local address locator in ArcGIS Pro; that might be worth doing if you had tons of matches to do. I quickly got frustrated with the ArcGIS documentation and after a number of failed attempts I opted to use the Geoclient instead.
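A minimal sketch of that kind of path handling, assuming hypothetical 'input' and 'output' subfolders:

import os

data_dir = os.path.join(os.getcwd(), 'input')    # hypothetical folder for source CSVs
out_dir = os.path.join(os.getcwd(), 'output')    # hypothetical folder for results
os.makedirs(out_dir, exist_ok=True)              # create the output folder if needed

addfile = os.path.join(data_dir, 'parking_nov2021_nyc.csv')
matchedfile = os.path.join(out_dir, 'parking_nov2021_nyc_output.csv')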

"""
Match addresses to NYC Geoclient using house number, street name, and borough
Frank Donnelly / GIS and Data Librarian / Brown University
11/22/2021 - Python 3.7
"""

import requests, csv, time

#Variables
addfile='parking_nov2021_nyc.csv' #Input file with addresses
matchedfile=addfile[:-4]+'_output.csv' #Output file with matched data
keyfile='nycgeo_key.txt' #File with API key
start_idx=0 #If program breaks, change this to pick up with record where you left off
#Counting from 0, positions in the CSV that contain the address info 
hous_idx=23
st_idx=24
boro_idx=21
inter_idx=25
base_url='https://api.nyc.gov/geo/geoclient/v1/'

def get_time():
    time_now = time.localtime() # get struct_time
    pretty_time = time.strftime("%m/%d/%Y, %H:%M:%S", time_now)
    return pretty_time

print('*** Process launched at', get_time())

#Read api key in from file
with open(keyfile) as key:
    api_key=key.read().strip()

records=[]

with open(addfile,'r') as infile:
    reader = csv.reader(infile)
    header = next(reader) # Capture column names as separate list
    for row in reader:
        records.append(row)

# Fields returned by the API to capture
# https://maps.nyc.gov/geoclient/v1/doc
fields=['message','message2','houseNumber','firstStreetNameNormalized',
        'uspsPreferredCityName','zipCode','longitude','latitude','xCoordinate',
        'yCoordinate']
header.extend(fields)
header.append('match_result')
datavals=len(fields)-2 # Number of fields that are not messages
counts={'address match':0, 'intersection match':0,
        'failed address':0, 'failed intersection':0,
        'error':0}

print('Finished reading data from', addfile)
print('*** Geocoding process launched at',get_time())

for i,v in enumerate(records[start_idx:]):
    try:
        data_url = f'{base_url}address.json?houseNumber={v[hous_idx]}&street={v[st_idx]}&borough={v[boro_idx]}&subscription-key={api_key}'
        response=requests.get(data_url)
        package=response.json()
        # If an address is returned, continue
        if 'address' in package:
            result=package['address']     
            # If longitude is returned, grab data
            if 'longitude' in result:
                for f in fields:
                    item=result.get(f,'')
                    v.append(item)
                v.append('address match')
                counts['address match']=counts['address match']+1
            # If there was no house number, try street intersection match instead
            elif 'message' in result and result['message']=='INPUT CONTAINS NO ADDRESS NUMBER' and v[inter_idx] not in ('',None):
                try:
                    data_url = f'{base_url}intersection.json?crossStreetOne={v[st_idx]}&crossStreetTwo={v[inter_idx]}&borough={v[boro_idx]}&subscription-key={api_key}'
                    response=requests.get(data_url)
                    package=response.json()
                    # If an intersection is returned, continue
                    if 'intersection' in package:
                        result=package['intersection']
                        # If longitude is returned, grab data
                        if 'longitude' in result:
                            for f in fields:
                                item=result.get(f,'')
                                v.append(item)
                            v.append('intersection match')
                            counts['intersection match']=counts['intersection match']+1
                        # Intersection match fails, append messages and blank values
                        else:
                            v.append(result.get('message',''))
                            v.append(result.get('message2',''))
                            v.extend(['']*datavals)
                            v.append('failed intersection')
                            counts['failed intersection']=counts['failed intersection']+1
                    # Error returned instead of intersection
                    else:
                        v.append(package.get('message',''))
                        v.append(package.get('message2',''))
                        v.extend(['']*datavals)
                        v.append('error')
                        counts['error']=counts['error']+1
                        print(package.get('message',''))
                        print('Geocoder error at record',i,'continuing the matching process...')
                except Exception as e:
                     print(str(e))
            # Address match fails, append messages and blank values
            else:
                v.append(result.get('message',''))
                v.append(result.get('message2',''))
                v.extend(['']*datavals)
                v.append('failed address')
                counts['failed address']=counts['failed address']+1
        # Error is returned instead of address
        else:
            v.append(package.get('message',''))
            v.append(package.get('message2',''))
            v.extend(['']*datavals)
            v.append('error')
            counts['error']=counts['error']+1
            print(package.get('message',''))
            print('Geocoder error at record',i,'continuing the matching process...')
        if i%2000==0:
            print('Processed',i,'records so far...')
            time.sleep(15)         
    except Exception as e:
        print(str(e))

# First attempt, write to new file, but if break happened, append to existing file
if start_idx==0:
    wtype='w' 
else:
    wtype='a'

# end_idx marks the last record read; the slice below stops before it, so a
# rerun can pick up from that record by updating start_idx
end_idx=start_idx+i

with open(matchedfile,wtype,newline='') as outfile:
    writer = csv.writer(outfile, delimiter=',', quotechar='"',
                        quoting=csv.QUOTE_MINIMAL)
    if wtype=='w':
        writer.writerow(header)
        writer.writerows(records[start_idx:end_idx])
    else:
        writer.writerows(records[start_idx:end_idx])
print('Wrote',i+1,'records to file',matchedfile)
print('Final record written was number',i,':\n',v)
for k,val in counts.items():
    print(k,val)
print('*** Process finished at',get_time())

2020 Resident Population Map

2020 Census Updates

In late summer and early fall I was hammering out the draft for an ALA Tech Report on using census data for research (slated for release early 2022). The earliest 2020 census figures have been released and there are several issues surrounding this, so I’ll provide a summary of what’s happening here. Throughout this post I link to Census Bureau data sources, news bulletins, and summaries of trends, as well as analysis on population trends from Bill Frey at Brookings and reporting from Hansi Lo Wang and his colleagues at NPR.

Count Result and Reapportionment Numbers

The apportionment results were released back in April 2021, providing the population totals for the US and each of the states that are used to reallocate seats in Congress. This data is typically released at the end of December of the census year, but the COVID-19 pandemic and political interference in census operations disrupted the count and pushed all the deadlines back.

Despite these disruptions, the good news is that the self-response rate, which is the percentage of households who submit the form on their own without any prompting from the Census Bureau, was 67%, which is on par with the 2010 census. This was the first decennial census where the form could be submitted online, and of the self-responders 80% chose to submit via the internet as opposed to paper or telephone. Ultimately, the Bureau said it reached over 99% of all addresses in its master address file through self-response and non-response follow-ups.

The bad news is that the rate of non-response to individual questions was much higher in 2020 than in 2010. Non-responses ranged from a low of 0.52% for the total population count to a high of 5.95% for age or date of birth. This means that a higher percentage of data will have to be imputed, but this time around the Bureau will rely more on administrative records to fill the gaps. They have transparently posted all of the data about non-response for researchers to scrutinize.

The apportionment results showed that the population of the US grew from approximately 309 million in 2010 to 331 million in 2020, a growth rate of 7.35%. This is the lowest rate of population growth since the 1940 census that followed the Great Depression. Three states lost population (West Virginia, Mississippi, and Illinois), which is the highest number since the 1980 census. The US territory of Puerto Rico lost almost twelve percent of its population. Population growth continues to be stronger in the West and South relative to the Northeast and Midwest, and the fastest growing states are in the Mountain West.

https://www.census.gov/library/visualizations/2021/dec/2020-percent-change-map.html

Public Redistricting Data

The first detailed population statistics were released as part of the redistricting data file, PL 94-171. Data in this series is published down to the block level, the smallest geography available, so that states can redraw congressional and other voting districts based on population change. Normally released at the end of March, this data was released in August 2021. This is a small package that contains the following six tables:

  • P1. Race (includes total population count)
  • P2. Hispanic or Latino, and Not Hispanic or Latino by Race
  • P3. Race for the Population 18 Years and Over
  • P4. Hispanic or Latino, and Not Hispanic or Latino by Race for the Population 18 Years and
    Over
  • P5. Group Quarters Population by Major Group Quarters Type
  • H1. Occupancy Status (includes total housing units)

The raw data files for each state can be downloaded from the 2020 PL 94-171 page and loaded into stats packages or databases. That page also provides infographics (including the maps embedded in this post) and data summaries. Data tables can be readily accessed via data.census.gov, or via IPUMS NHGIS.

The redistricting files illustrate the increasing diversity of the United States. The number of people identifying as two or more races has grown from 2.9% of the total population in 2010 to 10.2% in 2020. Hispanics and Latinos continue to be the fastest growing population group, followed by Asians. The White population actually shrank for the first time in the nation’s history, but as NPR reporter Hansi Lo Wang and his colleagues illustrate, this interpretation depends on how one measures race: as white alone (people who selected only white) or white in combination with another race (people who selected white and at least one other race), and on whether Hispanic whites are included with non-Hispanic whites (Hispanic / Latino is not a race, but is counted separately as an ethnicity, and most Hispanics identify their race as White or Other). The Census Bureau has also provided summaries using the different definitions. Other findings: the nation is becoming progressively older, and urban areas outpaced rural ones in population growth. Half of the counties in the US lost population between 2010 and 2020, mostly in rural areas.

https://www.census.gov/library/visualizations/2021/dec/percent-change-county-population.html

2020 Demographic and Housing Characteristics and the ACS

There still isn’t a published timeline for the release of the full results in the Demographic and Housing Characteristics File (DHC – known as Summary File 1 in previous censuses; I’m not sure whether the DHC moniker is replacing the SF1 title). There are hints that this file is going to be much smaller in terms of the number of tables, and more limited in geographic detail, compared to the 2010 census. Over the past few years there’s been a lot of discussion about the new differential privacy mechanisms, which will be used to inject noise into the data. The Census Bureau deemed this necessary for protecting people’s privacy, as increased computing power and access to third-party datasets have made it possible to reverse engineer the summary census data to generate information on individuals.

What has not been as widely discussed is that many tables will simply not be published, or will only be summarized down to the county-level, also for the purpose of protecting privacy. The Census Bureau has invited the public to provide feedback on the new products and has published a spreadsheet crosswalking products from 2010 and 2020. IPUMS also released a preliminary list of tables that could be cut or reduced in specificity (derived from the crosswalk), which I’m republishing at the bottom of this post. This is still preliminary, but if all these changes are made it would drastically reduce the scope and specificity of the decennial census.

And then… there is the 2020 American Community Survey. Due to COVID-19, response rates to the ACS were one-third lower than normal. As such, the sample is not large or reliable enough to publish the 1-year estimate data, which is typically released in September. Instead, the Census Bureau will publish a smaller series of experimental tables for a more limited range of geographies at the end of November 2021. There is still no news regarding what will happen with the 5-year estimate series that is typically released in December.

Needless to say, there’s no shortage of uncertainty regarding census data in 2020.

Tables in 2010 Summary File 1 that Would Have Less Geographic Detail in 2020 (Proposed)

Table Name | Proposed 2020 Lowest Level of Geography | 2010 Lowest Level of Geography
Hispanic or Latino Origin of Householder by Race of Householder | County | Block
Household Size by Household Type by Presence of Own Children | County | Block
Household Type by Age of Householder | County | Block
Households by Presence of People 60 Years and Over by Household Type | County | Block
Households by Presence of People 60 Years and Over, Household Size, and Household Type | County | Block
Households by Presence of People 75 Years and Over, Household Size, and Household Type | County | Block
Household Type by Household Size | County | Block
Household Type by Household Size by Race of Householder | County | Block
Relationship by Age for the Population Under 18 Years | County | Block
Household Type by Relationship for the Population 65 Years and Over | County | Block
Household Type by Relationship for the Population 65 Years and Over by Race | County | Block
Family Type by Presence and Age of Own Children | County | Block
Family Type by Presence and Age of Own Children by Race of Householder | County | Block
Age of Grandchildren Under 18 Years Living with A Grandparent Householder | County | Block
Household Type by Relationship by Race | County | Block
Average Household Size by Age | To be determined | Block
Household Type for the Population in Households | To be determined | Block
Household Type by Relationship for the Population Under 18 Years | To be determined | Block
Population in Families by Age | To be determined | Block
Average Family Size by Age | To be determined | Block
Family Type and Age for Own Children Under 18 Years | To be determined | Block
Total Population in Occupied Housing Units by Tenure | To be determined | Block
Average Household Size of Occupied Housing Units by Tenure | To be determined | Block
Sex by Age for the Population in Households | County | Tract
Sex by Age for the Population in Households by Race | County | Tract
Presence of Multigenerational Households | County | Tract
Presence of Multigenerational Households by Race of Householder | County | Tract
Coupled Households by Type | County | Tract
Nonfamily Households by Sex of Householder by Living Alone by Age of Householder | County | Tract
Group Quarters Population by Sex by Age by Group Quarters Type | State | Tract

Tables in 2010 Summary File 1 That Would Be Eliminated in 2020 (Proposed)

Population in Households by Age by Race of Householder
Average Household Size by Age by Race of Householder
Households by Age of Householder by Household Type by Presence of Related Children
Households by Presence of Nonrelatives
Household Type by Relationship for the Population Under 18 Years by Race
Household Type for the Population Under 18 Years in Households (Excluding Householders, Spouses, and Unmarried Partners)
Families*
Families by Race of Householder*
Population in Families by Age by Race of Householder
Average Family Size by Age by Race of Householder
Family Type by Presence and Age of Related Children
Family Type by Presence and Age of Related Children by Race of Householder
Group Quarters Population by Major Group Quarters Type*
Population Substituted
Allocation of Population Items
Allocation of Race
Allocation of Hispanic or Latino Origin
Allocation of Sex
Allocation of Age
Allocation of Relationship
Allocation of Population Items for the Population in Group Quarters
American Indian and Alaska Native Alone with One Tribe Reported for Selected Tribes
American Indian and Alaska Native Alone with One or More Tribes Reported for Selected Tribes
American Indian and Alaska Native Alone or in Combination with One or More Other Races and with One or More Tribes Reported for Selected Tribes
American Indian and Alaska Native Alone or in Combination with One or More Other Races
Asian Alone with One Asian Category for Selected Groups
Asian Alone with One or More Asian Categories for Selected Groups
Asian Alone or in Combination with One or More Other Races, and with One or More Asian Categories for Selected Groups
Native Hawaiian and Other Pacific Islander Alone with One Native Hawaiian and Other Pacific Islander Category for Selected Groups
Native Hawaiian and Other Pacific Islander Alone with One or More Native Hawaiian and Other Pacific Islander Categories for Selected Groups
Native Hawaiian and Other Pacific Islander Alone or in Combination with One or More Races, and with One or More Native Hawaiian and Other Pacific Islander Categories for Selected Groups
Hispanic or Latino by Specific Origin
Sex by Single Year of Age by Race
Household Type by Number of Children Under 18 (Excluding Householders, Spouses, and Unmarried Partners)
Presence of Unmarried Partner of Householder by Household Type for the Population Under 18 Years in Households (Excluding Householders, Spouses, and Unmarried Partners)
Nonrelatives by Household Type
Nonrelatives by Household Type by Race
Group Quarters Population by Major Group Quarters Type by Race
Group Quarters Population by Sex by Major Group Quarters Type for the Population 18 Years and Over by Race
Total Races Tallied for Householders
Hispanic or Latino Origin of Householders by Total Races Tallied
Total Population in Occupied Housing Units by Tenure by Race of Householder
Average Household Size of Occupied Housing Units by Tenure
Average Household Size of Occupied Housing Units by Tenure by Race of Householder
Occupied Housing Units Substituted
Allocation of Vacancy Status
Allocation of Tenure
Tenure by Presence and Age of Related Children
* Counts for these tables are available in other proposed DHC tables. For example, the count of families is available in the Household Type table, which will be available at the block level in the 2020 DHC. 
Dewey Beach, Delaware

Summer Break

Upon receiving a reminder from WordPress that it’s time to renew my subscription, I looked back and realized I’ve been pretty consistent over the years. Since rebooting this blog in Sept 2017, with few exceptions I’ve fulfilled my goal to write one post per month.

Unfortunately, due to professional and personal constraints I’m going to have to break my streak and put posting on pause for a while. Hopefully I can return to writing in the fall. Until then, enjoy the rest of summer.

Best – Frank

QGIS Example

QGIS 3.16 Tutorial Workbook

I just released a new edition of my introductory QGIS manual for QGIS 3.16 Hannover (the current long term release), and as always I’m providing it under Creative Commons for classroom use and self-directed learning. I’ve also updated my QGIS FAQs handout, which is useful for new folks as a quick reference. This material will eventually move to a Brown University website, but when that happens I’ll still hold on to my page and will link to the new spot. I’m also leaving the previous version of the tutorial written for QGIS 3.10 A Coruna up alongside it, but will pull that down when the fall semester begins.

The new edition has a new title. When I first wrote Introduction to GIS Using Open Source Software, free and open source (FOSS) GIS was a novelty in higher ed. QGIS was a lot simpler, and I had to pull in several different tools to accomplish basic tasks like CRS transformations and calculating natural breaks. Ten years later, many university libraries and labs with GIS services either reference or support QGIS, and the package is infinitely more robust. So a name change to simply Introduction to GIS with QGIS seemed overdue.

My move from Baruch CUNY to Brown prompted me to make several revisions in this version. The biggest change was swapping the NYC-based business site selection example with a Rhode Island-based public policy one in chapters 2 and 3. The goal of the new hypothetical example is to identify public libraries in RI that meet certain criteria that would qualify them to receive funding for after school programs for K-12 public school students (replacing the example of finding an optimal location for a new coffee shop in NYC). In rethinking the examples I endeavored to introduce the same core concepts: attribute table joins, plotting coordinates, and geoprocessing. In this version I do a better job of illustrating and differentiating between creating subsets of features by: selecting by attributes and location, filtering (a new addition), and deleting features. I also managed to add spatial joins and calculated fields to the mix.

Changes to chapter 4 (coordinate reference systems and thematic mapping) were modest; I swapped out the 2016 voter participation data with 2020 data. I slimmed down Chapter 5 on data sources and tutorials, but added an appendix that lists web mapping services that you can add as base maps. Some material was shuffled between chapters, and all in all I cut seven pages from the document to slim it down a bit.

As always, there were minor modifications to be made due to changes between software versions, and two significant ones. First, QGIS no longer supports 32-bit operating systems for Windows; it’s 64-bit or nothing, but that seems to be fairly common these days. Second, the Windows installer file is much bigger (and thus slower to download), but it helps ensure that all dependencies are there. Otherwise, the differences between 3.16 and 3.10 are not that great, at least for the basic material I cover. In the past there was occasionally a lack of consistency regarding basic features and terminology that you’d think would be well settled, but thankfully things are pretty stable this time around.

If you have any feedback or spot errors feel free to let me know. I imagine I’ll be treading this ground again after the next long term release takes 3.16’s place in Feb / Mar 2022. For the sake of stability I always stick with the long term release and forego the latest ones; if you’re going to use this tutorial I’d recommend downloading the LTR version and not the latest one.

Philadelphia Redlining Map

Redlining Maps for GIS

I received several questions during the spring semester about redlining maps: where to find them, and how many were made. Known officially as Residential Security Maps, they were created by the Home Owners’ Loan Corporation (HOLC) in the 1930s to grade the level of security or risk for making home loans in residential portions of urban areas throughout the US. This New Deal program was intended to help people refinance mortgages and prevent foreclosures, while increasing buying opportunities to expand home ownership.

Areas were evaluated by lenders, developers, and appraisers and graded from A to D to indicate their desirability or risk level. Grade A was best (green), B still desirable (blue), C definitely declining (yellow), and D hazardous (red). The yellow and red areas were primarily populated by minorities, immigrants, and low income groups, and current research suggests that this program had a long reaching negative impact by enforcing and cementing segregation, disinvestment, and poverty in these areas.

The definitive digital source for these maps is the Mapping Inequality: Redlining in New Deal America project created at the University of Richmond’s Digital Scholarship Lab. They provide a solid history and summary of these maps and a good bibliography. The main portal is an interactive map of the US that allows you to zoom in and preview maps in different cities. You can click on individually zoned areas and get the original assessor or evaluator’s notes (when available). If you switch to the Downloads page you get a list of maps sorted alphabetically by state and city that you can download as: a jpeg of the original scanned map, a georeferenced image that can be added to GIS software as a raster, and a GIS vector polygon file (shapefile or geojson). In many cases there is also a scanned copy of the evaluator’s description and notes. You also have the option of downloading a unified vector file for the entire US as a shapefile or geojson. All of the data is provided under a Creative Commons Attribution Sharealike License.

Providence Redlining Map
Redlining Map of Providence, RI with graded areas, from the Mapping Inequality Project

There are a few other sources to choose from, but none of them are as complete. I originally thought of the National Archives, the likely holder of the original paper maps, but only a fraction of them have been digitized. The PolicyMap database has most (but not all) of the maps available as a feature you can overlay in their platform. If you’re doing a basic web search this Slate article is among the first resources you’ll encounter, but most of the links are broken (which says something about the ephemeral nature of these kinds of digital projects).

How many maps were made? Amy Hillier’s work was among the earliest studies to examine these maps, and her case study of Philadelphia includes a detailed summary of the history of the HOLC program with references to primary source material. According to her research, 239 of these maps were made, and she provides a list of each of the cities in the appendix. I was trying to discover how many maps were available in Rhode Island and found this list wasn’t complete; it only included Providence, while the Mapping Inequality project has maps for Providence, Pawtucket & Central Falls, and Woonsocket. I counted 202 maps based on unique names on Mapping Inequality, but several individual maps include multiple cities.

She mentions that a population of 40,000 people was used as a cut-off for deciding which places to map, but noted that there were exceptions; Washington DC was omitted entirely, while there are several maps for urban counties in New Jersey as opposed to cities. In some cases, cities below the 40k threshold that were located beside larger ones were included. I checked the 1930 census against the three cities in Rhode Island that had maps, and indeed they were the only RI cities at that time that had more than 40k people (Central Falls had less than 40k but was included with Pawtucket as they’re adjacent). So this seemed to provide reasonable assurance that these were the only ones in existence for RI.

Finding the population data for the cities was another surprise. I had assumed this data was available in the NHGIS, but it wasn’t. The NHGIS includes data for places (Census Places) back to the 1970 census, which was the beginning of the period where a formal, bounded census place geography existed. Prior to this time, the Census Bureau published population count data for cities using other means, and the NHGIS is still working to include this information. It does exist (as you can find it in Wikipedia articles for most major cities) but is buried in old PDF reports on the Census Bureau’s website.

If you’re interested in learning more about the redlining maps beyond the documentation provided by Mapping Inequality, these articles provide detailed overviews of the HOLC and the residential security maps program, as well as their implications to the present day. You’ll need to access them through a library database:

Hillier, A.E. (2005). “Residential Security Maps and Neighborhood Appraisals: The Home Owners’ Loan Corporation and the Case of Philadelphia.” Social Science History, 29(2): 207-233.

Greer, J. (2012). “The Home Owners’ Loan Corporation and the Development of the Residential Security Maps.” Journal of Urban History, 39(2): 275-296.

OSM Web Feature Service

OpenStreetMap Data with ArcGIS Pro and QGIS

A couple years ago I wrote a post that demonstrated how to use the QuickOSM plugin for QGIS to easily extract features from the OpenStreetMap (OSM). The OSM is a great source for free and open GIS data, especially for types of features that are not captured in government sources, and for parts of the world that don’t possess a free or robust GIS data infrastructure. I’ve been using ArcGIS Pro more extensively in my new job and was wondering how I could do the same thing: query features from the OSM based on keys and values (denoting feature type) and geographic area and extract them as a vector layer. I’m looking for straightforward solutions that I could use for answering questions from students (so no command line tricks or database stuff). In this post I’ll cover three approaches for achieving this in ArcGIS Pro, with references to QGIS.

File Approach

The most straightforward method would be to export data directly from the main OSM page by zooming into an area and hitting the Export button. This is a pretty blunt approach, as you have to be zoomed in pretty close and you grab every possible feature in the view. The “native” file format of OSM is the osm / pbf format; .osm is an XML file while .pbf is a compressed binary version of the osm. QGIS is able to handle these files directly; you just add them as a vector layer. ArcGIS Pro cannot. You have to download and install a special Data Interoperability extension, which is an esoteric thing that’s not part of the standard package and requires a special license from your site license coordinator.

A better and more targeted approach is to download pre-created extracts that are provided by a number of organizations listed in the OSM wiki. I started with Geofabrik in Germany, as it was a source I recognized. They package OSM data by geographic area and feature type. On their main page they list files that contain all features for each of the continents. These are enormous files, and as such they are only provided in the osm pbf format as shapefiles can’t effectively handle data that size. Even if you downloaded the osm pbf files and added them to QGIS, the software will struggle to render something that big.

But all is not lost; Geofabrik and many other providers package data in a shapefile format for smaller areas, provided that the size and number of features is not too great. For instance, on Geofabrik’s download page if you click on North America you’re presented with country extracts for that continent (see images below). You can get shapefiles for Greenland and Mexico, but not Canada or the US as the files are still too big. Click on US, and you’re presented with files for each of the states. No luck for California (too big), but the rest of the states are small enough that you can get shapefiles for all of them.

Geofabrik OSM data: download continents
Default Geofabrik OSM download page for continents. Click on a continent name…
Geofabrik OSM data downloads: countries in North America
…to access files for countries. Click on a country name…
Geofabrik OSM data downloads: states of the US
…to access files for states / provinces / admin divisions

I downloaded and unzipped the file for Rhode Island. It contains a number of individual shapefiles classified by type of feature: buildings, land use, natural, places, places of worship (pofw), points of interest (pois), railways, roads, traffic, transport, water, and waterways. Many of the files appear twice: files with an “a” suffix represent polygons (areas) while files without that suffix are points or lines. Some OSM features are stored as polygons when such detail is available, while others are represented as points.

For example, if I add the two places of worship files to a map, for some features you have the outline of the actual building, while for most you simply have a point. After adding the layers to the map, you’ll probably want to use Select by Attribute to select the features you want based on OSM tags with keys and values, and Select by Location in conjunction with a separate boundary file to pull data out for a smaller area. The Geofabrik OSM attribute table is limited to basic attributes: an OSM ID, feature code and class, and name. It’s also likely that you’ll want to unify the point and polygon features of the same type into one layer, as they’re usually mutually exclusive. Use the Centroid (Polygon) tool in the toolbox to turn the polygons into points, and the Merge tool to meld the two point layers together. In QGIS the comparable tools under the Vector menu are Centroids and Merge Vector Layers. WGS 84 is the default CRS for the layers.

ArcGIS Pro with OSM Places of Worship from Geofabrik
OSM Places of Worship. Some features are stored as points while others are polygons
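If you’d rather script the centroid-and-merge step than use the toolbox, here’s a rough sketch with the geopandas library (not covered in this post); the file names follow Geofabrik’s shapefile naming for the places of worship layers, but verify them against your own download:

import geopandas as gpd
import pandas as pd

# Geofabrik ships point and polygon versions of the same theme
points = gpd.read_file('gis_osm_pofw_free_1.shp')     # places of worship as points
polys = gpd.read_file('gis_osm_pofw_a_free_1.shp')    # places of worship as polygons

# Convert the polygons to centroid points so the two layers can be combined
poly_pts = polys.copy()
poly_pts['geometry'] = poly_pts.geometry.centroid

# Stack the two point layers into a single layer; both are in WGS 84 by default
merged = gpd.GeoDataFrame(pd.concat([points, poly_pts], ignore_index=True), crs=points.crs)
merged.to_file('pofw_merged.shp')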

Geofabrik is just one option. There are several others and they take different approaches for structuring their extracts. For example, BBBike.org organizes their layers by city for over 200 cities around the world, and they provide a number of additional formats beyond OSM PBF and shapefiles, such as Garmin GPS, GeoJSON, and CSV. They divide the data into fewer files, and if they don’t compile data for the area you’re interested in you can use a web-based tool to create a custom extract.

Plugin Approach

It would be nice to use a plugin, as that would allow you to specify a custom geographic area and retrieve just the specific features you want. QuickOSM works quite nicely for QGIS. Fortunately there is a good ArcGIS Pro solution called OSMquery. It works for both Pro and Desktop, tested for Pro 2.2 and Desktop 10.6. I’m using Pro 2.7 and the basic tool worked fine. It’s well documented, with good instructions for installation and use.

The plugin is written in Python and you add it as a tool to your ArcToolbox. Download the repo from the OSMquery GitHub as a ZIP file (click the green code button and choose Download ZIP). Save it in or near your ArcGIS project folders, and unzip it. In Pro, go into a project and open a Catalog Pane in the View ribbon. Right click on Toolbox to add a new one, and browse to the folder you unzipped to add the tool. There are two scripts in the box, a basic and an advanced version. The basic tool functioned without trouble for me. The advanced tool threw an error, probably some Python dependency issue (I didn’t investigate as the basic tool met my needs).

In the basic tool you choose the key and value for the features you want to extract; the dropdown menu is automatically populated with these options. For the geographic extent you can enter a place name, or you can use the extent of the current map window or of a layer in the project, or you can manually type in bounding box coordinates. Another nice option is you can transform the CRS of the extracted features from WGS 84 to another system, so it matches the CRS of layers in your existing project. Run the tool, and the features are extracted. If the features exist as both points and polygons, you get two separate files for each. If you choose, you can merge them together as described in the previous section; this is a bit tougher as the plugin approach yields a much wider selection of fields in the attribute table, and not all of the point and polygon attributes align. With the Merge tool in Pro you can select which attributes you want to hold on to, and common ones will be merged. QGIS is a bit messier in this regard, but in my earlier post I outlined a work-around using a spatial database.

OSMquery tool in ArcGIS Pro
The basic OSMquery tool in an ArcGIS Pro toolbox
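As an aside, the same kind of key/value and bounding box query can also be issued directly against the OSM Overpass API from Python; here is a minimal sketch with the requests module (the tag and the Providence-area bounding box are arbitrary examples, and this is not a description of what the plugin does internally):

import requests

# Overpass QL: place-of-worship nodes within a bounding box (south, west, north, east)
query = """
[out:json][timeout:60];
node["amenity"="place_of_worship"](41.77,-71.47,41.86,-71.37);
out body;
"""
resp = requests.get('https://overpass-api.de/api/interpreter', params={'data': query})
for elem in resp.json().get('elements', [])[:5]:
    print(elem['tags'].get('name', '(unnamed)'), elem['lat'], elem['lon'])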

Web Feature Service

This initially seemed to be the most promising route, but it turned out to be a dud. Like QGIS, Pro allows you to add OSM as a tiled base map. But ESRI also offers OSM as a web feature service: by hitting Add Data on the Map ribbon and searching the Living Atlas for “OpenStreetMap” you can select from a number of OSM web feature services, organized by continent and feature type. Once you add them to a map, you can select and click on individual features to see their name and feature type. The big problem is that you are not allowed to extract features from these layers, which leaves you with an enormous and heterogeneous mix of features for an entire continent. You can interact with the features, selecting by attribute and location in reference to other spatial layers, but that’s about it.

OSM web feature service in ArcGIS Pro
OSM web feature service in ArcGIS Pro

In Summary

I would recommend taking the step of downloading the OSMquery plugin for ArcGIS Pro if you want to take a highly targeted approach to OSM feature extraction (for QGIS users, enable the QuickOSM plugin). This approach is also best if you can’t download a pre-existing extract for your area because it’s too large or has too many features, and if you want to access the fullest possible range of attribute values. Otherwise, you can simply download one of the pre-created extracts, and use your software to winnow it down to what you need (or if you do need everything, the file approach makes more sense). Since the file-based option includes fewer attributes, converting polygon features to points and merging them with the other point features is a bit simpler.