Even though I’ve left New York, there are still occasions when I refer back to NYC resources to help students and faculty here with NYC-based research. Most recently I revisited NYC DOITT’s Geoclient API for geocoding addresses, and discovered that a number of things have changed since I last used it a few years ago. I’ll walk through my latest geocoding script in this post.
First and foremost: if you landed on this page because you’re trying to figure out how to get your Geoclient API key to work, the answer is:
&subscription-key=YOURKEYHERE
This replaces the old format that required you to pass an app ID and key. I searched through two websites and scanned hundreds of pages of documentation, only to find this solution in a cached Google search result: the new docs don’t mention the change, and the old docs still show the previous application ID and key examples. Hopefully this saves you some hours of frustration.
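To make the new format concrete, here is a minimal sketch of building a request URL with the single `subscription-key` parameter. The house number, street, and key value are placeholders, not real data:

```python
from urllib.parse import urlencode

# New format: a single subscription-key parameter replaces the old
# app_id / app_key pair. All values below are placeholders.
params = {
    "houseNumber": "314",
    "street": "W 100 ST",
    "borough": "MANHATTAN",
    "subscription-key": "YOUR_KEY",
}
url = "https://api.nyc.gov/geo/geoclient/v1/address.json?" + urlencode(params)
```

Using `urlencode` also percent-encodes spaces and other special characters in street names, which a hand-built f-string does not.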
I was working with someone who needed to geocode a subset of the city’s traffic violation data from the open data portal, as the data lacks coordinates. It’s also missing postal city names and ZIP Codes, which precludes using most geocoders that rely on this information. Even if we had these fields, I’ve found that many geocoders struggle with the hyphenated addresses used throughout Queens, and some workaround is needed to get matches. NYC’s Geoclient is naturally able to handle those Queens addresses, and can use the borough name or code for locating addresses in lieu of ZIP Codes. The traffic data uses pseudo-county codes, but it’s easy to replace those with the corresponding borough codes.
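A small lookup handles that replacement. The county codes below are my assumptions about typical values in the violations data, not a definitive list; inspect your own file for the codes that actually appear:

```python
# Hypothetical mapping from county codes in the violations data to the
# borough names Geoclient accepts; the input codes are assumptions.
county_to_borough = {
    "NY": "MANHATTAN",
    "BX": "BRONX",
    "K": "BROOKLYN",
    "Q": "QUEENS",
    "R": "STATEN ISLAND",
}

def to_borough(county_code):
    """Return the Geoclient borough name for a county code, or '' if unknown."""
    return county_to_borough.get(county_code.strip().upper(), "")
```

Unmatched codes fall through to an empty string, so a bad input produces a failed match rather than a crash.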
The older documentation is still solid for illustrating the different APIs and the variables that are returned; you can search for a parsed or non-parsed street address, street intersections, places of interest or landmarks, parcel blocks and lots, and a few others.
I wrote some Python code that I’ve pasted below for geocoding addresses that have house numbers, street, and borough stored in separate fields using the address API, and if the house number is missing we try again by doing an intersection search, as an intersecting street is stored in a separate field in the traffic data. In the past I used a thin client for accessing the API, but I’m skipping that as it’s simpler to just build the URLs directly with the requests module.
The top of the script has the standard stuff: the name of the input file, the column positions (counting from zero) in the input file that contain each of the four address components, the base URL for the API, a time function for progress updates, reading the API key in from a file, and looping through the input CSV of addresses to save the header row in one list and the records in a nested list. I created a list of the fields returned by the API that I want to keep and added them to the header row, along with a final variable that records the result of the match. In addition to longitude and latitude you can also get xCoordinate and yCoordinate, which are in the NY State Plane Long Island (ft-US) map projection. I added a counts dictionary to keep track of the result of each match attempt.
Then we begin a long loop – this is a bit messy and if I had more time I’d collapse much of this into a series of functions, as there is repetitive code. I loop through the index and value of each record beginning with the first one. The loop is in a try / except block, so in the event that something goes awry it should exit cleanly and write out the data that was captured. We take the base url and append the address request, slicing the record to get the values for house, street, and borough into the URL. An example of a URL after passing address components in:
https://api.nyc.gov/geo/geoclient/v1/address.json?houseNumber=12345&street=CONEY ISLAND AVE&borough=BROOKLYN&subscription-key=KEYGOESHERE
Pass that URL to the requests module and get a response back. If an address is returned, the JSON resembles a Python dictionary, with ‘address’ as the key and the value as another dictionary with key value pairs of several variables. Otherwise, we get an error message that something was wrong with the request.
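To make that concrete, here is the unwrapping pattern against a mocked-up response; the values are invented for illustration, and real responses carry many more fields:

```python
# A made-up response mimicking the API's structure for a matched address.
package = {
    "address": {
        "houseNumber": "12345",
        "firstStreetNameNormalized": "CONEY ISLAND AVE",
        "longitude": -73.964,
        "latitude": 40.634,
    }
}

if "address" in package:
    result = package["address"]   # flatten to the sub-dictionary
    if "longitude" in result:     # a match was found
        lon = result.get("longitude", "")
        lat = result.get("latitude", "")
    else:                         # no coordinates: record blanks instead
        lon = lat = ""
```

Pulling fields with `.get()` and a default means a missing key yields a blank rather than a `KeyError`.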

The loop logic:
- If the package contains an ‘address’ key, flatten to get the sub-dictionary
    - If ‘longitude’ is present as a key, a match was returned: get the relevant fields and append them to the record
    - Else, if the dictionary contains a ‘message’ key saying the house number was missing, do an intersection match
        - If the package contains an ‘intersection’ key, flatten to get the sub-dictionary
            - If ‘longitude’ is present as a key, a match was returned: get the relevant fields and append them to the record
            - If not, there was no intersection match: get the messages and append blanks for each value to the record
        - If not, an error was returned: capture the error, append blanks for each value to the record, and continue
    - If not, there was no address match: get the messages and append blanks for each value to the record
- If not, an error was returned: capture the error, append blanks for each value to the record, and continue
The API has limits of 2,500 matches per minute and 500k per day, so I built in a 15-second pause after every 2,000 records. Once the process finishes, successfully or not, the records are written out to a CSV file, header row first followed by the records. If the process bailed prematurely, the last record and its index are printed to the screen. This allows you to rerun the script where you left off, by changing the start index in the variables list at the top of the script from 0 to the last record that was read. When it comes time to write output, the previous file is appended rather than overwritten and the header row isn’t written again.
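The pause can be factored into a small helper. This is a sketch of the same every-2,000-records throttle, not a precise rate limiter; the `sleep` parameter exists only so the function is testable without waiting:

```python
import time

RATE_PAUSE = 15   # seconds to sleep between batches
BATCH = 2000      # records processed between pauses

def maybe_pause(i, sleep=time.sleep):
    """Sleep every BATCH records; skip the pause on record 0."""
    if i > 0 and i % BATCH == 0:
        sleep(RATE_PAUSE)
        return True
    return False
```

Note the `i > 0` guard: testing only `i % BATCH == 0` would also fire on the very first record and waste a pause before any work is done.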
It took about 90 minutes to match a file of 25,000 records. I’d occasionally get an error message that the API key was bad for a given record; the error would be recorded and the script continued. It’s likely that there are illegal characters in the input fields for the address that end up creating a URL where the key parameter can’t be properly interpreted. I thought the results were pretty good; beyond streets it was able to recognize landmarks like large parks and return matched coordinates with relevant error messages (example below). Most of the flops were, not surprisingly, due to missing borough codes or house numbers.

To use this code you’ll need to sign up for an NYC Developer API account, and then you can request a key for the NYC Geoclient service. Store the key in a text file in the same folder as the script. I’m also storing inputs and outputs in the same folder, but with a few functions from the os module you can manipulate paths and change directories. If I get time over the winter break I may try rewriting to incorporate this, plus functions to simplify the loops. An alternative to the API would be to download the LION street network geodatabase and set up a local address locator in ArcGIS Pro, which might be worth doing if you had tons of matches to do. I quickly got frustrated with the ArcGIS documentation and, after a number of failed attempts, opted to use the Geoclient instead.
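As a sketch of that path handling, pathlib (a tidier alternative to os for this) can build the input and output locations; the folder names here are hypothetical:

```python
from pathlib import Path

# Hypothetical layout: inputs and outputs in subfolders of the working
# directory, with the output name derived from the input name.
base = Path.cwd()
addfile = base / "input" / "parking_nov2021_nyc.csv"
matchedfile = base / "output" / (addfile.stem + "_output.csv")
```

Calling `matchedfile.parent.mkdir(parents=True, exist_ok=True)` before writing would create the output folder if it doesn’t exist.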
"""
Match addresses to NYC Geoclient using house number, street name, and borough
Frank Donnelly / GIS and Data Librarian / Brown University
11/22/2021 - Python 3.7
"""
import requests, csv, time
#Variables
addfile='parking_nov2021_nyc.csv' #Input file with addresses
matchedfile=addfile[:-4]+'_output.csv' #Output file with matched data
keyfile='nycgeo_key.txt' #File with API key
start_idx=0 #If program breaks, change this to pick up with record where you left off
#Counting from 0, positions in the CSV that contain the address info
hous_idx=23
st_idx=24
boro_idx=21
inter_idx=25
base_url='https://api.nyc.gov/geo/geoclient/v1/'
def get_time():
    time_now = time.localtime() # get struct_time
    pretty_time = time.strftime("%m/%d/%Y, %H:%M:%S", time_now)
    return pretty_time
print('*** Process launched at', get_time())
#Read api key in from file
with open(keyfile) as key:
    api_key=key.read().strip()
records=[]
with open(addfile,'r') as infile:
    reader = csv.reader(infile)
    header = next(reader) # Capture column names as separate list
    for row in reader:
        records.append(row)
# Fields returned by the API to capture
# https://maps.nyc.gov/geoclient/v1/doc
fields=['message','message2','houseNumber','firstStreetNameNormalized',
'uspsPreferredCityName','zipCode','longitude','latitude','xCoordinate',
'yCoordinate']
header.extend(fields)
header.append('match_result')
datavals=len(fields)-2 # Number of fields that are not messages
counts={'address match':0, 'intersection match':0,
'failed address':0, 'failed intersection':0,
'error':0}
print('Finished reading data from', addfile)
print('*** Geocoding process launched at',get_time())
for i,v in enumerate(records[start_idx:]):
    try:
        data_url = f'{base_url}address.json?houseNumber={v[hous_idx]}&street={v[st_idx]}&borough={v[boro_idx]}&subscription-key={api_key}'
        response=requests.get(data_url)
        package=response.json()
        # If an address is returned, continue
        if 'address' in package:
            result=package['address']
            # If longitude is returned, grab data
            if 'longitude' in result:
                for f in fields:
                    item=result.get(f,'')
                    v.append(item)
                v.append('address match')
                counts['address match']=counts['address match']+1
            # If there was no house number, try street intersection match instead
            elif 'message' in result and result['message']=='INPUT CONTAINS NO ADDRESS NUMBER' and v[inter_idx] not in ('',None):
                try:
                    data_url = f'{base_url}intersection.json?crossStreetOne={v[st_idx]}&crossStreetTwo={v[inter_idx]}&borough={v[boro_idx]}&subscription-key={api_key}'
                    response=requests.get(data_url)
                    package=response.json()
                    # If an intersection is returned, continue
                    if 'intersection' in package:
                        result=package['intersection']
                        # If longitude is returned, grab data
                        if 'longitude' in result:
                            for f in fields:
                                item=result.get(f,'')
                                v.append(item)
                            v.append('intersection match')
                            counts['intersection match']=counts['intersection match']+1
                        # Intersection match fails, append messages and blank values
                        else:
                            v.append(result.get('message',''))
                            v.append(result.get('message2',''))
                            v.extend(['']*datavals)
                            v.append('failed intersection')
                            counts['failed intersection']=counts['failed intersection']+1
                    # Error returned instead of intersection
                    else:
                        v.append(package.get('message',''))
                        v.append(package.get('message2',''))
                        v.extend(['']*datavals)
                        v.append('error')
                        counts['error']=counts['error']+1
                        print(package.get('message',''))
                        print('Geocoder error at record',i,'continuing the matching process...')
                except Exception as e:
                    print(str(e))
            # Address match fails, append messages and blank values
            else:
                v.append(result.get('message',''))
                v.append(result.get('message2',''))
                v.extend(['']*datavals)
                v.append('failed address')
                counts['failed address']=counts['failed address']+1
        # Error is returned instead of address
        else:
            v.append(package.get('message',''))
            v.append(package.get('message2',''))
            v.extend(['']*datavals)
            v.append('error')
            counts['error']=counts['error']+1
            print(package.get('message',''))
            print('Geocoder error at record',i,'continuing the matching process...')
        if i>0 and i%2000==0: # Skip the pause on the very first record
            print('Processed',i,'records so far...')
            time.sleep(15)
    except Exception as e:
        print(str(e))
# First attempt, write to new file, but if break happened, append to existing file
if start_idx==0:
    wtype='w'
else:
    wtype='a'
end_idx=start_idx+i+1 # i is the offset of the last record processed; +1 so the slice includes it
with open(matchedfile,wtype,newline='') as outfile:
    writer = csv.writer(outfile, delimiter=',', quotechar='"',
                        quoting=csv.QUOTE_MINIMAL)
    if wtype=='w': # New file gets a header row; an appended file does not
        writer.writerow(header)
    writer.writerows(records[start_idx:end_idx])
print('Wrote',i+1,'records to file',matchedfile)
print('Final record written was number',i,':\n',v)
for k,val in counts.items():
    print(k,val)
print('*** Process finished at',get_time())