Map of Ukraine Wheat Exports in 2021

Import and Export Data for Countries: Grain from Ukraine

I’ve been receiving more questions about geospatial data sources as the semester draws to a close. I’ll describe some sources that I haven’t used extensively before in the next couple of posts, beginning with data on bilateral trade: imports and exports between countries. We’ll look at the IMF’s Direction of Trade Statistics (DOTS) and the UN’s COMTRADE database. Both sources provide web-based portals, APIs, and bulk downloading. I’ll focus on the portals.

IMF Direction of Trade Statistics

IMF DOTS provides monthly, quarterly, and annual import and export data, represented as total dollar values for all goods exchanged. The annual data goes back to 1947, while the monthly / quarterly data goes back to 1960. All countries that are part of the IMF are included, plus a few others. Data for exports are published on a Free On Board (FOB) price basis, while imports are published on a Cost, Insurance, Freight (CIF) price basis. Here are definitions for each term, quoted directly from the OECD’s Statistical Glossary:

The f.o.b. price (free on board price) of exports and imports of goods is the market value of the goods at the point of uniform valuation, (the customs frontier of the economy from which they are exported). It is equal to the c.i.f. price less the costs of transportation and insurance charges, between the customs frontier of the exporting (importing) country and that of the importing (exporting) country.

The c.i.f. price (i.e. cost, insurance and freight price) is the price of a good delivered at the frontier of the importing country, including any insurance and freight charges incurred to that point, or the price of a service delivered to a resident, before the payment of any import duties or other taxes on imports or trade and transport margins within the country.

OECD Glossary of Statistical Terms

There are a few different ways to browse and search for data. Start with the Data Tables tab at the top, and Exports and Imports by Areas and Countries. The default table displays monthly exports by region and country for the entire world (you could switch to imports by selecting the Imports CIF tab beside the Exports FOB tab). The Calendar dropdown allows you to change the date range and frequency, while the Country dropdown lets you select a specific region or country. In the example below, I’ve changed the calendar from months to years, and the country to Ukraine. The table now depicts the total US dollar value of exports and imports between Ukraine and all other countries. The Export button at the top allows you to save the report in a number of formats, Excel being the most data-friendly option.

IMF DOTS Basic Report – Total Value of Exports from Ukraine, Last Five Years

While this is the quickest option, it comes with some downsides; the biggest one is that there are no unique identifiers for the countries, which is important if you wanted to join this table to a GIS vector file for mapping, or another country-level table in a database.

A better approach is to return to the home page and use the Query tab, which allows you to get a unique identifier and filter out countries and regions that are not of interest.

DOTS Query Tab
  1. Under Columns, select the time frame and interval. For example, check Years for Frequency at the top, and change the dropdowns at the bottom from Months to Years. From -5 to 0 would give you the last five years in ascending order.
  2. Rows allows you to filter out countries or regions that you don’t want to see in the results. You can also change the attribute that is displayed. Once the menu is open, right click in an empty area and choose Attribute. Here you can choose a variant country name, or an ISO country code. ISO codes are commonly used for uniquely identifying countries.
  3. Indicator lets you choose Exports (FOB), Imports (CIF or FOB), or Trade Balance, all in US dollars.
  4. Counterpart country is the country or region that you want to show trade for, such as Ukraine in our previous example.
  5. The tabs along the top allow you to produce graphs instead of a table (View – Table), to pivot the table (Adjust), and calculate summaries like sums or averages (Advanced).
  6. Export to produce an Excel file. By choosing the ISO codes you’ll lose the country names, but you can join the result to another country data table or shapefile and grab the names from there.
Modify Time
Modify Rows – Country – Change Attribute
DOTS Modified Table to Export: Total Value of Exports from Ukraine Last Five Years

UN COMTRADE

If you want data on the exchange of specific goods and services, quantities in addition to dollar values, and exchanges beyond simple imports and exports, then the UN’s COMTRADE database will be your source. You need to register to download data, but you can generate previews without having to log in. There is an extensive wiki that describes how to use the different database tools, and summaries of technical terms that you need to know for extracting and interpreting the data. You’ll need some understanding of the different systems for classifying commodities and goods. Your options (the links that follow lead to documentation and code lists) are: the Harmonized System (HS), the Standard International Trade Classification (SITC), and the Broad Economic Categories (BEC). What’s the difference? Here are some summaries, quoted directly from a UN report on the BEC:

The HS classification is maintained by the World Customs Organization. Its main purpose is to classify goods crossing the border for import tariffs or for application of some non-tariff measures for safety or health reasons. The HS classification is revised on a five-year cycle (p. 18)

The original SITC was designed in the 1950s as a tool for collection and dissemination of international merchandise trade statistics that would help in establishing internationally comparable trade statistics. By its introduction in 1988, the HS took over as collection and dissemination tool, and SITC was thereon used mostly as an analytical tool. (p. 19)

The Classification by Broad Economic Categories (BEC) is an international product classification. Its main purpose is to provide a set of broad product categories for the analysis of trade statistics. Since its adoption in 1971, statistical offices around the world have used BEC to report trade statistics in a concise and meaningful way (p. iii). The broad economic categories of BEC include all subheadings of the HS classification. Therefore, the total trade in terms of HS equals the total trade of the goods side of BEC. (p. 18)

Classification by Broad Economic Categories Rev 5 (2018)

In short, go with the BEC if you’re interested in high-level groupings, or the HS if you need detailed subdivisions. The SITC would be useful if you need to go further back in time, or if it facilitates looking at certain subdivisions or groupings that the other systems don’t capture.

From COMTRADE’s homepage, I suggest leaving the defaults in place and doing a basic, preliminary search for all global exports for the most recent year, so you can see basic output on the next screen. Then you can apply filters for a narrower search.

For example, let’s look at annual exports of wheat from Ukraine to other countries. Under the HS filter, remove the TOTAL code. Start typing wheat, and you’ll see various product categories: 6-digit codes are the most specific, while 4-digit codes are broader groups that encapsulate the 6-digit categories. We’ll choose wheat and meslin 1001. We’ll select Ukraine as the Reporter (the country that supplied the statistics and represents the origin point), and for the 1st partner we’ll choose All to get a list of all countries that Ukraine exported wheat to. The 2nd partner country we’ll leave as World (alternatively, you would add specific countries here if you wanted to know if there were intermediary nations between the origin and destination).

UN COMTRADE Refine Search with Filters

Hit Preview to see the results. You can click on a heading to sort by dollar value, weight, or country name. Like IMF DOTS, UN COMTRADE measures dollar amounts of exports as FOB and imports as CIF. At this point, you would need to log in to download the data as a CSV (creating an account is free). You would also need to be logged in if you generate an extract with more than 500 records; otherwise the results will be truncated. You could always copy and paste data for shorter extracts directly from the screen to a spreadsheet, but you wouldn’t get any of the extra metadata fields that come with the download, like ISO country codes and the classification codes for goods and merchandise.

COMTRADE Filtered Results – Exports of Wheat and Meslin from Ukraine 2021
Data Exported from COMTRADE to CSV with Identifiers

Mapping

To map data from either source, you need a table with one row for each country, columns of attributes, and a column containing the ISO country code to serve as a unique identifier. Save the data table in an Excel file or as a table in a database. Download a country shapefile from Natural Earth. Add the shapefile and data table to a project and join them using the ISO code. Natural Earth shapefiles have several different ISO code columns that represent nations, sovereigns, and parent-child relationships; be sure you select the right one. Data table records that represent regions or groupings of countries (i.e. the EU, ASEAN, sums of smaller countries per continent not enumerated, etc.) will fall out of the dataset, as they won’t have a matching feature in the country shapefile. The map at the top of this post was created in QGIS, using COMTRADE data and Natural Earth boundaries.
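If you prefer to script the join, it’s a one-liner in geopandas. Here’s a minimal sketch with hypothetical file and column names; check which ISO column in your Natural Earth layer actually matches your extract:

```python
import geopandas as gpd
import pandas as pd

# Natural Earth Admin 0 countries shapefile, downloaded and unzipped locally
countries = gpd.read_file("ne_110m_admin_0_countries.shp")

# Trade extract saved as a CSV with an ISO 3166-1 alpha-3 code column (hypothetical names)
trade = pd.read_csv("ukraine_wheat_exports_2021.csv")

# Join on the ISO alpha-3 code; Natural Earth has several ISO columns
# (ISO_A3, ADM0_A3, etc.), so pick the one that matches your table
joined = countries.merge(trade, left_on="ISO_A3", right_on="iso3", how="left")

# Region / aggregate records in the trade table (EU, ASEAN, etc.) simply won't match
joined.to_file("wheat_exports.gpkg", layer="exports_2021", driver="GPKG")
```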

Topo Claymont

Digital USGS Historic Topographic and Scientific Investigation Maps

This semester we launched a project to inventory our USGS topographic map collection. Our holdings include tens of thousands (probably over 100,000) of these maps that depict the nation’s physical terrain and built environment in great detail. One of my former students wrote a Python program using the tkinter module to create a GUI, which we’re using to filter a list of published maps in a SQLite database to match ones that we have in hand. Here’s a short guide that documents our process.
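The GUI itself is custom, but the heart of the workflow is just parameterized queries against that SQLite list of published maps. A rough sketch of the idea, with a hypothetical table and column names (our actual schema differs):

```python
import sqlite3

# Hypothetical database and schema -- the real inventory uses its own field names
con = sqlite3.connect("usgs_topos.sqlite")
cur = con.cursor()

# Find all published 1:24,000 maps for a given quadrangle name and state
cur.execute(
    """
    SELECT map_name, state, map_scale, date_on_map, print_year
    FROM published_maps
    WHERE map_name = ? AND state = ? AND map_scale = ?
    ORDER BY date_on_map
    """,
    ("Claymont", "DE", 24000),
)
for row in cur.fetchall():
    print(row)
con.close()
```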

The list we’re using as our base table is what powers USGS topoView, which allows you to browse and download over 200,000 historic topos (1880 to 2006) that have been digitized and georeferenced. The application also includes maps produced from 2009 forward that are part of the newer US Topo project; these maps are created on an ongoing basis by pulling together a number of existing government data sources (unlike the historic maps, which were created by manual field surveys and updated over time using aerial photographs and satellite imagery).

You can search topoView using the name of a location or quadrangle (the grid cell that represents the area of each map, named after the most prominent feature in that area) to find all available maps for that location. There’s a set of filters that allows you to focus on the Historic Topographic Map Collection (HTMC) versus the US Topo Collection (2009 to present), or on a specific scale. Choose a scale and zoom in, and you’ll see the grid cells for that series so you can identify map coverage. The 24k scale is the most familiar series; as the largest scale / smallest area maps that the USGS produced, it provides the most detail and covers every state and US territory. Each map covers an area of 7.5 x 7.5 minutes (a degree contains 60 minutes), and an inch on these maps represents 2,000 feet. This scale was introduced in the late 1940s, and replaced both the 63k scale map (a 15 x 15 minute map where 1 inch = 1 mile) that was the previous standard, and the less common 48k scale.

USGS topoView application

There are also smaller scale maps, which cover larger areas. The 100k series was introduced in the mid 1970s and covers the lower 48 states and Hawaii. Each map covers an area of 30 x 60 minutes and uses metric units (1 inch = 1.6 miles). The 250k series was introduced in the 1940s by the US Army Map Service and was eventually taken over by the USGS. These maps include all 50 states, cover an area of 1 x 2 degrees, and use imperial units (1 inch = 4 miles). There are about 1,800 quads for the 100k series and only 900 or so for the 250k, versus over 60,000 for the 24k series.
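The scale arithmetic is easy to verify yourself: the denominator is the number of ground units per map unit, so dividing by 12 gives feet per map inch and dividing by 63,360 gives miles per map inch. A quick check in Python:

```python
# What one map inch represents on the ground for each USGS scale mentioned above
for denom in (24_000, 63_360, 100_000, 250_000):
    feet = denom / 12        # ground inches converted to feet
    miles = denom / 63_360   # ground inches converted to miles
    print(f"1:{denom:,} -> 1 inch = {feet:,.0f} ft ({miles:.2f} mi)")
```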

Once you search for an area or click on a quad, you’ll see all the maps available in that area over time. Applying the scale filter shows you just maps at that scale, plus some similar but odd-scale maps that are not numerous enough to get their own filter. The predominant year listed for each record is the “map year”, which is when field work was done to either create the map or substantively update it. There’s also an edition or “print year” that indicates when the map was printed. If you look at the metadata (use the info button) or preview the map, there may be an edit or photo revision year, indicating whether the map was updated back at headquarters using air photos or imagery. The image below illustrates where you can find this information on a standard 24k scale map.

Collar of USGS 24k Topo Map
1: Map Scale 2: Quad Name 3: Map Year and Revision Year 4: Print Year

Clicking on the thumbnail of the map in the results gives you a quick full-screen preview. There are several download options, including a JPEG if you want a small compressed image, or a GeoTIFF if you want a lossless format with the best resolution and plan to use it in GIS software as a raster layer.
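If you go the GeoTIFF route and want to inspect the file before pulling it into a project, a quick look with rasterio works; the file name below is just a placeholder for whatever topoView gives you:

```python
import rasterio

# Open the downloaded GeoTIFF and inspect its georeferencing
with rasterio.open("historic_topo_quad.tif") as src:
    print(src.crs)       # coordinate reference system of the scanned map
    print(src.bounds)    # geographic extent
    print(src.count)     # number of bands (scanned topos are typically RGB)
    band1 = src.read(1)  # read the first band as a NumPy array
```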

The changes you can see over time on these maps can be striking, illustrating the suburban sprawl of the 20th century. Consider the snippets from a 24k map of the Orlando West, Florida quadrangle below.

Orlando West 1956
Orlando West 1980

While many people are familiar with the topographic series, the USGS also publishes a number of other map and report series that cover topics like hydrography, oil and gas exploration, mining, land use and land cover, and special scientific investigations. They have digitized (but not georeferenced) many of these maps, from the 1950s to present. You can browse through a list of all these publications, or you can search across them in the Publications Warehouse. If you search, try the Advanced Search and specify publication type and subtype as filters. Most of the maps are classified as publication type: Report, and subtype: USGS Numbered Series.

For example, the IMAP series includes special investigation maps that cover tectonic, geologic, mineral, topographic, and bathymetric maps of specific small or regional areas in the US. They also include maps of Antarctica, special investigations in other countries, the moon, and other planets and moons. Every report / map has a landing page with a permanent URL and DOI that uses the series number of the map, with links to a PDF of the map as well as a Dublin Core metadata record. For example, here’s a Geologic Map of Io from 1992, part of the IMAP series.

Portion of a Geologic Map of the Jovian Moon Io

This is great, as you can use these records and metadata for building other interactive finding aids, and can link directly to individual maps. The USGS has created different portals for accessing subsets of these materials, such as this special topics page for identifying different planetary maps in the SIM and IMAP series.

Some other gems I’ve discovered stashed away in the publications warehouse: a poster of map projections (with a flip-side portrait of Gerardus Mercator) that should be familiar to most 1990s university geography students; it was often hung in classrooms and provided as an insert in cartography textbooks. Also, a digitized copy of the book Maps for America. Originally published for the USGS centenary in 1979, this book provides a comprehensive history and overview of the topographic map series. The scanned copy is the 3rd edition, printed in 1987. If you suddenly find yourself in the position of having to curate a hundred thousand 20th century topo maps, there is no better guide than this book.



IPUMS CPS Table

Creating Geographic Estimates with the IPUMS CPS Online Data Analysis System

Introduction

In this post I’ll demonstrate how to use the IPUMS CPS Online Data Analysis System to generate summary data from the US Census Bureau’s Current Population Survey (CPS). The tool employs the Survey Documentation and Analysis system (SDA) created at UC Berkeley.

The CPS is a monthly stratified sample survey of 60k households. It includes a wide array of statistics, some captured routinely each month, others at various intervals (such as voter registration and participation, captured every November in even-numbered years). The same households are interviewed over a four-month period, then rotated out for four months, then rotated back in for a final four months. Given its consistency, breadth, high response rate, and accuracy (interviews are conducted in person and over the phone), researchers use the CPS microdata (individual responses to surveys that have been de-identified) to study demographic and socio-economic trends among and between different population groups. It captures many of the same variables as the American Community Survey, but also includes a fair number that the ACS does not.

I think the CPS is used less often by geographers, as the sample size is too small to produce reliable estimates below the state or metropolitan area levels. Students and researchers who are only familiar with working with summary data often don’t use it either, since generating your own estimates from microdata records can be time consuming.

The IPUMS project provides an online analyzer that lets you generate summary estimates from the CPS without having to handle the individual sample records. I’ve used it in undergraduate courses with students who want to generate data extracts at the regional or state level and are interested in variables not collected in the ACS, such as generational households for immigrants. The online analyzer doesn’t include the full CPS, only the data collected in March as part of the core CPS series and the Annual Social & Economic Supplement (ASEC). It includes data from 1962 to the present.

To access any of the IPUMS tools, you must register and create an account, but registration is free for non-commercial use. They provide ample documentation; I’ll give you the highlights for generating a basic geographic extract in this post.

Creating a Basic Geographic Summary Table

Once you launch the tool, the first thing you need to do is select some variables. You can use the drill-down folder menus at the bottom left, but I find it’s easier to hit the Codebook button and peruse the alphabetical list. Let’s say we want to generate state-level estimates for nativity for a recent year. If we go into the codebook and look up nativity, we see it captures foreign birthplace or parentage. Also in the list is a variable called statefip, which holds the two-digit codes that uniquely identify every state.

Codebook for Nativity – Foreign Birthplace or Parentage

Back on the main page for the Analyzer in the tables tab, we provide the following inputs:

  1. Row represents our records or observations. We want a record for every state, so we enter the variable: statefip.
  2. Column represents our attributes or variables. In this example, it’s: nativity.
  3. Selection filter is used to specify that we want to generate estimates from a subset of all the responses. To generate estimates for the most recent year, we enter year as the variable and specify the filter value in parentheses: year(2020). If we didn’t specify a year, the program would use all the responses back to 1962 to generate the estimates.
  4. Weight is the value that’s used to weight the samples to generate the estimates. The supplemental person weight sdawt is what we’ll use, as nativity is measured for individual persons. There is a separate weight for household-level variables.
  5. Under the output option dropdown, we change the Percentaging option from column to row. Row calculates the percentage of the population in each nativity category within the state (row). The column option would provide the percentage of the population in each nativity category between the states. In this example, the row option makes more sense.
  6. For the confidence interval, check the box to display it, as this will help us gauge the precision of the estimate. The default level is 95%; I often change this to 90% as that’s what the American Community Survey uses.
  7. At the bottom of the screen, run the table to see the result.
CPS Online Analyzer – Generate Basic Extract for Nativity by State for a Single Year

In the result, the summary of your parameters appears at the top, with the table underneath. At the top of the table, the Cells contain legend lists what appears in each of the cells in order. In this example, the row percent is first, in bold. For the first cell in Alabama: 91.0% of the total population have parents who were both born in the US, the confidence interval is 90.2% to 91.8% (and we’re 90% confident that the true value falls within this range), and the large number at the bottom is the estimated number of cases that fall in this category. Since we filtered our observations for one year, this represents the total population in that category for that year (if we check the totals in the last column against published census data, they are roughly equivalent to the total population of each state).

Output Table – Nativity by State in 2020

Glancing through the table, we see that Alabama and Alaska have more cases where both parents are born in the US (91.0% and 85.1%) relative to Arizona (68.7%). Arizona has a higher percentage of cases where both parents are foreign born, or the persons themselves are foreign-born. The color coding indicates the Z value (see bottom of the table for legend), which indicates how far a variable deviates from the mean, with dark red being higher than the expected mean and dark blue being lower than expected. Not surprisingly, states with fewer immigrants have higher than average values for both parents native born (Alabama, Alaska, Arkansas) while this value is lower than average for more diverse states (Arizona, California).

To capture the table, you could highlight / copy and paste the screen from the website into a spreadsheet. Or, if you return to the previous tab, at the bottom of the screen before running the table, you can choose the option to export to CSV.

Variations for Creating Detailed Crosstabs

To generate a table to show nativity for all races:

Input Parameters – Generate Tables for Nativity by State for each Race

In the control box, type the variable race. The control box will generate separate tables in the results for each category in the control variable. In this case, one nativity table per racial group.

To generate a table for nativity specifically for Asians:

Input Parameters – Generate Table for Nativity by State for Asians

Remove race from the control box, and add it in the filter box after the year. In parentheses, enter the race code for Asians; you can find this in the codebook for race: year(2020), race(651).

Now that we’re drilling down to smaller populations, the reliability of the estimates is declining. For example, in Arkansas the estimate for Asians for both parents foreign born is 32.4%, but the value could be as low as 22.2% or as high as 44.5%. The confidence interval for California is much narrower, as the Asian population is much larger there. How can we get a better estimate?

Output Table – Nativity by State for Asians in 2020

Generate a table for nativity for Asians over a five year period:

Input Parameters – Generate Table for Nativity by State for Asians for 5-year Period

Add more years in the year filter, separated by commas. In this version, our confidence intervals are much narrower; the estimate for Asians with both parents foreign born in Arkansas is now 19.2%, with a range of 14.1% to 25.6%. By increasing the sample pool with more years of data, we’ve increased the precision of the estimate at the cost of accepting a broader time period. One big caveat here: the Weighted N represents the total number of estimated cases, but since we are looking at five years of data instead of one, it no longer represents a total population value for a single year. If we wanted an average annual estimate for this 5-year time period (similar to what the ACS produces), we’d have to divide each of the estimates by five for a rough approximation. You can download the table and do these calculations in a spreadsheet or stats program, as sketched below.

Output Table – Nativity by State for Asians between 2016-2020 (weighted N = estimate of total cases over 5 years, not an average annual value)
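Here’s what that rough adjustment might look like in pandas after exporting the SDA result to CSV; the file and column names are placeholders, since the actual export layout depends on how you set up your table:

```python
import pandas as pd

# Exported SDA table (hypothetical file and column names)
tab = pd.read_csv("nativity_by_state_asian_2016_2020.csv")

# The weighted N columns hold total estimated cases over the 5-year pool;
# dividing by 5 approximates an average annual estimate, similar to an ACS 5-year value
weight_cols = [c for c in tab.columns if c.startswith("weighted_n")]
tab[weight_cols] = tab[weight_cols] / 5

tab.to_csv("nativity_by_state_asian_avg_annual.csv", index=False)
```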

You can also add control variables to a crosstab. For example, if you added sex as a control variable, you would generate separate female and male tables for nativity by state for the Asian population over a given time period.

Example of a Profile Table

If we want to generate a profile for a given place, as opposed to a comparison table, we can swap the variables in the rows and columns. For example, to see nativity for all Hispanic subgroups within California for a single year:

Input Parameters – Generate a Profile Table for California of Nativity by Hispanic Groups in 2020

In this case, you could opt to calculate percentages by column instead of row, if you wanted to see percent totals across groups for the categories. You could show both in the same table, but I find it difficult to read. In this last example, note the large confidence intervals and thus lower precision for smaller Hispanic groups in California (Cuban, Dominican) versus larger groups (Mexican, Salvadoran).

Output Table – Nativity by Hispanic Groups in California 2020 (confidence interval is much larger for smaller groups)

In short – this is a handy tool if you want to generate some quick estimates and crosstabs from the CPS without having to download and weight microdata records yourself. You can create geographic data for regions, divisions, states, and metro areas. Just be mindful of confidence intervals as you drill down to smaller subgroups; you can aggregate by year, geography, or category / group to get better precision. What I’ve demonstrated is the tip of the iceberg; read the documentation and experiment with creating charts, statistical summaries, and more.

BEA Population Change Map

Population and Economic Time Series Data from the BEA

In this post, I’ll demonstrate how to access and download multiple decades of annual population data for US states, counties, and metropolitan areas in a single table. Last semester, I was helping a student in a GIS course find data on tuberculosis cases by state and metro area that stretched back several decades. In order to make meaningful comparisons, we needed to calculate rates, which meant finding an annual time series for total population. Using data directly from the Census Bureau is tough going, as they don’t focus on time series and you’d have to stitch together several decades of population estimates. Metropolitan areas add more complexity, as they are modified at least a few times each decade.

In searching for alternatives, I landed at the Bureau of Economic Analysis (BEA). Since part of their charge is studying the economy over time, they gather data from the Census Bureau, Bureau of Labor Statistics, and others to build time series, and they update them as geography and industrial classification schemes change. They publish national, state, metropolitan area, and county-level GDP, employment, income, wage, and population tables that span multiple decades. Their economic profile table for metros and counties covers 1969 to present, while the state profile table goes back to 1958. Metropolitan areas are aggregates of counties. As metro boundaries change, the BEA normalizes the data, adjusting the series by taking older county-level data and molding it to fit the most recent metro definitions.

Finding the population data was a bit tricky, as it is embedded as one variable in the Economic Profile table (identified as CAINC30) that includes multiple indicators. Here’s the path to get it:

  • From the BEA website, choose Tools – Interactive Data.
  • From the options on the next page, choose Regional from the National, Industry, International or Regional data options. There’s also a link to a video that illustrates how to use the BEA interactive data tool.
  • From the Regional Data page, click “Begin using the data” (but note you can alternatively “Begin mapping the data” to make some basic web maps too, like the one in the header of this post).
  • On the next page are categories, and under each category are data tables for specific series. In this case, Personal Income and Employment by County and Metropolitan Area was what I wanted, and under that the Economic Profile CAINC30 table (states appear under a different heading, but there’s a comparable Economic Profile table for them, SAINC30).
  • On the multi-screen table builder, you choose a type of geography (county or different metro area options), and on the next tab you can choose individual places, hold down CTRL and select several, or grab them all with the option at the top of the dropdown. Likewise for the Statistic, choose Population (persons), or grab a selection, or take all the stats. Under the Unit of Measure dropdown, Levels gives you the actual statistic, but you can also choose percent change, index values, and more. On the next tab, choose your years.
  • On the final page, if your selection is small enough you can preview the result and then download. In this case it’s too large, so you’re prompted to grab an Excel or CSV file to download.

And there you have it! One table with 50+ years of annual population data, using current metro boundaries. The footnotes at the bottom of the file indicate that the latest years of population data are based on the Census Bureau’s most recent vintage of population estimates. Once the final intercensal estimates for the 2010s are released, the data for that decade will be replaced a final time, and the estimates from the 2020s will be updated annually as each new vintage is released until we pass the 2030 census. Their documentation is pretty thorough.

BEA 5-decade Time Series of Population Data by Metro Area
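If you end up with the big CSV rather than a trimmed preview, a little pandas gets the population rows into a usable time series. This is a sketch under my own assumptions about the file layout (the column names and footer length may differ; check the header and footnotes of your download):

```python
import pandas as pd

# CAINC30 download: skip the footnote lines at the bottom (count may vary)
bea = pd.read_csv("CAINC30_download.csv", encoding="latin-1",
                  skipfooter=3, engine="python")

# Keep only the population rows (one per county or metro area)
pop = bea[bea["Description"].str.contains("Population", na=False)]

# Reshape the year columns into a long series: one row per place per year
id_cols = ["GeoFIPS", "GeoName", "Description", "Unit"]
year_cols = [c for c in pop.columns if c.strip().isdigit()]
pop_long = pop.melt(id_vars=id_cols, value_vars=year_cols,
                    var_name="year", value_name="population")
```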

The Interactive Data table approach allows you to assemble your series step by step. If you want to skip all the clicking, you can grab everything in one download (all years for all places for all stats in a given table). Of course, that means some filtering and cleaning post-download to isolate what you need. There’s also an API, and several other data series and access options to explore. The steps for creating the map that appears at the top of this post were similar to the steps for building the table.

UN ICSC Retail Price Index Map

UN Retail Price Index Time Series

We recently launched our fledgling geodata portal on GitHub for the open datasets we’ll create in our new lab. In the spring we carved out a space on the 11th floor of the Sciences Library at Brown, which we’ve christened GeoData@SciLi, a GIS and data consultation and work space. We’ll be doing renovations on both the webspace and workspace over the summer.

Our inaugural dataset was created by Ethan McIntosh, a senior (now graduate) who began working with me this spring. The dataset is the United Nations International Civil Service Commission’s (UN ICSC) Retail Price Indices with Details (RPID). The index measures the cost of living based on several categories of goods and services in duty stations around the world. It’s used to adjust the salaries of the UN’s international staff relative to UN headquarters in New York City (an index value of 100 = the cost of living in New York). The data is updated six times a year and published in an Excel spreadsheet that contains a macro for looking up the value of each duty station via a dropdown menu. The UN ICSC makes the data public by request; you register and are granted access to download the data in PDF and Excel format, in files packaged one month / year at a time.

We were working with a PhD student in economics who wanted to construct a time series of this data. Ethan wrote a Python script to aggregate all of the files from 2004 to present into a single CSV; the actual values for each country / duty station were stored in hidden cells that the macro pulled from, and he was able to extract them directly from those cells. He parsed the data into logical divisions, and added the standard 3-letter ISO 3166 country code to each duty station so that each record now has a unique place identifier. His script generates three outputs: a basic CSV of the data in separate month / year files, a “long” (aka flat) time series file where each record represents a specific duty station and retail index category or weight for a given month and year, and a “wide” time series file where the category / weight has been pivoted to columns, so each record represents all values for a duty station for a given month / year. He’s written the program to process and incorporate additional files as they’re published.
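The long-to-wide step is the kind of reshaping pandas’ pivot_table handles directly. A simplified sketch with made-up column names (not the actual field names in Ethan’s files):

```python
import pandas as pd

# Long file: one row per duty station / category / month-year combination
long_df = pd.read_csv("rpid_long.csv")

# Pivot the category column into columns, so each row holds all index
# categories for one duty station in one month / year
wide_df = long_df.pivot_table(index=["iso3", "duty_station", "year", "month"],
                              columns="category", values="index_value").reset_index()

wide_df.to_csv("rpid_wide.csv", index=False)
```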

While the primary intention was to study this data as a time series in a statistical analysis, it can also be used for geospatial analysis and mapping. Using the wide file, I created the map in the header of this post, which depicts the total retail index for February 2022 for each country, where the value represents the duty station within the country (usually the capital city). I grabbed some boundaries from Natural Earth and joined the data to it using the ISO code. I classified the data using natural breaks, but manually adjusted the top level category to include all countries with a value greater than or equal to the base value of 100.

There were only five duty stations that were more expensive than New York, with values between 102 and 124: Tokyo, Ashkhabad (Turkmenistan), Singapore, Beirut, and Hong Kong. Beijing and Geneva were equivalent in price at 100. The least expensive stations with values between 52 and 69 were: Caracas (Venezuela), Tripoli, Damascus, Ankara (Turkey), Bucharest (Romania), Mbabane (Eswatini – formerly Swaziland), and Sofia (Bulgaria). There appears to be regional clustering of like values, although I didn’t run any tests. The station in the US that’s measured relative to NYC is Washington DC (index value of 89).

The final datasets and code used to generate them are available on GitHub, and we’ll update them at least once, if not a couple of times, a year. We are not providing the original month / year macro spreadsheets; if you want those you should register with the UN ICSC and access them there. If you’re using our data files, you should still register with them, as they would like to be aware of how their data is being used.

We will post additional projects, datasets, and code in individual repos as we create them, linked to from our main page. I’m working on creating a basic metadata profile for our lab, so we’ll provide structured metadata for each of our datasets in the near future.

Hurricanes 2021

GIS Data for US Coastal Storms and Floods

Over the course of this academic year I’ve helped many students find GIS data related to coastal storms and flooding in the US. There’s a ton of data available, particularly from NOAA, but there are so many projects and initiatives that it can be tough to find what you’re looking for. So I’ll share a few key resources here.

NOAA’s DigitalCoast is a good place to start; it’s a catalog of federal, state, and US territory projects and websites that provide both spatial and non-spatial datasets related to coastal storms and flooding. You can filter by place and data type; there are even a few global sources. Most of the projects I mention below are cataloged there.

Given the size of many of these datasets, the ArcGIS File Geodatabase is often used for packaging and distribution. Once you’ve downloaded and unzipped one, it looks like a folder with lots of subfolders and files. If you’re an ArcGIS user, use the Catalog pane to browse your file system and add a connection to the database / folder to access its contents. If you’re a QGIS user, open the Data Source Manager and on the Vector tab change the source type from File to Directory; in the Type dropdown choose OpenFileGDB, then browse and select the database, which appears as a folder. Once you hit the Add button, you’ll be prompted to choose the features in the database that you wish to add to the project.

Adding a File Geodatabase in QGIS
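If you’d rather script it, the same OpenFileGDB driver is available through Python. A minimal sketch (the geodatabase path and layer name are placeholders):

```python
import fiona
import geopandas as gpd

gdb = "downloaded_data.gdb"   # the unzipped geodatabase folder

# List the feature classes stored inside the geodatabase
print(fiona.listlayers(gdb))

# Read one feature class into a GeoDataFrame (layer name is hypothetical)
flood_zones = gpd.read_file(gdb, layer="flood_zones")
print(flood_zones.head())
```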

FEMA Flood Hazards and Disasters

The FEMA flood maps are usually the first thing that comes to mind when folks set out to find data on flooding, but good luck finding their GIS data. I’ve searched through their main program site for the National Flood Hazard Layer and followed every link, but can’t for the life of me find the connection to the page that has actual GIS data; there are map viewer tools, scanned paper maps, web mapping services, and everything else under the sun.

If you want FEMA flood data in a GIS format: GO HERE! This is the record in data.gov for the National Flood Hazard Layer. The links at the bottom include this one: Download Seamless Nationwide NFHL GIS data. The data is packaged in an ArcGIS File Geodatabase, with one polygon feature class for flood zones. They’re categorized into 100 and 500 year zones, open water bodies, areas outside of flood zones, and areas outside flood zones protected by levees. The pic below illustrates 100 and 500 year zones overlaid on the OpenTopoMap.

FEMA Flood Hazard Layer, 100 year zones in dark blue, 500 year in light blue

FEMA also has a GIS data feed for current and historical emergencies and disasters, available in a variety of formats, both spatial and non-spatial. These are county-level layers that indicate where disaster areas were declared and what kind of funding or assistance is / was available.

NOAA Sea Level Rise

The FEMA maps assess both past events and current conditions to model the likelihood of flooding in a 100 or 500 year period for a major storm event. A different way of looking at flooding is to consider sea level rise due to climate change, where the impact is measured in different increments. Instead of the impact of a one-shot event, this illustrates potential long-term change. NOAA’s Sea Level Rise (SLR) viewer allows you to easily visualize the impact of sea level rise in 1 foot increments, between 1 and 10 feet. You can download the data by US state or territory for coastal areas. There are separate downloads for sea level rise, rise depth, the confidence intervals for the models, as well as DEMs and flood frequency. The sea level rise data is packaged in an ArcGIS file geodatabase, with two sets of files (a low estimate and a high estimate) in one foot increments. An example of 6 feet of sea level rise is shown below.

NOAA Sea Level Rise 6ft Layer
NOAA Sea Level Rise. Areas in pink illustrate sea level 6 feet higher than present

NOAA National Hurricane Center

Beyond showing the general impact of flooding or sea level rise, you can also look at the track of individual hurricanes and tropical storms. The National Hurricane Center’s GIS data page provides historical forecasts – the projected path and cone of storms, wind speeds, storm surges, etc. You choose a year, then a storm, and then a particular day. You can use this data to see how the forecasts evolved as the storm moved. During hurricane season, you can also follow the circumstances day by day as new storms are tracked.

If you want to see what actually happened (as opposed to a forecast), you can dig through the data page and browse the different options. There’s the Tropical Cyclone Report (TCR), which provides “information on each tropical cyclone, including synoptic history, meteorological statistics, casualties and damages, and the post-analysis best track (six-hourly positions and intensities). Tropical cyclones include depressions, storms and hurricanes.” The default page shows you the Atlantic, but you can swap to the Eastern or Central Pacific using the link at the top. Storms are listed alphabetically (and thus by date) and your format options are shapefile or KML. There’s a map at the bottom that depicts and labels all the storms for that season. You actually get four shapefiles in a download: a point file that contains a number of measurements, a line file for the storm track, a polygon file for the radius of the storm, and another polygon with the wind swath. The layers for 2021’s Tropical Storm Henri are illustrated below.

NOAA Tropical Cyclone Report Layers
Layers from NOAA’s NHC Tropical Cyclone Report, Tropical Storm Henri 2021

GIS data for the storms begins in 2010 with KMZ files (which you’ll need to convert in ArcGIS or QGIS to make them useful beyond display purposes), and shapefiles appear in 2015. Further back in time are just PDF reports and map scans.

If you really want to go back in time and get all the tracks at once, there’s the HURDAT2 database: one for the Atlantic (1851 to present) and another for the Pacific (1949 to present). It’s a CSV file that contains coordinates for the track of every storm, which you can process into a geospatial file using a points-to-line tool. Or – you can grab a version where that’s already been created! The International Best Track Archive for Climate Stewardship (IBTrACS) keeps a running CSV and shapefile of all global storms. Scroll down and choose shapefile (CSV is another option). The download page is just a list of files – you can choose points or lines, storms by ocean (East Pacific, North Atlantic, North Indian, South Atlantic, South Indian, South Pacific, West Pacific), or grab everything in lists that are: active, everything (ALL), last 3 years, or since 1980. You get storm speed and direction, wind speed and direction, coordinates, and identifiers associated with each storm as points and lines. Below is an example of all storms in the North Atlantic – there are quite a lot! A subset of this data for the 2021 season is displayed in the feature image at the top of this post.

IBTrACS Historical Hurricane Tracks
Historical hurricane / storm tracks from 1851 to 2021 in the North Atlantic from IBTrACS
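If you start from the HURDAT2 or IBTrACS point records rather than the pre-built lines, the points-to-line step is a groupby plus one LineString per storm. A rough sketch, assuming a storm identifier column (SID) and a timestamp column (ISO_TIME) like the IBTrACS points carry; verify the field names in your download:

```python
import geopandas as gpd
from shapely.geometry import LineString

# IBTrACS points shapefile (file name is a placeholder)
pts = gpd.read_file("ibtracs_north_atlantic_points.shp")

# Build one track line per storm by connecting its observations in time order
tracks = (
    pts.sort_values(["SID", "ISO_TIME"])
       .groupby("SID")["geometry"]
       .apply(lambda g: LineString(g.tolist()) if len(g) > 1 else None)
       .dropna()
)
tracks = gpd.GeoDataFrame(geometry=tracks, crs=pts.crs).reset_index()
tracks.to_file("storm_tracks.gpkg", layer="tracks", driver="GPKG")
```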

How About the Weather?

There are many places you can go for weather data, and the best source depends on the use case. More often than not, I end up using the Local Climatological Database. Choose a geographic type, then a specific area, and you’ll see all the weather stations in that area. Add them to the cart, then view the cart once you have all the stations you want. On the next screen choose an output format (CSV or fixed-width TXT) and a date range. You submit an order, wait a bit for it to be compiled, and are notified by email when it’s ready for download. Monthly, daily, and hourly records are mixed together in this CSV, so after downloading you’ll want to extract just the period you’re interested in. Data includes temperature, precipitation, dew point, wind speed and direction, humidity, barometric pressure, and cloud cover.

NOAA Local Climatological Data Map Tool
Map Tool search interface for NOAA Local Climatological Data

Some processing is required to make these files GIS ready. Each record represents an observation at a station at a given point in time, so if you plot them “as is” you’ll have tons of observations stacked on a few spots (where the stations are), which only makes sense if you’re building an illustrated time series of some sort. If that isn’t what you want, you’ll need to filter the records to create extracts for a given point in time, perhaps with separate features for each time period. For monthly summaries you can pivot time to columns, creating a column for each month and indicator. This would be impractical for daily or hourly summaries, unless you’re focusing on a single month for the former or a day / week for the latter (otherwise you’ll have a bazillion columns).
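A sketch of that filtering and pivoting in pandas, using column names and report-type codes that are my assumptions about the LCD layout (check the header of your own extract):

```python
import pandas as pd

# LCD extract mixes monthly, daily, and hourly records in one CSV
lcd = pd.read_csv("lcd_extract.csv", low_memory=False)

# Keep just the daily summary records and the columns of interest
daily = lcd[lcd["REPORT_TYPE"].str.strip() == "SOD"].copy()
daily = daily[["STATION", "DATE", "DailyAverageDryBulbTemperature", "DailyPrecipitation"]]

# For monthly summaries, pivot months into columns: one row per station
monthly = lcd[lcd["REPORT_TYPE"].str.strip() == "SOM"].copy()
monthly["month"] = pd.to_datetime(monthly["DATE"]).dt.month
wide = monthly.pivot_table(index="STATION", columns="month",
                           values="MonthlyMeanTemperature")
```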

Annoyingly, the CSV option doesn’t include any of the station information in the download (like the standard WBAN ID, name, longitude, latitude, and elevation) except for one unique identifier. I know that this information was all included in the past, and am not sure why it was dropped. The TXT version includes the station info, but fixed-width files are a pain to work with. If you are working with a small number of stations, you can pull the station info individually by previewing the station on the download screen (click on the station title or little eye symbol). The five digit WBAN number is included as the last 5 digits of the identifier in the CSV, so you can identify and relate each one. If you don’t want to mess with copying and pasting, you can generate a second extract for all the stations for just a single day and download that in the TXT format, and then parse just the station columns and associate them with your main table.
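And a companion sketch for re-attaching the station information by way of the WBAN number, again with hypothetical file and column names:

```python
import pandas as pd

lcd = pd.read_csv("lcd_extract.csv", dtype={"STATION": str}, low_memory=False)

# Station table parsed separately from the fixed-width TXT extract (hypothetical names)
stations = pd.read_csv("station_info.csv", dtype={"WBAN": str})

# The last five digits of the LCD identifier are the WBAN number
lcd["WBAN"] = lcd["STATION"].str[-5:]

# Attach station name and coordinates to every observation
lcd = lcd.merge(stations[["WBAN", "NAME", "LATITUDE", "LONGITUDE"]],
                on="WBAN", how="left")
```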

There are multiple ways that you can create extracts for this data beyond the example I just provided, available from the main data tools page. For a more refined search you can select the summary period (yearly, monthly, daily, hourly) and targeted variables in advance. There are also FTP options for bulk downloads.

One thing that surprises folks who are new to working with this data is that there aren’t many weather stations. For the LCD, my home state of Delaware only has three, one in each county. The entire City of New York also has only three: one at each of the airports and one in Central Park. If you’re not interested in points and want areas, you would need to gather a significant number of stations and do interpolation. Or – use data that’s already modeled. I mentioned PRISM at Oregon State in a previous post as a nice source for national US rasters of temperature and precipitation that you can generate for dailies, monthlies, and normals.