
Note Taking for Academic Research

I’ve been reviewing a lot of literature over the past year in preparation for writing my book, so note taking is at the forefront of my mind. Grad students occasionally ask me for suggestions on how to effectively take notes, so I’ll share some pointers here. I’ll begin with my quest to find the right note taking software, followed by my actual process for taking notes.

Finding the Right Tool

Ten years ago, I suddenly found myself back in a position where I needed to write academic papers, something I hadn’t done since I wrote my master’s thesis about eight years before that. At that time, I was still using the techniques I had learned in high school (a much longer time ago…). Back then, you were either an index card person or a binder person. The card people would write one note on each card, while binder people kept a ledger of notes and would add additional pages as needed. You’d classify your notes as summaries, paraphrases, or quotations.

I assumed that my high school methods must be outdated by now, so I cast around to see what note taking software was available. I knew I wanted to go open source, as I didn’t want my notes tethered to a specific tool and stored in a proprietary format. There were a lot of options, and I quickly became bogged down and frustrated with trying them all. I felt that much of the software forced me to conform to it, and I was spending too much time fiddling around and figuring things out.

I abandoned the search and recorded my notes in a simple text (aka notepad) document. I had always been a binder person, so the single document approach appealed to me. I could copy and paste, use spell check, and search for keyword terms that I assigned (the Linux editors like gedit, leafpad, and xed are lightweight but more robust than MS Notepad). This worked fine for a stand-alone paper, and I still use this approach for small projects. But as my research became ongoing, I needed to rely on these notes for many future projects. The single notepad document grew unwieldy, and browsing and searching became difficult.

A few years later, I made a second attempt at searching for note taking software, and this time I broadened the search to include more general-purpose options. My solution: use a wiki! With a wiki, every single source can have its own page, the sources can be grouped together under thematic categories, you can assign tags, and you can search across all the pages. I could also add links between pages and out to the web, and could link the notes to the source documents. The wiki was so open-ended that I didn’t feel constrained in writing my notes to fit a particular interface, nor did I have to waste a lot of time sifting through buttons and tools.

I opted for a desktop wiki called Zim, which has been actively maintained since 2008. All of the pages in Zim are saved as individual text files in a basic wiki mark-up, which ensures that they can be accessed outside the program. Pages are stored in a notebook, which is essentially just a folder. If you create hierarchies of pages, these categories become folders and sub-folders. Zim has a ton of extra plugins for spell checking, concept mapping, formulas, calendars, and more. You can also export your entire notebook, or portions of it, as HTML or LaTeX files.

Most importantly, the wiki solved one of my most vexing problems. I found that a lot of the note taking software was geared towards just taking notes, and couldn’t handle keeping track of citations. Citation software is its own genre, and I found that those packages were poor for taking notes. With Zim, I create a page dedicated to each source, and at the top of each page I embed some BibTeX code for storing the citation data. BibTeX is a format that’s used for creating LaTeX bibliographies, but it has become a common standard and can be used by word processors too. I have a template page (see below) with several BibTeX document types that I just copy and paste when I have a new source to add. Since the pages are saved as plain text, I wrote a short Python script (appears at the end of this post) that loops through my note pages, scrapes out the BibTeX records, and creates a BibTeX file that I can use in LaTeX. Within the BibTeX record I store a link to the source: either to a PDF I have locally, or a web page (if it’s a site), or a WorldCat catalog record (if it’s a book). So all my notes, citations, and the source material are kept together in one place!

BibTeX templates
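
To give a sense of what this looks like, here is a minimal sketch of a blank template entry and a filled-in entry as they might appear on a Zim page, wrapped in the <bibtex> tags that the extraction script at the end of this post looks for. The field values below are placeholders for illustration, not entries from my actual notes.

<bibtex>
@article{KeyName,
author = {},
title = {},
journal = {},
volume = {},
number = {},
pages = {},
year = {},
url = {}
}
</bibtex>

<bibtex>
@article{Jones2017,
author = {Jones, Alice},
title = {A Hypothetical Article Title},
journal = {Journal of Example Studies},
volume = {12},
number = {3},
pages = {45--67},
year = {2017},
url = {file:///home/me/notebook/documents/articles/Jones2017.pdf}
}
</bibtex>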

Zim is desktop software that you have to download and install locally. Since the notebook consists of text files in folders, it’s easy to back it up to Box or Dropbox or whatever you use. Zim doesn’t save histories or have version control, but there’s a plugin that lets you sync your files with Git and other systems.

Relying on Tried and True Methods

While the right tool is important, it’s really the method that counts. I learned that I had to jettison the idea that the note taking process has to be 100% efficient. While you certainly don’t want to flail around and waste time, note taking is not supposed to be quick and easy. The only way you can truly learn new material is to spend time with it: reading, re-reading, taking notes, and reading the notes. The process of note taking is just as important as, if not more important than, the actual notes themselves, as the process is what helps you to synthesize and learn the material. While I left the binder and note cards behind, the actual note taking process was similar to what I did in high school.

I always download articles and bookmark websites or catalog records as I’m doing my searches. Once I complete a series of searches and have gathered material from the web and library databases, I sift through the files and rename them using the first author’s last name and the year of publication (e.g. Jones2017). I’ll also use this file name as the BibTeX key that uniquely identifies the article. I create a documents folder with sub-folders for articles, books, and reports, and I keep these folders in the same location as the Zim notebook. There’s no reason to create lots of topical or thematic folders, as you can use the wiki to categorize and tag the notes, and the wiki becomes the vehicle for searching or browsing through the documents.
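
As an illustration, the resulting layout might look something like the sketch below (the file and folder names here are hypothetical). The documents and Templates folders sit alongside the note pages, which is why the extraction script at the end of this post skips over those two folders.

notebook/
    documents/
        articles/
            Jones2017.pdf
        books/
        reports/
    Templates/
        bibtex_templates.txt
    Jones2017.txt
    Smith2015.txt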

As I sort through the sources I identify what’s essential and what’s ancillary. High priority sources will be read thoroughly and covered in detail, while the low-priority stuff will be skimmed and summarized. High priority sources are critical to your research and include touchstone articles in your field, excellent case studies, relevant background material, and any past research that remotely resembles what you’re working on. Low priority sources may have one important fact or concept that you need to remember; these materials are more tangential to your work and ultimately you might cite them in passing, or even not at all.

I always print out the high-priority articles. I’ll read each one first, and then I’ll go back and do a second read and mark passages with a highlighter. Next, I’ll create and type notes directly into the wiki. I might read and mark up a couple of articles before I start note taking, but I don’t wait too long as I want the articles fresh in my memory. For essential books, I’ll read a chapter or two at a time and mark passages with little sticky flags. Then I’ll go back and take notes in a paper notebook, and will keep doing that until I finish the book. Then I transcribe all the notes for the book onto my laptop. This takes longer, but once again it’s not all about efficiency. I get to spend more time with the material and it helps me absorb it. This approach also separates the computer from the reading, which cuts down on distractions and provides more flexibility in terms of where I can work. Reading in a comfortable chair or outside is preferable to reading while sitting at a table with a laptop.

Note flags in book

I never print out or mark up low-priority articles; I skim through the digital copies and write a summary directly in the wiki. For books, I read the book in one go and may use sticky flags here and there, and when I’m done I type the notes directly into the wiki.

For the notes themselves, each note page has the title and author prominently at the top followed by a summary of the source, and then the BibTeX citation (see below). Low-priority materials usually get nothing but a summary and a citation. High-priority materials get detailed notes. Each note is written as a bullet point, and can represent one important fact or insight, or can be a summary of a paragraph or several pages, or even a summary of a chapter. It depends on how important the material is relative to my work.

Zim Wiki

Taking notes is not like writing a book report. I’m not writing an even or objective summary of the material in its entirety. Instead, I’m picking out the pieces that are of interest to me and to the work I’ll be doing, and I skip the rest. Sometimes I’ll editorialize (this is great, or this stinks), but I write in such a way that my thoughts are distinct from what the author is saying. This is where efficiency comes into the picture: identify which sources are high versus low priority, summarize each source, and record just the specific details that are relevant to you. You’re writing these notes with specific research goals in mind, so don’t waste time writing a generic book report.

I always summarize or paraphrase the material as I take notes, putting concepts in my own words. Doing this forces you to wrestle with the concepts and internalize them, which improves your understanding of the material and your memory for it. It also helps guard against plagiarism; once you start writing the paper, you’ll know your notes are already in your own words and you can use them freely. If I do quote something directly, I always surround it with quotation marks. Lastly, at the end of my note I provide the page numbers to indicate what’s been summarized, so I can go back if need be.
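
Pulling these conventions together, a note page for a high-priority source might look roughly like this (a hypothetical illustration, not one of my actual pages):

An Example Article Title
Jones, Alice (2017)

Summary: Two or three sentences describing the overall argument of the article and why it matters for my project.

<bibtex>
@article{Jones2017, ...}
</bibtex>

* Key fact or insight, restated in my own words (p. 46)
* Summary of the case study design, which resembles my own approach (pp. 48-52)
* "A short passage quoted directly, kept inside quotation marks" (p. 55)
* My comment: the argument is convincing, but it overlooks X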

Note taking is an idiosyncratic process. What works for you may not work for someone else, and vice versa. The key is to figure out what works best for you; create a system, try it out, and once you’re happy, go with it. You can always tweak things as you move along. The notes will help you when it comes time to pull your ideas together into a cohesive paper, but it’s the reading and note taking process that helps you to become proficient with the subject matter.

(Python code for scraping BibTeX records out of wiki notes to create a bibliography is posted below).

#Parse notes stored in zim wiki to extract all bibtex records and write them
#to a new bibtex file named with today's date.

#Script must be stored directly above the notes folder where the wiki data
#is stored. It will ignore the empty bibtex template files and will only
#read wiki files stored as .txt.

#Within the wiki, all bibtex records in the notes are enclosed in a bibtex tag.
#The script reads each line and ignores them until it finds the open <bibtex>
#tag. Then it starts writing each line until it reads the close </bibtex> tag.
#A line return is appended so records are separated in the output file.

#A list and count of extracted records is provided as a diagnostic

import os, datetime

now=datetime.date.today()
path='.'
outfile='sources_'+str(now)+'.bib'

writefile=open(outfile,'w')

counter=0
titles=[]

for (subdir,dirs,files) in os.walk(path):
    if 'Templates' in dirs:
        dirs.remove('Templates')
    if 'documents' in dirs:
        dirs.remove('documents')
    for file in files:
        if file[-4:]=='.txt':
            readfile=open(os.path.join(subdir,file),'r')
            for line in readfile:
                if line.startswith('<bibtex>'):
                    break
            for line in readfile:
                if line.startswith('</bibtex>'):
                    titles.append(file)
                    writefile.write('\n')
                    counter=counter+1
                    break
                else:
                    writefile.write(line)
            readfile.close()
writefile.close()

titles = [t.replace('_', ' ') for t in titles]
titles=[t[:-4] for t in titles] #drop the .txt extension (strip() removes characters, not a suffix)
titles.sort()
print('Bibliographic records have been extracted for the following sources:','\n')
for title in titles:
    print('*',title)
print('\n')
print(counter,'bibliographic records have been parsed and written to',outfile)
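
As a rough sketch of how the output gets used (the file names here are just examples): running the script produces a dated .bib file, which can then be referenced from a LaTeX document in the usual way and processed with bibtex.

\documentclass{article}
\begin{document}
Jones makes a similar argument \cite{Jones2017}.
\bibliographystyle{plain}
\bibliography{sources_2017-10-05}
\end{document}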

Census Counts and Facebook Flubs

So I’m one month into my year-long sabbatical and I’m still cranking away at writing a book proposal for my guide to working with US Census data. I’m positioning the census as one of the original open datasets given its history as a transparent, public domain dataset that plays many key roles in our society, and I’m considering how it fits within the larger data universe.

I recently stumbled across this story, about Facebook telling advertisers and investors that it could reach 41 million adults between the ages of 18 and 24 in the US with ads this year. It also claimed it could reach 60 million people between the ages of 25 and 34. Sounds impressive, right?

Well, an analyst at an equity research firm did some homework and discovered a problem: according to the Census Bureau, there are currently (as of 2016) only 31 million adults between the ages of 18 and 24 in the United States. Where are Facebook’s extra 10 million people coming from? It gets worse: the Census says there are only 45 million people aged 25 to 34 in the US, 15 million fewer than Facebook’s “reach”. This isn’t just a local phenomenon; Facebook says it can reach more people than actually exist in the UK, Australia, Ireland, and France.

When faced with these large discrepancies, Facebook responded with a canned statement that their tools for producing these estimates “…are designed to estimate how many people in a given area are eligible to see an ad a business might run. They are not designed to match population or census estimates. We are always working to improve our estimates.”

Ok, but that’s not how the information was originally presented to the advertising and investment firms, who were nonplussed to say the least. If you say you can reach 41 million adults between the ages of 18 and 24 in the US, the natural assumption is that we’re talking about residents. You would also assume that this estimate, at best, could only represent a certain percentage of that population. You wouldn’t expect the ads to reach all of them, and certainly not more people than actually exist.

Here are a few take-aways from this story. First, while census data isn’t as timely or seemingly “hip” as the new social media data, it is produced by professional statisticians and geographers dedicated to high-quality, transparent demographic data. The census serves as a baseline that other population data can be measured against, and as a foundation on which other datasets can be built. If you are generating population estimates for an area, comparing your estimates to the census should be a matter of common sense. It was to the investment researcher, but apparently not to Facebook.

Second, all data sets have shortcomings and suffer from some degree of bias or error; the decennial census has historically had problems with under-counting, and the margins of error for American Community Survey data for small geographies and populations can be unacceptably large. But at least the process is transparent, so we can understand the limitations and account for them. Furthermore, the census officials know the shortcomings and they develop methodologies to amend and correct data (to the extent they can) before it’s published.

In contrast, the data produced by Facebook, Google, and the like is anything but transparent. Not only can we not see how it’s collected, but most of us don’t have access to the data itself unless the companies choose to feed us some crumbs or allow us to jump through hoops (ironic, as their data is really our data in aggregate). Naturally it will suffer from limitations – people on Facebook can misrepresent their age, ad targeting is based on location data that includes residents and visitors, there can be (gasp) fake accounts! But there’s no way for end users to see, correct, or account for these shortcomings. There is little or no documentation. Users just have to hope that the company has studied and scrubbed their data, or that the company understands the data’s limitations and presents it to others appropriately.

Here are the latest population estimates for the US broken down by age and gender. There are also some neat population clocks, and a pyramid – click the pic to visit.

census population pyramid