One of the challenges I have been working on in my project is how best to capture such massive amounts of data and map it cleanly. After manually filtering 30,000+ rows of accelerometer data in a previous project, I found it a relief to see an example of how Eric accomplished the same kind of work through code. As I continue to sift through data for my current project, his approach serves as inspiration for what's possible.
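To give a sense of what "filtering through code" can look like, here is a minimal sketch, not Eric's actual method, that replaces manual row-by-row cleanup of accelerometer data with a single programmatic filter. The sample rows, the 1 g baseline, and the threshold value are all hypothetical:

```python
import math

# Hypothetical accelerometer rows: (timestamp, x, y, z) in units of g.
rows = [
    (0.00, 0.01, 0.02, 0.98),   # near-stationary
    (0.01, 0.85, 0.40, 1.90),   # movement spike
    (0.02, 0.00, 0.01, 1.01),   # near-stationary
    (0.03, 1.20, 0.90, 2.10),   # movement spike
]

def magnitude(x, y, z):
    """Euclidean magnitude of the acceleration vector."""
    return math.sqrt(x * x + y * y + z * z)

# Keep only rows whose magnitude deviates from the 1 g resting baseline
# by more than a threshold -- dropping stationary noise in one pass
# instead of deleting rows by hand.
THRESHOLD = 0.5
events = [r for r in rows if abs(magnitude(*r[1:]) - 1.0) > THRESHOLD]

print(len(events))  # the two movement spikes survive the filter
```

The same one-line filter scales unchanged from four rows to 30,000, which is the whole appeal over manual cleanup.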
Check out the tutorial: his blog post includes examples of issues he ran into, like multiple tweets for the same location and mysteriously missing data at the prime meridian. The tools Fischer uses are open source as well.