Dataharvest 19 – Interesting software takeaways

We are currently attending the European Investigative Journalism Conference (EIJC19) in Mechelen, and will write about interesting software usable for research purposes.

Here is a list of software we took note of, and that you might find interesting.

  • Datashare – a cool piece of software for reading documents and turning unstructured data into structured data.
  • Neo4j – Graph database software that turns relational data into just that – relations – which can be visualized in many ways. It uses a query language (Cypher) quite similar to SQL, though the syntax is pretty complex (see the small sketch after this list).
  • Anaconda – Once again the Anaconda platform seems to be the weapon of choice for coders around the world.
  • OSINT Framework – A framework focused on gathering information from free tools and resources. Also on GitHub.
  • Python Package Index – Great overview of libraries. Searchable, obviously.
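
To give a feel for Neo4j's query language, Cypher, here is a minimal sketch using the official neo4j Python driver. The connection details and data are placeholders of our own, not a working setup:

    from neo4j import GraphDatabase

    # Placeholder connection details – adjust to your own Neo4j instance
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    with driver.session() as session:
        # Create two people and a relation between them...
        session.run(
            "MERGE (a:Person {name: $a}) "
            "MERGE (b:Person {name: $b}) "
            "MERGE (a)-[:KNOWS]->(b)",
            a="Alice", b="Bob",
        )
        # ...and query the relation back out
        result = session.run("MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name")
        for record in result:
            print(record["a.name"], "knows", record["b.name"])

    driver.close()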

And as a small bonus, we did a bit of coding while at the conference. We have updated our scripts for converting Twitter handles to user IDs (and vice versa). They now output both to the command line and to CSV files 🙂
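
As a small illustration, here is a hypothetical sketch of such a handle-to-ID conversion using the Tweepy library. This is not our actual script, and the credentials and handles below are placeholders:

    import csv
    import tweepy

    # Placeholder credentials – create your own at developer.twitter.com
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    handles = ["example_handle1", "example_handle2"]  # placeholder handles

    with open("twitter_ids.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["handle", "id"])
        for handle in handles:
            user = api.get_user(screen_name=handle)
            print(handle, user.id)              # output to the command line...
            writer.writerow([handle, user.id])  # ...and to a CSV file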

Datashare – Interesting tool developed by ICIJ

Have you ever encountered the following?

You have read a bunch of documents. Think a lot of documents. But after reading, you are thinking of a specific detail. A detail you have forgotten. The name of a company or product, for instance. And really, you are not interested in reading all the documents again. So you start skimming the texts for the missing detail.

Good news – now you won’t have to skim them yourself. Let Datashare do it for you.

Will Fitzgibbon of ICIJ has written a pretty good guide to Datashare, which is a good place to start.

ICIJ is the International Consortium of Investigative Journalists – the organization behind the Panama Papers, LuxLeaks, Offshore Leaks, the Implant Files and many other large-scale international collaborative (data) journalism projects.

Link: Datashare on ICIJ
Link: Dataharvest/EIJC

Scraping 101 and basic programming concepts

Credit: Screenshot from http://sumsum.se/posts/scraping101-part2/ by Mikko Helsig

Ever wondered how scraping works? If you are pretty much blank when it comes to programming, this guide is probably not for you. However, if you have the basic concepts in place, the author, Mikko Helsig, shows you in a few steps how to scrape a site in Python (and also how to install Python in a Windows environment).

The prerequisite is a basic understanding of programming. In return you get a sense of how Python works, how scraping works, and of the really cool libraries requests, requests_cache, BeautifulSoup and Gender (the latter is a library used to guess and parse the gender of names).
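
To give a flavour of the pattern the guide teaches, here is a minimal sketch using requests, requests_cache and BeautifulSoup. The URL and the details are our own illustration, not taken from the guide:

    import requests
    import requests_cache
    from bs4 import BeautifulSoup

    # Cache responses locally, so repeated runs don't hammer the site
    requests_cache.install_cache("scrape_cache")

    url = "https://example.com"  # placeholder – replace with the site you want to scrape
    response = requests.get(url)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")

    # Print the text and target of every link on the page
    for link in soup.find_all("a"):
        print(link.get_text(strip=True), "->", link.get("href"))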

Link: Scraping 101

If you are totally new to programming, we encourage you to start by learning a little Python. There are numerous places to do this online.

We also strongly encourage you to start programming in an environment such as Anaconda. A short description of the Anaconda Navigator can be found here.

Get data from Instagram with Instaloader

We have added Instaloader to our External resources.

Instaloader is a tool to download pictures (or videos) along with their captions and other metadata from Instagram. You can download either profiles or hashtags, and it is possible to set up filters (for instance date filters, see below) to narrow your search.

To use Instaloader, you should do the following:

  1. Download and set up a new Anaconda environment with a Python version higher than 3.5.
  2. Install Jupyter Notebook in the environment and open a new terminal.
  3. Install instaloader and its dependencies with pip (not pip3, which does not work with Anaconda):
    pip install instaloader
  4. Create a new folder in your root environment (typically your Documents folder) called, for instance, Instaloader.
  5. In the terminal, do
    cd Instaloader
    This is to avoid everything being saved in your base folder 🙂
  6. Run your Instaloader commands in the terminal. Please do note that the interface is rudimentary, but filters can be applied using Boolean expressions, for instance:
    instaloader "#HASHTAG" --post-filter="date_utc >= datetime(2017,1,1) and date_utc <= datetime(2018,1,1)" --login=USERNAME

We would love to implement this as a hosted service, but it is not likely we will do so just now. Therefore, please experiment with it yourself (a small Python sketch follows below). You can also ask our advice, and we will do our best to help. If you plan to use this tool on a regular basis or for larger datasets, you should probably be ready to use several user accounts and/or proxies to avoid being banned.
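
For those who prefer working from Python, Instaloader can also be used as a Python module. Here is a minimal sketch of downloading a profile with the same kind of date filter – the profile name and date range are placeholders, and the details are our illustration rather than Instaloader's documentation:

    from datetime import datetime
    import instaloader

    L = instaloader.Instaloader()
    # L.login("USERNAME", "PASSWORD")  # optional, but some content requires a login

    # Placeholder profile name – replace with the profile you want to download
    profile = instaloader.Profile.from_username(L.context, "PROFILE_NAME")

    since = datetime(2017, 1, 1)
    until = datetime(2018, 1, 1)

    for post in profile.get_posts():
        # Posts come newest first, so stop once we pass the start of the range
        if post.date_utc < since:
            break
        if post.date_utc <= until:
            L.download_post(post, target=profile.username)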

Pictures and presentations from the official launch

On 28/11 2018 we had our official launch with a seminar and get-together at our place.

We had a smashing time discussing the possibilities with you guys, and we look forward to realizing ideas from the day. If you would like to relive the day (and who wouldn’t?), you will find links to the presentations and the programme below.

Yourtwapperkeeper will soon be closed

Our hosted service for collecting Twitter data, Yourtwapperkeeper, is now retired and is having a lovely time at the old software home.

Currently the data can still be downloaded, but the server will be closed completely in January 2019. If you are using data from that server, you should download it before then.

If you need Twitter data collection, please use our TCAT servers instead.

All data and ongoing collections from Yourtwapperkeeper have already been migrated to TCAT. Read more about the TCAT services under our Hosted resources.

The first couple of tools are online

We are alive and kicking!

We are starting up our toolbase with some rudimentary tools and scripts. They are divided into Hosted resources and External resources.

Hosted resources are things we host ourselves on local servers, or scripts we have made ourselves and can assist you in running on your own computer or in the lab by appointment.

External resources are things we think are interesting but have no ownership over – tools you can use, but which we don’t officially support. You are free to ask for advice on those tools too, though.

We also have a list of links to places where you can find data. You could browse that list, but it is far, far, far from complete. So if you know a good link, please send us an e-mail.