Saturday, June 25, 2016

New Paper: "Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources"

Andra Waagmeester published a paper on his work on a semantic web version of WikiPathways (doi:10.1371/journal.pcbi.1004989). The paper outlines the design decisions, shows the SPARQL endpoint, and gives several example SPARQL queries. These include federated queries, like a mashup with DisGeNET (doi:10.1093/database/bav028) and EMBL-EBI's Expression Atlas. That results in nice visualisations like this:

If the relevant information is in the pathway, these pathways can help a lot in understanding what is going on biologically. And, of course, they are used for exactly that a lot.

Press release
Because press releases have become an interesting tool in knowledge dissemination, I wanted to learn what it takes to get one out. This involved the people at PLOS Computational Biology and the press offices of the Gladstone Institutes and our Maastricht University (press release 1, press release 2 EN/NL). There is one thing I learned in retrospect, and I am pissed with myself that I did not think of it: you should always have a graphic supporting your story. I have been doing this in my blog for a long time now (sometimes I still forget), but did not think of it for the press release. The press release was picked up by three outlets, though all basically ran it as we presented it to them.

But what makes me appreciate this piece of work, and WikiPathways itself, is how it creates a central hub of biological knowledge. Pathway databases capture knowledge that is not easily embedded in regular structured (relational) databases. As such, expressing this knowledge in the RDF format seems a natural fit. The thing I really love about this approach is that your queries become machine-readable stories, particularly when you start using human-readable variants of SPARQL. And you can share these queries with the online scientific community via, for example, myExperiment.

I have used SPARQL on WikiPathways data for metabolomics in two ways: 1. curation; 2. statistics. Data analysis is harder, because in the RDF world scientific lenses are needed to accommodate the chemical structural-temporal complexity of metabolites. For curation, we have long used SPARQL in unit tests to support the curation of WikiPathways. Moreover, I have manually used the SPARQL endpoint to find curation tasks. Now that the paper is out, I can blog about this more. For now, many example SPARQL queries can be found in the WikiPathways wiki. It features several queries showing statistics, but also some for curation. This is an example query I use to improve the interoperability of WikiPathways with Wikidata (also for BridgeDb):

  PREFIX wp: <>
  SELECT DISTINCT ?metabolite WHERE {
    ?metabolite a wp:Metabolite .
    OPTIONAL { ?metabolite wp:bdbWikidata ?wikidata . }
    FILTER (!BOUND(?wikidata))
  }

Feel free to give this query a go!
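To run such a query outside the web interface, a short script can send it to the endpoint. Here is a minimal JavaScript sketch; the endpoint URL and the wp: prefix URL are assumptions, so check the current WikiPathways documentation before using them:

```javascript
// Build the curation query as a string; the wp: vocabulary URL is an assumption.
function buildCurationQuery() {
  return [
    "PREFIX wp: <>",
    "SELECT DISTINCT ?metabolite WHERE {",
    "  ?metabolite a wp:Metabolite .",
    "  OPTIONAL { ?metabolite wp:bdbWikidata ?wikidata . }",
    "  FILTER (!BOUND(?wikidata))",
    "}"
  ].join("\n");
}

// Usage (requires network access; the endpoint URL is an assumption):
// const url = "" +
//   encodeURIComponent(buildCurationQuery());
// fetch(url, { headers: { "Accept": "application/sparql-results+json" } })
//   .then(r => r.json())
//   .then(json => console.log(json.results.bindings.length));
```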

This paper completes a nice triptych of WikiPathways papers from the past six months. Thanks to the whole community and the very many contributors! All three papers are linked below.

Waagmeester, A., Kutmon, M., Riutta, A., Miller, R., Willighagen, E. L., Evelo, C. T., Pico, A. R., Jun. 2016. Using the semantic web for rapid integration of WikiPathways with other biological online data resources. PLoS Comput Biol 12 (6), e1004989+.
Bohler, A., Wu, G., Kutmon, M., Pradhana, L. A., Coort, S. L., Hanspers, K., Haw, R., Pico, A. R., Evelo, C. T., May 2016. Reactome from a WikiPathways perspective. PLoS Comput Biol 12 (5), e1004941+.
Kutmon, M., Riutta, A., Nunes, N., Hanspers, K., Willighagen, E. L., Bohler, A., Mélius, J., Waagmeester, A., Sinha, S. R., Miller, R., Coort, S. L., Cirillo, E., Smeets, B., Evelo, C. T., Pico, A. R., Jan. 2016. WikiPathways: capturing the full diversity of pathway knowledge. Nucleic Acids Research 44 (D1), D488-D494.

Sunday, June 05, 2016

Wikidata showing chemical properties with references

As you have seen in my blog, I'm a fan of Wikidata. Because of its open nature, it is creating an enormous ecosystem in which many scientists are involved, and with innovative visualizations. Data comes from many trusted databases, but the complexity of it all requires some hard decisions now and then. However, unlike many other databases, Wikidata has data provenance high on the agenda: all statements can be complemented with primary literature references, something I made use of when porting the pKa data.

SQID page for aspirin in Wikidata.
A new visualization of the data is provided by SQID, by Markus Krötzsch et al. This interface propagates the references for each bit of fact, though they are by default hidden behind an arrow icon at the top right of the fact. Clicking that will show the provenance, though currently that is still often a database rather than primary literature.

Section of the SQID page for aspirin, with references given for solubility, mass,
and a hazardous chemical exposure.
I really like where this is going! Why have publishers not been able to do something like this in the past 20 years?? This is knowledge dissemination as we want to see it.

Wednesday, May 18, 2016

Comparing sets of identifiers: the Bioclipse implementation

Source: Wikipedia
The problem
That sounds easy: take two collections of identifiers, put them in sets, determine the intersection, done. Sadly, each collection uses identifiers from different databases. Worse, within one collection there are identifiers from multiple databases. Mind you, I'm not going the full monty, though some chemistry will be involved at some point. Instead, this post really is about the identifiers.
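To make the problem concrete, here is a toy JavaScript sketch, independent of the Bioclipse code: encode each cross reference as a "syscode:id" string and intersect two sets of them. Without identifier mapping, only literally identical pairs match.

```javascript
// A cross reference is just a (system code, identifier) pair, encoded as a string.
function xref(syscode, id) { return syscode + ":" + id; }

// Plain set intersection: keep only the members both sets share.
function intersect(setA, setB) {
  return new Set([...setA].filter(x => setB.has(x)));
}

// Identifiers taken from the example sets in this post.
const a = new Set([xref("Ce", "CHEBI:15904"), xref("Ce", "CHEBI:30089"), xref("Ch", "HMDB00042")]);
const b = new Set([xref("Ce", "CHEBI:15904"), xref("Ca", "25513-46-6")]);
console.log([...intersect(a, b)]); // → [ 'Ce:CHEBI:15904' ]
```

Note how the HMDB and CAS entries cannot match anything here, even if they describe the same compounds: that is exactly the gap identifier mapping has to fill.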

The example
Data set 1:

Data set 2: all metabolites from WikiPathways. This set has many different data sources, and seven provide more than 100 unique identifiers. The full list of metabolite identifiers is here.

The goal
Determine the intersection of two collections of identifiers from arbitrary databases, ultimately using scientific lenses. I will develop at least two solutions: one based on Bioclipse (this post) and one based on R (later).

First of all, we need something that links IDs in the first place. Not surprisingly, I will be using BridgeDb (doi:10.1186/1471-2105-11-5) for that, but for small molecules alternatives exist, like the Open PHACTS IMS based on BridgeDb, the Chemical Translation Service (doi:10.1093/bioinformatics/btq476) or UniChem (doi:10.1186/s13321-014-0043-5, doi:10.1186/1758-2946-5-3).

The Bioclipse implementation
The first thing we need to do is read the files. I have them saved as CSV even though they are tab-separated files. Bioclipse will now open them in its matrix editor (yes, I think .tsv needs to be linked to that editor, which does not seem to be the case yet). Reading the human metabolites from WikiPathways is done with this code (using Groovy as the scripting language):

file1 = new File(
    bioclipse.fullPath("/Compare Identifiers/human_metabolite_identifiers.csv")
)
set1 = new java.util.HashSet();
file1.eachLine { line ->
  fields = line.split(/\t/)
  def syscode;
  def id;
  if (fields.size() >= 2) {
    (syscode, id) = line.split(/\t/)
    if (syscode != "syscode") { // ok, not the first line
      set1.add(bridgedb.xref(id, syscode))
    }
  }
}

You can see that I am using the BridgeDb functionality already, to create Xref objects. The code skips the first line (or any line with "column headers"). The BridgeDb Xref object's equals() method ensures I only have unique cross references in the resulting set.

Reading the other identifier set is a bit trickier. First, I manually changed the second column to use the BridgeDb system codes. The list is short, and this saves me from making mappings in the source code. One thing I did decide to do in the source code is normalize the ChEBI identifiers (something that many of you will recognize):

file2 = new File(
  bioclipse.fullPath("/Compare Identifiers/set.csv")
)
set2 = new java.util.HashSet();
file2.eachLine { line ->
  fields = line.split(/\t/)
  def name;
  def syscode;
  def id;
  if (fields.size() >= 3) {
    (name, syscode, id) = line.split(/\t/)
    if (syscode != "syscode") { // ok, not the first line
      if (syscode == "Ce") {
        if (!id.startsWith("CHEBI:")) {
          id = "CHEBI:" + id
        }
      }
      set2.add(bridgedb.xref(id, syscode))
    }
  }
}
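The normalization step itself is trivial; as a language-neutral illustration, the same logic in JavaScript:

```javascript
// Ensure a ChEBI identifier carries the "CHEBI:" prefix, as done in the
// Groovy snippet above for rows with system code "Ce".
function normalizeChebi(id) {
  return id.startsWith("CHEBI:") ? id : "CHEBI:" + id;
}

console.log(normalizeChebi("15904"));       // → "CHEBI:15904"
console.log(normalizeChebi("CHEBI:15904")); // → "CHEBI:15904" (unchanged)
```

Small as it is, this step matters: without a shared notation, identical ChEBI entries in the two sets would never match.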

Then, the naive approach, which does not take identifier equivalence into account, makes it easy to list the number of identifiers in both sets and in their intersection:

intersection = new java.util.HashSet(set1);
intersection.retainAll(set2)

println "set1: " + set1.size()
println "set2: " + set2.size()
println "intersection: " + intersection.size()

This reports:

set1: 2584
set2: 6
intersection: 3

With the following identifiers in common:

[Ce:CHEBI:30089, Ce:CHEBI:15904, Ca:25513-46-6]

Of course, we want to use identifier mapping itself. So, we first compare identifiers directly, and if they do not match, use BridgeDb and a metabolite identifier mapping database (get one here):

mbMapper = bridgedb.loadRelationalDatabase(
  bioclipse.fullPath("/Compare Identifiers/metabolites.bridge") // point this at your downloaded mapping database
)

intersection = new java.util.HashSet();
for (id2 in set2) {
  if (set1.contains(id2)) {
    intersection.add(id2) // OK, direct match
  } else {
    mappings =, id2)
    for (mapped in mappings) {
      if (set1.contains(mapped)) {
        intersection.add(id2) // OK, match via identifier mapping
      }
    }
  }
}

This gives five matches:

[Ch:HMDB00042, Cs:5775, Ce:CHEBI:15904, Ca:25513-46-6, Ce:CHEBI:30089]

The only metabolite it did not find in any pathway is the KEGG-identified metabolite, homocystine. I just added this compound to Wikidata. That means that the next metabolite mapping database will recognize this compound too.
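The mapping-aware comparison boils down to: try a direct match first, then fall back to the identifier's mapped equivalents. A JavaScript sketch, with a made-up mapping table standing in for the BridgeDb database:

```javascript
// Stub mapping table standing in for a BridgeDb mapping database;
// the equivalence below is invented purely for illustration.
const mappings = {
  "Cs:5775": ["Ce:CHEBI:17234"],
};

// Direct match first, then any mapped equivalent.
function matches(set1, id2) {
  if (set1.has(id2)) return true;
  return (mappings[id2] || []).some(mapped => set1.has(mapped));
}

const set1 = new Set(["Ce:CHEBI:17234"]);
console.log(matches(set1, "Cs:5775"));      // → true, via the mapped identifier
console.log(matches(set1, "Ch:HMDB00042")); // → false, no mapping known
```

The homocystine case above is exactly a missing row in this table: once the mapping database knows the equivalence, the fallback branch finds the match.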

The R and JavaScript implementations
I will soon write up the R version in a follow-up post (but I have to finish grading student reports first).

Friday, April 29, 2016

Sci-Hub succeeds where publishers fail (open and closed)

Sci-Hub use in The Netherlands is not limited to the academic research cities. Harlingen is a small harbor town where at best a doctor lives and one or two students who visit their parents in the weekend. The nature of the top downloaded paper suggests it is not a doctor :) Data from Bohannon and Elbakyan.
Knowledge dissemination is a thing. It's not easy. In fact, it's a major challenge. Traditional routes are no longer efficient, where they were 200 years ago. The world has moved on; the publishing industry has not. I have written plenty in this blog about how publishers could catch up, and while this is happening, progress is (too) slow.

The changes are not only technical, but also social. Several publishers still believe we live in an industrial era, while the world has moved on into a knowledge era. More people are mining and servicing data than are making physical things (think about that!). Access to knowledge matters, and dealing with data and knowledge stopped being something specific to academic and other research institutes many, many years ago. Arguments that knowledge is only for the highly educated are simply self-contradicting and bluntly ignore our modern civilization.

This makes access to knowledge a mix of technological and social evolution, and on both ends many publishers fail, fail hard, fail repeatedly. I would even argue that while the new publishers are improving things, they are failing to really innovate in knowledge dissemination. And not just the publishing industry; also many scientists. Preprint servers are helpful, but they are really not the end goal. If you really care about speeding up knowledge dissemination, stop worrying about things like text mining and preprints, and start making knowledge machine readable (sorry, scientists) and release that along with, or before, your article. Yes, that is harder, but just realize you are well paid for doing your job.

So, the success of Sci-Hub is by no means unexpected. It is not really the end goal I have in mind, and in many ways it contradicts what I want. But the research community clearly thinks differently. Oh wait, not just the research community, but our current civilization. The results of the Bohannon analysis of the Sci-Hub access logs, which I just linked to, clearly show this. There are so many aspects, and so many interpretations and remaining questions. The article rightfully asks: is it need or convenience? I argued recently that the latter is likely an important reason at western universities, and that it is nothing new.

This article is a must read if you care about the future of civilization. Bonus points for a citable data set!

Bohannon, J. Who's downloading pirated papers? everyone. Science 352, 508-512 (2016). URL
Elbakyan, A. & Bohannon, J. Data from: Who's downloading pirated papers? everyone. (2016). URL

Sunday, April 24, 2016

Programming in the Life Sciences #22: jsFiddle

My son pointed me to jsFiddle, which allows you to edit JavaScript snippets and run them. I had heard of it before, but never really found time for it. But I'm genuinely impressed with the stuff he is doing, and finally wanted to try sharing JavaScript snippets online, particularly because I had to update the course description of Programming in the Life Sciences. In this course the students work with JavaScript, and there are a number of examples, but they have a lot of HTML boilerplate code.

So, here's the first of those examples, stripped of most of the things you don't need, and with some extra documentation as comments: