Archive for July, 2011

Final blog post

Monday, July 25th, 2011

This is our final blog post for the JISC RDTF (now Discovery) SALDA project, on the completion of the six-month project. I’m sure there will be more related blog posts here in the coming months.

Things we have produced

The SALDA Project has produced the following:

The catalogue data of the Mass Observation Archive is now available on the Talis Platform licensed under ODC-PDDL.

Simple text search: http://api.talis.com/stores/massobservation/items
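
For example, a quick way to check the text search from the command line (a sketch: the query parameter is my assumption of the Platform’s usual item-search convention, and the search term is illustrative):

curl 'http://api.talis.com/stores/massobservation/items?query=religion'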

SPARQL interface at: http://api.talis.com/stores/massobservation/services/sparql

The SALDA XSLT stylesheet is here, licensed under a modified BSD licence.

Download the data in RDF

Chris Keene has created pages for open data at the University of Sussex Library:

http://data.lib.sussex.ac.uk/

The direct link to the SALDA-produced data from the Mass Observation Archive is here:

http://data.lib.sussex.ac.uk/data/mass-observation/

Some human-readable examples of the data:

http://data.lib.sussex.ac.uk/archive/doc/person/nra/harrissonthomas1911-1976anthropologist

http://data.lib.sussex.ac.uk/archive/id/archivalresource/gb181SxMOA1

The data references terms from (amongst others) the following RDF vocabularies (thanks to Pete Johnston at Eduserv):

http://purl.org/dc/terms/
http://xmlns.com/foaf/0.1/
http://www.w3.org/2004/02/skos/core#
http://www.openarchives.org/ore/terms/
http://linkedevents.org/ontology/
http://data.archiveshub.ac.uk/def/

Pete has also produced browse pages for concepts, people and places, which offer other ways into the data and are great for showing it off. These are in addition to our core deliverables and are not live yet.

In-house cataloguing guidelines

An unexpected result of the SALDA project was a review of our cataloguing procedures. The following guides were produced by me and a colleague, Adam Harwood, who is currently cataloguing the University of Sussex Collection.

CALM_ISADG_Collection level This document maps the required ISAD(G) fields to the CALM fields, with guidelines on how to populate them. We have also included the fields required for export to EAD using the Archives Hub report in CALM.

cataloguing procedures component level This document provides guidelines for completing component-level records in CALM.

Next steps

Now that the data is on the Platform, we will advertise it at open data days. We are working on a leaflet that invites anyone to work with our data and see what they can do.

We are working with our partners at the Keep on the IT infrastructure for the new development. The SALDA project opened a dialogue on Linked Data and has given us useful skills and knowledge about another route we could take to share data between the partners.

At Sussex, we are going to look at our collections and make a priority list of those whose catalogue data could be turned into Linked Data, by considering:

  • Whether we can make the data available under ODC-PDDL
  • What changes/additions we need to make to the data and its structure
  • What the potential uses/benefits are

As a personal goal, I would like to work with archivists and developers to find common ground about Linked Data: the understanding, the uses and the benefits, the words we use to describe it, and examples of it in use. Linked Data is very much behind the scenes, so it can be hard to “sell” without an example of its use in human-readable form. I also attended a brilliant “legal update for information professionals” workshop led by Naomi Korn and Professor Charles Oppenheim, which really got me interested in risk management, a topic that relates to the licensing part of the project.

Evidence of reuse

We have registered the dataset on CKAN and hope to be part of the current UK Discovery competition.

Skills

This has been a steep learning curve for me as project manager, getting my head around the world of Linked Data. All praise to Pete Johnston, who is able to write in a way that I understand yet still convey the level of technical detail that is required.

Pete has provided the expertise on the project, working with scripts devised for the Locah project and adapting them for SALDA. He has been working with Chris to move the data to the Platform and to adapt the scripts to our data.lib.sussex.ac.uk URIs. You can read more about this in Chris’s blog post.

We are grateful to all the team at the Locah project for forging the path ahead and allowing us to follow in their footsteps.

Chris Keene has created webpages for open data at the University of Sussex Library to keep open data on the agenda. Openness is reflected in the strategic goals of the Library e-strategy: Search and discovery 2011-2015.

We’ve all learnt more about archival metadata and EAD during the project.

Most significant lessons

Now then, these might be a bit basic, and they come from my own experience. I’m sure my technical colleagues could add to them, though the lessons we have learnt and the processes we have been through in technical areas are well documented on this blog.

  • At the beginning, no one (archive colleagues, library colleagues, friends, family) will know what you are talking about when you mention Linked Data. When you show an example or try to explain it, they will look blank. You need to work out a way of explaining and demonstrating it that can be understood.
  • Keep in regular contact with technical consultants if they are not part of the in-house team. We had a face-to-face meeting, phone calls and regular (weekly) email contact.
  • Think long term about the sustainability and future uses of the data, even if it’s only a six-month project. We thought long and hard about our URI stem to make it as generic and sustainable as possible, and tried to re-use URIs rather than making lots of new ones.

Converting EAD data to RDF Linked Data

Monday, July 25th, 2011

In my last blog post I discussed how we set up our server to handle the URIs being created within our Linked Data, and said the next step was for us to turn our EAD/XML data from Calm into RDF/XML Linked Data.

This is a big step. Until now, our process looked something like this: Export EAD data -> send it to someone else -> Magic -> Linked Data!

Pete Johnston provided us with details of the magic part. In essence, much of the complexity is hidden in an XSLT script (XSLT is a language for processing XML into different schemas, as here, or into HTML and other formats). He’s blogged about some of the decisions and concepts that have gone into it. Here, however, we can treat it like a black box. It’s still magic, but we know how to use it.

Converting EAD to RDF using XSLT and Saxon

We use Saxon HE (the Java version) to do the XSLT transformation. It’s simple to download and set up. The basic core step is very simple: run Saxon, passing it the location of the EAD/XML source and the XSLT file. An example command line looks like this:

java -jar 'saxon9he.jar' -s:ead/ -xsl:xslt/ead2rdf.xsl -o:rdf/ root=http://data.lib.sussex.ac.uk/archive/

And there you have it, your EAD data is now RDF!

Before the data is loaded into the Talis Platform store, there are a couple more things we do.

Triples and Turtle

The first is the conversion of the RDF/XML into the alternative RDF format N-Triples (and also Turtle), using the Raptor RDF parser.

RDF can be written and presented in a number of ways. Probably the most common method is XML, partly because XML is so ubiquitous; however, it is very verbose and can be difficult for us humans to read.

Not only is N-Triples considered easier to read, but each line contains a complete and self-contained triple (a triple consists of a subject, a predicate and an object, mostly expressed as URIs). While it isn’t too much of an issue here, this allows us to split the data into smaller chunks/files which can be POSTed to the Talis Platform.
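
As a concrete sketch, here is how Raptor’s command-line tool, rapper, can do the conversion (the file and directory names are illustrative, not necessarily the ones we used):

# RDF/XML to N-Triples (rapper writes the triples to stdout)
rapper -i rdfxml -o ntriples rdf/gb181SxMOA1.rdf > nt/gb181SxMOA1.nt

# RDF/XML to Turtle
rapper -i rdfxml -o turtle rdf/gb181SxMOA1.rdf > ttl/gb181SxMOA1.ttl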

Talis Platform

The Talis Platform is a well-established Triple Store (think of a SQL database, but with three-part triples rather than records and tables). While you can run your own Triple Store using software such as ARC2, the Talis Platform provides a stable, robust and quick solution.

You interact with the Platform using standard HTTP requests: GET, POST, DELETE etc. For simplicity, however, an interactive command-prompt front end called Pynappl has been developed in Python. This allows you to simply specify the store you wish to work with, authenticate, and then use commands such as ‘store filename.rdf’ to upload data.

A simple script can upload our data to the Platform, uploading each N-Triples file created above.
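
We won’t reproduce the script here, but a minimal sketch of the idea looks something like the loop below. Note the assumptions: that the store’s metabox endpoint is /meta, that it uses HTTP digest authentication, and that it accepts Turtle (of which N-Triples is a subset), so check the Platform documentation before relying on any of them.

# POST each N-Triples chunk to the store (endpoint, auth scheme
# and content type are assumptions, not confirmed details);
# credentials are supplied via placeholder environment variables
for f in nt/*.nt; do
  curl --digest -u "$TALIS_USER:$TALIS_PASS" \
       -H 'Content-Type: text/turtle' \
       --data-binary @"$f" \
       'http://api.talis.com/stores/massobservation/meta'
done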

The final step is to try out the SPARQL interface at:

http://api.talis.com/stores/massobservation/services/sparql

Here’s one to try:

SELECT * WHERE {
?a ?b <http://data.lib.sussex.ac.uk/archive/id/concept/moa/religion>
}
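
You can also run the query from the command line rather than the web form; here is a sketch using curl (the Accept header is my assumption of a format the endpoint will honour):

curl -G 'http://api.talis.com/stores/massobservation/services/sparql' \
     -H 'Accept: application/sparql-results+xml' \
     --data-urlencode 'query=SELECT * WHERE { ?a ?b <http://data.lib.sussex.ac.uk/archive/id/concept/moa/religion> }'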

Summary

To take our EAD from Calm and turn it into Linked Data, we used an XSLT script written by Pete Johnston: Saxon transformed the EAD/XML into RDF/XML using that script, Raptor converted the RDF/XML to N-Triples, and finally Pynappl uploaded the result to the Talis Platform.

The XSLT scripts mentioned here can be found at:

http://data.lib.sussex.ac.uk/files/massobservation/xslt/

The RDF Linked Data is available for download, in addition to the SPARQL interface above:

http://data.lib.sussex.ac.uk/files/massobservation/rdf/

My thanks to Pete Johnston of Eduserv for providing the process (with documentation) described above.

This page has been translated into Spanish by Maria Ramos from http://www.webhostinghub.com/support/edu

Cost/benefits of the open data approach

Monday, July 18th, 2011

We have been asked to assess how much it has cost us in terms of time and resources to make our data openly available, so here goes.

Our approach to the project was to have a dedicated project manager (me) working 0.5 FTE, drawing on the skills of Pete Johnston for the transformation to Linked Data and those of Chris Keene (Technical Development Manager for the Library) when required. This meant we were all dedicated to our tasks, and that someone was on top of the administration side of the project, as well as researching the licence and talking/presenting to groups and stakeholders while the technical transformation was taking place. This was a good use of time and resources and provided a bridge between the two sides.

We made a decision early on that we did not have time within the project allocation to re-structure the MOA data prior to transformation as we would have liked, but we did work through 75% of it, expanding name and organisation abbreviations to allow ways into the data. If we had re-structured the data within the CALM database, putting dates in the date field and separating out title and description, this would have added at least another month to the project. It would perhaps have meant less tweaking of the stylesheet that Pete made for the Locah project, but it all worked out in the end as we approached it from a different angle, using lookup lists of keywords and people (see earlier blog posts here and here).

Benefits

The benefits of open data are harder to quantify. We are excited by the potential uses of our data outside the archive searchroom, and one of the reasons we have used the ODC-PDDL is so that we can be as open as possible and see what happens. The success of this project also means that open data is on the agenda in the Library (see Chris’s blog post).

Benefits for the Keep: cataloguing guidelines

I have reported back to stakeholders from the Keep, as we need to look into how we can share our data and provide resource discovery across all our collections for visitors to the Keep. Having had a close look at our catalogue data for the project, we are able to provide recommendations that will hopefully make it easier to export, share and transfer our data to existing or new systems. We have created some in-house cataloguing guidelines; the following guides were produced by me and a colleague, Adam Harwood, who is currently cataloguing the University of Sussex Collection.

  • CALM_ISADG_Collection level This document maps the required ISAD(G) fields to the CALM fields, with guidelines on how to populate them. We have also included the fields required for export to EAD using the Archives Hub report in CALM.

Our priority in this area is to concentrate on our existing collection-level descriptions and any new catalogue component records that we create. We will share these guidelines with colleagues from the Keep in the next few months.

Setting up our URIs and the Talis Platform

Wednesday, July 13th, 2011

Time to set up our URIs and upload our data to one of our Talis Platform stores.

In a previous post we discussed which URIs to use. We settled on http://data.lib.sussex.ac.uk/archive/ – we felt this should be stable, and allow for integration with other Special Collection records in the future (while not conflicting with other Library data).

We now needed those URIs to do something; at the moment they all just returned a 404 message (albeit a 404 message with a Rickroll link).

As is so often the case in this project, this is where Pete Johnston came in. He had already set up the required code on his test server, and similar things had been put in place for the LOCAH project.

In total, all that is required is a few PHP/HTML files and a .htaccess file to handle rewrites (i.e. taking a URI and calling the script in question with the right-hand bit of the URI as a parameter). The main script is an index.php file, which on our server lives at www/data/archive/doc (corresponding to http://data.lib.sussex.ac.uk/archive/doc/).

Along with these files were a few dependencies, namely the PHP libraries paget, moriarty and ARC.

However, this code needs to access data from somewhere, and to do this we need to put our data into our shiny new Talis Platform store…

Talis Platform

The second part of this work was to upload our data to the Talis Platform. Talis had kindly created two stores for us, massobservation and massobservation-dev1, as part of their Connected Commons scheme.

Pete ran a set of scripts he had developed to upload our data to the dev1 store. We’re currently installing these on our own server so we can do this ourselves, and we’ll report more on them soon.

So that was that: without much fuss, we now had our data in our publicly available, SPARQL-queryable RDF store. There probably should have been champagne.

Back to our server

So with our data now in an RDF store, libraries installed on the server, files copied and in place, and config edited to point to our store, it was time to point a browser at one of our URIs and start debugging the first error message (which, once resolved, would lead to the next error message, and so forth). But… for the first time in my life, it just worked. This never happens. It left me confused; I had set aside hours in my diary for endless frustrations, and here it was working. I felt cheated. But, once over the shock, I (and you too) could visit examples such as this: http://data.lib.sussex.ac.uk/archive/doc/archivalresource/gb181SxMOA1 (RDF/XML, JSON, Turtle)

Look! It’s our data… as Linked Data… live on the internet!
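
If you want a specific serialisation directly, content negotiation should do the work. A sketch (assuming the front end honours the usual Accept headers for the formats listed above):

# Ask for Turtle rather than the HTML page
curl -H 'Accept: text/turtle' 'http://data.lib.sussex.ac.uk/archive/doc/archivalresource/gb181SxMOA1'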

data.lib.sussex.ac.uk

I would like to see data.lib.sussex.ac.uk become more than just the Mass Observation Archive, and with that in mind I created a front end for the top level URL: http://data.lib.sussex.ac.uk/

This uses WordPress as the CMS (life’s too short to code the HTML/CSS files by hand).

For those interested, the .htaccess mod_rewrite looks like this:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^archive/id/(.*)$ /archive/doc/$1 [R=303,L]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

The rule for the URIs is at the top, simply redirecting archive/id/* to archive/doc/*. If this rule is matched, processing ends and the rest of the rules are ignored (that’s the [L] flag); otherwise the standard WordPress rules are processed.
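
You can watch the redirect happen with a quick header check; given the rule above, an id URI should answer with a 303 pointing at its doc equivalent:

# -I fetches headers only; expect "HTTP/1.1 303 See Other" and a
# Location header for the /archive/doc/ version
curl -I 'http://data.lib.sussex.ac.uk/archive/id/archivalresource/gb181SxMOA1'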

Next steps… (for this strand of work)

Install scripts on our server so that we can:

  • take a file of EAD data from Calm and transform it into a file of RDF/XML
  • convert this to a set of N-Triples files, which are easier to upload to the Platform store: each statement/triple (or, if you prefer, each fieldname and field value from a record) is complete and able to stand alone, so the data can be uploaded in stages without complications
  • upload the files to the store

Following in our footsteps

Wednesday, July 6th, 2011

Question: If others wanted to take a similar approach to your project, what advice would you give them?

Our advice at the start would be:

1. Get your data ready. We are working on our catalogue data to make it more structured, so that we are ready to export to other formats and can make it more portable. Regardless of whether it becomes Linked Data in the future, we are getting ourselves ready. This is also probably the most time-consuming aspect. From personal experience, once you start looking at your catalogue data you’ll find lots of things that you want to change, or that are missing, or that don’t make sense, so the work starts to grow…

2. Are you in a position to license your data? We chose the catalogue data of the Mass Observation Archive as we were confident of its provenance, so we could make it fully open and available under ODC-PDDL. This will hopefully allow the greatest flexibility for people wanting to use the data, and fits with the ethos of the project and the JISC Discovery strand.

3. Find out about other similar projects! We at SALDA realise the value of these blog posts to anyone wanting to do a similar project. We followed in the footsteps of the LOCAH project and were able to use their stylesheet and experience in transforming archival data into Linked Data. We are working with Pete Johnston from Eduserv, whose knowledge and experience are invaluable. You can see his contribution to the blog here.

4. Find examples of Linked Data in use, in human-readable format, so that you can show stakeholders, colleagues and friends what it is you are on about. I use the BBC wildlife pages and the way they link to Animal Diversity Web.