There’s no reason for a librarian to understand RDF

Given a properly functioning workflow, there’s no reason for a librarian to understand RDF.

In the same way, given a properly functioning workflow, there’s no reason for a librarian to understand MARC — or any other data serialisation.

In cases where we don’t have a functioning workflow, it is absolutely essential to understand these things. In the case of MARC, the serialisation has been so embedded in/as the workflow — along with other annotations that are typically done from memory — that I’d dare to say that the serialisation is the workflow†.
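To make that concrete: below is a sketch of the kind of raw annotation a cataloguer working "in the serialisation" sees and types. The record is invented and simplified (shown as a Python string in a mnemonic line format), but the point stands: the field codes themselves carry the meaning, and the operator recalls them from memory.

```python
# Illustration of "the serialisation is the workflow": an invented, simplified
# MARC record in mnemonic line format, of the kind typed into a raw MARC editor.
raw_marc = """\
100 1  $a Hamsun, Knut, $d 1859-1952
245 10 $a Sult : $b roman / $c Knut Hamsun
260    $a [Place] : $b [Publisher], $c 1890
"""
# Nothing here says "author", "title" or "publication"; the operator simply
# has to know that 100, 245 and 260 mean exactly that, down to the subfields.
print(raw_marc)
```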

You might argue that any expert system has this kind of oddity, but you'd be wrong: bad system design introduces this kind of oddity. Good system design abstracts these things away so that they are applied uniformly by all expert operators. You might think that I'm saying that such interfaces are uniformly awful, and harmful to data quality, and you'd be right.

In the case of cataloguing interfaces, too much focus is placed on expert knowledge of annotation details, and too little on knowing what a thing is and what a user needs to know about that thing in order to do what they want to do.

A linked-data-based system should have a workflow that is abstracted away from the underlying technology; there is no reason why, for example, you couldn't have a basically MARC-alike interface with linked data under the hood. I'm not saying that this is a good idea, but it isn't a problem (as long as it is assumed that you only need to represent what MARC can represent).
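As a minimal sketch of what "MARC-alike on top, linked data underneath" could look like (using Python and rdflib; the mapping table, property choices and function names are my own hypothetical illustrations, not any actual system):

```python
# A minimal sketch of a MARC-alike entry form feeding RDF under the hood.
# Assumes rdflib; FIELD_MAP and marc_alike_to_triples are hypothetical names.
from rdflib import Graph, Literal, Namespace, URIRef

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

# Hypothetical mapping from MARC tag/subfield to an RDF property.
FIELD_MAP = {
    ("245", "a"): BF.mainTitle,
    ("100", "a"): BF.agent,  # grossly simplified, for illustration only
}

def marc_alike_to_triples(graph, subject, tag, subfield, value):
    """Translate one MARC-style field entry into a triple."""
    prop = FIELD_MAP.get((tag, subfield))
    if prop is None:
        raise ValueError(f"No mapping for {tag} ${subfield}")
    graph.add((subject, prop, Literal(value)))

g = Graph()
work = URIRef("http://example.org/work/1")
# The cataloguer still types "245 $a", but the store only ever sees triples.
marc_alike_to_triples(g, work, "245", "a", "Sult")
print(g.serialize(format="turtle"))
```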

The system we're currently working on is a radical departure from traditional systems, but for all the radicalness of the linked data (not actually very radical in systems terms), the really radical part is the data-entry interface (again not very radical in systems terms, but a radical departure from most of the other library systems we've looked at).

This isn't something we could have come up with without a lot of help from an interaction designer, and I'm beginning to understand why: we were blinded by tradition. Field-by-field entry is a common methodology in metadata (cf. every metadata interface in every image-editing package).

Further, the belief that the fields that form an RDF description constitute a record is also problematic; it has become clear to us that the record is very much a workflow concept. Its closest analogue in the data model is the RDF description, but the two aren't really related; I'll get back to this in the next post.
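To illustrate the distinction (another hypothetical rdflib sketch, reusing the graph g and subject work from the sketch above): an RDF description is simply the set of triples sharing a subject, while the workflow record may gather several such descriptions.

```python
# Sketch: an RDF description is the set of triples sharing one subject.
def description(graph, subject):
    """All triples about one resource: the data-model analogue of a 'record'."""
    return list(graph.triples((subject, None, None)))

# A workflow "record" might instead gather several related descriptions
# (say, work + instance + item), so the two concepts only loosely line up.
for s, p, o in description(g, work):
    print(s, p, o)
```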

So, given an actual, functional workflow, the motivation for understanding RDF is akin to the motivation for understanding the logical model/physical model of your LMS — specialist knowledge for those librarians doing database tinkering. And it really should be this way.

But what about data I/O? Well, if that isn't part of the workflow in a linked-data-based system, I'm going to have to say that it isn't a linked-data-based system at all.
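For what I mean by I/O being part of the workflow, a sketch (assuming a recent rdflib with built-in JSON-LD support; the URL is hypothetical): reading and writing standard serialisations should be a built-in, boring operation, not an external tool.

```python
# Sketch: data I/O in a linked-data-based system is just reading and writing
# standard serialisations. The URL below is hypothetical.
from rdflib import Graph

g_in = Graph()
g_in.parse("https://example.org/work/1.ttl", format="turtle")

# The same data exported in another serialisation, no special tooling needed.
print(g_in.serialize(format="json-ld"))
```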

†I really didn't realise that people out there use external MARC-editing tools as their workflow; editing records outside the system workflow wasn't a thing in Norway…until Alma came along. But even so, in the workflows of all of the systems I have been exposed to, understanding MARC is still a thing (kudos here to Koha, where the explicit in-situ documentation of MARC is really good), even when it doesn't need to be (looking at you, BIBSYS Bibliotekssystem, where meaningful field names were eschewed in favour of MARC field codes).

 

 
