Following the previous week’s work, where I single-handedly had to trudge backwards through the mess I had created, I came to a couple of realisations.
The first of my realisations was that, as I was doing use-case driven development, working on a user scenario, I should have been working on the system from the outside towards its centre; working my way through each layer writing appropriate tests and ensuring that those tests passed. This meant working from what the user needs in order to get their thing done, while I implemented the things necessary for this to happen — and only those things — through the rest of the system.
I had, however, started at the wrong end: I worked from the data at the centre towards the outside. It never occurred to me that this was a bad idea, even though I really know it is. I have pondered why, and I think that it has something to do with the fact that I have worked with systems where I know the structure of the data even before I have had time to analyse it. Sure, I don’t know the minutiae, but I do know in broad strokes that the design will be a familiar one.
In libraries, we have a lot of givens when it comes to data: we know that the data will largely be formatted in one of a few ways (realistically, we know that in core systems it will be formatted in one and only one way), and we also know that it will be a bit of a mess with some odd workarounds.
There’s a comfort in being an expert in something arcane that is a highly dependable mess; you’re rarely fighting a competent generalist for a position; you’re rarely going to be surprised. You know what can and can’t be done.
Library data is sort of like a battered old sofa with a sweet spot: it is difficult to get out of, both because it is broken and because it is comfortable in an odd way.
Looking at things from a developer’s point of view, it is tempting to work from the known towards the unknown. It’s easiest to start with what you know best and let that flesh out the plan. Unfortunately, making assumptions at the core of the system colours the outcome for the user rather badly: it makes it very hard to encode the necessary qualities of the user scenario through the system, and the result maps badly to actual needs. (Note also that really, the use case should be the known!)
It’s quite obvious in our various data initiatives that we’re a structure-first crowd that is heavily influenced by a proud tradition of solid, dependable data structures, even if we know that these are devoid of connection to real use beyond the production of monospace-font library cards — in paper or digital formats. We imagine we can do something new, never realising that our problem stems not from our format, but from how we view data and its place in our work.
It’s odd that we think that we can guess our way to a relevant data structure before we know how a user will react to the features we’re developing; it’s arrogance and laziness. We spend an awful lot of time in our committees and initiatives worrying about details of things that affect nothing useful. We think in terms of technology rather than usage; we think in terms of our professional community rather than the community of users we serve.
I note to myself that I write above about how data is at the centre of things — this is a familiar mantra from the library community and some might view me as saying that this isn’t the case. Let me address that: I’m firmly of the belief that data is at the very core of what we do and what we need to do — we can’t do anything without it; I just don’t believe that we can usefully talk about data structure independent of use cases and the specific systems that they bring into being.
Library systems will always be wrong until we stop working from the data out and start working from the outside towards the core.