
How can we evaluate data repositories? Pointers from DryadUK


A Whyte | 01 February 2012

How can researchers, publishers and others interested in a data repository tell if it is any good or not? The Dryad UK project has released a 'Draft Framework for Assessing the Dryad Data Repository'. Does this answer the question? It goes part of the way, as it offers a set of criteria that can be applied to Dryad and other data repositories. It also proposes measures to judge whether a data repository meets those criteria. What it did not do was evaluate Dryad using the framework. It did not set out to do that, aiming instead to guide further work.

For those involved in digital preservation and repositories this may seem odd. Substantial work has been done to establish standards for 'Trusted Digital Repositories' (TDRs), including formal audit processes. Important though these are, they are only part of the story. Their value is in setting the bar for 'good practice' in terms of the policies a repository should have and the procedures it needs in order to comply with community expectations. I return to them at the end of this post.

The draft framework offers three sets of criteria: quality of interaction; take-up and impact; and 'policy and process', including whether or not the repository meets TDR standards. The first set covers questions that prospective users are also interested in, such as usability. The second, 'take-up and impact', looks at the factors that motivate people to use a data repository in the first place: what can I put in it? Is anyone else using it? Will others be able to find material deposited in it? Is the repository linked to other data repositories, so I don't have to search there as well? Can anyone reuse the data? Can others cite the data, and will depositing boost citations to related papers?

How do we know these questions are important? In Dryad UK we held stakeholder workshops that brought together publishers, learned societies, researchers and funders. I suggested a range of criteria, asked people to rate their importance, and took part in breakout sessions on the value of Dryad to them. These informed revisions to the criteria; we then carried out online surveys and discussed how to measure them. No doubt these efforts did not go far enough. The criteria capture the views of the biomedical journals and publishers whose 'buy-in' DryadUK was seeking, but I can't say confidently that we captured a broad range of researchers' views. Had time allowed, a survey directed at researchers depositing in and using Dryad could have been better timed; had the budget allowed, we could have tried other approaches and arrived at more concrete measures to apply to Dryad.

We took a close look at one of the questions above: is there a potential boost to citation rates from data deposit? That has been the subject of work by Heather Piwowar, itself much cited, based on a small study of microarray datasets deposited in GenBank and ArrayExpress. In DryadUK she carried out a much larger reanalysis of similar datasets originally collected for her doctoral dissertation, taking into account a broader range of factors known to affect citation rates, such as the number of authors (and hence the 'network effect' of large collaborations). The results should be submitted for publication in the near future.
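To make the reasoning concrete, here is a minimal sketch of the kind of analysis involved: a regression of citation counts on a data-deposit indicator while controlling for confounders such as author count and article age. The variable names and the synthetic data are hypothetical, purely for illustration; this is not Piwowar's actual model or data.

```python
# Minimal sketch (hypothetical data): estimating the association between
# data deposit and citation counts while controlling for confounders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500

# Hypothetical covariates known to affect citation rates
n_authors = rng.integers(1, 20, size=n)        # collaboration size
years_old = rng.integers(1, 10, size=n)        # time since publication
data_deposited = rng.integers(0, 2, size=n)    # 1 if data were deposited

# Simulate citation counts with a modest 'deposit effect' baked in
log_rate = 0.5 + 0.05 * n_authors + 0.15 * years_old + 0.2 * data_deposited
citations = rng.poisson(np.exp(log_rate))

# Poisson regression: exp(coefficient) on the deposit indicator approximates
# the multiplicative change in expected citations, holding covariates fixed
X = sm.add_constant(np.column_stack([n_authors, years_old, data_deposited]))
model = sm.GLM(citations, X, family=sm.families.Poisson()).fit()
print(model.summary())
print("Estimated citation multiplier for deposit:", np.exp(model.params[3]))
```

The point of controlling for author count is that large collaborations attract citations regardless of data sharing, so a naive comparison of deposited versus non-deposited papers would overstate the deposit effect.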

The metrics that the draft framework proposes are borrowed from other areas. Apart from notable exceptions like Peter Murray-Rust's excellent blog post (which is not itself specific to data repositories), I had trouble finding work that adapts metrics and methods for use in a data repository context. Access, usability, discoverability... there is no shortage of metrics and methods we can use to assess repositories. However, they generally assume a context in which the item to be retrieved is a document containing 'information' relating to our query, not a collection of data and related material asserted as evidence to support that 'information'. The usage scenarios are different, which makes this a qualitatively different kind of setting for assessing relevance, usability, discoverability and so on.
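To illustrate what 'borrowed' means in practice, here is a toy sketch of one such metric, precision at k, as it is usually applied in document retrieval. Everything here is a made-up example of my own; the point is that the metric presupposes a simple per-item relevance judgement, which is far less straightforward for a dataset serving as evidence than for a document answering a query.

```python
# Toy sketch: precision@k as used in document retrieval. Applying this to a
# data repository assumes each dataset can be judged simply 'relevant' or
# 'not relevant' to a query, which is the assumption questioned above.
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = ranked_ids[:k]
    hits = sum(1 for item in top_k if item in relevant_ids)
    return hits / k

# Hypothetical search results and relevance judgements
ranked = ["ds-101", "ds-042", "ds-007", "ds-333", "ds-219"]
relevant = {"ds-042", "ds-219", "ds-555"}

print(precision_at_k(ranked, relevant, k=5))  # 0.4
```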

To return to Trusted Digital Repositories, I believe any data repository could benefit from a look down the path that begins with self-assessment against the Data Seal of Approval and ends with an external audit against the new ISO 16363 standard. The Draft Assessment Framework stops short of recommending where exactly Dryad ought to be on that path. That's partly because it wasn't aiming to assess Dryad, but more importantly because it is not clear how far down that path Dryad's stakeholders and user community want it to be. Perhaps it is clearer for other data repositories. If not, maybe there's a need to clarify how the costs and benefits trade off for whoever pays to get the repository to a particular level of TDR accreditation.

Drafting this framework was a small part of a small project, and it would be great if it could be continued in some form. Some questions I have are:

- Are the criteria listed in the framework broad enough to apply to any data repository that links datasets to articles?

- Can you suggest any examples of work, not cited in the report, that identify metrics for assessing data repositories?

- What would improve this draft framework?

Comments will be gratefully acknowledged and summarised somewhere, at least in a follow-up post.