Using RISE, the Research Infrastructure Self Evaluation Framework
By Jonathan Rans and Angus Whyte, Published: 27 January 2017
Please cite as: Rans, J and Whyte, A. (2017). ‘Using RISE, the Research Infrastructure Self-Evaluation Framework’ v.1.1 Edinburgh: Digital Curation Centre. Available online: www.dcc.ac.uk/guidance/how-guides
This work is licensed under Creative Commons Attribution 4.0 International: creativecommons.org/licenses/by/4.0/
- Introduction
- Issues with service development
- Using capability models for self-assessment and planning
- Using the RISE framework in service planning
- Lessons from using RISE
- Future development of the RISE framework
- Contribute to the development of RISE
- Acknowledgements
Introduction
Delivering effective institutional support for research data management (RDM) is a challenge for any HE institution, regardless of size or research intensity. Typically, support should include both technical and human infrastructure, with ownership of individual elements distributed across the institution. Ensuring that RDM service development takes as comprehensive a view as possible and engages effectively with relevant stakeholders is key to successful research data support.
The Research Infrastructure Self-Evaluation (RISE) framework is a benchmarking tool designed to facilitate RDM service planning and development at the institutional level. The tool provides a framework for discussion, enabling relevant stakeholders to contribute their experience to all aspects of a holistically envisioned service. It distils the DCC’s experience working with the HE sector through the Institutional Engagement programme and builds on existing tools such as the DCC’s CARDIO[i].
RISE was created primarily for higher education institutions to help them take stock of their current RDM support provision and identify areas of focus for future development. This process is typically administered by someone from within the institution with significant experience of the local research support infrastructure and a good understanding of the wider issues associated with supporting data management. Regardless of who manages the service review, input is likely to come from representatives of the Library, Research Office and IT, and may also be drawn from other areas. One of the advantages of using RISE is that it provides a means of engaging these stakeholders in productive discussion about service development and allows them to reach a shared vision of where the RDM service aims to be.
Issues with service development
When planning or refreshing institutional support services, there are a number of common issues that the RISE model aims to address:
Firstly, there is a need to adopt a holistic approach to service design, as delivering effective RDM support typically requires people and processes to work together across professional service units. This coordinated result is very difficult to achieve when individual RDM service elements are developed in isolation.
Secondly, it can be difficult to foster discussion across services to ensure that all stakeholders contribute to relevant discussions. Although there should be a clear institutional owner for any distinct service that is part of RDM support, there will be several stakeholders who have a useful contribution to make to service planning and implementation. Free-form discussion can be unproductive where people have varying levels of experience with systems or underlying issues.
Thirdly, organisations can develop their capabilities through different combinations of technologies and products, which evolve rapidly. Using a capability model enables stakeholders to take a step back and consider the question ‘what do we need to provide?’, informed by current practice and developments in skills and technology. This helps keep more detailed discussion of requirements on track, by avoiding the risk that discussion of service development drills down into specific technologies and products too far and too soon. This can prove isolating for those who are not familiar with the technical details of implementation but have valuable views and experience to contribute to the discussion.
Using capability models for self-assessment and planning
Self-assessment is an essential early step towards constructing a service development plan which responds to the individual needs of an institution. It is a key concept in service management that implemented services should be continually reviewed, and a variety of capability and maturity models provide frameworks to facilitate this. Capability models offer different perspectives from which to systematically review key RDM process areas and activities (capabilities), and to assess their level on a scale such as ‘maturity’ or ‘organizational readiness’. It is therefore important to choose a capability model that matches the aims of the assessment, covers a relevant set of processes, and provides a useful scale to rate them against.
A common feature of capability models is a list of process areas, representing a domain such as digital preservation, software development, or open access service provision. These are used to construct a rubric, in tabular form, whose rows list the relevant capabilities. A number of columns (e.g. 3 to 5) are labelled to describe a scale of attainment, and each cell provides a statement describing a level for each capability.
Probably the most widely established model for service development is the Capability Maturity Model (CMM) developed at the Software Engineering Institute (SEI) at Carnegie Mellon University (Paulk et al., 1993)[ii]. In the CMM approach, the scale represents maturity, i.e. the level of organizational capability to reliably perform the process. This maturation reflects the extent to which each process is institutionalized and managed, ideally with quantified measures enabling continuous process improvement.
An influential example of a capability model focusing on the preservation context is the ‘Five Organizational Stages of Digital Preservation’ developed at Cornell University (known as the ‘Cornell model’). This evaluates five stages of ‘preservation readiness’ for institutions, drawing on the OAIS standard. The Cornell Model has influenced a number of recent capability models, including:
- CARDIO (Collaborative Assessment of Research Data Infrastructure and Objectives) developed by the Digital Curation Centre as a tool for institutional stakeholders to self-assess service elements.
- AOR (Assessing Organisational Readiness) developed by University of London Computing Centre as a tool for organisations to measure their readiness for managing digital content.
In many of these capability models the scale, as in CMM, represents maturity: the extent to which each process is institutionalized and managed so that it can be performed reliably and continuously improved.
The DCC’s RISE model adopts a different approach, in that the maturity of capabilities is not explicitly assessed. Instead, the model uses an everyday concept of ‘capability’ as ‘the ability to generate an outcome’, in this case the ability to provide service value. RISE describes three levels of capability that correspond to different levels of service value in each capability area. Broadly speaking, the focus at level one is compliance, at level two locally-tailored services, and at level three sector-leading activity. The model does not include a level corresponding to a complete lack of support activity; in such cases the capability is described as being at level zero.
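To make this structure concrete, the sketch below shows how a RISE-style rubric and its level scale might be recorded for a single capability. It is an illustrative sketch only: the service area, capability name and level statements are invented placeholders, not the published RISE content.

```python
# Illustrative sketch only: the service area, capability and level statements
# below are invented placeholders, not the published RISE content.
RUBRIC_SKETCH = {
    "Research data policy": {                 # hypothetical service area
        "Policy coverage": [                  # hypothetical capability
            "Level 1: policy meets core funder and legal requirements",
            "Level 2: policy is tailored to local research practice",
            "Level 3: policy development leads sector practice",
        ],
    },
}

def level_statement(area: str, capability: str, level: int) -> str:
    """Return the statement for an assessed level; level 0 means no activity."""
    if level == 0:
        return "Level 0: no current support activity"
    return RUBRIC_SKETCH[area][capability][level - 1]

print(level_statement("Research data policy", "Policy coverage", 2))
```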
The RISE framework describes 21 capabilities, distributed across ten research data support service areas. These were identified through the DCC’s institutional engagement programme and applications of the CARDIO tool, and were the basis for a generic RDM service model.[iii] The current version updates that model in keeping with the increasing maturity of RDM support. It draws on DCC surveys of institutional RDM services in the UK, and trends reported in surveys of academic libraries in the UK, US and Europe.[iv],[v],[vi] A DCC workshop in which RDM service practitioners tested an earlier draft of RISE was also extremely useful in validating the model.
The model reflects the high level of diversity among UK research institutions, which range from the highly research-intensive to those that conduct little research or specialise in certain disciplines. This is not unique to the UK. Institutions worldwide are addressing funder and community expectations of broader research data sharing, but the appropriate level of response is largely defined by the institutional context. It would be unrealistic to expect every institution to provide the same level of service capability across every element of RDM support. RISE aims to help institutions identify which capabilities are appropriate for them and therefore which areas to prioritise in their service improvement planning.
The level of service capability that it is feasible or desirable to deliver will depend on the institutional context. Where it might be considered essential that a large research-intensive institution provide an in-house data publishing platform, an institution with a modest research capacity may do better to outsource or share aspects of the service, including the repository platform itself, with other institutions or an external provider. The RISE capability model aims to recognise this contextual difference by providing three possible levels of service capability, using compliance with the main policy expectations of research funders and with legal requirements as a starting point. It should be noted that while the levels offer a progression in terms of service capability, RISE does not assume that more is better. The level of capability offered should be proportionate to costs that are justifiable, considering local research strategy, available resources, and likely demand for the relevant services.
Using the RISE framework in service planning
Aiming to ensure that institutions could use RISE in a variety of contexts, the DCC engaged with 16 UK HE institutions to test its relevance and utility. Applications ranged from using the tool as a framework for a semi-structured interview with RDM service managers and selected central support staff, to using it in a group workshop session to discuss data publication needs. The RISE outputs from this session informed a more detailed assessment of shortlisted platforms, based on capabilities set out in ReCap, a sister DCC model for evaluating data repositories.[vii]
The method for working through the RISE model is relatively straightforward, though achieving consensus across the institution may add layers of complexity. Broadly speaking, there are four stages to using the RISE model:
- Setting the scope and identifying context
- Classifying current RDM support provision
- Identifying feasible levels of service provision based on what is desirable
- Reporting and recommendations
We describe these steps in more detail below.
1. Setting the scope and identifying context
It is important when starting a RISE assessment to have a clear goal in mind; this helps to define the scope of the assessment and identify the relevant stakeholders to engage in discussions. Generally speaking, this phase will involve quite a small subset of institutional stakeholders, often those with overall responsibility for providing institutional RDM support. Examples of requirements that have been addressed using the RISE framework are:
- ensuring that RDM infrastructure meets RCUK funder requirements
- providing a clear overview of current provision
- helping establish a roadmap for further development.
2. Classifying current support provision
In this phase, engagement is widened to include a range of institutional stakeholders. The approach to using RISE should be tailored to the number of people who need to be involved at this point. If only one or two stakeholders need to be involved in the assessment, RISE can be used as the basis for a discussion. If more, then a service or project manager can use RISE as a basis for preparing semi-structured interviews or a workshop.
Whichever approach is adopted, relevant stakeholders should work through the tool to identify which statement for each capability best suits the current institutional support provision. This phase can usefully draw on relevant contextual information, such as user surveys or consultations. This information can subsequently be summarised during the reporting phase of the assessment to frame and qualify implementation decisions.
3. Designing the future service
The next stage in the process is to identify, for each relevant capability in the RISE model, what level is considered feasible and desirable. This decision should take into account institutional philosophy, the resourcing outlook, and the benefits and risks associated with moving to a higher level of capability. In practice, phases two and three have been performed concurrently, although this step may be omitted if the purpose of the exercise is solely to assess the current status of the service.
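Where current and target levels are recorded for each capability during phases two and three, a simple gap analysis can feed into the reporting phase. The sketch below is a minimal illustration, assuming hypothetical capability names and levels rather than real assessment data.

```python
# Minimal gap-analysis sketch: capability names and levels are hypothetical
# examples, not real RISE assessment data.
current = {"Data management planning": 1, "Active data storage": 2, "Data publishing": 0}
target = {"Data management planning": 2, "Active data storage": 2, "Data publishing": 1}

# List capabilities where the desired level exceeds current provision, largest gap first.
gaps = sorted(
    ((cap, target[cap] - level) for cap, level in current.items() if target[cap] > level),
    key=lambda item: item[1],
    reverse=True,
)

for capability, gap in gaps:
    print(f"{capability}: raise capability by {gap} level(s)")
```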
4. Reporting and recommendations
Producing a formal report as part of the RISE process is optional; some users have simply found the framework useful for initiating conversations between RDM stakeholders and reaching a consensus about the service. For others, RISE has proved valuable for identifying gaps in support provision and aiding prioritisation decisions, contributing to the development of roadmap documents. Working through the RISE framework uncovers useful information about the case for service development that can be incorporated into business plans. Used alongside its sister model ReCap, RISE can also help scope high-level requirements for data repository platforms and so progress towards more detailed discussions around platform selection.
Lessons from using RISE
We have drawn a number of lessons from working through the RISE framework with HE institutions:
- Have a fixed goal in mind, and communicate this clearly to all participants
- Allow sufficient time to work through the model; the time taken will depend on:
  - The number of participants (ideally no more than eight at once)
  - The organisation’s level of engagement with RDM issues
  - The scope of the assessment – will all or part of the framework be used?
  - The level of detail required – in addition to the main capabilities, RISE offers suggestions of associated questions to consider during discussion
  - The extent to which current and/or future service provision is already defined
- Stakeholders should familiarise themselves with the framework individually before coming together to discuss it collectively.
  - This is particularly true for the individual administering the process, who will need to guide discussion and potentially resolve situations in which participants’ interpretation of capabilities differs.
- Ensure discussion is captured so that useful information can be summarised and fed back
By and large, the length of time needed to complete a RISE assessment depends on the level of detail required from discussion. As an indication, a full assessment in a workshop group of 6-8 people would take half a day, assuming participants are familiar with the framework.
Future development of the RISE framework
Development of the RISE framework to date has drawn on the Digital Curation Centre's experience of working with the RDM community. It was conceived as an outcome of the DCC programme of engagement with UK universities, and with the input of RDM professionals from a range of Higher Education institutions, who provided feedback and validation. We aim to keep developing it, and future iterations will respond to user feedback and advances in sector best practice.
A more detailed version of this guide will include further discussion of applying the framework. We aim to publish selected case studies from institutions that use RISE, with or without DCC support. These case studies will discuss the pros and cons of different approaches, present lessons learned and, where possible, include examples of outputs from the process.
Further guidance is intended on how to use the outputs of RISE with ReCap, its companion model. The models are intended to be compatible, so that broad service development discussions can extend to more in-depth consideration of data repository platform options. This should better enable non-technical services staff to help shape requirements, and address difficulties in getting researchers’ input. ReCap is the first of a two-part guide, the second part of which describes the data repository workflows and contexts that inform the model.[vii]
Further work is also intended in a number of areas, including community consultation on specific benefits to the organisation and to service users that may be realised as a service increases in capability from one level to the next. Over the next few years we aim to integrate RISE with career development frameworks. Like ReCap and other capability models, RISE is focused on improvements at the level of the service and its host organisation. Delivering these improvements means having the right skills in place for individuals in RDM service provider roles. The uptake of new services also depends on researchers having the right skills to use the services offered to them.
It makes sense therefore to use capability models in conjunction with competence frameworks. Competence (or ‘competency’) frameworks define what people with specific roles should be able to do. These frameworks are typically used by human resources professionals to analyse training needs, and to manage career development and performance review/reward.[viii] They have also been used by the European Commission to support its ‘new skills agenda’, for example through DigComp, a common reference framework for public authorities to implement digital skills development for citizens,[ix] and the European e-Competence Framework, which addresses the education of IT professionals to meet workplace competence needs.[x] The EDISON project takes a similar approach to develop a competence framework for Data Science.[xi]
In the European Open Science Cloud (EOSC) pilot project, DCC is working with partners to develop an integrated competence and capability framework, informed by shared experience in supporting service development and validating EOSC services. The framework aims to help organisations plan for the effective deployment of services that European Research Infrastructures offer researchers to better enable data science. Institutional research data services will also need to be aware of EOSC services and help researchers use them. Joining up competence frameworks in data science and data management,[xii],[xiii] the EOSC framework will help organisations ensure the right training is included in service development roadmaps, and in the career development plans for relevant staff.
There is also scope for further work to better integrate RISE with other capability models, such as the Community Capability Model for Data Intensive Research (CCM-DIR)[xiv] and the Capability Maturity Model for RDM (CMM-RDM).[xv] Integration could make it easier for individuals and organisations to apply a model best suited to their needs.
Contribute to the development of RISE
RISE has been tried and tested through practical application with the RDM community. We would welcome any feedback on the model to contribute to its continued improvement, and we would be equally interested to hear accounts of how it has been used to shape service development. Please contact Jonathan Rans (j.rans@ed.ac.uk) or Angus Whyte (a.whyte@ed.ac.uk).
Acknowledgements
We gratefully acknowledge support from Jisc, which funded some elements of this work carried out prior to August 2016. We also thank Kerry Miller (University of Edinburgh), Lisa Haddow (University of Stirling), David Young (University of Northampton), Georgina Parsons (Cranfield University) and all the participants in the DCC’s RISE workshop in June 2016 for invaluable testing and feedback on the framework.
[i] Digital Curation Centre (n.d.) CARDIO, available at: /resources/tools/cardio
[ii] Paulk, M. C., B. Curtis, M. B. Chrissis, and C. V. Weber. (1993) Capability Maturity Model for Software, version 1.1. CMU/SEI-93-TR-24. Pittsburgh: Carnegie Mellon University, Software Engineering Institute.
[iii] Jones, S., Pryor, G. & Whyte, A. (2013). ‘How to Develop Research Data Management Services - a guide for HEIs’. DCC How-to Guides. Edinburgh: Digital Curation Centre. Available online: /guidance/how-guides
[iv] Cox, A. M. and Pinfield, S. (2013). Research data management and libraries: Current activities and future priorities. Journal of Librarianship and Information Science, 46(4), 2014. Available: https://dx.doi.org/10.1177/0961000613492542
[v] Corrall, S., Kennan, M. A., & Afzal, W. (2013). Bibliometrics and Research Data Management Services: Emerging Trends in Library Support for Research. Library Trends, 61(3), 636–674. Available: https://doi.org/10.1353/lib.2013.0005
[vi] Cox, A. M., Kennan, M. A., Lyon, L., & Pinfield, S. (2017). Developments in research data management in academic libraries: Towards an understanding of research data service maturity. Journal of the Association for Information Science and Technology. Available: http://eprints.whiterose.ac.uk/101389/
[vii] Whyte, A. et al. (2017) How to Evaluate Data Repository Platform Options. DCC How-to Guides. Edinburgh: Digital Curation Centre. Available online: /guidance/how-guides
[ix] European Commission, EU Science Hub (2016, June 1). DigComp 2.0: The Digital Competence Framework for Citizens. Update Phase 1: the Conceptual Reference Model. Available at: https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-res...
[x] European Committee for Standardization. (2016). e-CF overview | European e-Competence Framework. Available at: http://www.ecompetences.eu/e-cf-overview/
[xi] EDISON (2016) Competence framework for data science, Available at: https://edison-project.eu/data-science-competence-framework-cf-ds
[xiii] Confederation of Open Access Repositories (2016) Librarians’ Competencies for E-Research and Scholarly Communication. Available at: https://www.coar-repositories.org/activities/support-and-training/task-f...
[xiv] Lyon, L. (2012). Community Capability Model for Data-Intensive Research. Available at: https://communitymodel.sharepoint.com/Documents/CCMDIRWhitepaper-v1-0.pdf
[xv] Crowston, K. & Qin, J. (2011). A capability maturity model for scientific data management: Evidence from the literature. In: Proceedings of the American Society for Information Science and Technology, 48: 1–9. doi:10.1002/meet.2011.14504801036