Web 2.0

By Daisy Abbott, The Glasgow School of Art

Published: 27 April 2009

Please cite as: Abbott, D. (2009). "Web 2.0". DCC Briefing Papers: Introduction to Curation. Edinburgh: Digital Curation Centre. Handle: 1842/3324. Available online: /resources/briefing-papers/introduction-curation

1. Introduction

The term 'Web 2.0' refers to a way of thinking about networked communications: a collaborative and social way of working underpinned by the key concept of the Web (rather than the desktop) as a platform. Web 2.0 is characterised by data sharing, personalised data consumption, Web-based communities, and Web-hosted services and applications. Examples of typical Web 2.0 technologies are social networks, media-sharing sites, wikis, blogs, social bookmarking, folksonomies, syndicated content (RSS) and Rich Internet Applications (RIAs).

The term was coined in 2004 by Dale Dougherty, a vice-president of O'Reilly Media.[1] As the concept has matured, along with the development of Web technologies and standards, it is having a significant impact on distributed infrastructure and applications, and on the way users and developers interact.[2] Web 2.0 maximises the potential of various ideas about networked communication, including contribution (user-generated content), collaboration over networks, community participation ('the power of the crowd'), data on an enormous scale, combining datasets (e.g. mashups), and open sharing.[3]

The concepts behind Web 2.0 can change approaches to research: increasing access to data and expertise, creatively combining data to produce new knowledge, and encouraging large-scale participation and personalisation. However, alongside these benefits to researchers comes the challenge of curating large, complex, and dynamic data.

Scholarship is becoming more technologically mediated, even if researchers are not necessarily interested in how the technology works. Web 2.0's emphasis on concept and process rather than the release of a 'product' leads to new roles within research: those who develop applications, software, and standards are now less separated from those who use these tools for research, many of whom now take up a 'hacker' role, i.e. building upon or subverting an existing technology to fit a new purpose.[4] This concept of 'generativity', of ideas, data, or code being developed by non-developers, is a key characteristic of Web 2.0. It allows tools to be customised more closely to the needs of users, and to fit purposes for which they were never originally intended.

2. Examples of Web 2.0 in Practice

There are many examples of Web 2.0 services and systems of use to academia, some of which are already making an impact on learning and research. Web 2.0 technologies specifically for curation are less prevalent, although some initiatives are emerging.[5] A few of the most relevant examples are noted below:

Cloud computing

Based on the concept of Software as a Service (SaaS), cloud computing delivers virtualised resources across the network, i.e. storage, platforms, software and so on are accessed remotely over a network of computers, known as the 'cloud'.[6] It is scalable, allows the 'outsourcing' of IT support, provides services on demand, and is attractive to users who require a 'pay as you go' cost model with a low start-up cost. However, there are clear concerns for curators, given the possibility that anything in 'the cloud' is outside the control of users and could potentially disappear, or be curated to different standards or methodologies. The cloud computing community are developing a Bill of Rights for users.[7] An example of the use of cloud computing in Higher Education is the CARMEN project (http://www.carmen.org.uk/), which uses a private cloud for neurophysiology research.[8]
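
To illustrate the 'resources over the network' idea, the sketch below (in TypeScript) stores and then retrieves a small dataset over HTTP. The endpoint, bucket name, and access token are hypothetical placeholders rather than any particular provider's API; real services such as Amazon Web Services have their own interfaces and SDKs, but the shape of the interaction is similar.

    // Hypothetical object-storage service: the data lives 'in the cloud' and is
    // reached over the network rather than held on the local desktop.
    const BASE = 'https://storage.example.org/my-research-bucket';
    const TOKEN = 'replace-with-an-access-token';

    // Upload a dataset (pay-as-you-go providers typically charge per unit stored
    // and per request).
    async function putObject(key: string, body: string): Promise<void> {
      const res = await fetch(`${BASE}/${encodeURIComponent(key)}`, {
        method: 'PUT',
        headers: { Authorization: `Bearer ${TOKEN}`, 'Content-Type': 'text/csv' },
        body,
      });
      if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
    }

    // Retrieve it again from anywhere with network access and the right credentials.
    async function getObject(key: string): Promise<string> {
      const res = await fetch(`${BASE}/${encodeURIComponent(key)}`, {
        headers: { Authorization: `Bearer ${TOKEN}` },
      });
      if (!res.ok) throw new Error(`Download failed: ${res.status}`);
      return res.text();
    }

    putObject('spike-trains-2009.csv', 'neuron,time_ms\n1,12.5\n')
      .then(() => getObject('spike-trains-2009.csv'))
      .then((csv) => console.log(csv));

Even this toy example makes the curation concern visible: once the upload succeeds, the data is kept, and curated, only to whatever standard the provider chooses.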

Ajax technologies

Ajax (which originally stood for Asynchronous JavaScript and XML) comprises a group of Web technologies for programming Rich Internet Applications that communicate with servers in the background, without interfering with the current state of the page or the user's interaction with it.[9] For example, a text box in a search interface can look up and provide suggestions for journal titles based on the first few letters a user types in, allowing auto-completion from a validated list.[10]
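
To make the auto-completion pattern concrete, here is a minimal sketch (in TypeScript). The element IDs and the /api/journals endpoint are hypothetical placeholders; the point is only that the lookup happens asynchronously, in the background, while the page and the user's typing are left undisturbed.

    // Minimal auto-completion sketch. As the user types, the browser asks the
    // server for matching journal titles in the background (the Ajax request)
    // and fills a suggestion list without reloading the page.
    // The element IDs and endpoint are hypothetical.
    const input = document.getElementById('journal-search') as HTMLInputElement;
    const suggestions = document.getElementById('suggestions') as HTMLUListElement;
    let debounce: number | undefined;

    input.addEventListener('input', () => {
      window.clearTimeout(debounce);
      debounce = window.setTimeout(() => {
        const prefix = input.value.trim();
        if (prefix.length < 2) return;
        const xhr = new XMLHttpRequest(); // the classic asynchronous request object
        xhr.open('GET', '/api/journals?prefix=' + encodeURIComponent(prefix));
        xhr.onload = () => {
          const titles: string[] = JSON.parse(xhr.responseText);
          suggestions.innerHTML = '';
          for (const title of titles) {
            const item = document.createElement('li');
            item.textContent = title;
            suggestions.appendChild(item);
          }
        };
        xhr.send();
      }, 250); // wait briefly so that a request is not sent on every keystroke
    });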

A few selected tools for researchers are:

  • Mendeley — software for managing a library of research papers
  • MyExperiment and Taverna — for designing and sharing workflows
  • Slideshare and SciVee — for sharing presentation slides, posters, and videos of your research
  • Geographic information tools/projects such as GeoVUE, MapTube, and other software provided by the UCL Centre for Advanced Spatial Analysis[11]
  • The Journal of Visualized Experiments (JoVE) — biological research published in video format[12]
  • iGoogle and Pipes — personalised mashups of various content, including syndicated feeds (a short mashup sketch follows this list)
  • Google Gadgets — for disseminating personalised, dynamic content as miniature objects on web pages
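
As a deliberately simple illustration of a mashup of syndicated content (see the iGoogle and Pipes item above), the sketch below fetches two RSS feeds and merges their items into a single, date-ordered list. The feed URLs are placeholders, and the parsing is naive; a real application would use a proper feed parser.

    // Naive RSS mashup: fetch two feeds, pull out item titles and dates, and
    // merge them into one chronological list. The feed URLs are placeholders.
    const FEEDS = [
      'https://example.org/research-news/rss.xml',
      'https://example.org/data-blog/rss.xml',
    ];

    interface FeedItem { title: string; published: Date; source: string; }

    // Rough extraction of <item> titles and publication dates; enough to show
    // the principle, not suitable for production use.
    function parseItems(xml: string, source: string): FeedItem[] {
      const items: FeedItem[] = [];
      for (const block of xml.match(/<item>[\s\S]*?<\/item>/g) ?? []) {
        const title = /<title>([\s\S]*?)<\/title>/.exec(block)?.[1] ?? 'Untitled';
        const pubDate = /<pubDate>([\s\S]*?)<\/pubDate>/.exec(block)?.[1] ?? '';
        items.push({ title: title.trim(), published: new Date(pubDate), source });
      }
      return items;
    }

    async function mashup(): Promise<void> {
      const all: FeedItem[] = [];
      for (const url of FEEDS) {
        const xml = await (await fetch(url)).text();
        all.push(...parseItems(xml, url));
      }
      // Combine the two sources into one view, newest first.
      all.sort((a, b) => b.published.getTime() - a.published.getTime());
      for (const item of all) {
        console.log(`${item.published.toUTCString()}  ${item.title}  (${item.source})`);
      }
    }

    mashup();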

3. HE/FE Perspective

"The open nature of Web 2.0, its easy-to-use support for collaboration and communities of practice, its ability to handle metadata in a lightweight manner and the nonlinear nature of some of the technology […] are all attractive in the research environment."

— Anderson, P. (2007). "What is Web 2.0? Ideas, technologies and implications for Education" JISC Technology and Standards Watch.

4. e-Science Perspective

"Web services, interfaces that simplify access to both data and software, can enable scientists to access multiple existing interoperable databases and mesh together relevant data from each to create a new, richer, data collection. The limitation on the potential of such services is not technological but cultural — they can work successfully only with real-time access to the source databases and of course can work only on publicly available content."

— Swan, A. (2006). "Overview of scholarly communication" in Jacobs, N. (ed.) Open Access: Key Strategic, Technical and Economic Aspects, (Chandos Publishing: Oxford, UK) p. 7.

5. Issues to be Considered

  • What are the privacy implications of data being openly shared, repurposed, and mashed up? There are also ongoing concerns about the security of data held in Web 2.0 platforms and applications.[13]
  • "The development of the Web has seen a wide range of legal, regulatory, political and cultural developments surrounding the control, access and rights of digital content. However, the Web has also always had a strong tradition of working in an open fashion and this is also a powerful force in Web 2.0: working with open standards, using open source software, making use of free data, re-using data and working in a spirit of open innovation." (Anderson, 2007) Who owns data when it has evolved into different formats and combinations? There is a tension between 'freeing' data and protecting the rights (where they exist) of data owners.
  • In terms of data curation, do the characteristics of Web 2.0 make its data challenging to collect and preserve? Is curation of Web 2.0 data fundamentally different from previous Web archiving efforts? Web 2.0 services, mashups, and tools typically use structured layers of data, with APIs providing access to very large, dynamic datasets; this can make access to the underlying data problematic (see the sketch following this list).
  • Much Web 2.0 data is constantly evolving — how should the curation of dynamic resources be managed?
  • How transient are current Web 2.0 applications and technologies? Will they be superseded by Web 3.0 in the next five years?
  • Is there a difficulty with information overload?
  • Should researchers be concerned about their digital identities? Web 2.0 encourages mass participation in the form of tags, comments, and linkages. Does the loss of control over defining one's own identity online pose a threat to academic reputation?[14]
  • How can digital contributions, products, or processes be credited or cited appropriately?[15]
  • Is Web 2.0 design being dominated by technological issues, without due care for social design? It is often argued that 'subvertible' or generative design is the key to building a successful Web 2.0 application.
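
The data-access point raised above can be illustrated with a short sketch. A curator harvesting records from a hypothetical paginated API can only capture the data page by page, subject to rate limits, and ends up with a point-in-time snapshot of a resource that may already have changed by the time the harvest finishes.

    // Harvest a hypothetical paginated API into a local snapshot for curation.
    // Each response carries a batch of items and, optionally, a link to the next page.
    interface Page<T> { items: T[]; next?: string; }

    async function harvest<T>(firstPageUrl: string): Promise<T[]> {
      const snapshot: T[] = [];
      let url: string | undefined = firstPageUrl;
      while (url) {
        const res = await fetch(url);
        if (res.status === 429) {
          // Rate limited by the service: wait and retry the same page.
          await new Promise((resolve) => setTimeout(resolve, 5000));
          continue;
        }
        const page: Page<T> = await res.json();
        snapshot.push(...page.items);
        url = page.next; // follow the 'next page' link until it runs out
      }
      // What comes back is a point-in-time snapshot of a constantly changing resource.
      return snapshot;
    }

    harvest<{ id: string; tags: string[] }>('https://api.example.org/records?page=1')
      .then((records) => console.log(`Captured ${records.length} records`));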

6. Additional Resources

Notes

  1. O'Reilly (2005).
  2. Anderson (2007).
  3. O'Reilly (2005) lists seven principles of Web 2.0: the Web as platform; harnessing collective intelligence; data is the next 'Intel inside'; the end of the software release cycle; lightweight programming models; software above the level of a single device; and rich user experience.
  4. Eric Meyer at The Influence and Impact of Web 2.0 on e-Research Infrastructure, Applications and Users, 2009.
  5. E.g. McCown et al. (2009).
  6. Examples include Amazon Web Services and Google Apps.
  7. Cloud Computing Bill of Rights.
  8. For more information, see Watson (2008).
  9. More about Ajax is available from the W3Schools Ajax tutorial.
  10. See Romeo look-up demo.
  11. See GeoVUE, Maptube.com, and CASA.
  12. See also Open Wetware.
  13. Cf. Brodkin (2008).
  14. For more, see Solove (2007).
  15. Cf. the MESUR project.
