Highlighted Selections from:

Citizen Science as Seen by Scientists: Methodological, Epistemological and Ethical Dimensions


DOI: 10.1177/0963662513497324

Riesch, H, and C Potter. “Citizen Science as Seen by Scientists: Methodological, Epistemological and Ethical Dimensions.” Public Understanding of Science 23.1 (2014): 107–120. Web.

p.108: In this paper we present the results of a series of qualitative interviews with scientists who participated in the ‘OPAL’ portfolio of citizen science projects that has been running in England since 2007: What were their experiences of participating in citizen science? We highlight two particular sets of issues that our participants have voiced, methodological/epistemological and ethical issues. While we share the general enthusiasm over citizen science, we hope that the research in this paper opens up more debate over the potential pitfalls of citizen science as seen by the scientists themselves. -- Highlighted mar 15, 2014

p.108: CS is a contested term with multiple origins, having been coined independently in the mid-1990s by Rick Bonney in the US (see Bonney et al., 2009) to refer to public-participation engagement and science communication projects, and in the UK by Alan Irwin (1995) to refer to his developing concepts of scientific citizenship which foregrounds the necessity of opening up science and science policy processes to the public. -- Highlighted mar 15, 2014

p.109: Mainly referring to the US understanding of the term, CS is often seen by scientists as an idea that successfully combines public engagement and outreach objectives with the scientific objectives of scientists themselves (Silvertown, 2009) -- Highlighted mar 15, 2014

p.109: Accompanying the scientists’ general enthusiasm towards CS, there has been an increasing interest within social science in analysing the CS concept more thoroughly, through empirical case studies of the public–expert relationship (Cornwell and Campbell, 2012), the experience and motivations of participants (Mankowski et al., 2011; Raddick et al., 2010), or the learning outcome of public participants either in terms of traditional science knowledge or in terms of knowledge of the scientific method (Crall et al., 2012; Cronje et al., 2011; Jordan et al., 2011; Trumbull et al., 2000). These studies look at CS in a more analytically critical light than the often very enthusiastic and optimistic assessments from scientists; for example, Trumbull et al. (2000) demonstrate that, despite early hopes, the CS projects they looked at were not particularly successful in increasing knowledge of scientific method. -- Highlighted mar 15, 2014

p.109: While this literature is certainly insightful about scientists’ experiences, there is still a need for independent social scientists to have a look behind the curtains and find out what scientists working on CS think about it, and where they see the real challenges that CS needs to address if we want it to be successful. One particular reason for this need is that we can expect ‘publication bias’ from scientists’ writing about their own experiences: projects that for whatever reason did not work very well will provide little motivation for participating scientists to publish their experience. -- Highlighted mar 15, 2014

p.112: For example the Birmingham group is running bird-ringing training sessions leading to accredited status for participants in their study. The difference between the national scale OPAL CS projects and the regional ones is important here, because this influences the strategies the scientists have developed in order to deal with issues such as data quality and validation as well as potential problems they encounter in recruiting and relying on individual participants; both these themes will be further examined in the following section. -- Highlighted mar 15, 2014

p.112: This section will present two particular issues that were raised by the interviewees; these are the methodological and epistemological dimensions (arising from concerns over the interpretation and gathering of data) and ethical dimensions (arising from concerns about how the public is potentially being treated as well as ethical implications for scientists). -- Highlighted mar 15, 2014

p.113: What is interesting, however, is the range of methods and approaches used between and even within the different OPAL sub-projects. We can roughly divide these into

  • (a) providing training/close supervision: By giving participants extensive training and/or supervising them in the gathering and interpreting of the data they collect, scientists have in a sense mimicked the way their own expertise has developed. This type of reassurance over the quality of the data was of course only available in those sub-projects of OPAL that had the time and resources available to provide individual training; this simultaneously restricted the number of public participants that the CS project could reach.
  • (b) cross-checking for consistency with existing literature: Used more in some of the national projects and in conjunction with other methods, this involved looking at the data that came back from the public and comparing it with what would be expected based on previous research in the area.
  • (c) cross-checking for consistency with their own observations: Similar to the above, this involved the scientists going out to observe the public, doing the survey themselves, and looking at how the public data compared.
  • (d) quiz-style questionnaire at the end of surveys: One survey that asked members of the public to identify a series of species included a ‘quiz’ at the end which the scientists used to gauge the reliability of the public data.
  • (e) simplifying the tasks asked of the public and/or adapting the research questions: Most frequently, the scientists tried to deal with data quality by making sure the questions were simple and easy enough that little could go wrong. This was used in tandem with the other methods. Because OPAL was conceived as a public education and engagement exercise that should be accessible to anyone, regardless of age and ability, simplicity of the task was part of every project from the start.

-- Highlighted mar 15, 2014

p.114: Issues over data quality, whether real or anticipated, also led to secondary considerations over how the perception of public data affects the science being produced. -- Highlighted mar 15, 2014

p.115: even some of the very positive and enthusiastic OPAL scientists were doubtful whether this type of science could ever lead to revolutionary results. ‘Whether or not they contribute to a better science? You know I can see how that can, but ... it hasn’t ... I don’t think. ... you get some ideas, but you don’t get, you don’t get eureka moments’ (32 national). -- Highlighted mar 15, 2014

p.116: From its inception, OPAL management have consciously sought to avoid the term ‘volunteer’ to describe public participants in order to make clear that the project is aiming to give something back to the community rather than just take their free labour. -- Highlighted mar 15, 2014

p.117: Difficulties CS projects can have with recruiting participants were also noted by Evans et al. (2005) in their assessment of their own projects, noting especially that well-educated and middle-class people tended to dominate, thereby limiting the potential for outreach to deprived groups that is central to projects like OPAL. -- Highlighted mar 15, 2014

p.118: The first ethical issue identified by the interviewees, public access to (their) data and science, is probably more easily dealt with. It leads, however, to the related issue of explicit acknowledgement of the public contribution on the same terms as that of the scientists, for example through co-authorship in scientific publications. While there is of course a practical issue here that can be, and usually is, solved on a case-by-case basis by individual scientists, we also believe that this issue might need addressing through a more abstracted debate. This is particularly relevant if we see the value of CS as a collaborative enterprise, where lay–expert boundaries are being broken down and the public become fully equal contributors to scientific knowledge. Conceiving CS in this way, attributing authorship is only one of a series of ethical problems that need addressing, and probably the least urgent. The real remuneration for professional scientists’ work is not just authorship but of course their pay. If public participation and expertise is to be thought of as equal in status to the professional contribution, then it is legitimate to ask how we can justify that one part of a project is being done by people who are getting paid for it, and another is being done by people doing it for free. -- Highlighted mar 15, 2014

p.118: on the contrary, through actively targeting deprived communities, OPAL and similar CS projects actively try to be a force for social mobility. -- Highlighted mar 15, 2014

p.119: [if] we overlook the ethical problems that CS raises we may end up unintentionally strengthening the lay–expert boundaries that CS was thought to overcome, through potentially fostering a sense of resentment by junior scientists who might feel their jobs are being outsourced, or by leaving us open to accusations of unintentionally exploiting free labour. -- Highlighted mar 15, 2014