
WHAT MAKES GOOD SCIENCE?

At a philosophical level, our most noble aims are the pursuit and furthering of knowledge for the betterment of humanity and the wider world. Big data sets and the computational capability to assess them are increasingly accessible (Leonelli 2019), which drives a need for us, as researchers, to understand what we are going to do with these emerging technologies and resources, and in particular to ask what "good science" actually is.

Putting numbers to this goal and objectively measuring the concept is challenging, but beloved by STEM and management personnel alike. These measures include Journal Impact Factors (JIFs), which look at the "weight" and quality of a publication venue (Science-Direct 2023); Author Impact Factors (AIFs) such as the H-index, an expression of an author's productivity and impact (University_of_Pittsburg 2022); and further metrics still for Principal Investigators (PIs), labs, and their associated research groups (Ioannidis 2022). Whilst a measurement itself is intended to be neither good nor bad, the way it is used, influenced, or "gamed" can be.
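As a concrete illustration of one of these metrics, here is a minimal sketch (in Python, not part of the original post) of how an H-index can be computed from an author's per-paper citation counts: the H-index is the largest h such that h of their papers each have at least h citations.

```python
def h_index(citations):
    """Return the H-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)        # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):  # rank is 1-based
        if cites >= rank:
            h = rank                                # this paper still "counts" towards h
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3: three papers have at least 3 citations each
```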


Nassi-Calò (Coordinator of Scientific Communication in Health for the World Health Organization) reviewed what these metrics mean and reminded scientists that it is important to ask: “what is more important, what is being said, or where it is being said” (Nassi-Calò 2017). The University of Birmingham have produced a table describing many of these metrics along with their strengths and weaknesses (University_of_Birmingham 2022).


“Journal Impact Factor assumes that publication in ‘high impact’ journals is synonymous with quality.” (University_of_Birmingham 2022)


Having determined how "good science" is measured, the inverse must also be considered. There are many negative examples, ranging from p-hacking to plagiarism, and from cases that go unnoticed and uncaught to the now infamous case of Watson, Crick, and Wilkins winning a Nobel Prize for the discovery of DNA’s double helix without due credit being given to Rosalind Franklin (Science_History_Institute 2022).

[Comic illustration (Munroe)]
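To make the p-hacking problem concrete, here is a minimal, illustrative simulation (not from the original post; the group sizes and threshold are assumed) showing that if you run enough comparisons on pure noise, something will eventually look "significant" at the conventional p < 0.05 level.

```python
# Illustrative p-hacking simulation: repeatedly test "subgroups" of pure noise.
# With a 0.05 threshold, roughly 1 in 20 meaningless comparisons looks significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_tests = 20
false_positives = 0

for _ in range(n_tests):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)  # no real effect in either group
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons came out 'significant'")
```

Run it a few times and at least one "finding" usually turns up, despite there being nothing to find.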


Other elements of quality are harder to measure externally, such as the drafting of a hypothesis to be confirmed or refuted by evidence, rather than collecting data first and then writing (and publishing) a hypothesis that fits it, or doing the work only to publish positive results. The related trend of not reporting negative results, and what that does to the scientific community, is worthy of its own blog post and is discussed by the "Embassy of Good Science" (Mezinska 2020). Beyond the many statistical, credit-based, and other desk-based examples of bad science, there are many ethical issues and incidents to consider in experimental design, and these drive institutional policies and frameworks (Resnik 2022).
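As a rough illustration of what selective publication does to the literature (again a sketch, not part of the original post, with an assumed true effect and study size), the snippet below simulates many small studies of the same modest effect and shows that the studies clearing the "significant, positive result" bar report a noticeably inflated average effect.

```python
# Illustrative publication-bias simulation: a small true effect, many small studies,
# and a "journal" that only accepts significant positive results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
true_effect = 0.2          # modest true difference between groups (in SD units)
n_per_group = 25
n_studies = 2000

all_effects, published_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    observed = treated.mean() - control.mean()
    _, p_value = stats.ttest_ind(treated, control)
    all_effects.append(observed)
    if p_value < 0.05 and observed > 0:    # only positive, significant results get "published"
        published_effects.append(observed)

print(f"true effect:                        {true_effect:.2f}")
print(f"mean effect across all studies:     {np.mean(all_effects):.2f}")
print(f"mean effect in 'published' studies: {np.mean(published_effects):.2f}")
```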

We have seen how to measure the "good" and what makes the "bad". Under what conditions can someone produce good science?

  • Adequate supervision – both from internal bodies such as a research governance programme or a supervisor / PI, and from suitable external bodies like the Human Tissue Authority (HTA) (Human_Tissue_Authority) or the United Kingdom (UK) Accreditation Service (UKAS).

  • Proper experimental design – carried through the whole process: from a well-developed hypothesis, through any physical experiments with specific, expected outputs, to how the resulting data will be assessed to determine whether the hypothesis is refuted or not. “To consult the statistician after an experiment is finished is often merely to ask them to conduct a post-mortem examination. They can perhaps tell you what the experiment died of.” – Ronald Fisher. A well-designed experiment will have controls set throughout, e.g., the adoption of General Data Protection Regulation (GDPR) driven data-protection protocols, and its analysis and sample size decided up front (a sample-size sketch follows this list).

  • To be carried out by Suitably Qualified & Experienced Personnel (SQEP) – the skills required to conduct the lab-bench or desk-based elements of any scientific study need to be developed by training, by hours spent with the tools, or ideally both. All too often one or the other is taken as enough, but both are needed to be truly effective. The term originated in the UK nuclear power industry but is becoming widely adopted: “to be regarded as SQEP, one requires a professional qualification and several years of experience, with recognition that one’s skills and understanding can be relied upon to resolve a technical problem.”

  • Know your biases – we all have them, however actively (or not) we fight them. Some are overt, such as funding bodies and the outcomes they expect to see. Others are less obvious: reporting bias, where researchers whose hypothesis turns out to be wrong simply do not publish; selection bias, where a sample does not represent the population (a small sketch of this also follows the list); measurement bias, where the research design does not match the research question; and more.
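On the experimental-design point, consulting the statistician before the experiment means, among other things, fixing the analysis and the sample size in advance. Here is a minimal sketch (not from the original post; the effect size, significance level, and power are assumed for illustration) of an up-front two-group sample-size calculation using the standard normal approximation.

```python
# Illustrative up-front sample-size calculation for a two-group comparison,
# using the normal-approximation formula:
#   n per group ≈ 2 * ((z_(1 - alpha/2) + z_(1 - beta)) / d)^2
import math
from scipy.stats import norm

effect_size = 0.5   # assumed standardised difference (Cohen's d) we want to detect
alpha = 0.05        # two-sided significance level
power = 0.80        # desired probability of detecting the effect if it is real

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = 2 * ((z_alpha + z_beta) / effect_size) ** 2

print(f"~{math.ceil(n_per_group)} participants per group")  # roughly 63 per group
```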
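And for the selection-bias point, a small illustrative sketch (with made-up numbers) of how sampling only from an easy-to-reach subgroup skews an estimate of a population average.

```python
# Illustrative selection bias: the population mean is ~0, but sampling only from the
# "easy to reach" subgroup (which happens to score higher) biases the estimate upwards.
import numpy as np

rng = np.random.default_rng(seed=3)
easy_to_reach = rng.normal(loc=1.0, scale=1.0, size=2_000)   # e.g. volunteers who respond
hard_to_reach = rng.normal(loc=-1.0, scale=1.0, size=2_000)  # the rest of the population
population = np.concatenate([easy_to_reach, hard_to_reach])

convenience_sample = rng.choice(easy_to_reach, size=200, replace=False)
random_sample = rng.choice(population, size=200, replace=False)

print(f"true population mean:    {population.mean():+.2f}")
print(f"convenience-sample mean: {convenience_sample.mean():+.2f}")
print(f"random-sample mean:      {random_sample.mean():+.2f}")
```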

A considerable amount of the concern and control around creating good science relies on the moral integrity of the researcher and their ability to keep an open mind. “A key issue we have is students in university wanting to get the right answer from their experiments” – Dr David Smith, University of York. Do the work and ask the questions to find and improve scientific knowledge, not to find a publishable answer.


Good luck!


by Edward Mihr


References
