Writer: Naomi Chung
Editor: Savina Hui
In 2018, UCL commissioned an inquiry, chaired by Professor Iyiola Solanke of the University of Leeds, into the university’s complicity in the development of eugenics. The inquiry led to the renaming of the Galton and Pearson lecture theatres, a public apology, and, in 2021, a three-year Eugenics Legacy Education Project (ELEP). UCL initiated the project to confront, and educate the public about, the university’s history with eugenics. Through the project, which engages departments across the university in teaching the topic and displays related materials in the Student Centre, where most students and visitors will have noticed them, UCL has successfully drawn attention to its eugenic past and to the harms of scientific discrimination.
But what of the dangers of biased biometric data collection? The university is now under scrutiny for its investments in companies developing AI surveillance technology, such as Nice Ltd., which sells to Israeli military companies and has therefore been accused by both the media and the student body of complicity in the genocide committed against Palestinians. This raises the question: have we really learnt our lesson?
The inseparability of society and science
There are plenty of resources on the history of eugenics, both at UCL and in the academic literature, so I will not reiterate that history here; I encourage you to read them if this article sparks your interest. One of the less commonly known but no less alarming facts is that both Galton and Pearson started off as mathematicians and statisticians, and that Galton’s Laboratory for National Eugenics at UCL doubled as a founding ground for modern statistics. Galton, passionate about data collection and interpretation, invented the Galton board, which visualises the central limit theorem: the tendency of many small, independent chance events to sum into a normal (bell-curve) distribution. Alongside the Parisian police official Alphonse Bertillon, he also developed systems of criminal identification that record an individual’s anthropometric data (such as fingerprints), and he founded the new discipline of eugenics, the study of selectively breeding ‘desirable’ traits in humans.
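The bell curve that a Galton board produces can be reproduced in a few lines of Python. The sketch below is a hypothetical illustration (not part of Galton’s own work): each ball bounces left or right at every row of pegs, and the sum of those independent bounces determines its final bin, so the bin counts approximate a normal distribution.

```python
import random
from collections import Counter

def galton_board(num_balls=10_000, num_rows=12, seed=42):
    """Simulate a Galton board: each ball bounces left (0) or right (1)
    at each of num_rows pegs; its final bin is the count of right-bounces."""
    rng = random.Random(seed)
    bins = Counter()
    for _ in range(num_balls):
        # Sum of num_rows independent fair coin flips (a binomial draw).
        bin_index = sum(rng.random() < 0.5 for _ in range(num_rows))
        bins[bin_index] += 1
    return bins

if __name__ == "__main__":
    bins = galton_board()
    # Text histogram: the bell shape emerges purely from summing
    # many independent binary bounces (the central limit theorem).
    for k in range(13):
        print(f"{k:2d} | {'#' * (bins[k] // 100)}")
```

Running it shows the familiar bell: the central bins collect far more balls than the extremes, with no normal distribution ever written into the code.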
Although seemingly objective and innocent, Galton’s scientific research incorporated contemporary social attitudes towards race, gender, and intelligence. Moreover, eugenics attributed characteristics such as criminality, intelligence, mental illness (or ‘feeblemindedness’), and poverty to genetics rather than to social conditions. In turn, these pseudo-scientific theories reshaped society to fit them, influencing government policies aimed at population control. The most notorious example was the Holocaust under the Nazi regime, in which ordinary people deemed to carry ‘undesirable’ traits, whether of Jewish or Roma heritage, disability, or homosexuality, were eliminated in the name of eugenics. Dehumanisation becomes easy when human differences are interpreted as data points and individuals are reduced to generalisations that justify scientific racism.
Biometric Identification and Surveillance
The photographic portraits, phrenology head casts, and biometric measurements of Galton’s archive represent a form of surveillance and regulation of perceived social deviance through the collection of biometric data, one that persists today. The average person is now familiar with the normalised daily use of fingerprint and iris scanning, facial and voice recognition, and keystroke recognition. Heat sensors and surveillance cameras are often used in public spaces to monitor population flow, much like the Passive Infrared (PIR) sensors used to monitor occupancy patterns for more efficient energy management on the UCL campus in 2018–19. As Pramod K. Nayar states, surveillance has less to do with an individual’s identity than with identification: it authorises who we claim to be, turning the individual into a set of data that becomes inseparable from the body. Whilst surveillance technologies benefit us by deterring crime and verifying identities, concerns about breaches of privacy and data security are reflected in public sentiment towards the non-consensual use of facial recognition software at the King’s Cross Estate in 2019, and towards the mandatory digital ID proposed in December 2025 and quickly withdrawn a month later.
The undeniable fact is that biometric data and identification have become intrinsic to social systems and infrastructure, whether at the local scale of the UCL campus or the global scale of international border control. Some fear that this progressive increase in control and invasion of privacy is an alarm bell for an Orwellian surveillance state. However, whilst individuals might feel they lack control over their own biometrics, it is our moral duty, as the future generation of policymakers and stakeholders in society, to regulate and safeguard the ethics of data usage. UCL’s past complicity cannot be changed, but the university does not have to repeat its own history.
