When a machine knows you better than you know yourself…
Written By: Jure Majnik
Art by: Lisa
Imagine if we could identify future criminals in order to prevent crimes before they even have a chance to manifest. Suppose you could track your partner at any given time to verify that they are being honest and loyal to you. Picture a world where a higher authority could also possess such abilities and use them to manipulate our society. Recent advances in the field of ‘machine learning’ are opening doors to all of these scenarios, thus bridging the gap between science fiction and technological reality.
The world’s top universities and businesses are increasingly focusing their research on the rapidly developing field of machine learning because of its far-reaching implications. Even though headlines about artificial intelligence (AI) are everywhere, the buzzwords surrounding it can be hard to pin down. Simply put, machine learning is the ability of a computer to improve at a task by learning from data and experience, rather than by following rules explicitly written by programmers. Leading examples include Google DeepMind’s simulated agents, which taught themselves to walk, and a Stanford University programme that claimed to predict a person’s sexual orientation from facial photographs. Despite the fascinating and exciting benefits such technology can bring, we may need to take a step back and properly evaluate its potential implications.
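To make that definition concrete, here is a minimal sketch in Python of what ‘learning from experience’ means in practice. Everything in it (the example data, the hidden y = 2x rule, the learning rate) is invented for illustration: the point is only that the program is never given the rule, just examples, and adjusts its own parameter to fit them.

```python
# A minimal illustration of 'learning from experience': the program is never
# told the rule y = 2x; it nudges its own parameter to fit example data.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # (input, output)

weight = 0.0          # the model's single learnable parameter
learning_rate = 0.01  # how strongly each mistake nudges the parameter

for epoch in range(200):
    for x, y_true in examples:
        y_pred = weight * x                   # the model's current guess
        error = y_pred - y_true               # how wrong the guess was
        weight -= learning_rate * error * x   # gradient step: learn from the error

print(f"learned weight: {weight:.3f}")  # converges towards 2.0
```

Scale the same loop up to millions of parameters and examples, and you have the essence of the systems discussed below.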
Target’s pregnancy-prediction mechanism is one of the most controversial results of ‘big data’ analysis to date. The system was developed to identify the behavioural patterns of pregnant women in order to better ‘target’ their specific needs and desires. The algorithm caused an uproar after the father of a high-schooler complained that the company was encouraging his daughter to become pregnant by sending her coupons for baby clothes and cribs. As it turned out, the man’s daughter was indeed pregnant, just as the algorithm had inferred. Such examples might seem extreme, yet we are exposed to similar instances daily. Google’s and Facebook’s targeted advertising constantly studies our online behaviour and uses it to promote products we may find appealing. The ads can be disturbingly specific to our personal interests, revealing just how much sensitive information these systems hold.
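Target’s actual model is proprietary, so the following Python sketch is purely hypothetical: it only illustrates how individually innocuous purchases, each weakly correlated with a condition, can add up to a confident prediction. The products, weights and threshold here are all invented.

```python
# Hypothetical illustration only: real retail models are proprietary and far
# more sophisticated. Each weight says how strongly a purchase correlates
# with the pattern being detected; no single item is conclusive on its own.
pattern_weights = {
    "unscented lotion": 0.3,
    "mineral supplements": 0.25,
    "cotton balls (bulk)": 0.2,
    "washcloths": 0.15,
}

def pattern_score(purchases):
    """Sum the evidence contributed by each purchase in a shopper's history."""
    return sum(pattern_weights.get(item, 0.0) for item in purchases)

shopper = ["unscented lotion", "mineral supplements", "washcloths", "bread"]
score = pattern_score(shopper)
print(f"score: {score:.2f} -> flagged" if score > 0.6 else f"score: {score:.2f}")
```

No single purchase gives the shopper away; it is the accumulation of weak signals that does, which is exactly what makes this kind of inference so hard to notice or avoid.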
The optimistic slogan of Google’s machine learning project, ‘solve intelligence, use it to make the world a better place’, seems to overlook the serious negative consequences and the potential for abuse of such programmes. A group of researchers from the University of Washington recently showed how they could track a user’s location by exploiting so-called demand-side platforms: interfaces that let an advertiser specify precise locations and other criteria to better target a customer. One of the biggest concerns is the low cost of such tracking, about $1,000, which puts it within reach of a wide range of potential (ab)users, from a single angry ex-partner to large companies or even states.
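The researchers’ real campaigns ran on a commercial ad platform; the Python sketch below is a self-contained simulation of the underlying idea only, with every name, zone and device ID invented. Ads targeted at a single device across a grid of small zones turn each served impression into a location report.

```python
# Simulated sketch of the tracking idea described above; no real ad platform
# is used. A demand-side platform lets an advertiser target a specific device
# ID within a small area, then reports where each impression was served.
import random

GRID = ["cafe", "gym", "office", "home"]   # hypothetical hyperlocal ad zones
TARGET_DEVICE = "ad-id-1234"               # invented mobile advertising ID

def ad_request():
    """Stand-in for a phone requesting an ad: (device ID, current location)."""
    return TARGET_DEVICE, random.choice(GRID)

def track(device_id, zones, opportunities=10):
    """Buy one ad per zone, targeted at device_id alone. Each time a targeted
    ad is actually served, the impression report reveals which zone the
    device was in -- turning an advertising platform into a tracker."""
    sightings = []
    for _ in range(opportunities):
        requester, location = ad_request()
        if requester == device_id and location in zones:
            sightings.append(location)   # served impression leaks the location
    return sightings

print(track(TARGET_DEVICE, GRID))
```

The attacker never touches the victim’s phone: the ad ecosystem itself, doing exactly what it was built to do, leaks the trail.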
While a furious ex might pose a problem at the level of the individual, other scenarios open up the possibility of a truly Orwellian dystopia. In his classic novel ‘Nineteen Eighty-Four’, Orwell introduces the ‘telescreen’: a machine that serves as a media device while constantly recording its surroundings. We can draw many parallels between telescreens and our modern-day mobile devices, which even come with the added ‘benefit’ of location tracking. Anyone with access to the data these devices produce could let a machine learn our behavioural patterns from our movements and, much as shopping habits were used to infer pregnancy, use them to expose far more personal traits.
Fortunately, the data we produce when using our devices in the West is rarely accessible to entities other than the private firms operating the platforms we use, and these corporations seldom share it with governmental institutions. Collaboration of this nature is, however, currently being tested in China, where plans exist to implement a governmental ‘social credit’ system that would rank citizens based on their behavioural patterns. This could lead to a nationwide reward system whereby ‘good citizens’ are rewarded for their loyalty whilst ‘bad citizens’ are penalised. Such ranking systems could produce even more discriminatory outcomes when combined with facial recognition mechanisms claimed to infer an individual’s social or ethnic background.
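No public specification of such a system exists, so the following Python sketch is deliberately crude and entirely hypothetical; it only illustrates the mechanic this paragraph describes: reducing behaviour to a single score that gates rewards and penalties. All signals, weights and thresholds are invented.

```python
# Entirely hypothetical sketch: no real system's rules are public. It only
# shows the mechanic of collapsing behaviour into one score that gates
# rewards and penalties -- and how easily that mechanic can discriminate.
SIGNAL_WEIGHTS = {
    "paid_bills_on_time": +10,
    "volunteered": +5,
    "jaywalked": -5,
    "criticised_authority": -20,   # the discriminatory potential in one line
}

def social_score(behaviour_log):
    base = 100
    return base + sum(SIGNAL_WEIGHTS.get(event, 0) for event in behaviour_log)

def treatment(score):
    if score >= 110:
        return "reward: fast-track services"
    if score <= 90:
        return "penalty: restricted travel"
    return "neutral"

log = ["paid_bills_on_time", "jaywalked", "criticised_authority"]
s = social_score(log)
print(s, treatment(s))  # 85 -> penalty
```

Whoever chooses the weights chooses what counts as a ‘good citizen’, which is precisely where the danger lies.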
In another passage from ‘Nineteen Eighty-Four’, Orwell writes that ‘with the technical advance which made it possible to receive and transmit simultaneously on the same instrument, private life came to an end.’ Arguably, his prophecy has not yet come to pass, largely because of how much we value our privacy and the laws we have enacted to protect it. With exponential advances in the field of machine learning, we should strengthen those privacy-protection policies further, as they may well be the last thing standing between us and a dystopian ‘end of private life’. Taking the implications of recent technological developments into account, we should embrace the opportunities they offer while striving for political solutions to the problems they create.