Big Brother Is Watching, And Counting: Law and Order In The Age Of Big Data


Technological tools aiming to equalise sentencing and policing decisions are having the opposite effect. Just ask the 16-year-old kid stopped and searched 23 times in 10 months. Ensuring democracy in data and technological governance is urgent, writes Georgia Reid.

Technology will leave our democracy threadbare. In an era which gives primacy to Big Data, with Australian politicians touting data as a national resource, we should be sceptical of the democracy of technology.

Data is collected, aggregated and categorised with minimal governance; the messiness of data is extolled as a virtue, and outliers are dismissed as abnormal. Data is cleaned and sold; the rough edges are shucked like shells from an oyster. It seems to be forgotten that these rough edges represent people who are often marginalised and vulnerable in society, just as they are in technology.

People from disadvantaged socio-economic backgrounds often can’t afford to generate much data. The data collected from people with disabilities using assistive technology sits perpetually on the margins and is cut from the bell curve. Women and men in abusive relationships create abnormal data. Their experiences are cleaned and removed. In data we see a frightening tyranny of the majority and an erasure of lived experience. With data increasingly used as the input for Artificial Intelligence (‘AI’), machine learning, policy decisions, and judicial decisions, democracy of data must be ensured.

Technology is not impartial, though many people believe it to be. We quasi-biblically create technology in our image. Its algorithms, its 0s and 1s, perpetuate and promulgate division and bias. Microsoft’s AI-powered chatbot, Tay, spouted racist, antisemitic and sexist propaganda in its first day of life. Facebook’s translation software converted “good morning”, written in Arabic, to “kill them”, resulting in the arrest and interrogation of a salutatory Palestinian farmer by Israeli authorities.

In simulations, self-driving cars begin to recognise wheelchair users as outlying data, and systematically attempt to clean the data by deliberately driving into them. Technology is not impartial, and yet is increasingly integrated into executive, legislative and judicial functions.

We should take heed, and develop scepticism about the impartiality of technology, following discoveries of racial bias in COMPAS, the risk-assessment software used by American judges in sentencing. Despite never explicitly gathering data about an offender’s ethnic background, COMPAS was found to rate Black, Asian and minority ethnic (BAME) offenders as a recidivism risk at twice the rate of white offenders with commensurate charges and criminal histories.

In the case of COMPAS, disadvantage is equated with danger, with people from disadvantaged socio-economic backgrounds more likely to receive a harsher sentence. This raises the question: how just is a judicial system that relies on a computer program to generate sentencing guidelines? Technology is incapable of discretion, and arguably incapable of fairness. It can never produce individualised justice.
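How can a program that never sees race produce racially skewed scores? Factors such as arrest counts and postcodes act as proxies for it. The sketch below is a deliberately simplified illustration under assumed numbers; the features, model and figures are hypothetical, not COMPAS’s actual inputs or methodology. Two groups reoffend at exactly the same rate, but one group’s recorded arrests are inflated by heavier policing, so a model trained only on those records scores that group as riskier and falsely flags its non-reoffenders far more often.

```python
# Illustrative sketch only: hypothetical features and numbers, not COMPAS's real model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)        # 0 = majority, 1 = minority (never shown to the model)
reoffend = rng.binomial(1, 0.2, size=n)   # assumed identical true reoffending rate in both groups

# Assumption: recorded arrests reflect policing intensity as well as behaviour,
# so the minority group's counts are inflated for the same underlying conduct.
policing = np.where(group == 1, 2.0, 1.0)
prior_arrests = rng.poisson(lam=(0.5 + reoffend) * policing)

# Train a risk model on arrests alone and flag "high risk" above a threshold.
X = prior_arrests.reshape(-1, 1)
model = LogisticRegression().fit(X, reoffend)
score = model.predict_proba(X)[:, 1]
flagged = score > 0.5

for g, label in [(0, "majority"), (1, "minority")]:
    non_reoffenders = (group == g) & (reoffend == 0)
    print(f"{label}: mean score {score[group == g].mean():.3f}, "
          f"false positive rate {flagged[non_reoffenders].mean():.3f}")
# Despite identical true rates, the minority group receives higher scores and more
# false "high risk" flags, because the arrest count carries group membership in as a proxy.
```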

More frightening still is the placement of COMPAS within its context. Police forces in the US rely on Big Data and AI to allocate patrolling resources. PredPol, an AI program, distributes police officers to geographic locations with a high historic rate of crime. If technology were impartial, this system might well be efficient. However, every arrest and report is added to the same database PredPol draws from. Ethnic minority neighbourhoods are over-policed, producing a high arrest rate; PredPol feeds off this data and sends more resources into those areas, which results in yet more arrests, and the cycle continues. Technology creates a feedback loop that drives the mass incarceration of people of colour. At their trials, COMPAS rates them as a recidivism risk, the prison population grows, and people of colour remain grossly over-represented in it, serving disproportionately long sentences.
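The loop is easy to reproduce in miniature. The toy simulation below uses assumed numbers and a simple hot-spot rule, not PredPol’s actual algorithm: two neighbourhoods with identical underlying crime, a modest historical skew in the records, and extra patrols sent to wherever recorded crime is highest. Because extra patrols record more of the crime that was always there, the flagged neighbourhood keeps generating more records and keeps being flagged.

```python
# Toy sketch of the feedback loop: illustrative assumptions, not PredPol's actual algorithm.
import numpy as np

rng = np.random.default_rng(1)
true_crime = np.array([50.0, 50.0])   # two neighbourhoods, identical underlying crime
recorded = np.array([55.0, 45.0])     # records already skewed by past over-policing

base_detection = 0.2    # fraction of crime recorded under routine patrols
boost_detection = 0.4   # fraction recorded in the "hot spot" receiving extra patrols

for week in range(52):
    hot_spot = int(np.argmax(recorded))        # the model flags the area with the most records
    detection = np.full(2, base_detection)
    detection[hot_spot] = boost_detection      # extra patrols record more incidents there
    recorded += rng.poisson(true_crime * detection)

print("share of recorded crime after a year:", np.round(recorded / recorded.sum(), 2))
# The initial 55/45 gap widens week after week: the area flagged first keeps generating
# more records, which keeps it flagged, even though true crime is identical in both.
```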

This reliance on data, to the detriment of democracy and justice, is not confined to the US. In Australia, where Indigenous Australians are the most incarcerated people on earth, a reliance on Big Data and AI threatens to increase these already abhorrent remand rates. In NSW, the Suspect Target Management Plan (‘STMP’) uses data in a similar way to COMPAS, with subjective data masquerading as objective, producing a list of individual suspects that is more than 50 per cent Indigenous.

The use of this risk-assessment technology resulted in a 16-year-old with a history of minor graffiti offences being stopped and searched 23 times in 10 months. Suspects generated by the program are as young as 10 years old; one 14-year-old was stopped and searched 28 times, and was once found with a small amount of cannabis and charged with possession.

The ubiquity of technology and data in the judiciary, and increasingly in the legislature and executive, necessitates good governance and democracy in data. Aberrant data must be valued and retained in order to avoid the perpetuation of bias, or worse yet, the deliberate targeting of real-life outliers by machines. The integration of AI into policing and sentencing decisions should be questioned, and we should call for greater transparency to ensure that the 10-year-old suspects on the STMP, and offenders of colour in Australia and abroad, are afforded individualised justice.

Georgia Reid is a postgraduate law student at the University of Sydney. She works alongside the Centre for Inclusive Design, and Domestic Violence Service Management NSW.
