George Orwell’s dystopian vision of the world in Nineteen Eighty-Four “could come to pass in 2024” if artificial intelligence is not better regulated, the President of Microsoft has warned.

A new documentary shines light on the dark side of artificial intelligence in the modern world and calls on authorities to enact stricter laws to govern AI.

Artificial intelligence could lead to an Orwellian future if laws to protect the public aren’t enacted soon, according to Microsoft President Brad Smith.

Smith made the comments to the BBC news program Panorama on May 26, during an episode focused on the potential dangers of artificial intelligence and the race between the U.S. and China to develop the technology.

“I’m constantly reminded of George Orwell’s lessons in his book 1984,” Smith said.

“The fundamental story was about a government that could see everything that everyone did and hear everything that everyone said all the time.

“Well, that didn’t come to pass in 1984, but if we’re not careful, that could come to pass in 2024.”

The programme explores China’s increasing use of AI to monitor its citizens.

Critics fear the state’s dominance in the area could threaten democracy.

“If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up,” Smith said.

The warning comes about a month after the European Union released draft regulations attempting to set limits on how AI can be used.

There are few similar efforts in the United States or Australia, where legislation has largely focused on limiting regulation and promoting AI for ‘national security purposes’.

Could Brad Smith be right about a future Orwellian world?

Or could this world already be here in many ways?

Let’s explore just how real artificial intelligence is being used as a tool of great transformation.

GOING BEYOND ORWELL

How AI is used isn’t just a technical issue — it’s just as much a political and moral question. And those values vary widely from country to country.

“Facial recognition is an extraordinarily powerful tool in some ways to do good things, but if you want to surveil everyone on a street, if you want to see everyone who shows up at a demonstration, you can put AI to work,” Smith told the BBC.

He added: “And we’re seeing that in certain parts of the world.”

China has already started using artificial intelligence technology in both mundane and alarming ways. Facial recognition, for example, is used in some cities instead of tickets on buses and trains.

But this also means that the government has access to copious data on citizens’ movements and interactions, the BBC’s Panorama found.

The U.S.-based advocacy group IPVM, which focuses on video surveillance ethics, has found documents suggesting plans in China to develop a system called ‘One Person, One File’, which would gather each resident’s activities, relationships and political beliefs in a government file.

“I don’t think that Orwell would ever have imagined that a government would be capable of this kind of analysis,” Conor Healy, director of IPVM, told the BBC.

He may be correct.

Orwell’s famous novel, Nineteen Eighty-Four, described a society in which the government watches citizens through telescreens, even at home.

But Orwell did not imagine the capabilities that artificial intelligence would add to surveillance — in his novel, characters find ways to avoid the video surveillance, only to be turned in by fellow citizens. 

In the autonomous region of Xinjiang, where the Uyghur minority has accused the Chinese government of torture and cultural genocide, AI is being used to track people and even to assess their guilt when they are arrested and interrogated, the BBC found.

It’s an example of the technology facilitating widespread human-rights abuse: The Council on Foreign Relations estimates that a million Uyghurs have been forcibly detained in ‘re-education camps’ since 2017, typically without any criminal charges or legal avenues to escape. 

The tools in place are already Orwellian in nature — perhaps beyond Orwellian.

A sophisticated network of algorithms controls this dystopia, powering the modern fascist state.

A TOOL WITH A DARK SIDE

Artificial intelligence is an ill-defined term, but it generally refers to machines that can learn or solve problems automatically, without being directed by a human operator.

Many AI programs today rely on machine learning, a suite of computational methods used to recognize patterns in large amounts of data and then apply those lessons to the next round of data, theoretically becoming more and more accurate with each pass.

This is an extremely powerful approach that has been applied to everything from mathematical theory to facial recognition, but it can be dangerous when applied to social data, experts argue.
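
For readers who want a concrete picture of the ‘learning’ loop described above, here is a minimal, purely illustrative sketch in Python. It assumes the open-source scikit-learn library and uses synthetic data rather than any real surveillance dataset: a model is fitted to labelled examples, tested on examples it has never seen, and generally becomes more accurate as it is given more data.

# A minimal sketch of the pattern-recognition loop described above.
# The dataset is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate a toy dataset of 5,000 labelled examples with 20 features each.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train on progressively larger slices of the data and watch accuracy change.
for n in (100, 500, 1000, 4000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>4} examples -> test accuracy {accuracy:.3f}")

The same basic loop, scaled up to millions of images or records, is what powers the facial recognition systems discussed above.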

Data on humans comes preloaded with human biases.

Machine learning bias occurs when an algorithm produces results that are systematically prejudiced for or against an individual or group on the basis of characteristics such as race, age, gender, disability or ethnicity.

Chinese start-ups have already built algorithms that the government uses to track and suppress members of Muslim minority groups. It is machine learning that drives this.

There are concerns that as China’s ‘Social Credit System’ slowly reaches Australia, soon to be tied to ‘unacceptable behaviours’ linked to COVID guidelines, this type of AI will drive new-age suppression, such as vaccine apartheid, immunity passports and more.

China’s ambition is to become the world leader in AI by 2030, and many consider its capabilities to be far beyond those of the EU. Bias can never be completely avoided, but it can be addressed.

The U.S. federal government’s interest in artificial intelligence, by contrast, has largely focused on encouraging the development of AI for national security and military purposes.

The Pentagon now spends more than $1 billion a year on AI contracts, and military and national security applications of machine learning are inevitable, given China’s enthusiasm for achieving AI supremacy.

Already, there have been cases of facial recognition software leading to false arrests. In June 2020, a Black man in Detroit was arrested and held for 30 hours in detention because an algorithm falsely identified him as a suspect in a shoplifting case. 

A 2019 study by the National Institute of Standards and Technology found that facial recognition software returned more false matches for Black and Asian individuals than for white individuals, meaning the technology is likely to deepen disparities in policing for people of colour.
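
To make the idea of a ‘false match rate’ concrete, here is a minimal, hypothetical Python sketch of the kind of audit such studies perform: counting how often people who are not genuine matches are wrongly flagged, broken down by demographic group. The data, labels and group names below are invented for illustration only.

from collections import defaultdict

def false_match_rate_by_group(y_true, y_pred, groups):
    # Fraction of genuinely non-matching cases that were wrongly flagged,
    # calculated separately for each demographic group.
    false_matches = defaultdict(int)
    negatives = defaultdict(int)
    for truth, prediction, group in zip(y_true, y_pred, groups):
        if truth == 0:                  # this person is not actually a match
            negatives[group] += 1
            if prediction == 1:         # ...but the system flagged them anyway
                false_matches[group] += 1
    return {g: false_matches[g] / negatives[g] for g in negatives}

# Invented audit data: 1 = flagged as a match, 0 = not flagged.
y_true = [0, 0, 0, 0, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 1]
groups = ["group_a", "group_a", "group_a",
          "group_b", "group_b", "group_b", "group_b", "group_b"]

print(false_match_rate_by_group(y_true, y_pred, groups))
# roughly {'group_a': 0.33, 'group_b': 0.5} -- a systematic disparity

A persistent gap between groups in those rates is exactly the kind of systematic skew that can translate into wrongful arrests like the Detroit case above.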

It is a modern-day arms race, unfolding as the world suffers through a semiconductor shortage.

Machine learning is incompatible with a free society and must be stopped at all costs.

Outside of public opposition, reducing bias must become a top priority within both academia and the AI industry. Is anyone in the community cognizant of this issue, and is anyone trying to address it?

PUSHING BACK

The EU’s potential regulation of AI would ban systems that attempt to circumvent users’ free will or systems that enable any kind of ‘social scoring’ by government.

This could be a huge milestone for countries like Australia if successfully passed.

Applications involving powerful artificial intelligence would be classified as ‘high risk’ and would have to meet requirements of transparency, security and oversight before being put on the market.

This would include things like AI for critical infrastructure, law enforcement, border control and biometric identification, such as face or voice identification systems.

Other systems, such as customer-service chatbots or AI-enabled video games, would be considered ‘low risk’ and not subject to strict scrutiny.

This is a step in the right direction.

In addition, people power can potentially hinder the drive to develop these technologies, if there is enough backlash to sway marketplace momentum.

In 2018, Google pulled out of Project Maven, a Pentagon contract to automatically analyse video taken by military aircraft and drones.

The company argued that the goal was only to flag objects for human review, but critics feared the technology could be used to automatically target people and places for drone strikes.

Whistleblowers within Google brought the project to light, ultimately leading to public pressure strong enough that the company called off the effort.

In the meantime, efforts to rein in AI are being led by state and local governments in the U.S.

Washington state’s largest county, King County, just banned government use of facial recognition software. It’s the first county in the U.S. to do so, though the city of San Francisco made the same move in 2019, followed by a handful of other cities. 

Sadly, Australian authorities don’t have the backbone to enact protections like this for citizens at home, but these are positive stories nonetheless.

The best we can do is understand these technologies and protect ourselves, while leading by moral example.

TOTT News has provided guides on how to prevent AI from recognising your face in photos and how to stop your ISP from tracking you, for example. Knowledge is power.

If we don’t enact, now, the safeguards that will protect the public in the future, we will indeed find ourselves in a world where the technology is racing ahead of us.

Share this piece with your friends/family and check out the presentation for yourself below to see the impending transhuman agenda at work.


The full documentary is available on YouTube:
https://youtube.com/watch?v=CWJKH67SWPw

RELATED CONTENT

Microsoft president: Orwell’s 1984 could happen in 2024 | BBC News

Alibaba facial recognition tech specifically picks out Uighur minority – report | Reuters

Proposal for a Regulation laying down harmonised rules on artificial intelligence | European Commission

Australians have low trust in AI, want it regulated

Transhumanism: When sci-fi becomes reality
