How to prepare for AI

The survival guide to immortality when artificial superintelligence forces us off the balance beam.

Avantika Mehra
Towards Data Science

--

The advent of AI

Artificial Intelligence isn’t arriving; it’s here. Articles and books have anthropomorphized AI, catalogued its failure modes, and argued that humanity will struggle to navigate the intricate moral and ethical issues the new paradigm raises. Some have been met with acclaim, for anticipating social issues humanity has yet to face; others with criticism, for unsolicited and unsupported fear-mongering.

These references largely concern artificial general intelligence (AGI): a machine able to carry out higher-order mental functions across a wide variety of contexts, as humans do. Experts hold varying standpoints on when we can expect to see AGI in society, but there is broad consensus that it is not a question of if, but when.

Meanwhile, artificial narrow intelligence (ANI), meaning a machine or algorithm specialised for a single, narrow task, is already here, and it’s pretty good. It can do some tasks as well as humans (like distinguishing a puppy from a muffin) and some tasks better than humans (like playing chess).

Why is this significant?

  • Cognitive implications: It takes each human years to develop the perceptual, sensory, and cognitive skills needed to understand the concept of a muffin, recognise a muffin from multiple angles, and retrieve the word ‘muffin’ when shown one. Once an image classifier is built, however, the algorithm can be deployed to thousands of machines with a single software update: what takes humans years to acquire, machines can replicate in minutes (see the sketch after this list).
  • Economic implications: Unemployment is cited as a significant consequence of ANI across a range of industries, including entertainment and news media, warfare, law, sports, transportation, and financial markets. In recent years, AI researchers, entrepreneurs, and policy-makers have posited multiple solutions to combat the externalities of AI development, from Universal Basic Income to statewide regulation.
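To make that concrete, here is a minimal sketch of the deployment side, assuming Python with PyTorch and torchvision installed; the image path is a placeholder. Loading a pretrained classifier takes seconds, because all the ‘learning’ is already baked into the downloaded weights.

    import torch
    from PIL import Image
    from torchvision import models
    from torchvision.models import ResNet18_Weights

    # Downloading pretrained weights takes seconds; the "years of learning"
    # live in the weights and are shared by every machine that loads them.
    weights = ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights)
    model.eval()

    preprocess = weights.transforms()  # the preprocessing the model was trained with

    # "muffin.jpg" is a placeholder path; substitute any local image.
    image = preprocess(Image.open("muffin.jpg")).unsqueeze(0)
    with torch.no_grad():
        probabilities = model(image).softmax(dim=1)
    print(weights.meta["categories"][probabilities.argmax().item()])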

Is AGI the most intelligent AI?

No — introducing Superintelligence.

I. J. Good first proposed the idea of an intelligence explosion, later popularised by Ray Kurzweil: a point where AIs become intelligent enough to improve themselves. A software-based AGI could enter a recursive loop of self-improvement cycles, leading to an intelligence explosion that surpasses human intelligence altogether: a Superintelligence (ASI).
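The dynamic behind this claim can be caricatured in a few lines of code. The following is a toy sketch, not a prediction: the baseline capability and feedback rate are arbitrary assumed numbers, chosen only to show how improvement that feeds back into itself compounds far faster than steady, linear progress.

    # Toy model of recursive self-improvement (illustrative assumptions only):
    # each cycle, the system's gain is proportional to its current capability,
    # so progress compounds on itself instead of accumulating linearly.
    capability = 1.0   # hypothetical baseline: "human-level" = 1.0
    feedback = 0.2     # assumed fraction of capability converted into improvement

    for cycle in range(1, 11):
        capability *= 1 + feedback * capability  # better systems improve faster
        print(f"cycle {cycle:2d}: capability = {capability:,.1f}")

In this toy run, capability crawls for the first few cycles and then rockets upward: the signature of a feedback loop rather than steady progress.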


How to prepare for AI

This article provides a guide to the strategies you can deploy to best equip yourself for the AI age.

Understand the risks associated with artificial narrow intelligence, artificial general intelligence, and artificial superintelligence. This includes:

  1. Understanding the scope of current machine-learning algorithms, and how they are transforming the economy and disrupting industries.
  2. Being discerning about humanly unintelligible correlations generated by unsupervised learning algorithms.
  3. Understanding how algorithmic bias can occur:
     • Bias can arise from unrepresentative training data. For example, a training dataset with too few dark-skinned faces can produce a facial-recognition system that fails to recognise ethnic minorities.
     • Bias can also be inherited from prejudice embedded in the data itself. In one study, a predictive NLP model trained on a corpus of human writing completed the analogy ‘man is to doctor as woman is to…’ with ‘nurse’. (A sketch of this analogy test follows the list.)
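That analogy test can be reproduced, approximately, against publicly available word embeddings. The sketch below is illustrative rather than the cited study’s exact method: it assumes the gensim library and its downloadable word2vec-google-news-300 vectors, and the completions you see depend on the corpus the embeddings were trained on.

    import gensim.downloader as api

    # Load pretrained word2vec embeddings (a large one-time download).
    vectors = api.load("word2vec-google-news-300")

    # Vector arithmetic for "man is to doctor as woman is to ...":
    # doctor - man + woman, then list the nearest words to the result.
    for word, score in vectors.most_similar(
        positive=["doctor", "woman"], negative=["man"], topn=3
    ):
        print(f"{word}: {score:.3f}")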

Create a dialogue around the ethical and moral implications of new technologies as they are released

It is the responsibility of individuals, entrepreneurs, companies, and policy-makers to foster productive, critical debate around new technological paradigms. This would build public awareness of the scope of current technologies, helping to prevent a Turry-like case from ensuing: a thought experiment, from Tim Urban’s ‘The AI Revolution’ essays, positing the extinction of humanity by what was originally programmed to be a writing machine.

At the level of company-owned ANIs, an individual can best prepare by being aware of how their data is collected (eg: cookies when you visit websites), used (eg: targeted advertising), stored (eg: Snapchat stores data on its servers), and biased at various levels (eg: biased training data leaving a service unable to handle edge cases). A quick way to inspect what a website stores about you is sketched below.
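As a small, concrete starting point, this sketch lists the cookies a site asks your client to store. It assumes Python with the requests library, and the URL is a placeholder; your browser’s developer tools expose the same information interactively.

    import requests

    # Fetch a page and list the cookies set in the response.
    # "https://example.com" is a placeholder; substitute a site you actually use.
    response = requests.get("https://example.com")
    for cookie in response.cookies:
        print(f"{cookie.name} = {cookie.value!r} (domain: {cookie.domain})")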

At the level of ASI, humans can be construed as just another species on the balance beam of life, with a tripwire approaching; the advent of ASI could propel us into either extinction or immortality. Nick Bostrom, an acclaimed philosopher of AI, proposed the underlying balance-beam model: all species pop into existence, teeter on the balance beam, and then fall into extinction. Bostrom calls extinction an attractor state.

In addition to extinction, there is a second attractor state: immortality. So far, every species has eventually fallen into extinction, but it is possible for a species to fall off the other side of the balance beam, into immortality.

A tripwire is a threshold in the existence of a species: a massive, transformative event, like a worldwide pandemic or an asteroid strike. Or artificial superintelligence.

Philosophers believe that ASI could be humanity’s tripwire, spiraling us into either extinction or immortality. At the largest scale, then, whether AI will be positive for humanity is inextricably intertwined with the question of our very survival.

An individual’s greatest asset through this period of uncertainty is knowledge. That means understanding the major theories of AI, being aware of the challenges and risks associated with its development, and reasoning about the outcomes that could result from combining innovation across fast-growing fields such as AI, biotechnology, and nanotechnology.

As innovators, we should be wary of capitalist incentive structures that reward rapid investment in fast-growing technologies at humanity’s expense. With high-impact technologies such as AGI, we should favour stable, staged development, testing at every stage, over fast-moving disruption, so as to avoid an uncontrolled intelligence explosion.

As individuals, and as humans, it is within our rights to lobby for policies that create ‘checkpoint’ levels for companies and research institutions pursuing AGI, with fiscal benefits for those that are transparent about their innovation, research, and development practices, thereby incentivising the regulated development of AI and calibrating the pace of progress at a global scale.
