Being Nudged by AI

Personal AI (artificial intelligence) assistants are now nearly ubiquitous. Every leading smartphone operating system ships with a personal AI assistant that promises to help you with basic cognitive tasks such as searching, planning, messaging and so on. Using such assistants is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanizing, leads to cognitive degeneration, and robs us of our freedom and autonomy.

Classically, following the work of Alan Turing, human-likeness was the operative standard in definitions of AI: a system could only be held to be intelligent if it could think or act like a human with respect to one or more tasks. Definitions of AI are commonly divided into four main categories: thinking like a human, acting like a human, thinking rationally, and acting rationally.

Humans have long outsourced the performance of cognitive tasks to others. If humanistic outsourcing demands its own ethical framework, then presumably AI outsourcing does too. But what might that ethical framework look like?

In order to think about the ethical significance of such cognitive outsourcing, it helps to draw upon some theoretical models. One thesis, drawn from the situated/embodied cognition school of thought, is that cognition is not a purely brain-based phenomenon. We don't just think inside our heads. Cognition is a distributed phenomenon, not a localized one. We use maps to navigate, notebooks to remember, rulers to measure, calculators to calculate and so on. We can think about these interactions with cognitive artifacts at the system level (i.e., our brains/bodies plus the artifact) and the personal level (i.e., how we interact with the artifact):

  • At the system level, cognitive performance is often enhanced by the artifact: me-plus-pen-and-paper is better at arithmetic than me-without-pen-and-paper. But the system-level enhancement is achieved by changing the cognitive task performed at the personal level: instead of imagining numbers in my head and adding and subtracting them using some mentally represented algorithm, I visually represent the numbers on a page, in a format that facilitates the easy application of an algorithm.
  • On the other hand, when we start using a new artifact to assist with the performance of a cognitive task, we shouldn't think of this simply as a form of outsourcing. The artifact may share (or take over) the cognitive burden, but in doing so, it will also change the cognitive environment in which we operate. It will create new cognitive tasks for us to perform and open up new modes or styles of cognition.

One thing that is missing from this debate is any discussion of the positive role that AI assistance could play in addressing cognitive deficits induced by resource scarcity. When a resource is scarce, you tend to focus all your cognitive energies on it, leaving less bandwidth for everything else. Cognitive outsourcing through AI could therefore redress scarcity-induced imbalances within one's larger cognitive ecology. This serves as a counterbalance to some concerns about degeneration.

Moreover, autonomy and responsibility should also be taken into account in any discussion of the role of AI assistance. It is commonly believed that personal happiness and self-fulfillment are best served when one pursues goals of one's own choosing; it is also commonly believed that the achievement and meaning derived from personal goals depend on one's being responsible for what one does. If AI assistance threatened autonomy and responsibility, it could have an important knock-on effect on our personal happiness and fulfillment.

There is some worry that AI would gradually ‘nudge’ a person into a set of preferences and beliefs about the world that are not of his or her own making. And there might be something different about the kinds of nudging made possible through AI assistants: they can constantly and dynamically update an individual’s choice architecture to make it as personally appealing as possible, learning from past behavior and preferences, and so make it much more likely that the individual will select the choice architect’s preferred option.
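The dynamic nudging described above can be sketched in code. The following is a minimal, hypothetical illustration (the class, option names, and update rule are all assumptions, not any real assistant's implementation): the assistant reinforces options the user has chosen before, and the "choice architect" can inject a hidden bias toward a preferred option, so the presented ranking blends the user's learned preferences with the architect's agenda.

```python
class NudgingAssistant:
    """Hypothetical sketch of a dynamically updated choice architecture."""

    def __init__(self, learning_rate=0.3, sponsor_bias=None):
        self.weights = {}                 # learned per-option preference scores
        self.lr = learning_rate
        # The "choice architect" can quietly favor a preferred option.
        self.sponsor_bias = sponsor_bias or {}

    def present(self, options):
        """Order options by learned preference plus any architect bias."""
        def score(opt):
            return self.weights.get(opt, 0.0) + self.sponsor_bias.get(opt, 0.0)
        return sorted(options, key=score, reverse=True)

    def record_choice(self, chosen, options):
        """Update weights: reinforce the chosen option, decay the rest."""
        for opt in options:
            target = 1.0 if opt == chosen else 0.0
            old = self.weights.get(opt, 0.0)
            self.weights[opt] = old + self.lr * (target - old)


assistant = NudgingAssistant(sponsor_bias={"coffee shop B": 0.2})
menu = ["coffee shop A", "coffee shop B", "coffee shop C"]
for _ in range(5):                        # the user repeatedly picks A...
    assistant.record_choice("coffee shop A", menu)
ranking = assistant.present(menu)         # ...so A now tops the list,
                                          # with the sponsored B placed second
```

The point of the sketch is that each interaction reshapes the next one: after enough recorded choices, the ranking reflects both what the user did and what the architect wants, and the user never sees the bias term.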

The primary value of some interpersonal actions comes from immediate, conscious engagement in the performance of that action. To the extent that AI assistants replace that immediate, conscious engagement, they should be avoided. Nevertheless, in many other cases, the value of interpersonal actions lies in their content and effect; in these cases, the use of AI assistants may be beneficial, provided they are not used in a deceptive/misleading way. This is, of course, very generic.

The intention would be for these principles to be borne in mind by users of the technology as they try to make judicious use of it in their lives. Yet these principles could also be of use to designers. If they wish to avoid negatively impacting their users' lives, then considering the effect of their technologies on cognitive capacity, autonomy, and interpersonal virtue would be important.

More guidance on which types of activity derive their value from immediate conscious engagement, and on which situations or abilities need to be kept resilient, would always be desirable.

The original Data Driven Investor article contributed by Ayse Kok is here. The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.