Artificial Intelligence & You
How should we think about artificial intelligence and its implications for our work and leisure? There are many articles on artificial intelligence, its potential impacts on jobs, and the ethics of its applications. These are important topics, but I want to focus on some less discussed aspects, which I covered in a recent presentation.
A helpful quote when thinking about the impacts of technologies comes from the historian Arthur M. Schlesinger:
“Science and technology revolutionize our lives, but memory, tradition and myth frame our response.”
It is easy to get overwhelmed, or overawed, by recent progress in artificial intelligence. Developments are moving rapidly, and there are some impressive advances. Take a look at DeepIndex to see what is, in their categorisation, “crushing it” (mostly games at this stage).
The hype and spin associated with AI are now also starting to be more widely discussed. Yes, there are important advances and applications in the field, and we can expect more. But don’t believe everything you read: calling something “AI” is used as both a marketing and a fundraising gambit. Filip Piekniewski and Scientific American point this out.
The myth of intelligence
There can also be wild speculation about “artificial general intelligence” and “super intelligence”. The former is where a software system can handle a range of different novel tasks, rather than just one it has been trained to do. The latter is a mythical realm where computers become “smarter than humans.”
A common argument, or assumption, is that as data, algorithms and computational power improve, AI moves from “narrow” (good at a single task) to “general” artificial intelligence that mimics “human-level intelligence”, and even on to “super intelligence”. However, that supposition is likely fallacious. Intelligence is more than pattern recognition and process optimisation (which is the current state of most AI applications).
Kevin Kelly points out that intelligence is not like a ladder with simple rung-like progressions. It is multi-dimensional, and there are trade-offs. He likens it to a pocket knife, which does some things reasonably well, but not necessarily perfectly. People don’t have “general purpose minds” that are great at everything, but believing that we do leads engineers to think artificial intelligence can be optimised for multiple tasks.
This was reiterated recently by Facebook’s head of AI and others.
AI and You
At a more personal level, there are three things to think about in our interactions with automation and artificial intelligence. I like the metaphor that the Estonians have applied to algorithmic accountability, drawing on their own folklore about a magical creature, the Kratt.
A Kratt is made from hay or household objects and can be animated to do tasks for its creator.
Kratt are brought to “life” by the devil in exchange for (usually) three drops of blood. They will then do the bidding of their owner (typically stealing or fetching things). However, Kratt need to be kept busy otherwise they cause trouble. When they are deemed no longer useful they are set an impossible task, which usually results in them burning up (at least the vegetative varieties). [In some respects Kratt resemble the golem of Jewish folklore].
In Estonia the “Kratt Law” specifies that the state organisation, or other user, of an algorithm (i.e. those who “bring it to life” and control it) is responsible for its actions. This helps to provide legal clarity.
Discussions of automation are often framed as humans vs. machines, but the critical issue is really how well people and software can complement each other. Another way of looking at AI from a Kratt-like perspective is to consider those three drops of blood as what gives power to the person, not what they take away. (I haven’t come across a Maori or Polynesian magical creature of a similar type to provide a more local context.)
Three such mana-enhancing drops to consider are:
Enabling & Engaging
Balance
Expectations
Drop 1: Enabling & Engaging
Key drivers for adopting AI are efficiency and effectiveness. Algorithms, with the right models and data, are better at some tasks than people. So, the argument goes, we should let the algorithms do them. That may often seem the best response from a narrow perspective. However, it can result not only in “learned helplessness” (where we don’t know what to do if the technology fails or isn’t available), but may also mean we stop working to get better at things:
“… As people become more dependent on algorithms, their judgment may erode, making them depend even more on the algorithms. That process sets up a vicious cycle. People get passive and less vigilant when algorithms make the decisions.” Gary Klein, quoted by Tim Harford
This type of argument is common when new technologies are introduced (a version was around when the first printing presses started up). However, with more important functions being taken over by software (flying, driving, control of energy systems and media), this is becoming a real issue.
In many cases, what’s required may be for algorithms to be able to “show their work” so people can evaluate it, but this is likely to get more difficult as software (and the problems it tackles) gets more complicated or complex. It is useful for all of us to become more curious and critical, like children, in determining the reliability of information and knowledge.
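To make that a little more concrete, here is a minimal sketch of what “showing the work” can look like: a small decision tree whose learned rules can be printed for a person to inspect and question. The model, dataset and tooling are illustrative assumptions, not any system mentioned above.

```python
# A minimal sketch of an algorithm "showing its work":
# a decision tree can print the rules behind its predictions.
# The model and dataset are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# Print the learned rules so a person can inspect and evaluate them.
print(export_text(model, feature_names=list(iris.feature_names)))
```

Of course, most modern AI systems are far less transparent than a small tree, which is exactly why this gets harder as the software and its problems grow more complex.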
So ask yourself:
What skills do you need to keep sharp to make the most of AI, and other technologies?
It’s tempting always to aim for more efficiency, but designing for less efficiency and more friction can be desirable.
Harford illustrates this with examples of urban design in the Netherlands, where road changes forced drivers (and cyclists and pedestrians) to become more engaged with what they were doing when navigating riskier parts of towns and cities. Road signs are easy to ignore, and traffic lights can tempt drivers to beat them. Making roads less smooth and streets less straight, rather than creating traffic jams and slowing traffic, improved flow as well as reducing accidents and near misses.
Ambiguity and confusion can lead to caution and better outcomes. This may need to be designed in for algorithms too.
Drop 2: Balance
One of the promoted benefits of AI is that it will take over all the dull, monotonous work, giving us more time to work on the interesting, challenging and creative stuff. Machines and software doing the dull, dirty and dangerous work is often a good thing. But it can be too much of a good thing. Combining human and algorithmic capabilities can often involve trade-offs between efficiency and engagement. We don’t want to make work (and leisure) less rewarding.
People need a balance of the dull and the demanding. Having only cognitively demanding jobs risks burning us up, or out, like a Kratt set an impossible task.
“… it’s nice to have some tasks that provide a sense of accomplishment but just require getting it done and repeating what you know, rather than everything needing very taxing novel decision making.” Alice Boyes
This is demonstrated by the online citizen science platform Zooniverse. Researchers looked at how to integrate human and machine classifiers in some of its projects, such as Galaxy Zoo. They found that giving all the easy classification jobs to algorithms can lead to a decrease in engagement from volunteer classifiers. Having some easy stuff to do keeps a lot of people involved, which in turn can help with serendipitous discoveries that algorithms miss.
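One hypothetical way of combining the two is to route the algorithm’s low-confidence cases to volunteers while deliberately reserving a share of the easy ones for them too. The threshold, fraction and function names below are assumptions for illustration, not Galaxy Zoo’s actual pipeline.

```python
import random

# Hypothetical routing of classification tasks between an algorithm
# and volunteers. The threshold and keep-easy fraction are illustrative
# assumptions, not Galaxy Zoo's actual parameters.
CONFIDENCE_THRESHOLD = 0.9   # above this, the algorithm is trusted
EASY_KEEP_FRACTION = 0.2     # share of easy tasks still sent to people

def route_task(confidence: float) -> str:
    """Decide who classifies an image, given the model's confidence."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "volunteer"   # hard cases need human judgement
    if random.random() < EASY_KEEP_FRACTION:
        return "volunteer"   # keep some easy, satisfying work for people
    return "algorithm"

# Example: route a batch of tasks with assumed confidence scores.
for conf in (0.55, 0.93, 0.99, 0.97, 0.70):
    print(conf, "->", route_task(conf))
```

The deliberate inefficiency of the keep-easy fraction is the point: it trades a little throughput for sustained engagement.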
So a balance is required between efficiency and engagement when people and algorithms work together.
What’s the easy, satisfying stuff that you don’t want to stop?
Drop 3: Expectations
Lastly, we wouldn’t want AI to perpetuate existing bad practices and systems, or introduce new bad ones. This is already being widely discussed in relation to data and algorithmic bias.
But a more subtle aspect is the risk that AI, while making some things easier, can make our lives busier and less productive. Crystal Chokshi suggests that Gmail’s Smart Compose, which is designed to help you write emails more quickly, could mean that you spend more time writing more emails rather than spending less time on email.
“If there’s one thing Smart Compose accurately predicts, it’s not words. It’s behaviour — not only a continued reliance on email but also (as if this were possible) even higher social expectations for swift sends and replies.” Crystal Chokshi
This is a digital equivalent of the Jevons Paradox: improving efficiency leads to greater use, which counteracts the original intent.
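A toy calculation shows how this rebound can play out with email; all the numbers are invented for illustration.

```python
# Toy illustration of a Jevons-style rebound with email.
# All numbers are invented for illustration.
minutes_per_email_before = 5.0
emails_per_day_before = 20

# Suppose a Smart Compose-style tool cuts writing time by 40%,
# but the lower effort nudges everyone to send 80% more email.
minutes_per_email_after = minutes_per_email_before * 0.6
emails_per_day_after = emails_per_day_before * 1.8

before = minutes_per_email_before * emails_per_day_before   # 100 min/day
after = minutes_per_email_after * emails_per_day_after      # 108 min/day
print(f"before: {before:.0f} min/day, after: {after:.0f} min/day")
```

A 40% efficiency gain per email still leaves you spending more time on email overall once volume and expectations rise enough.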
This reflects not necessarily a problem with the technologies, but with expectations and uneven power structures. We can see that with companies like Amazon expecting their human warehouse workers to operate like robots.
So, the last metaphorical drop of blood is about being vigilant and aware of whether AI is helping you be more productive and/or creative, or just busier. Is it entrenching existing poor behaviours and practices, or helping you break free from them?
How may the technologies reinforce rather than disrupt existing behaviours?
Think more about thinking
There are benefits to involving algorithms and artificial intelligence in many aspects of work and leisure. But adoption shouldn’t be uncritical. Many of us may have no choice about using some applications. That, though, shouldn’t stop us thinking about how to use or work with them in ways that help us. In human-algorithm interactions, what can give us more autonomy and fulfilment rather than less?
Featured image: based on a photo by Aaron Burden on Unsplash