Is science becoming artificially intelligent?

Is science becoming "AI-led", as some venture capitalists suggest?

The short answer is no. A slightly longer response is that it's not the most important question to ask about the future of science.

A tool, not a solution

DeepMind’s success in predicting quite accurate 3D protein structures in a recent competition made headlines last week, and rightly so: it is an impressive achievement.

The company is understandably gung ho about the future scientific possibilities:

“The progress announced today gives us further confidence that AI will become one of humanity’s most useful tools in expanding the frontiers of scientific knowledge, and we’re looking forward to the many years of hard work and discovery ahead!”

It is easy, though, to get carried away with the hype. Solving a protein’s structure is just one step (often a very important one) in understanding its functions and interactions, and in developing drugs.

The protein folding problem also hasn’t been “solved” by an algorithm. AlphaFold, like all other computational methods, makes predictions. Protein scientists still need to confirm structures experimentally.

As Vishal Gulati points out, simply knowing more protein structures doesn’t lead to more drugs. You need to find structures that drugs can target.

But better predictions of protein structures will help with the study of protein-protein interactions and misfolded proteins, and inform the design of novel (or at least not yet identified) proteins.


Be cautious about AI hype

Some applications of AI haven’t ended well. IBM’s Watson healthcare system was quietly placed on sick leave in 2019 after overpromising and underdelivering.

Gary Marcus and Jeffrey Funk have also noted other examples where AI results and expectations haven’t stood up to scrutiny, or where, after the big press release, nothing seems to happen.

Progress is often more gradual, or not progress at all.

You need a critical mindset. Technology Review suggested five questions to ask about AI news:

  1. What is the problem that needs to be solved?

  2. How is the company, or lab, approaching that problem with AI methods?

  3. How do they source the training data?

  4. Do they have processes for auditing the products and results?

  5. Should they be using AI methods to solve this problem?

DeepMind's AlphaFold gives good answers to these questions.

Additional questions I would include:

  1. Do they explain their method(s) clearly for a more general audience?

  2. Do they discuss limitations and potential biases?


Overestimating short-term developments and underestimating long-term progress is as common with AI as with many other new technologies. So you can’t assume that current successes and failures describe the future.


It’s not all hype

There are many examples of artificial intelligence methods being used in scientific research, and applications are increasing rapidly, often without much fanfare.

Last year The Royal Society produced a report highlighting the potential of AI in research. It provided examples of the roles AI can play as an enabler of research and development in many fields.

Applications are being used to identify drug combinations that can inhibit cancer cells, create artificial proteins, and categorise galaxies.

AI isn’t just in the lab either. Conservation biology is also an adopter, as described in a recent Nature article. One example is its use to detect elephant poachers.

New Zealand’s Cacophony Project uses AI to detect predators, and NEC and Victoria University are using machine learning to identify bird calls.

AI methods are being used in a variety of ways in Covid-19 research: for example, to identify genes that interact with SARS-CoV-2, to find existing drugs that may be useful, to analyse the research literature, and to process medical images.

A year ago, the use of AI would probably have featured in a paper’s title. Now it is just part of the methodology section. That is a real indicator of progress.

Another article in Nature suggests that what is really going to help advance AI in research is better collaboration and transparency. Not sharing data sets and models creates barriers to progress rather than bridges. That's not just an issue with AI. Many areas of science would benefit from more collaboration and sharing, as the pandemic has illustrated.

The most interesting question that The Royal Society posed in its report was:

“Is there a rigorous way to incorporate existing theory/knowledge into a machine learning algorithm, to constrain the outcomes to scientifically plausible solutions?”


This highlights that we often need to adapt new tools to suit the task, rather than adopt them without much thought. So it’s not only a question of how AI will shape science, but of how research will, or should, shape AI applications.
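One emerging answer to the Society’s question is to build existing theory directly into the training objective, penalising predictions that violate known constraints (the idea behind so-called physics-informed machine learning). Below is a minimal, hypothetical sketch of that approach using PyTorch; the model, the data, and the non-negativity and conservation constraints are invented for illustration.

    # Hypothetical sketch: nudge a model towards scientifically plausible
    # outputs by adding theory-based penalties to the loss.
    # This is illustrative only, not AlphaFold's method.
    import torch
    import torch.nn as nn

    # Toy model for a quantity that, by prior theory, must be non-negative
    # and conserved (outputs should sum to the same total as the inputs)
    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 4))
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

    def loss_fn(x, y_true, lam=1.0):
        y_pred = model(x)
        data_loss = nn.functional.mse_loss(y_pred, y_true)
        # Theory-based penalties: negative values and broken conservation
        negativity = torch.relu(-y_pred).mean()
        imbalance = (y_pred.sum(dim=1) - x.sum(dim=1)).abs().mean()
        return data_loss + lam * (negativity + imbalance)

    # One training step on random stand-in data
    x = torch.rand(32, 4)
    y = x.clone()  # a target that happens to satisfy both constraints
    optimiser.zero_grad()
    loss = loss_fn(x, y)
    loss.backward()
    optimiser.step()

The weighting factor lam trades off fitting the data against respecting the theory; whether a soft penalty like this counts as “rigorous” is exactly what the Society’s question is probing.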


Don’t consider AI in isolation

As I flagged at the start of this post, it's unhelpful to focus too much on the role of artificial intelligence. That's "singularity thinking". Yes, AI is likely to become increasingly important in many areas of science.

In reality, there are many things shaping the future of science, such as automation more generally. Arup produced a report in 2018 (lightly updated a couple of months ago) looking at the future of labs. It highlights not just automation, but also some of the social, political and financial factors influencing research over the coming decade.

An increasingly important aspect of science is how different knowledge systems are woven together. A Guide to Vision Mātauranga highlights the experiences of Māori researchers in the New Zealand science system, and the challenges and opportunities in valuing Mātauranga Māori (Māori knowledge systems) alongside Western science. The Building cultural perspectives report from Superu describes how different streams of knowledge can work both alongside each other and together.

The pandemic may also nudge researchers, and research organisations, to change how science is conducted, in ways both good and not so good.

As DeepMind's AlphaFold has shown, AI can be good at helping solve some types of puzzles. But science is also about investigating mysteries: questions where there isn't a single answer or solution, and where the answer doesn't simply emerge from gathering all the facts.

If it is to continue to help us better understand the world, and do more good and less harm, science will need to become more socially intelligent - responsive to social expectations and needs - rather than just algorithmically more sophisticated.


Update 10 Dec: A paper just out in Nature describes natural language processing programs that analyse and summarise thousands of scientific papers. The next goal is to get programs to synthesise information from different papers.
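For a flavour of what such tools look like in practice, here is a minimal, hypothetical sketch using the open-source Hugging Face transformers library (not the Nature paper's own system); the example text is a stand-in.

    # Illustrative sketch of automatic summarisation with Hugging Face
    # "transformers"; not the system described in the Nature paper.
    from transformers import pipeline

    # Downloads a general-purpose pretrained summarisation model on first use
    summariser = pipeline("summarization")

    # Stand-in text; a real pipeline would loop over thousands of abstracts
    abstract = (
        "Deep learning methods can predict three-dimensional protein "
        "structures from amino acid sequences with accuracy approaching "
        "that of experimental techniques, a result that may accelerate "
        "structural biology and drug discovery."
    )

    result = summariser(abstract, max_length=30, min_length=5, do_sample=False)
    print(result[0]["summary_text"])

Synthesising information across many papers, the next goal mentioned above, is a considerably harder problem than summarising them one at a time.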

Featured image: Photo by Mathew Schwartz on Unsplash, transformed using Deep Dream