Alexander Pattison
SLAM Talk Title: Artificial Neural Networks: Letting the Computers do all the Work
How did you originally get interested in science?
I have been interested in science for as long as I can remember. I first wanted to be an astronaut, then learned about nanotechnology from a spy novel I read when I was 10 and thought ‘that sounds cool’! 18 years later, I’m a nanoscientist.
What is your favorite place at the Lab?
My lab. It’s air-conditioned.
Most memorable moment at the Lab?
The moment I discovered that my microscope was controlled by a PlayStation controller, meaning that all the years I’d spent playing computer games could be retroactively classified as ‘academic research’.
What are your hobbies or interests outside the Lab?
Hiking, computer games, reading, playing music (double bass and cornet), fencing, and running the Berkeley Lab Postdoc Association.
Alex's Script - "Artificial Neural Networks: Letting the Computers do all the Work"
Have you ever been stuck in your office doing some long, boring, repetitive task and wondered “why can’t the computer do this for me”? Scientists certainly know what that feels like, especially when they’re running their experiments.
Take my work, for instance. I work with electron microscopes, using high-energy electrons to study the arrangement of atoms in materials. Cool, yes, but taking data from them is a really long, boring, repetitive process. Move, take an image. Move, take an image. Over and over again.
Now, admittedly, moving and taking images are simple operations for computers to handle. The difficult part is deciding where to move and what images to take. Not every bit of a sample is useful, so we want to be selective, imaging only what’s interesting and disregarding the rest. “Interesting”, however, is often a rather difficult concept to explain to computers, especially in microscopy. Imagine you’re looking at a nanoparticle shaped like my hand, for instance, and you want the computer to take images of all similar particles. Computers like simple yes or no decisions, so you’re fine if every particle looks exactly like your reference image, but what if they’re at different orientations? What if they’re smaller or larger? What if a finger’s broken off? Ordinarily, each one of these possibilities would have to be individually accounted for during programming, which is just more trouble than it’s worth. That’s why we microscopists tend to resign ourselves to doing the long, boring, repetitive task of taking images manually.
But fear not; there is a solution. Unlike normal computer programs, artificial neural networks are really good at image recognition for two reasons. One: rather than seeing objects as single blocks, they see them as collections of features – lines, curves, shapes – and they can recognise objects by these features rather than relying on exact matches to reference images. Two: they’re a form of machine learning; they teach themselves how to think, rather than being told how to think by a human. This means they can devise brand-new algorithms that we humans might never dream of, all without us ever having to write a single line of code.
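To make the “learning from examples” idea concrete, here is a minimal sketch in Python. It trains a single artificial neuron (logistic regression, far simpler than a real image-recognition network) on toy three-number “feature vectors” that stand in for particle images; the feature values, cluster centres, and labels are all invented for illustration, not real microscopy data:

```python
import math
import random

random.seed(0)

# Toy "feature vectors" standing in for particle images: each number is a
# hypothetical feature score (say, how strongly a curve, edge, or blob
# detector fires). A real network learns such features itself; these are
# hand-made purely for illustration.
def noisy(centre):
    return [c + random.gauss(0, 0.2) for c in centre]

hands = [noisy([1.0, 1.0, 0.0]) for _ in range(50)]   # "hand-shaped" particles
others = [noisy([0.0, 0.0, 1.0]) for _ in range(50)]  # everything else

data = [(x, 1) for x in hands] + [(x, 0) for x in others]

# One artificial neuron, trained by gradient descent: it adjusts its own
# weights from labelled examples instead of a human writing explicit rules
# for every orientation, size, or broken-off finger.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(500):
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        err = p - y                                  # prediction error
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]
        b -= 0.1 * err

def score(x):
    """Probability the neuron assigns to 'particle of interest'."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# A new, slightly distorted particle is still recognised -- no exact match
# to a reference image required.
print(f"match probability for a slightly bent 'hand': {score([0.9, 1.2, 0.1]):.2f}")
```

The point of the sketch is the training loop: nobody wrote a rule for what a “bent hand” looks like, yet the learned weights still score it highly, because it shares features with the labelled examples.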
So how does that help me with my microscopy? Well, by providing images of my particle of interest in all its different possible states, a neural network can figure out its own way to recognise them all. Then, when I look at a large area of a sample, this network can locate all these particles in this area and instruct the microscope to zoom in for a closer look. Once it’s done, it can move to the next area and do the same again… and the next… and the next… all without me ever having to be in the room.
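The survey-then-zoom loop described above can be sketched as follows. Everything here is a stand-in under stated assumptions: `Microscope` and `find_particles` are hypothetical names, not a real instrument driver or the actual network, and the “detections” are randomly faked just to exercise the loop:

```python
import random

random.seed(1)

class Microscope:
    """Hypothetical stage/camera interface (stand-in for a real driver)."""
    def move_to(self, position):
        print(f"moving stage to {position}")
    def take_image(self, zoom=1):
        # Return a fake "image"; a real driver would return pixel data.
        return {"zoom": zoom}

def find_particles(image):
    """Stand-in for the trained network: returns coordinates of hits."""
    n = random.randint(0, 3)  # pretend 0-3 particles were spotted
    return [(round(random.random(), 2), round(random.random(), 2)) for _ in range(n)]

scope = Microscope()
survey_regions = [(r, c) for r in range(2) for c in range(2)]  # 2x2 survey grid

collected = []
for region in survey_regions:            # move, take an image...
    scope.move_to(region)
    overview = scope.take_image(zoom=1)
    for xy in find_particles(overview):  # ...let the network pick targets...
        scope.move_to(xy)                # ...and zoom in only on those.
        collected.append(scope.take_image(zoom=50))

print(f"acquired {len(collected)} close-up images, no human in the room")
```

The design point is that the human writes only the outer survey loop once; which targets get the expensive high-magnification images is decided entirely by the network, area after area.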
So there you have it. Neural networks can make intelligent decisions in the heat of an experiment, automating the entire process. And just like that, the long, boring, repetitive task of taking data is a thing of the past!