Scientists at the University of North Carolina at Chapel Hill have created a new method of particle tracking based on machine learning that is far more accurate and provides better automation than techniques currently in use. They have launched a new company, Chapel Hill-based AI Tracking Solutions, to commercialize the discovery.

The UNC-Chapel Hill team that developed the new tracking method uses particle tracking to develop new ways to treat and prevent infectious diseases. They examine molecular interactions between antibodies and biopolymers and characterize and design nano-sized drug carriers. Their work is published in the Proceedings of the National Academy of Sciences.

The company has received a Small Business Technology Transfer award from the National Institutes of Health to commercialize the technology.

Single-particle tracking involves tracking the motion of individual particles, such as viruses, cells and drug-loaded nanoparticles, within fluids and biological samples using videos. The technique is widely used in both physical and life sciences.
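
At its simplest, the workflow looks something like the toy Python sketch below. It is an illustration of the general idea, not the UNC software: detect bright spots in each frame, then link the detections across frames into trajectories. The thresholds, frame data and function names are invented for the example.

```python
# Toy sketch of single-particle tracking (not the UNC tracker): detect bright
# spots in each frame, then link detections across frames into trajectories
# by nearest-neighbor matching. All data here is synthetic.
import numpy as np

def detect_particles(frame, threshold):
    """Return (row, col) coordinates of pixels brighter than a fixed threshold."""
    ys, xs = np.where(frame > threshold)
    return np.stack([ys, xs], axis=1).astype(float)

def link_frames(detections_per_frame, max_jump=5.0):
    """Greedily extend each track with the nearest detection in the next frame."""
    tracks = [[tuple(p)] for p in detections_per_frame[0]]
    for dets in detections_per_frame[1:]:
        if len(dets) == 0:
            continue
        used = set()
        for track in tracks:
            last = np.array(track[-1])
            dists = np.linalg.norm(dets - last, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= max_jump and j not in used:
                track.append(tuple(dets[j]))
                used.add(j)
    return tracks

# Synthetic example: one bright particle drifting one pixel per frame.
frames = []
for t in range(5):
    f = np.zeros((32, 32))
    f[10 + t, 12] = 1.0
    frames.append(f)

detections = [detect_particles(f, threshold=0.5) for f in frames]
print(link_frames(detections))  # one trajectory of five positions
```

Real videos are far messier: particles overlap, blink, and drift in and out of focus, which is why conventional trackers lean on carefully tuned detection parameters.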


“In order to derive meaning from videos, you have to convert the videos into quantitative data,” said Sam Lai, Ph.D., an associate professor in the UNC Eshelman School of Pharmacy and one of the creators of the new tracker. “With current software, researchers must carefully supervise the video conversion to ensure accuracy. This often takes many weeks to months, and greatly limits both throughput and accuracy.

“We got tired of the bottleneck,” he said.

The root of the problem can be traced to the small number of parameters, such as particle size, brightness and shape, that current software uses to identify the full range of particles present in any video. Particles get missed because they don’t quite fit the parameters, and the parameters vary as different operators set them, said Alison Schaefer, a Ph.D. student in the Lai lab. This creates a tremendous challenge for data reproducibility: two users analyzing the same video frequently obtain different results.
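
A toy illustration of that reproducibility problem, with made-up numbers rather than real microscopy data: two plausible operator-chosen brightness cutoffs yield different particle counts from the same measurements.

```python
# Illustrative only (made-up numbers, not the actual software): the particle
# count depends on a hand-chosen brightness cutoff, so two operators who pick
# different cutoffs can report different results from the same data.
import numpy as np

rng = np.random.default_rng(0)
brightness = rng.normal(loc=100, scale=30, size=500)  # synthetic spot brightnesses

for cutoff in (90, 110):  # two plausible operator-chosen thresholds
    count = int(np.sum(brightness > cutoff))
    print(f"cutoff={cutoff}: {count} spots counted as particles")
```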

Hints from self-driving cars

“Self-driving cars work because they can see and keep track of many different objects around them in real time,” said M. Gregory Forest, Ph.D., the Grant Dahlstrom Distinguished Professor in the UNC Departments of Mathematics and Applied Physical Sciences, and co-senior author on the project.

“We wondered if we could create a version of that kind of artificial intelligence that could track thousands of nanoscale particles at one time and do it automatically.”

Lai and his collaborators in the UNC Department of Mathematics designed an artificial neural network to go to work on their problem. Neural networks are loosely based on the human brain but learn by being fed a large number of examples. For example, if a neural network needs to recognize photos of dogs, it is shown lots of photos of dogs. It doesn’t need to know what a dog looks like; it will figure that out from the common elements of the photographs.
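
The toy Python sketch below illustrates that idea of learning from examples; it is not the UNC network. A one-layer model is never told the rule that separates two classes of synthetic data points, yet it recovers the rule by adjusting its parameters against labeled examples.

```python
# Toy illustration of "learning from examples" (not the UNC network): a tiny
# one-layer model is never given the rule separating the two classes; it
# infers the rule from labeled examples via gradient descent.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))              # 200 examples, 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # hidden rule the model must learn

w, b = np.zeros(2), 0.0
for _ in range(300):                        # training loop: adjust w, b from examples
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability for each example
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == y))  # typically close to 1.0
```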

The better the examples, the better the neural network will be.

The UNC team first trained the neural network tracker on a truth set of computer-generated data. They then further refined the tracker using high-quality data from past experiments conducted in Lai’s lab. The result was a new tracker with thousands of well-tuned parameters that can process a highly diverse range of videos fully automatically, is at least 10 times more accurate than systems currently in use, is highly scalable, and possesses perfect reproducibility, Lai said.
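
The sketch below gives a rough, runnable sense of that two-stage workflow, using a toy linear model on synthetic numbers; the details are assumptions for illustration, not the actual tracker. The model is fit on a large computer-generated truth set first, then the same parameters are refined on a smaller set standing in for curated experimental data.

```python
# Rough, runnable sketch of the two-stage training described above, using a toy
# linear model on synthetic numbers rather than the actual tracker.
import numpy as np

rng = np.random.default_rng(2)

def fit_steps(w, X, y, lr=0.1, steps=200):
    """Plain gradient-descent updates for a least-squares fit."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# Stage 1: large computer-generated "truth set" (ground truth known by construction).
X_sim = rng.normal(size=(1000, 3))
y_sim = X_sim @ np.array([1.0, -2.0, 0.5])
w = fit_steps(np.zeros(3), X_sim, y_sim)

# Stage 2: smaller, slightly different set standing in for curated lab data.
X_exp = rng.normal(size=(100, 3))
y_exp = X_exp @ np.array([1.1, -1.9, 0.6]) + rng.normal(scale=0.05, size=100)
w = fit_steps(w, X_exp, y_exp, lr=0.05, steps=100)

print("refined parameters:", np.round(w, 2))
```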

The new system is ready just in time to support the increasing availability of powerful microscopes capable of collecting terabytes of high-resolution 2D and 3D video in a single day, said Jay Newby, Ph.D., lead author of the study and an assistant professor at the University of Alberta.

“Tracking the movement of nanometer-scale particles is critical for understanding how pathogens breach mucosal barriers and for the design of new drug therapies,” Newby said. “Our advancement provides, first and foremost, substantially improved automation. Additionally, our method greatly improves accuracy compared with current methods and reproducibility across users and laboratories.”