New Study Shows How Deep-learning Technology Can Improve Brain Imaging

March 8, 2021 • by Amanda Figueroa-Nieves

The technology can be used to train computers to increase the resolution of low-quality cellular and tissue images acquired on point-scanning systems, such as scanning electron and laser scanning confocal microscopes.

Comparison of brain imaging before and after the new technique

Neuroscience researchers often face challenges when using high-powered microscopes to capture clear images of brain tissue. Microscopes suffer from what researchers call the "eternal triangle of compromise" — image resolution, the intensity of the illumination the sample is subjected to, and speed compete with each other. For example, taking an image of the sample very quickly can result in a dark image, but subjecting a biological sample to more intense light can damage it.

A new study in Nature Methods revealed that deep learning (DL) technology can be used to train computers to increase the resolution of low-quality cellular and tissue images acquired on point-scanning systems, which are widely used in biological imaging. Scientists at The University of Texas at Austin, the Salk Institute for Biological Studies, the University of California, San Diego, Fast.AI, and the Wicklow AI in Medicine Research Initiative (WAMRI.ai) collaborated on the method, which they named point-scanning super-resolution (PSSR) imaging.

"It has been wonderful to participate in this world-wide collaboration to develop ways to improve image quality, especially for samples that must be collected fast at lower resolutions or under suboptimal conditions," said Kristen Harris, a professor of neuroscience who leads the UT Austin part of the collaboration. "The teams that have contributed to this work have been an absolute joy to work with."

Zhao Zhang, a research associate at the Texas Advanced Computing Center, set up the infrastructure for the project, including a parallel implementation that reduced a single training session from days to hours. TACC's diverse and robust supercomputing systems enable a wide range of scientific applications, including support for popular deep learning software.

The researchers took existing high-resolution images and subjected them to a "crappifier" that injected noise, or large amounts of meaningless information, into each image while simultaneously reducing its pixel resolution. These new "crappified" images were paired with their corresponding high-resolution originals to train computing models to improve noisy, undersampled images. The scientists made the code for this process openly available through GitHub.
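The degradation step described above can be sketched in a few lines of Python. This is a minimal illustration, not the published PSSR code: the function name, parameters, and the choice of Gaussian noise plus block-average downsampling are assumptions for demonstration, and the actual crappifier in the team's GitHub repository may differ in its noise model and image pipeline.

```python
import numpy as np

def crappify(hr_img, downscale=4, noise_sigma=0.1, rng=None):
    """Degrade a high-resolution image: inject noise, then downsample.

    Illustrative stand-in for a PSSR-style "crappifier"; parameter
    names and the exact noise model are hypothetical.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = hr_img.astype(np.float32)
    # Inject Gaussian noise to mimic a fast, low-signal acquisition.
    noisy = img + rng.normal(0.0, noise_sigma * img.max(), img.shape)
    # Reduce pixel resolution by block-averaging downscale x downscale tiles.
    h, w = noisy.shape
    h, w = h - h % downscale, w - w % downscale
    lr = noisy[:h, :w].reshape(h // downscale, downscale,
                               w // downscale, downscale).mean(axis=(1, 3))
    return np.clip(lr, 0.0, img.max())

hr = np.random.rand(256, 256)          # stand-in for a real micrograph
lr = crappify(hr)                      # 64 x 64 low-quality counterpart
```

Each (low-quality, high-quality) pair produced this way becomes one training example, which is what lets the method reuse "large, pre-existing gold standard data-sets" without any new image acquisition.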

Using artificial low-quality images gives the models "large, pre-existing gold standard data-sets for training new models without acquiring new data," the study says. "We hope the open-source availability of our crappifier will be reciprocated by open-source sharing of high-quality imaging data, which can then be used to train new DL models."

The researchers found that deep learning-based pixel super-resolution is a viable strategy of particular value for both optical and electron point-scanning microscopes. They tested PSSR on mouse, rat, and fly samples imaged on four different microscopes in four different laboratories and found that the model holds up under varied conditions.

"Although the accuracy of DL approaches such as PSSR is technically imperfect, real-world limitations on acquiring ground truth data may render PSSR the best option," the study says. "Our results show the PSSR approach can in principle enable higher speed and resolution imaging with the fidelity necessary for biological research."
