
Researchers at Barrow Neurological Institute utilize ZEISS arivis Pro for deep learning-based analysis of mitochondria in brain tissue. This approach streamlines segmentation and phenotype classification under hypoxic conditions, facilitating advanced microscopy workflows.

Researchers at Barrow Neurological Institute and Phoenix Children’s Hospital are using transmission electron microscopy (TEM) imaging and ZEISS arivis Pro (formerly Vision4D) to understand how mitochondria in brain tissue are affected by hypoxic conditions.
Automated segmentation of transmission electron microscopy images remains a challenge. ZEISS arivis Pro is specifically designed to let users easily apply deep learning models to their images and run the subsequent analysis within an established workflow tailored to their specific needs. This article describes best practices in mitochondria EM image analysis: creating the ground-truth annotations, running the inference (predictions) in ZEISS arivis Pro, and using its extensive toolset for downstream analysis.
Key learnings:

When preparing for deep learning (DL) training, the critical step is creating the ground-truth annotations. This is typically done by manually annotating the regions of interest, thereby creating the objects. For this research project, 30 TEM serial sections containing 309 mitochondria objects were annotated manually with the drawing tool (Vision4D ver. 3.6). Both mitochondria phenotypes, normal and swollen, were pooled into one class for the DL training.
These manual annotations were used to train a deep learning model for semantic segmentation, also known as pixel classification. Specifically, we used a U-Net model with an architecture very similar to the original publication (O. Ronneberger et al., 2015). Prior to training, the images and the annotations were downscaled to half the raw image size using bicubic interpolation, to shorten the DL training time and match the feature size. In the following step, the grayscale images and the binary ground-truth images were augmented by applying rotations, reflections, and elastic transformations. The U-Net model was trained with a custom-made script* for 50 epochs; the 42nd epoch had the highest accuracy score and was selected for running the inference (predictions) to segment the mitochondria. In the last training step, the model was converted to the ONNX format so that the inference (predictions) could be run in ZEISS arivis Pro.
* For more details, contact the arivis team.
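To illustrate the preprocessing and augmentation described above, the following Python sketch downscales an image–mask pair two-fold with bicubic interpolation and applies simple rotations and reflections. It is a minimal stand-in for the custom training script (which is not reproduced here); the function names and parameters are illustrative assumptions, and elastic deformation is omitted.

```python
# Illustrative-only sketch, not the actual training script. Assumes 2D
# grayscale TEM images and matching binary masks as NumPy arrays.
import numpy as np
from skimage.transform import rescale

def downscale_pair(image, mask):
    """Downscale an image/mask pair to half size (two-fold downscaling):
    bicubic interpolation for the image, nearest-neighbor for the mask."""
    img_small = rescale(image, 0.5, order=3, anti_aliasing=True, preserve_range=True)
    msk_small = rescale(mask.astype(float), 0.5, order=0, preserve_range=True) > 0.5
    return img_small.astype(image.dtype), msk_small

def augment_pair(image, mask, rng=None):
    """Apply the same random 90-degree rotation and reflection to the image
    and its ground-truth mask (elastic deformation omitted for brevity)."""
    if rng is None:
        rng = np.random.default_rng()
    k = int(rng.integers(0, 4))
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask
```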


The deep learning model was then applied to the whole dataset in ZEISS arivis Pro for automated segmentation. Within the analysis pipeline, the DL model is applied to data at the same resolution it was trained on, and the resulting objects are scaled back to the original size. Accordingly, the pipeline for segmenting the mitochondria was run at 50% of the original image scale. Object filtering, classification by phenotype, and export of the numerical data to Excel format are performed automatically within the same pipeline. This makes it possible to apply the entire workflow in Batch mode to a set of images.
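Outside of arivis Pro, the same principle (predict at the training resolution, then scale the result back to the original size) could be sketched with onnxruntime as below. The model path, the (1, 1, H, W) input layout, and the 0.5 probability threshold are assumptions for illustration and are not taken from the actual pipeline.

```python
# Minimal inference sketch, assuming an exported ONNX U-Net that accepts a
# (1, 1, H, W) float32 tensor and returns a probability map.
import numpy as np
import onnxruntime as ort
from skimage.transform import rescale, resize

def segment_mitochondria(image, model_path="mitochondria_unet.onnx"):
    # Downscale to 50%, the resolution the model was trained on.
    small = rescale(image, 0.5, order=3, anti_aliasing=True).astype(np.float32)
    session = ort.InferenceSession(model_path)
    inputs = {session.get_inputs()[0].name: small[np.newaxis, np.newaxis, ...]}
    prob = session.run(None, inputs)[0][0, 0]
    # Threshold, then scale the binary mask back to the original image size.
    mask_small = prob > 0.5
    return resize(mask_small.astype(np.float32), image.shape, order=0) > 0.5
```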

ZEISS arivis Pro offers an extensive list of quantitative features that characterize each object. In addition, custom features can be created or imported from external sources. In order to assess the quantitative distribution of the mitochondria phenotypes, we created a custom object feature that computes the ratio of the mean intensity of each object to its volume.
This ratiometric feature reflects and emphasizes the differences between the mitochondria phenotypes with high accuracy. It was used to classify the objects into the ‘Control’ and ‘Swollen’ groups. For visualization purposes, each object was color-coded according to the value of this custom phenotype feature.
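As a rough illustration of how such a ratiometric feature could be computed outside the software, the sketch below divides each object's mean intensity by its volume (voxel count) and applies a threshold. The threshold value, the direction of the comparison, and the function names are illustrative assumptions; the actual classification was performed with the custom feature inside the arivis Pro pipeline.

```python
# Hedged sketch of the ratiometric phenotype feature: mean intensity of each
# segmented object divided by its volume (voxel count).
import numpy as np
from skimage.measure import regionprops

def classify_phenotypes(label_image, intensity_image, swollen_threshold=0.01):
    """Compute the intensity/volume ratio per labeled object and assign an
    illustrative 'Control' or 'Swollen' label (threshold is an assumption)."""
    results = []
    for obj in regionprops(label_image, intensity_image=intensity_image):
        ratio = obj.mean_intensity / obj.area  # area = pixel/voxel count
        phenotype = "Swollen" if ratio < swollen_threshold else "Control"
        results.append({"label": obj.label, "ratio": ratio, "phenotype": phenotype})
    return results
```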
