
This white paper explores AI-enhanced image analysis techniques using ZEISS software for microscopy. It provides insights into segmentation, quantification, and data management improvements for organoid and tissue imaging applications.

AI-Driven Microscopy: Advanced Image Analysis with ZEISS

Key Takeaways

  • Focuses on AI-driven image analysis with ZEISS software.
  • Model systems: organoids and tissue samples.
  • Research goal: enhance segmentation and manage large datasets.
  • Authored by ZEISS experts in imaging technology.
  • Content type: White Paper on AI microscopy analysis.
2 min read

View the full white paper here



AI for Advanced Image Analysis — Full Text Extract (ZEISS arivis)

_Source: uploaded PDF. Extracted text; images/figures not included. Page headers inserted for easy navigation._



— — — — — — — — — — — — — — —


Page 1


AI for Advanced Image Analysis

A Practical Guide for Microscopy Analysis

with ZEISS Software


— — — — — — — — — — — — — — —


Page 2


_(No extractable text on this page.)_


— — — — — — — — — — — — — — —


Page 3


_(No extractable text on this page.)_


— — — — — — — — — — — — — — —


Page 4


Foreword

As the CEO of Carl Zeiss Microscopy, a global

leader in microscopy and imaging solutions, it

gives me great pleasure to introduce this book

on AI for image analysis. We at ZEISS believe

that technology can be a powerful tool for

driving innovation and advancing science and

we are proud to be leading the charge in the

field of microscopy and imaging solutions.

This book is not just a collection of technical

information: it is a source of inspiration for

anyone who wants to unlock the full potential

of AI in microscopy. Using Machine Learning

and Deep Learning, we can now achieve

results that were once thought impossible.

The examples and case studies included in this

book are a testament to the transformative

power of AI in image analysis.

At ZEISS, we are committed to pushing the

boundaries of what is possible and we are

proud to be at the forefront of this exciting

new field of AI-powered image analysis.

Whether you are a researcher, clinician, or

engineer, I believe this book will be a valuable

resource for unlocking the full potential of AI in

microscopy for you.

Dr. Michael Albiez

Member of the Management Board IQR &

Head of SBU RMS ZEISS

President & CEO Carl Zeiss Microscopy GmbH

“We at ZEISS believe that

technology can be a powerful

tool for driving innovation and

advancing science and we are

proud to be leading the charge

in the field of microscopy and

imaging solutions.”


— — — — — — — — — — — — — — —


Page 5


“AI and Machine Learning are transforming the field of image analysis, and this book provides a comprehensive guide to these powerful new technologies.”

As the head of sales and service for Carl Zeiss

Microscopy, I am excited to introduce this

book on the power of AI for image analysis.

Our teams work tirelessly with our customers

to provide the tools and support needed

to achieve their goals, and AI technology

is a game-changer that can supercharge

their success. AI and Machine Learning are

transforming the field of image analysis, and

this book provides a comprehensive guide to

these powerful new technologies. It covers the

basics of AI and provides practical examples

of how to apply these concepts to microscopy

image analysis.

At ZEISS, we believe that AI can make our

customers’ lives easier by reducing manual time

overhead in their workflows, both in terms of

microscope hardware and software. We are

proud to be pioneers in this exciting field and

hope that our book will inspire and empower

others in the microscopy community to take

advantage of the incredible benefits of AI.

Martin Fischer

Head of Global Sales & Service

ZEISS Research Microscopy Solutions


— — — — — — — — — — — — — — —


Page 6


Contents

Foreword .............................................................................................................................. 4

What is AI and why does it matter? ..................................................................................... 8

Why you need AI in your research ......................................................................................................... 8

AI, Machine Learning, and Deep Learning: What is the difference? ....................................................... 9

Conventional Machine Learning vs. Deep Learning for image analysis .................................................. 9

Microscopy image analysis automation powered by AI ....................................................................... 11

No-code products from ZEISS.............................................................................................................. 11

An introduction to image segmentation ............................................................................ 14

What is image segmentation? ............................................................................................................. 14

Algorithms for image segmentation .................................................................................................... 14

Machine Learning segmentation techniques .......................................................................................... 15

Deep Learning algorithms for image segmentation ............................................................................. 16

The ZEISS software ecosystem ............................................................................................................. 17

AI in ZEISS arivis software for scalable automated analysis .............................................. 18

Training Deep Learning models using ZEISS arivis Cloud ...................................................................... 18

Tips for achieving a reliable Deep Learning model ............................................................................... 21

Using AI-trained models in applications .............................................................................................. 23

AI in ZEISS arivis Pro for automated image analysis .............................................................................. 24

Machine Learning for object classification in ZEISS arivis Pro ............................................................... 33

Deep Learning for denoising multi-dimensional datasets .................................................................... 34

AI in ZEISS arivis Hub for scalable image analysis ................................................................................. 36


— — — — — — — — — — — — — — —


Page 7


AI in ZEN and ZEN core imaging and analysis platform ..................................................... 42

Preconfigured workflows in ZEN and ZEN core ................................................................................... 42

AI-based image segmentation in ZEN and ZEN core ............................................................................ 43

Advanced AI tools for image analysis beyond segmentation ............................................................... 45

Harnessing AI in automated image analysis workflows ....................................................... 48

AI for routine image analysis using ZEISS Labscope .......................................................... 56

The potential role of AI tools in routine image analysis ............................................................................ 56

Overcoming limitations of AI tools ...................................................................................................... 57

The role of AI tools for determining cell confluency ............................................................................ 57

How AI can help with cell counting ..................................................................................................... 59

The benefits of AI in routine image analysis ........................................................................................ 60

AI for X-ray microscopy with Deep Learning-based reconstruction ................................... 62

Drawbacks of generating 3D reconstructions from 2D sample sections .............................................. 62

X-ray microCT: A versatile tool for non-destructive 3D characterization across scientific domains ....... 62

How XRM surpasses traditional microCT by using dual-stage magnification ....................................... 63

Advancements in CT reconstruction: Harnessing Deep Learning for enhanced imaging ...................... 63

Demonstrating the impact of Deep Learning with example applications ............................................. 66

Case studies: Examples from Life Sciences .......................................................................... 72

Microscopy and Deep Learning for neurological disease research ....................................................... 72

Enhancing single-cell analysis with instance segmentation in phase contrast microscopy images ....... 78

Analysis of FIB-SEM volume electron microscopy data ......................................................................... 82

Analysis of mitochondria using Deep Learning .................................................................................... 88


— — — — — — — — — — — — — — —


Page 8


Enhancing the utility of zebrafish models to study infectious diseases using Deep Learning ................ 92

Exploring mouse embryo development with microCT and AI .............................................................. 98

Case studies: Examples from Materials Science ................................................................. 102

Improving microstructure analysis of aluminum oxide with Deep Learning ....................................... 102

Instance segmentation in C45 steel analysis: Improving microstructural insights with AI .................. 110

Summary ............................................................................................................................ 114

ZEISS Microscopy Software Solutions ............................................................................... 116

ZEISS arivis Family of Products .......................................................................................... 116

ZEISS arivis Pro ................................................................................................................................... 116

ZEISS arivis Hub .................................................................................................................................. 117

ZEISS arivis Cloud ............................................................................................................................... 117

ZEISS ZEN Family of Products ........................................................................................... 118

ZEN Microscopy Software ................................................................................................................. 118

ZEISS ZEN core ................................................................................................................................... 119


Other Software Solutions ................................................................................................. 120

ZEISS Labscope .................................................................................................................................. 120

ZEISS DeepRecon Pro ........................................................................................................................ 121


Contributors .................................................................................................................... 122


— — — — — — — — — — — — — — —


Page 9


Cover image: The figure displays a cross-sectional view of an intestinal gut organoid captured at

20X magnification on ZEISS Celldiscoverer 7 and segmented using ZEISS arivis Pro image analysis

software. The image highlights outer cell layer nuclei in pink and the inner luminal nuclei in yellow.

The chapters in this book employ the new product names for arivis products, which have been

rebranded by ZEISS following the acquisition. Specifically, arivis Vision4D is now known as ZEISS

arivis Pro, arivis VisionHub as ZEISS arivis Hub and APEER cloud platform as ZEISS arivis Cloud.


— — — — — — — — — — — — — — —


Page 10


What is AI and why does it matter?

Why you need AI in your research

In 1955, John McCarthy, Assistant Professor

of Mathematics at Dartmouth College, coined

the term ‘Artificial Intelligence’ to represent

the field of thinking machines, including

cybernetics, automata theory, and complex

information processing [1]. Today, Artificial

Intelligence (AI) refers to the collection of

techniques that mimic human intelligence in

performing tasks.

AI has become ubiquitous in the 2020s,

helping us in many aspects of our lives, from

acting as personal assistants and delivering

customized information on social media, to

driving automobiles and trading stocks. In

recent years, it has become popular to use

AI capabilities for diverse image-processing

applications. In research, AI has the potential

to solve many challenges by enabling faster,

more accurate analysis of large amounts of

data. AI can significantly impact biotechnology,

where it can optimize the drug discovery and development process, reducing the time and

cost of bringing new therapies to market.

AI can also benefit diverse image analysis

applications, such as analyzing medical images

to help diagnose diseases and predict which

treatments will likely be most effective for an

individual patient.

While AI technology is rapidly developing,

certain challenges hinder the adoption of AI

in biomedical applications. Developing AI

systems can be expensive for biotech startups,

especially when hiring skilled personnel to

develop and maintain AI systems. There are

also ethical concerns around the use of AI

for biomedical applications. Despite these

objections, AI has seen rapid adoption in the

past decade, primarily driven by its ability

to solve challenges quickly. The exponential

growth in AI-related publications reflects

the technology adoption by the scientific

community (see Figure 1 ).

Figure 1: There has been a nearly exponential growth in the number of biomedical publications related to AI, including

Machine Learning and Deep Learning, since the year 2000. (Data sourced from PubMed January 2024).


— — — — — — — — — — — — — — —


Page 11


AI, Machine Learning, and Deep

Learning: What is the difference?

Artificial intelligence, Machine Learning,

and Deep Learning are related but distinct

terminology (see Figure 2).

Artificial intelligence is the broadest term

and describes techniques that mimic human

intelligence in performing tasks. AI-related

biomedical publications in the past decade

primarily focused on solving challenges

using Machine Learning and Deep Learning

techniques.

Machine Learning is a subfield of AI that

focuses on learning from data and improving

processing efficiency and accuracy over time

with experience. There are several Machine

Learning algorithms available, encompassing

various learning approaches such as

supervised, unsupervised, and reinforcement

learning.

Deep Learning is a Machine Learning technique

that trains artificial neural networks on a large

dataset, allowing them to learn and make

independent, intelligent decisions. These

networks have gained popularity due to their

ability to learn and improve accuracy over time

without explicit programming. They are well

suited to solving image analysis challenges

that require algorithms to identify complex

Figure 2: Deep Learning is a powerful subset of Machine

Learning, which in turn is a subset of the broader field of

artificial intelligence.

Figure 3: Training a model on a small ROI to create the Machine Learning-driven classifier. The figure shows a mouse brain

cross-section imaged at 10x using ZEISS LSM980 with Airyscan. Sample courtesy of Prof. Jochen Herms, LMU München,

Germany.

patterns and features in the data. It is worth

mentioning that, for the purposes of this book,

a distinction is made between Deep Learning

and non-Deep Learning-based algorithms.

The latter algorithms are referred to as

‘conventional’ Machine Learning techniques.

Conventional Machine Learning vs. Deep

Learning for image analysis

Conventional Machine Learning can learn from

a small amount of data, but an expert engineer

needs to handpick features to feed into a

classification algorithm such as Random Forest

[2] or Support Vector Machines [3] (SVM).

Features can be obtained from training images

through the use of digital image filters such

as Sobel, Entropy, and Gabor [4]. Alternatively,

Deep Learning networks trained on extensive

datasets can be utilized as a method for feature


— — — — — — — — — — — — — — —


Page 12


extraction instead of manual feature crafting.

These approaches are ideal for scenarios where

future data is not anticipated to vary much

from the data used to train the model.

For example, a small region of interest (ROI)

from a large image can be used to train a

model, which can then process the entire large

image (see Figure 3 ). Similarly, users can take

random 2D slices from a 3D volume to train a

model to process the whole 3D dataset.

A conventional Machine Learning model

may not work well on datasets distinct from

the training data because the handful of

parameters used by Machine Learning cannot

be tuned to anticipate the variability in future

data. Additionally, a handful of parameters is

insufficient to capture the complexity in certain

data, making the model fail at solving complex

challenges.

For example, conventional Machine Learning

fails at segmenting organelles in an electron

micrograph of a cell where the objects of

interest (e.g., mitochondria) show up against a

busy background (see Figure 4 ).

Deep Learning does not require hand-tuning

of features by an expert. It optimizes millions

of parameters during training without humans

explicitly engineering the features. These

algorithms can learn multiple levels of detail

and significance in the data, allowing them to

identify high-level features important

for the task.

This ability to learn by tuning millions of

parameters using a vast amount of data makes

Deep Learning algorithms generalizable to

handle data with large variations, such as

microscopy data that can vary because of

sample preparation, lighting, background,

objective, etc.

This large number of features also enables

Deep Learning to solve complex challenges,

Figure 4: (a) Slice from a FIB-SEM volume of a HeLa cell

that was high-pressure frozen. The sample is courtesy

of Anna Steyer and Yannick Schwab of EMBL. (b) The

segmentation result from conventional Machine Learning.

A Random Forest algorithm was trained using features

derived by applying the first convolutional layer in the

pre-trained VGG16 model. The model was trained using

the AI toolkit in ZEISS ZEN software. (c) This figure depicts

the same outcome from (b), with the exception that the

output has been cleaned using a conditional random field

to remove isolated pixels. Although the segmentation was

able to detect a majority of pixels from mitochondria, it

failed to identify a significant number of pixels within these

objects, thereby making it challenging to differentiate them

entirely from the background. Furthermore, a large number

of non-mitochondria pixels were erroneously labeled as

mitochondria.

such as segmenting organelles against a busy

background (see Figure 5 ).

However, it is essential to note that Deep

Learning algorithms learn from the given data.

If the training data does not contain sufficient

examples of the variations, the model may not

perform well on those variations.


— — — — — — — — — — — — — — —


Page 13



Figure 5: (a) This picture displays the same slice from a

high-pressure frozen HeLa cell in a FIB-SEM volume as

seen in Figure 4a. The sample is courtesy of Anna Steyer

and Yannick Schwab of EMBL. (b) This image depicts the

result of Deep Learning segmentation. The U-net based

Deep Learning algorithm was trained on ZEISS arivis Cloud

platform. The segmentation results from Deep Learning

outperformed those obtained through conventional

Machine Learning. It is important to note that the pixels

utilized for training the conventional Machine Learning

(as seen in Figure 4) and the Deep Learning (as seen in this

figure) were not the same. Both approaches followed best

practices, as advised by the respective software packages.

school biology classrooms. The app provides

ready-to-use AI-powered solutions, including

fast and effective cell counting, allowing its

users to perform analysis on any microscope

with a camera.

Products for automated image

acquisition and segmentation

In biotech and academic research, users often

automate the image acquisition process to

ensure reproducibility and faster throughput.

ZEN software suite makes high-quality

image acquisition easy on research-grade

ZEISS microscopes. ZEN also provides an ‘AI

toolkit’ for image analysis that allows for

smart microscopy; for example, using AI to

automatically analyze a low-magnification

survey image to detect regions of interest for

high-magnification experiments. This allows for

automated imaging of multiple large samples

without any human intervention.

Microscopy image analysis automation
powered by AI

A survey of PubMed publications since 2020

shows that AI technology has the potential to

solve a wide range of challenges in biomedical

research, including drug discovery [5], radiology

[6], and medical image analysis [7].

Microscopy image analysis as a subfield saw

rapid growth in AI-based applications, primarily

driven by the goal to automate image analysis

pipelines. Researchers have tried to automate

microscopy analysis to remove human bias and

improve throughput since the beginning of

digital image analysis in the 1960s [8].

This book focuses on AI applications for

microscopy image analysis, including various

case studies and the no-code tools from ZEISS

that make AI algorithms accessible to everyone.

AI can be daunting, especially for users with

little or no programming experience. The

no-code interfaces are user-friendly and allow

users with no coding experience to create

automated image analysis pipelines. They also

allow users to build custom workflows without

technical expertise. Labscope, ZEN, and arivis

are software platforms from ZEISS that provide

no-code interfaces that enable AI-powered

automated image analysis for scientific

challenges.

No-code products from ZEISS

ZEISS offers a range of no-code products to

allow users to benefit from AI-powered image

analysis solutions. These tools are accessible

to a range of users, from routine labs and

digital classrooms conducting small-scale

experiments, to biotech and academic

researchers conducting experiments with large,

multi-dimensional datasets.

Products for routine lab tasks

Many routine lab imaging tasks, such as

cell counting, can benefit from AI-powered

automation. Labscope is an easy-to-use

imaging app for routine labs and university or


— — — — — — — — — — — — — — —


Page 14



Learn more about ZEISS arivis Cloud
Train and share Deep Learning models on the cloud for AI-driven image analysis.
www.zeiss.com/arivis-cloud

Automated imaging allows the collection of large amounts of data in a short period,

which can be helpful for applications such as

studying the effects of a particular treatment

on multiple cells or organisms. But the image

analysis throughput must keep up with image

acquisition to maximize the benefit. ZEN’s AI

toolkit can be utilized to enhance application-

specific automated image analysis solutions.

Some sample applications within ZEN include

2D cell counting, cell confluency, gene and

protein expression, as well as automated spot

detection. As the data size, dimensions, and

complexity increase, the analysis can be scaled

up using the arivis software ecosystem.

Data-agnostic image analysis tools

arivis represents an ecosystem of software

solutions designed for data-agnostic image

analysis, allowing the analysis of images in

many formats from different microscope

vendors (and other imaging hardware, such

as MRI and CT). The primary arivis solutions

include ZEISS arivis Pro, ZEISS arivis Hub, and

ZEISS arivis Cloud.

ZEISS arivis Pro is a visualization-centric

multi-dimensional image analysis platform

that provides interactive tools and the ability

to develop automated analysis pipelines for

virtually unlimited-size data with just a few

clicks.

ZEISS arivis Hub enables the design and

execution of large-scale experiments via

parallelized processing using multiple

computational workers on local workstations,

servers, or cloud servers.

Figure 6 provides an overview of the ZEISS
microscopy software ecosystem.

ZEISS arivis Cloud provides the infrastructure

for cloud storage and computation of image

analysis pipelines. Its segmentation tools

enable users to benefit from Deep Learning

without needing to know how to code.

These Deep Learning trained models can be

incorporated into arivis and ZEN image analysis

pipelines.

Figure 6: ZEISS microscopy software ecosystem.


— — — — — — — — — — — — — — —


Page 15


References

1. Wikipedia. Dartmouth workshop. URL: https://en.wikipedia.org/wiki/Dartmouth_workshop

(accessed 24 January 2023).

2. Wikipedia. Random Forest. URL: https://en.wikipedia.org/wiki/Random_forest (accessed 24

January 2023).

3. Wikipedia. Support vector machine. URL: https://en.wikipedia.org/wiki/Support_vector_

machine (accessed 24 January 2023).

4. Wikipedia. Gabor filter. URL: https://en.wikipedia.org/wiki/Gabor_filter (accessed 24 January

2023).

5. Vamathevan J, Clark D, Czodrowski P, Dunham I, Ferran E, Lee G, et al . Applications of Machine

Learning in drug discovery and development. Nat Rev Drug Discov. (2019) 18 (6):463–477. doi:

10.1038/s41573-019-0024-5.

6. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in

radiology. Nat Rev Cancer . (2018) 18(8):500–510. doi: 10.1038/s41568-018-0016-5.

7. Castiglioni I, Rundo L, Codari M, Di Leo G, Salvatore C, Interlenghi M, et al . AI applications

to medical images: From Machine Learning to Deep Learning. Phys Med. (2021) 83 :9–24. doi:

10.1016/j.ejmp.2021.02.006.

8. Prewitt JMS, Mendelsohn ML. The analysis of cell images. Ann N Y Acad Sci. (1966)

128:1035–1053. doi: 10.1111/j.1749-6632.1965.tb11715.x.


— — — — — — — — — — — — — — —


Page 16


An introduction to image segmentation

What is image segmentation?

Image segmentation is the process of dividing

an image into various sections corresponding

to different regions of similarity, referred

to as regions of interest (ROI) in scientific

terminology. These regions represent the

original image in a way that is easier to analyze.

In microscopy image analysis, segmentation is

a key step in many applications. For example,

automated counting, sizing, and tracking

of biological cells enable high-throughput

screening in drug discovery experiments (see

Figure 1 ).

Similarly, grain segmentation of 3D-printed

materials informs and improves the additive

manufacturing process by providing

microstructural insights. Plus, the segmentation

of various minerals and porous structures helps

petrologists understand the movability of

hydrocarbons in sedimentary rocks.

Algorithms for image segmentation

Image segmentation has evolved significantly

over the last five decades, from traditional

techniques in the 1970s and 1980s to using

Deep Learning in recent years. Traditional

methods, such as thresholding, edge

detection, and region growing, relied on

manually tuning parameters making the results

irreproducible and subject to human bias.



Figure 1: Segmentation in a microscopy experiment tracking cell nuclei. (a) Image showing the DAPI-stained cell nuclei in

blue. (b) The nuclei from (a) were segmented by employing global thresholding and then separated using the Watershed

algorithm. The segmented nuclei are depicted in red. (c) The nuclei were segmented and tracked throughout the time

series, with each nucleus and its corresponding track displayed in randomly assigned colors. (d) A plot showing the mean

squared displacement of selected nuclei.

Otsu’s segmentation method

A key method, called Otsu’s method, provides

a way to perform automatic segmentation

using the histogram threshold approach [1].

Otsu’s algorithm returns a single intensity

threshold value that separates pixels into either

foreground or background classes.

Otsu’s algorithm is a global thresholding

method and assumes the image is

homogeneous and follows a bimodal

distribution.

Therefore, this approach may not be ideal for

noisy images or images showing multiple regions with

similar mean gray levels but varying textures.

However, its simplicity and computationally

fast nature made it the preferred choice for

simple segmentation tasks such as nuclei

segmentation in fluorescence microscopy

images (see Figure 2 ).
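
As a concrete illustration outside the ZEISS ecosystem, the following is a minimal sketch of Otsu thresholding in Python, assuming scikit-image is available; its bundled human_mitosis sample image stands in for a DAPI-stained fluorescence micrograph.

```python
# Minimal sketch of Otsu thresholding (scikit-image assumed installed).
# data.human_mitosis() is a bundled sample image standing in for a
# DAPI-stained fluorescence micrograph.
from skimage import data, filters

image = data.human_mitosis()
threshold = filters.threshold_otsu(image)   # single global intensity threshold
mask = image > threshold                    # foreground (nuclei) vs. background
print(f"Otsu threshold: {threshold}; foreground fraction: {mask.mean():.2%}")
```

Because the threshold is derived from the intensity histogram alone, the same recipe fails when foreground and background gray levels overlap, as noted above.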

The Watershed algorithm

Otsu segmentation only divides the image

into background and foreground, but it

cannot distinguish between objects that touch

one another. Additional image processing

techniques, like the Watershed algorithm [2],

are often used to separate touching objects.

The Watershed algorithm separates objects by

creating boundaries between regions ‘flooded’

from different markers, hence its name (see

Figure 3 ).


— — — — — — — — — — — — — — —


Page 17



Figure 2: Otsu-based segmentation of a fluorescence

micrograph. (a) Fluorescence micrograph of a sample

stained with DAPI showing nuclei in blue. (b) Otsu

segmentation shows the nuclei regions in white.

However, a disadvantage of the Watershed

method is that it may break down a single

object into several pieces, depending on its

shape.
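
The standard marker-based recipe can be sketched in a few lines of Python; this is an illustrative scikit-image/SciPy version, not the implementation used in ZEISS software.

```python
# Minimal sketch of marker-based Watershed separation of touching nuclei
# (scikit-image/SciPy; an illustrative recipe, not the ZEISS implementation).
import numpy as np
from scipy import ndimage as ndi
from skimage import data, feature, filters, segmentation

image = data.human_mitosis()
mask = image > filters.threshold_otsu(image)      # Otsu foreground mask

# The distance to the background peaks at object centers; each peak
# becomes one marker
distance = ndi.distance_transform_edt(mask)
coords = feature.peak_local_max(distance, min_distance=5, labels=mask)
markers = np.zeros(mask.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# 'Flood' the inverted distance map from the markers; boundaries appear
# where floods from different markers meet
labels = segmentation.watershed(-distance, markers, mask=mask)
print(f"{labels.max()} separated objects")
```

The min_distance parameter controls the trade-off discussed above: too small and single objects break into pieces, too large and touching objects stay merged.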

Machine Learning segmentation

techniques

The 2000s saw the introduction of

conventional Machine Learning techniques for

image segmentation, including decision trees,

Random Forests, and Support Vector Machines

(SVM). These methods improved traditional

techniques by incorporating contextual

information and learning from data, making

it possible to automate the segmentation of

images with complex or varied intensity values

and textures. Conventional Machine Learning

works by training a classifier (e.g., an SVM) on

various attributes associated with the training

data. For images, these attributes can be

defined via features extracted from them.

Digital image filters can be engineered to

extract features representing various intensities

Figure 3: (a) Otsu-segmented binary image. (b)

Otsu-segmented binary image followed by the Watershed

separation of objects. The separation between grouped

objects is evident in this image. and textural information in images. For

example, the Sobel filter [3] calculates the

image intensity gradient at any point and

generates an image emphasizing edges.

Similarly, the Gabor filter [4] combines

sinusoidal and Gaussian functions to describe

and show different textures. Adjusting filter

parameters can create countless Gabor kernels

that serve as feature extractors. For instance, a

kernel with theta set to π/2 acts as a band-pass

filter that emphasizes horizontal features in the

image. Likewise, a kernel with theta set to π

accentuates vertical features.

Figure 4 shows the application of these kernels

on a cross-section of a NAND flash memory

chip, illustrating that modification of the theta

value can emphasize features oriented in a

specific direction.
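
Building such a feature stack takes only a few lines; this is a minimal sketch using scikit-image's Gabor filter on a stock photograph as a stand-in for a micrograph. Note that orientation conventions for theta differ between implementations, so the horizontal/vertical mapping described above depends on the software used.

```python
# Minimal sketch of Gabor-based feature extraction (scikit-image). Each
# orientation theta yields one feature map; orientation conventions vary
# between implementations.
import numpy as np
from skimage import data, filters

image = data.camera().astype(float)        # stand-in for a micrograph

feature_maps = []
for theta in (0.0, np.pi / 4, np.pi / 2):  # three kernel orientations
    real, _imag = filters.gabor(image, frequency=0.2, theta=theta)
    feature_maps.append(real)

stack = np.stack(feature_maps, axis=-1)    # (H, W, n_features) feature stack
print(stack.shape)
```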

Instead of handcrafting the features, Deep

Learning networks trained on large datasets

can also extract features from an image.

For example, the VGG16 network [5] trained

on the ImageNet [6] dataset can extract many

features from images of a NAND flash memory

chip (see Figure 5 ). These features can be used

as input information for conventional Machine

Learning algorithms capable of learning how to

classify pixels (segmentation) or entire images

(classification).
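
A minimal sketch of this deep-feature approach, assuming TensorFlow/Keras's bundled VGG16 and scikit-learn (illustrative of the idea; not the ZEISS ZEN AI toolkit itself). The roi and roi_labels arrays in the usage comment are hypothetical placeholders for user-provided ground truth.

```python
# Minimal sketch of deep-feature extraction feeding a conventional Machine
# Learning classifier (Keras VGG16 + scikit-learn Random Forest; an
# illustration, not the ZEISS implementation). The first convolutional block
# keeps the input's height and width, so every pixel gets a 64-dim feature.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.ensemble import RandomForestClassifier

base = VGG16(weights="imagenet", include_top=False)
extractor = Model(inputs=base.input,
                  outputs=base.get_layer("block1_conv2").output)

def pixel_features(rgb):
    """(H, W, 3) image -> (H*W, 64) per-pixel feature matrix."""
    x = preprocess_input(rgb[np.newaxis].astype("float32"))
    fmap = extractor.predict(x, verbose=0)[0]
    return fmap.reshape(-1, fmap.shape[-1])

# Hypothetical annotated ROI: 'roi' is an image, 'roi_labels' its pixel classes
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(pixel_features(roi), roi_labels.ravel())
# seg = clf.predict(pixel_features(image)).reshape(image.shape[:2])
```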

Although it is possible to use conventional

Machine Learning techniques for a broad range

of image segmentation, their effectiveness

decreases as the images become more

complex in shape and texture. Furthermore,

these algorithms tend to perform poorly on

images that vary in intensity compared to

the training images, making them poorly

generalizable to other datasets.


— — — — — — — — — — — — — — —


Page 18



Figure 5: The use of a pre-trained Deep Learning model as a feature extractor. (a) A cross-section of a NAND flash

memory chip imaged using ZEISS Crossbeam 550 FIB-SEM. (b) The VGG16 neural network was pre-trained on the ImageNet

dataset. (c) Features obtained from the input image using the second convolutional block of the pre-trained VGG16

network. See reference 5 for technical details.

Figure 4: Using the Gabor filter to extract features from a micrograph of NAND flash memory. (a) A cross-section of a

NAND flash memory chip imaged using ZEISS Crossbeam 550 FIB-SEM. (b) Digital filter kernels generated from adjusting

Gabor parameters. (c) The features that are produced when the appropriate Gabor kernels are applied. One kernel

emphasizes the input image’s horizontal details (top), and the other highlights the vertical details (bottom).

Deep Learning algorithms for image

segmentation

Deep Learning algorithms demonstrate greater

generalizability than conventional Machine

Learning algorithms.

A convolutional neural network (CNN) is a

Deep Learning algorithm explicitly designed

for image processing tasks. One of the

most popular CNN architectures is U-net,

introduced in 2015 by Olaf Ronneberger et

al. [7]. It is widely used for biomedical image

segmentation.

The U-net architecture is particularly good at

image segmentation because it can learn both

local and global features of images.
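
The structure is easiest to see in code. The following is a deliberately tiny U-net-style sketch in PyTorch (an illustrative toy, not the architecture used in ZEISS products): the skip connection concatenates full-resolution local features with upsampled global features, which is why U-net handles both scales well.

```python
# Minimal sketch of a U-net-style encoder-decoder (PyTorch; an illustrative
# toy, far shallower than production U-nets).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)            # local features
        self.enc2 = conv_block(16, 32)               # coarse, global features
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)               # 16 skip + 16 upsampled
        self.head = nn.Conv2d(16, n_classes, 1)      # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.up(e2)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # the skip connection
        return self.head(d1)

# A 512x512 RGB tile in, a 512x512 map of segmentation logits out
logits = TinyUNet()(torch.randn(1, 3, 512, 512))     # -> (1, 1, 512, 512)
```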

While Deep Learning is a powerful technique,

it requires a lot of labeled data and

computational resources for training. But once

trained, Deep Learning models can be used

for extended periods due to their excellent

generalizability. ZEISS provides software

solutions to assist researchers in addressing the

difficulties of analyzing massive amounts of


— — — — — — — — — — — — — — —


Page 19


data with limited resources, enabling them to

achieve reproducible results at a quicker pace.

The ZEISS software ecosystem

Each method discussed has its strengths

and weaknesses, and the choice of method

depends on the application and the type of

image being analyzed. The ZEISS software

ecosystem offers a variety of powerful tools

to train and integrate conventional Machine

Learning and Deep Learning models into image

processing and analysis pipelines. The key

software products covered in this book are:

■ZEISS arivis suite: Designed for scalable data-agnostic image analysis.

ZEISS arivis Cloud: Provides user-friendly access to Deep Learning tools, enabling the training of custom models for image segmentation tasks.

ZEISS arivis Pro: Visualization-centric multidimensional image analysis software.

ZEISS arivis Hub: Execution of large-scale experiments via parallelized processing using multiple computational workers.

■ZEN and ZEN core: Universal software interfaces for image acquisition and analysis on advanced microscopes from ZEISS.

■Labscope: An easy-to-use imaging app for routine labs, universities, and schools.

■ZEISS DeepRecon Pro: State-of-the-art Deep Learning-based reconstruction for ZEISS X-ray Microscope (XRM) or microCT.

“ZEISS provides software solutions to assist researchers in addressing the difficulties of analyzing massive amounts of data with limited resources.”

References

1. Otsu N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans Syst Man Cybern. (1979) 9(1):62–66. doi: 10.1109/TSMC.1979.4310076.

2. Wikipedia. Watershed (image processing). URL: https://en.wikipedia.org/wiki/Watershed_(image_processing) (accessed 14 February 2023).

3. Wikipedia. Sobel operator. URL: https://en.wikipedia.org/wiki/Sobel_operator (accessed 31 January 2023).

4. Wikipedia. Gabor filter. URL: https://en.wikipedia.org/wiki/Gabor_filter (accessed 31 January 2023).

5. Simonyan K and Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. (2014) arXiv:1409.1556. doi: 10.48550/arXiv.1409.1556.

6. Deng J, Dong W, Socher R, Li L-J, Li K, and Fei-Fei L. ImageNet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA. (2009) 248–255. doi: 10.1109/CVPR.2009.5206848.

7. Ronneberger O, Fischer P, and Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. (2015) arXiv:1505.04597. doi: 10.48550/arXiv.1505.04597.


— — — — — — — — — — — — — — —


Page 20


AI in ZEISS arivis software for scalable automated analysis

In the previous chapter, we introduced ZEISS

arivis, a versatile software suite designed for

data-agnostic image analysis. This powerful

suite can handle images from various sources,

including different microscope vendors, MRI

scanners, and CT scanners, across multiple file

formats. The primary products within the arivis

suite are:

■ZEISS arivis Cloud.

■ZEISS arivis Pro.

■ZEISS arivis Hub.

This chapter starts by focusing on ZEISS arivis

Cloud, which provides user-friendly access

to advanced Deep Learning tools for training

custom models tailored to image segmentation

tasks. These models can be seamlessly

integrated into ZEN, ZEN core, ZEISS arivis Pro

and ZEISS arivis Hub, enabling automated,

scalable analysis of multidimensional datasets.

Additionally, we will discuss the AI capabilities embedded within ZEISS arivis Pro and ZEISS arivis Hub, as well as the ability to create ground truth labels in three dimensions (3D) using the immersive ZEISS arivis Pro VR environment.

Figure 1: A re-created U-net architecture based on the original paper [2], which takes in an RGB input image with dimensions of 512x512 and produces a segmented image with the same dimensions for a chosen class. The ZEISS arivis Cloud implementation dynamically chooses the tile size based on the images, ranging from 1024x1024 for larger images to 128x128 for smaller images.

Training Deep Learning models using

ZEISS arivis Cloud

ZEISS arivis Cloud helps users annotate images

and train Deep Learning models for image

segmentation. Users can use the resulting

models on both ZEISS arivis and ZEN platforms.

ZEISS arivis Cloud offers a user-friendly interface

that allows users to establish the ground

truth by simply painting pixels and training

a personalized model by clicking the “Train”

button. The following link leads to a video

tutorial that explains the process of custom

Deep Learning model training for image

segmentation using ZEISS arivis Cloud:

bit.ly/arivis-deep.


ZEISS arivis Cloud employs the widely

recognized U-net architecture (see Figure 1)


— — — — — — — — — — — — — — —


Page 21


Figure 2: The training interface in ZEISS arivis Cloud showing partial annotations for mitochondria (yellow) and

background (purple dots). The image shows a slice from a FIB-SEM volume of a HeLa cell that was high-pressure frozen.

The sample is courtesy of Anna Steyer and Yannick Schwab of EMBL.

for semantic segmentation with encoder and

decoder modifications to enhance speed

and accuracy. For instance segmentation,

ZEISS arivis Cloud uses Mask2Former [1]. Both

architectures have been adapted to work with

microscopy data and to enable segmenting

images with any number of channels. The

“loss functions” for both approaches have

also been customized for training with partial

annotations, further improving the efficiency

and accuracy of the training process.

Note: As Deep Learning technology evolves,

the specific algorithms used for semantic and

instance segmentation may change in the

future.

Several other improvements have been made in

the Deep Learning training and segmentation

process to make it user-friendly and accessible

to individuals of any skill level. Examples

include:

■Using pre-trained weights.

■Allowing for partial annotations.

■Automatic definition of boundary annotations.
■Using image augmentation techniques.

■Selecting the segmentation tasks

“Semantic Segmentation” and “Instance

Segmentation”.

■Implementation of smooth tiling.

Pre-trained weights

Unlike conventional Machine Learning, Deep

Learning can require a large amount of data

for training. However, ZEISS arivis Cloud is

equipped with preloaded, pre-trained weights,

allowing fast model training from less data.

It is recommended to start with as few as 20

annotations. The users can then add additional

labels based on the outcome of the initial

segmentation, thus tweaking their trained

model with only the necessary effort.

Partial annotations

Traditional training methods for Deep Learning-

based semantic or instance segmentation

algorithms often require extensive labeling.

Every pixel in each training image must be

annotated, including any over-represented

areas, which can be a time-consuming and

inefficient process.


— — — — — — — — — — — — — — —


Page 22


Figure 3: Augmented images and the respective masks produced while training a model on ZEISS arivis Cloud. The image

shows a slice from a FIB-SEM volume of a HeLa cell that was high-pressure frozen. The sample is courtesy of Anna Steyer

and Yannick Schwab of EMBL.

ZEISS arivis Cloud introduces a more efficient

method called “partial annotations” for

segmentation as part of its Deep Learning

workflow. It allows users to concentrate on

under-represented regions in training images

to make the process more efficient. This is

particularly useful for microscopy applications

where images are usually large (see Figure 2 ).

Automatic boundary annotation further

optimizes the usefulness of partial annotations.
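
The key idea behind partial annotations can be expressed as a loss that simply skips unannotated pixels. The sketch below uses PyTorch's ignore_index for this; the customized loss functions in ZEISS arivis Cloud are not public, so this only illustrates the principle.

```python
# Minimal sketch of training with partial annotations: a loss that ignores
# unannotated pixels (PyTorch; the actual ZEISS arivis Cloud loss is not
# public, this illustrates the principle only).
import torch
import torch.nn.functional as F

def partial_ce_loss(logits, labels):
    """logits: (N, C, H, W); labels: (N, H, W) class ids, -1 = unannotated.
    Pixels labeled -1 contribute neither loss nor gradient."""
    return F.cross_entropy(logits, labels, ignore_index=-1)

# Example: only two small patches of a 64x64 image carry annotations
logits = torch.randn(1, 2, 64, 64, requires_grad=True)
labels = torch.full((1, 64, 64), -1, dtype=torch.long)
labels[0, 30:34, 30:34] = 1    # a few foreground pixels
labels[0, 0:4, 0:4] = 0        # a few background pixels
loss = partial_ce_loss(logits, labels)
```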

Automatic boundary annotation

Segmenting the central pixels of objects is

easier than segmenting the edge because

the boundary between objects and the

background is often uncertain. Thus, it is

crucial that users properly annotate them

during the training phase. ZEISS arivis Cloud

makes it convenient for the user to define

these boundaries by automatically cutting

out annotated objects from the surrounding

background (see Figure 2 ).

Image augmentation

Image augmentation improves the

generalizability of a trained model by giving the

algorithm variations of the training data, such

as rotated, zoomed, and stretched images. This

helps improve model accuracy when it analyzes

new data because it might resemble the

transformed images used during training.

ZEISS arivis Cloud performs various image
augmentations in the background (see Figure 3).
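
As a rough illustration of what such augmentation involves, the sketch below applies one random rotation, zoom, and flip identically to an image and its label mask (scikit-image; the exact transformations ZEISS arivis Cloud applies are internal details).

```python
# Minimal sketch of paired image/mask augmentation (scikit-image; an
# illustration, not the ZEISS pipeline). The same random transform must hit
# image and mask, with nearest-neighbor interpolation for the mask so that
# label values stay intact.
import numpy as np
from skimage import transform

def augment(image, mask, rng=None):
    rng = rng or np.random.default_rng()
    tf = transform.AffineTransform(
        scale=rng.uniform(0.9, 1.1),                # random zoom
        rotation=np.deg2rad(rng.uniform(-30, 30)),  # random rotation
    )
    image_aug = transform.warp(image, tf.inverse, order=1, preserve_range=True)
    mask_aug = transform.warp(mask, tf.inverse, order=0, preserve_range=True)
    if rng.random() < 0.5:                          # random horizontal flip
        image_aug, mask_aug = image_aug[:, ::-1], mask_aug[:, ::-1]
    return image_aug, mask_aug
```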

Choosing the appropriate segmentation method

On ZEISS arivis Cloud, users can select the

segmentation approach appropriate to

their desired application. There are two

segmentation options:

1. Semantic segmentation (pixel-based).

2. Instance segmentation (object-based).

For example, when classifying regions of tissue,

semantic segmentation enables users to assign

each pixel to a specific tissue class. When

segmenting nuclei, instance segmentation

is necessary as it allows the user to identify

and outline each individual nucleus and, for

example, extract morphological parameters

from them. ZEISS arivis Cloud offers both

options, giving the user the freedom to achieve

their image segmentation goals.

Figure 4 shows the results obtained using

semantic and instance segmentation

approaches. Figure 4a displays the original

phase contrast image and Figure 4b displays

the semantic segmentation result, where every

pixel corresponding to the cells is colored

purple.

While this approach highlights the pixels

occupied by cells effectively, it fails to

distinguish each individual cell as a separate

object, which is crucial for quantifying and


— — — — — — — — — — — — — — —


Page 23


Figure 4: Comparison of semantic and instance segmentation approaches for phase contrast cells. (a) Original phase

contrast image. (b) Semantic segmentation result highlighting pixels corresponding to the cells in purple. (c) Instance

segmentation result, clearly delineating each individual cell as a separate object, even when they are touching each other

(shown in random colors).

analyzing them at the object level. In contrast,

Figure 4c shows the result from instance

segmentation, where each individual cell is

clearly segmented as a distinct object. This

result emphasizes the benefit of instance

segmentation when object-level information is

required for analysis.

Smooth tiling

Deep Learning-based segmentation uses a

lot of device memory. To address this, it is

common practice to divide large images into

smaller patches (tiles) and combine them

back into the large image. However, simply

arranging the patches back into a large

image can result in edge artifacts where the

continuity of objects may be disrupted (see

Figure 5 ). To avoid this, ZEISS arivis Cloud uses

predictions from overlapping tiles that are

blended by assigning a weighting coefficient

to pixels. Satisfactory blending is achieved by

assigning larger weights to pixels closer to the

tile center.

This method is called “smooth tiling”. The

logic behind it is that pixels closer to the tile

center provide more image context and are

considered more reliable.
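
The blending logic can be sketched in a few lines of numpy: each tile's prediction is multiplied by a window that peaks at the tile center, accumulated, and normalized. This is an illustration of the principle only; the tile sizes and weighting function used by ZEISS arivis Cloud are not specified here, and a raised-cosine window is assumed.

```python
# Minimal sketch of smooth tiling: overlapping tile predictions are blended
# with center-weighted averaging (numpy; the actual ZEISS weighting is not
# public, a raised-cosine window is assumed).
import numpy as np

def tile_origins(length, tile, step):
    """Tile start positions along one axis; the last tile is clamped to the border."""
    last = max(length - tile, 0)
    pos = list(range(0, last + 1, step))
    if pos[-1] != last:
        pos.append(last)
    return pos

def center_weight(tile):
    """2D window that is ~1 at the tile center and falls off toward the edges."""
    w = np.hanning(tile + 2)[1:-1]      # drop the zero endpoints
    return np.outer(w, w)

def predict_smooth(image, predict_tile, tile=256, overlap=128):
    """Blend per-tile predictions; assumes the image is at least one tile large."""
    h, w = image.shape[:2]
    acc, norm = np.zeros((h, w)), np.zeros((h, w))
    wmap = center_weight(tile)
    for y in tile_origins(h, tile, tile - overlap):
        for x in tile_origins(w, tile, tile - overlap):
            pred = predict_tile(image[y:y+tile, x:x+tile])
            acc[y:y+tile, x:x+tile] += pred * wmap
            norm[y:y+tile, x:x+tile] += wmap
    return acc / norm                   # weighted average per pixel
```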

Tips for achieving a reliable Deep

Learning model

ZEISS arivis Cloud offers a range of features

that simplify Deep Learning training.

Additionally, the user can make many decisions

to further streamline the model creation process. Here are some suggestions to achieve

a reliable Deep Learning model.

Standardize the imaging conditions

The complexity of the segmentation task is

impacted by variations in imaging conditions.

Standardizing the imaging parameters

facilitates the Deep Learning algorithm’s

learning of the task, as fewer annotations are

required. Adhering to the following guidelines

ensures optimal training of the model on ZEISS

arivis Cloud.

■The microscope illumination settings

should remain consistent between images

to maintain similar intensity histograms

between them.

■The magnification and binning should be

kept consistent so that identical objects have similar

pixel sizes.

Choose a magnification and resolution that is
sufficient to visualize the structures of interest
without unnecessarily increasing resolution.

An unnecessarily high resolution makes it

harder for the model to detect the structures of

interest and increases the processing times.

Avoid complexity when defining classes

Try to avoid defining multiple classes to

segment similar objects with minor differences.

For example, instead of training an algorithm

to segment small and large objects, it may be

better to train an algorithm to segment


— — — — — — — — — — — — — — —


Page 24


all objects and use the size information to

separate them into distinct classes after the

initial segmentation. While segmenting objects

into specific classes during the training process

may seem easier and obvious, post-processing

can be more efficient because it enables

a more generic model that can handle a

wide range of objects to adapt to different

applications.
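
This post-processing step takes only a few lines once objects are labeled, for example with scikit-image's regionprops (illustrative; in the ZEISS products, object features are exposed through their own measurement tables):

```python
# Minimal sketch of post-segmentation classification by size, using
# scikit-image regionprops on a labeled image (illustrative of the
# "segment everything first, classify afterwards" strategy).
from skimage import measure

def split_by_area(label_image, area_cutoff):
    """Return the labels of small and large objects in a labeled image."""
    small, large = [], []
    for region in measure.regionprops(label_image):
        (small if region.area < area_cutoff else large).append(region.label)
    return small, large
```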

Start simple and increase the
complexity as needed

Achieving robust segmentation of many classes

across different imaging conditions is the goal

of developing a Deep Learning segmentation

model. However, it is challenging for the

algorithm to learn all the complexities when

provided only a few annotated objects within

the large parameter space.

A data-centric approach [1] can quickly develop

a robust Deep Learning segmentation model.

ZEISS arivis Cloud provides the necessary tools

to construct the perfect training dataset using

the data-centric strategy (see Figure 6 ). Such

datasets include just the correct number of

annotations in crucial areas to attain the level

of segmentation robustness that the user

desires.

Figure 5: (a) Cryo-electron microscopy image of a cell showing mitochondria. A Deep Learning model has been trained

using ZEISS arivis Cloud to segment faint mitochondria from the background. (b) The segmentation result without

smooth blending produces noticeable artifacts along the patch edges, leading to incorrect classification of edge pixels

as mitochondria. Improbably straight edges are also clearly visible, as indicated by the black arrows. (c) The seamless

integration of patches using smooth tiling creates a segmented image without any visible anomalies. Sample courtesy of

Dr. York-Dieter Stierhof from Eberhard Karl University of Tübingen.

It is recommended to approach complex

tasks by starting with a small, straightforward

portion of data before gradually increasing

the complexity in order to build the complete

annotated dataset.

Recommended approach for

segmentation

1. Begin by selecting a single class to segment.

2. Approximately 20 objects or regions in similar images should be annotated (for example, from a single experiment).

3. After the training, the accuracy of the algorithm at segmenting the first class should be evaluated.

4. To improve the robustness of the algorithm, images with more variability (such as from different experiments) should be added and steps 2 and 3 should be repeated.

5. Once the first class has been successfully segmented across all images, additional classes should be annotated and trained by repeating these five steps.


— — — — — — — — — — — — — — —


Page 25



This approach allows the user to concentrate

on annotating challenging features rather

than wasting time on easy ones. Gradually

increasing the complexity helps the user

develop intuition about which image features

are difficult for the algorithm to learn.

In microscopy, variations in data sets can

arise because of different sample preparation

procedures and experimental conditions.

Examples include the illumination source,

magnification, and duration of observation.

Therefore, to create generalized models, it

is essential that the final annotated data set
reflects the diversity expected in future data.

Additional tips to enhance image
segmentation efficiency

1. It is advised not to waste time annotating

areas where the algorithm has already

demonstrated mastery.

2. The recent training segmentations should

be examined to determine areas where the

algorithm struggles, and these areas should

be given priority for annotating.

3. Recognizing rare classes can be a

challenge for the algorithm. To improve

its understanding of these classes,

finding additional training images that

include examples of these rare classes is

recommended.

Using AI-trained models in applications

Models trained with ZEISS arivis Cloud can be

incorporated into image analysis workflows

across various ZEISS software packages,

including ZEN and arivis. The models can

be used for image segmentation on ZEISS

arivis Cloud, which is especially effective for

applications where no further image analysis is

needed beyond the initial segmentation and

Figure 6: Model-centric versus data-centric model development for microscopy applications.

Figure 7: FIB-SEM image of a high-pressure frozen HeLa

cell segmented in ZEISS arivis Pro for various organelles

using Deep Learning models trained on ZEISS arivis Cloud.

Sample courtesy of Anna Steyer and Yannick Schwab.


— — — — — — — — — — — — — — —


Page 26


simple measurements. Users get a report that

details over 18 morphological measurements

extracted from the segmented objects,

including the area and diameter of each object.

Many applications will require advanced post-

segmentation image analysis. ZEISS arivis Cloud

models can be downloaded and used with the

ZEN, ZEN core, ZEISS arivis Pro, and ZEISS arivis

Hub image analysis pipelines. These products

offer customizable, push-button solutions for

most applications. On-microscope analysis of

images captured using ZEISS microscopes is

achievable using AI-powered image analysis

pipelines in ZEN. Large multi-dimensional

datasets can be imported into ZEISS arivis Pro

(see Figure 7) for automated analysis regardless

of whether they were collected using ZEISS or

non-ZEISS microscopes. Automated analysis

can be performed on these datasets and scaled

up for faster processing with ZEISS arivis Hub.

The rest of this chapter provides an overview of

AI-powered tools in ZEISS arivis Pro and ZEISS

arivis Hub.

AI in ZEISS arivis Pro for automated

image analysis

ZEISS arivis Pro is a visualization-centric

platform designed for multi-dimensional image

analysis, off ering interactive tools and the

ability to develop automated analysis pipelines

for datasets of virtually unlimited size with just

a few clicks.

The software provides users with a

comprehensive range of segmentation options,

from traditional thresholding techniques to

advanced Deep Learning models. This includes

tools for detecting round objects using blob

finder, membrane-based segmentation that

leverages bright outlines, feature-based

Machine Learning, and state-of-the-art Deep

Learning approaches (see Figure 8 ). This diverse

Case Study: AI for Image Analysis in vEM
www.zeiss.com/ai-for-vem

Figure 8: The segmentation window in ZEISS arivis Pro offering a comprehensive range of options from traditional techniques such as intensity thresholding and color-based segmentation to advanced methods such as blob finding, Watershed segmentation, Machine Learning-based segmentation, Deep Learning segmentation (ZEISS arivis Cloud-trained models, user-defined models, and pre-trained Cellpose models), membrane-based segmentation, and seeded region growing. This diverse selection enables users to choose the most appropriate tool tailored to their specific image analysis requirements.

selection empowers users to choose the most

appropriate tool for their specific analysis

requirements.

In ZEISS arivis Pro, automated image analysis

routines are configured using a “pipeline”

concept. This approach allows users to arrange

image processing and analysis operations

into a seamless workflow, with data input

automatically from the previous step and

output to the next stage. This streamlined

process ensures reproducible analysis between

multiple datasets.


— — — — — — — — — — — — — — —


Page 27


Segmentation is a critical operation that is

often irreproducible. Manual approaches,

such as threshold-based segmentation

which requires user input, can introduce

irreproducibility, user bias, and impact

the overall throughput of image analysis.

Fortunately, AI technologies integrated into

ZEISS arivis Pro, including both feature-based

conventional Machine Learning and Deep

Learning, enable automatic segmentation even

for complex images.

Conventional Machine Learning for

image segmentation in ZEISS arivis Pro

As previously discussed, conventional Machine

Learning involves extracting various features

from the training data by applying digital

filters and training a Machine Learning model

based on these extracted features. Compared

to Deep Learning, Machine Learning is

faster to train and requires smaller training

datasets, making it a preferred choice for

applications where the complexity of features

in the image is low, for example, segmenting

bright objects against a gray background.
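To make the idea concrete, the following is a minimal sketch of feature-based pixel classification using open-source Python libraries (scikit-image and scikit-learn). It mirrors the concept described here rather than the actual ZEISS arivis Pro implementation; the filter set and class labels are illustrative assumptions.

```python
# Illustrative sketch: feature-based pixel classification in the spirit of
# the Machine Learning Trainer (not the ZEISS arivis Pro implementation).
import numpy as np
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Stack simple intensity, edge, and blob-like filter responses."""
    return np.stack([
        img,                                         # raw intensity
        filters.gaussian(img, sigma=2),              # smoothed intensity
        filters.sobel(img),                          # edge response
        filters.difference_of_gaussians(img, 1, 4),  # blob/texture response
    ], axis=-1)

def train_and_segment(img, labels):
    """labels: 0 = unannotated, 1 = background, 2 = object (e.g. soma)."""
    feats = pixel_features(img)
    annotated = labels > 0
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(feats[annotated], labels[annotated])
    flat = feats.reshape(-1, feats.shape[-1])
    return clf.predict(flat).reshape(img.shape)
```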

Figure 9: Machine Learning Trainer interface in ZEISS arivis Pro showing a 2D slice of Nissl-stained neuronal soma imaged

using micro-optical sectioning tomography (MOST). The bright soma objects are annotated in yellow as the class of

interest, while background regions are annotated in cyan. Additional information about this dataset can be found in the

corresponding citation [4].

Figure 9 shows the Machine Learning Trainer

interface in ZEISS arivis Pro. The central image

displays a two-dimensional (2D) slice from a

volumetric dataset of Nissl-stained neuronal

soma imaged using micro-optical sectioning

tomography (MOST). The soma appears as

bright rounded objects scattered throughout

the image. This is a classic example of an

image that is too challenging to segment using

traditional thresholding methods, but which

can be segmented using conventional Machine

Learning. The training process involves defining

the classes, in this case, background and

Neuronal Soma, followed by annotating the

respective pixels to establish the ground truth.

In the figure, a handful of neuronal soma are

annotated in yellow, while background regions

are annotated in cyan.

ZEISS arivis Pro provides a comprehensive suite

of digital filters for feature extraction from

multichannel images, including intensity, edge,

texture, and orientation-based filters, available

in both 2D and 3D formats. The software

allows users to preview the effect of applying

specific filters to their data. For example, Figure

10 shows the response obtained by applying a

Texture filter with a medium kernel size on the

Nissl-stained neuronal soma example shown in

Figure 9.


— — — — — — — — — — — — — — —


Page 28


Figure 10: Feature matrix in the Machine Learning Trainer of ZEISS arivis Pro. Users can select the appropriate feature

set based on the microscopy image type and preview the effects of applying different filters, such as the response from a

Texture filter with a medium kernel size.

Figure 11: Machine Learning-based segmentation of neuronal soma. (a) 2D slice of Nissl-stained neuronal soma. (b)

Pixel-level segmentation of soma (yellow) using a trained Machine Learning model. (c) Object-level segmentation of

individual soma (random colors) after applying a Watershed algorithm. Note that many objects are not fully separated as

the sensitivity of the Watershed algorithm was set to a conservative value to avoid over-segmentation of objects.

Using the features generated from the

annotated pixels shown in Figure 9 , a Random

Forest Machine Learning model was trained.

Given the small training dataset, the training

process finished within seconds. The trained

model was subsequently integrated into an

image analysis pipeline in ZEISS arivis Pro to

segment the entire volumetric dataset. This

dataset consisted of 86 planes (2D slices), each

measuring 571 × 571 pixels.

Figure 11a displays the same 2D slice shown in

Figure 9 . Since conventional Machine Learning

is designed for pixel-level segmentation,

it performed excellently in segmenting all

pixels corresponding to soma, depicted in

yellow in Figure 11b. To convert this pixel-level segmentation into object segmentation, a

Watershed algorithm was applied to separate

touching objects. Figure 11c shows the

individually segmented soma in random colors.
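The separation step can also be sketched with open-source tools. The snippet below uses a distance-transform Watershed from scikit-image; it is an illustrative stand-in for the arivis pipeline operation, and the min_distance parameter plays the role of the conservative sensitivity setting mentioned here.

```python
# Illustrative sketch: splitting touching objects in a binary mask with a
# distance-transform Watershed (scikit-image), analogous to the step above.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_objects(mask, min_distance=7):
    """mask: boolean pixel-level segmentation; returns a label image."""
    distance = ndi.distance_transform_edt(mask)
    # Larger min_distance = more conservative splitting (less over-segmentation)
    peaks = peak_local_max(distance, min_distance=min_distance,
                           labels=mask.astype(int))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)
```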

The Machine Learning-based segmentation

of soma, followed by the Watershed-based

separation of objects, was applied to all 86

planes in the dataset. This process segmented

all the soma within the entire volumetric

dataset. Figure 12 displays a volumetric

rendering of the segmented 3D soma overlaid

onto the original dataset. ZEISS arivis Pro

automatically performs 3D measurements and

reports various morphological and intensity

measurements for every segmented object

within the dataset.


— — — — — — — — — — — — — — —


Page 29


This conventional Machine Learning approach

is recommended for segmenting large regions

of interest, such as auto-fluorescent tissue

sections. To segment individual objects,

conventional Machine Learning can be used in

conjunction with Watershed-based separation

to separate the objects, as illustrated in

this example. However, depending on the

sensitivity with which the Watershed algorithm

is applied, objects may not be fully separated,

as shown in Figure 11c, or may be over-

segmented at high sensitivity settings. Therefore, for

robust and automated segmentation, a Deep

Learning-based approach is recommended.

Deep Learning for image segmentation

in ZEISS arivis Pro

ZEISS arivis Pro offers various Deep Learning

tools, including a Deep Learning Trainer, Deep

Learning Segmenter, and Cellpose-based

Segmenter.

Deep Learning Trainer

The Deep Learning Trainer can be used to

locally train custom models. The trained model

can be integrated into the Analysis Pipeline

to segment images or extract probability

maps. As with any AI model training process,

the user begins by labeling the ground truth.

The labeling process is performed similarly

to conventional Machine Learning, where

different classes are defined, and pixels are

painted to establish ground truth for their

respective classes.

Figure 12: Volumetric rendering of the segmented 3D neuronal soma (colored objects) overlaid onto the original

Nissl-stained dataset achieved using the Machine Learning-based segmentation pipeline in ZEISS arivis Pro.

External annotations can also be imported to

define the ground truth in ZEISS arivis Pro. This

approach is especially useful when historical

data and ground truth labels already exist. It

is also useful for training models using public

datasets with ground truth labels. Regardless

of the source of external labels, users are urged

to validate the ground truth before training

the model, as the model’s performance is

inherently dependent on the quality of the

training data itself.

During the training process in the Deep

Learning Trainer, users can monitor the

Intersection over Union (IoU) metric reported

per epoch (see Figure 13 ). After starting at a

low value, the IoU should exhibit an upward

trend as the model trains. This trend reflects

improved model accuracy. An IoU metric

above 0.7 is generally considered “very good”

for many segmentation tasks, indicating a

substantial overlap between predicted and

ground truth masks. However, what constitutes

an “excellent” IoU metric can vary depending

on the specific application and the complexity

of the objects involved. In some cases, an IoU

of 0.8 or higher may be necessary to meet the


— — — — — — — — — — — — — — —


Page 30


Figure 13: Mean IoU plot during Deep Learning model training showing the IoU value increase over epochs before

reaching saturation.

task requirements. For particularly demanding

tasks, striving for an IoU of 0.9 or above may

be essential.

For example, when studying specific

subcellular structures such as mitochondria,

endoplasmic reticulum, or the Golgi apparatus

using confocal microscopy, high IoU values

are necessary. These structures often have

complex shapes and are densely packed within

cells. Accurate segmentation ensures reliable

quantification of their volume, surface area,

and spatial distribution, which is crucial for

understanding cellular processes like apoptosis,

energy metabolism, and protein trafficking.
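For reference, the IoU metric itself reduces to a few lines; the sketch below computes it for a pair of binary masks, matching the standard definition used in such training plots.

```python
# Intersection over Union (IoU) between a predicted and a ground truth mask.
import numpy as np

def iou(pred, truth):
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union
```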

If the IoU does not improve during training,

users can attempt to improve it by adding new

images and additional annotations, as these

provide the model with a richer understanding

of the data.

The mean IoU plot generated during training

(see Figure 13 ) shows the current IoU value

at 91.75% after 65 epochs. The best IoU was achieved at epoch 64, with a value of

92.3%. It also shows that, when the model

started training, the IoU was low in the first

few epochs. It subsequently increased and

plateaued for a few epochs before rising again

and appearing to plateau again at a value of

around 92%. This final plateau is known as

“saturation,” and model training should be

stopped at saturation if no improvement in IoU

is observed for subsequent epochs.
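The stopping rule described above can be expressed as a simple patience check. The sketch below is illustrative; the patience and tolerance values are assumptions, not ZEISS defaults.

```python
# Illustrative "saturation" check: stop when the best IoU has not improved
# by at least min_delta over the last `patience` epochs.
def reached_saturation(iou_history, patience=10, min_delta=1e-3):
    if len(iou_history) <= patience:
        return False
    best_before = max(iou_history[:-patience])
    recent_best = max(iou_history[-patience:])
    return recent_best < best_before + min_delta
```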

Where model training occurs on a local

workstation, the process is reliant on the local

computational resources available. Therefore,

running Deep Learning training on a system

equipped with a modern GPU that supports

Compute Unified Device Architecture (CUDA®)

is recommended for achieving faster training

times locally. Alternatively, ZEISS arivis Cloud

can be used to train Deep Learning models,

as it provides on-demand access to scalable

resources tailored for efficient model training,

eliminating the need to maintain local

GPU hardware specifically for this purpose.
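A quick way to verify that a workstation can accelerate training is a PyTorch device check; PyTorch is one of the frameworks the ZEN and arivis tooling builds upon, though this generic snippet is not specific to ZEISS software.

```python
# Generic check for a CUDA-capable GPU before starting local training.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training device: {device}")
if device.type == "cuda":
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```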

Regardless of where the Deep Learning model


— — — — — — — — — — — — — — —


Page 31


is trained, it can be used to seamlessly segment

images locally in ZEISS arivis Pro using the Deep

Learning Segmenter operation.

Deep Learning Segmenter

The Deep Learning Segmenter operation

allows users to segment images using a

trained semantic or instance Deep Learning

model on datasets of any size and dimension.

This operation can be incorporated into an

analysis pipeline and combined with all other

analysis operations in ZEISS arivis Pro. The Deep

Learning Segmenter supports a wide range

of Deep Learning models, including models

trained locally in ZEISS arivis Pro using the Deep

Learning Trainer as described above, models

trained on the ZEISS arivis Cloud platform, as

well as external pre-trained models saved in the

ONNX format.
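To illustrate what consuming such an externally trained ONNX model looks like outside the Segmenter, here is a minimal onnxruntime sketch; the file name, input shape, and output layout (batch, classes, height, width) are assumptions for illustration.

```python
# Illustrative onnxruntime inference with a segmentation model in ONNX format.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("segmenter.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

tile = np.random.rand(1, 1, 512, 512).astype(np.float32)  # one grayscale tile
logits = session.run(None, {input_name: tile})[0]  # assumed (N, C, H, W)
mask = logits.argmax(axis=1)                       # class index per pixel
print(mask.shape)
```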

Figure 14b shows the segmentation result of

the image from Figure 14a using the trained

model from the Deep Learning Trainer, as

described earlier. Figure 14a is the same as

Figure 11a , depicting a 2D slice of Nissl-stained

neuronal soma. The Deep Learning-based

approach in this example employs semantic

segmentation at the pixel level. Therefore,

objects need to be separated using a

Watershed algorithm, like the approach taken

for conventional Machine Learning-based

segmentation. However, the Watershed

algorithm can struggle with over- or under-

splitting objects, especially when the objects

vary in size, which requires a more effective

instance segmentation approach.

Figure 14: (a) 2D slice of Nissl-stained neuronal soma. (b) Semantic segmentation result using a trained Deep Learning

model from the Deep Learning Trainer in ZEISS arivis Pro.

Comparing this result with the one obtained

using the Machine Learning-based approach in

Figure 11c , both results appear nearly identical,

suggesting that semantic segmentation Deep

Learning does not offer a significant advantage

over conventional Machine Learning for simple

objects in this specific dataset. However, it

is worth noting that Deep Learning models

typically yield more robust results when the

input image exhibits variance in image contrast.

To leverage the true power of Deep Learning,

an instance segmentation model can be

trained on ZEISS arivis Cloud and imported into

the Deep Learning Segmenter in ZEISS arivis

Pro (requires version 4.2 or later). Users can

access the “ZEISS arivis Cloud AI model store”

using an access token that allows them to

download ZEISS arivis Cloud-trained models for

local execution (see Figure 15 ).


— — — — — — — — — — — — — — —


Page 32


Figure 15: Accessing the “ZEISS arivis Cloud AI model

store” to download trained models for local execution.

Figure 16: Comparison of segmentation results for neuronal soma using (a) conventional Machine Learning, (b)

semantic Deep Learning, and (c) instance Deep Learning with a model trained on ZEISS arivis Cloud. Note that the instance

segmentation approach results in proper separation of touching objects (blue arrows). While the difference may seem

minor in a single slice, the errors accumulate significantly over large volumes or areas containing vast numbers of objects.

Once downloaded, the model can be used in

various image analysis pipelines to automate

the segmentation process. Figure 16c shows

the instance segmentation result using a

trained arivis Cloud instance model imported

into the Deep Learning Segmenter in arivis Pro.

For comparison, the results from conventional

Machine Learning (see Figure 16a) and

semantic Deep Learning (see Figure 16b) are

also juxtaposed next to the instance Deep

Learning segmentation result (see Figure 16c ).

Individual soma are segmented much better

using the instance segmentation method

compared to pixel level segmentation using

Machine Learning or Deep Learning followed

by Watershed-based separation. This allows

for reproducible image analysis when instance segmentation models are used in large analysis

pipelines.

Figure 17 showcases a challenging

segmentation task involving tightly packed

nuclei from an intestinal organoid dataset

spanning 170 time points. Panels (a), (b),

and (c) depict time points 1, 85, and 170,

respectively, at which individual nuclei need to

be reliably segmented and tracked throughout

the experiment. Deep Learning-based instance

segmentation proves to be the ideal approach

for tackling this task.

A custom instance segmentation model,

trained on ZEISS arivis Cloud, was downloaded

and employed in ZEISS arivis Pro to segment

the nuclei across the entire dataset. The

segmentation results, displayed in panels

(d), (e), and (f) (corresponding to the

respective input images), demonstrate

robust performance. Notably, the segmentation

results remain reliable even for

the last time point shown in panel (f), where

the intensity appears to gradually decrease

due to photobleaching. This level of reliable

segmentation across time points and z-planes

prepares the data for downstream analysis,

enabling valuable applications such as tracking.

While training custom Deep Learning models

on ZEISS arivis Cloud enables tailored solutions

for segmenting complex images, off-the-shelf


— — — — — — — — — — — — — — —


Page 33



Figure 17: Deep Learning-based instance segmentation of nuclei in an intestinal organoid dataset across multiple

time points. (a–c) Input images at time points 1, 85, and 170. (d–f) Corresponding instance segmentation results using

a ZEISS arivis Cloud-trained model, enabling robust nuclei tracking over time. Sample credit: Clayton Schwarz of Labs of

Anna-Katerina Hadjantonakis at Memorial Sloan Kettering Cancer Center and Eric Siggia at Rockefeller University.

pre-trained models can be useful in certain

scenarios. For example, when segmenting cells

in fluorescent microscopy images, a pre-trained

Cellpose model may provide excellent results

without the need for custom training.

Cellpose Pre-trained Segmenter

ZEISS arivis Pro provides the option to

seamlessly incorporate Cellpose-based

Segmenter operations that are designed

specifically to segment cells or nuclei in

fluorescence microscopy images. Custom-

trained Cellpose models can also be imported into this operation to segment objects of

interest.

Cellpose, an open-source

package, provides pre-trained models that can

be used to segment cells and nuclei,

in most cases without any custom training.

These include models trained on fluorescent

cell images, diverse cell images, cytoplasm

models, models trained on multiple channels,

and nuclei models. Similar to other Deep

Learning tools in ZEISS arivis Pro, the Cellpose-


— — — — — — — — — — — — — — —


Page 34


Figure 18: Cellpose segmentation parameters in ZEISS

arivis Pro, including cell diameter, mask threshold, and

mask quality.

Figure 19: Segmentation results obtained using Cellpose. (a) Fluorescence microscopy image showing cytoplasm (green),

mitochondria (red), and nuclei (blue). (b) Cell segmentation result using the pre-trained Cellpose model in ZEISS arivis Pro.

Detected cells are shown in random colors.

based Segmenter leverages GPU acceleration

for faster computations if it’s available.

The Cellpose model uses additional parameters

for segmentation, including cell diameter, mask

threshold, and mask quality parameters (see

Figure 18 ). Please refer to the corresponding

Cellpose paper [2] for a detailed explanation of

these parameters and their impact. In Figure

18, the pre-trained Cellpose model (named

“CP”) refers to a model primarily trained on fluorescence microscopy images of cells. This

model has been applied to segment cells in

the image shown in Figure 19a, which displays

cytoplasm (green), mitochondria (red), and

nuclei (blue), respectively.

The green cytoplasm channel was defined as

the primary input, with the blue nuclei channel

as the secondary input. The approximate

cell diameter was set to 30 microns after

measuring several cells using the measurement

tool in ZEISS arivis Pro. Other parameters

were set to their default values. With a

single click, the image was segmented with

excellent results, shown in Figure 19b . This

demonstrates the out-of-the-box power of

the Cellpose-based Segmenter for segmenting

cells in fluorescent microscopy images. Custom

Cellpose models can also be imported, or

custom ZEISS arivis Cloud models may be

trained for more challenging images.
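For comparison, the same kind of pre-trained model can be called directly through the open-source Cellpose Python API [2]. The sketch below assumes the Cellpose v2-style interface and a hypothetical input file; note that the API expects the diameter in pixels, so the 30 µm value from this example would first be converted using the image's pixel size.

```python
# Illustrative use of the open-source Cellpose API [2] (v2-style interface).
from cellpose import models
from skimage import io

img = io.imread("cells.png")  # hypothetical image with green/blue channels

model = models.Cellpose(model_type="cyto")  # pre-trained cytoplasm model
# channels=[2, 3]: green as primary (cytoplasm), blue as secondary (nuclei)
masks, flows, styles, diams = model.eval(img, diameter=30, channels=[2, 3])
print(f"Detected {masks.max()} cells")  # masks: integer label per cell
```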

Labeling 3D ground truth using ZEISS

arivis Pro VR

ZEISS arivis Pro VR is a module that enables

virtual reality-based collaborative data

visualization and interactivity during image

analysis. While virtual reality (VR) is often

associated with gaming, its applications extend

far beyond entertainment. With ZEISS arivis Pro

VR, researchers can literally step into their 3D


— — — — — — — — — — — — — — —


Page 35


Figure 20: 3D data annotation in ZEISS arivis Pro VR. (a) User interface for selecting annotation tools, with a 3D dataset of

cells from a developing fly embryo in the background. (b) 3D annotated objects created by navigating through the dataset

and annotating cells using the selected tool. Dataset courtesy of Celia Smits, Stanislav Y. Shvartsman, Department of

Molecular Biology, Princeton University.

ZEISS arivis Pro VR capabilities

www.zeiss.com/arivis-pro-vr

datasets, tagging voxels for ground truth and

laying the foundation for advanced 3D Deep

Learning.

Figure 20 shows the data annotation screen in

ZEISS arivis Pro VR as experienced through the

immersive VR environment. Figure 20a displays

the user interface within the VR environment,

where the user can select an appropriate tool

for 3D annotation, in this case, a Magic Wand.

The 3D dataset in the background corresponds

to images of cells from a developing fly embryo

acquired using a Luxendo MuVi SPIM. Figure

20b illustrates 3D annotated objects, where

the user can navigate through the cells and

annotate them using the selected tool, creating

ground truth data for advanced 3D Deep

Learning applications.

A recent Nature Methods paper [3] highlights

the innovative use of virtual reality annotation

with ZEISS arivis Pro VR. Researchers discovered

that the immersive VR environment allowed

them to generate high-quality 3D training data

for Deep Learning-based cell segmentation

across entire mouse brains, much faster than

traditional 2D methods.

By leveraging the power of VR, researchers

can truly immerse themselves in their data,

enabling more intuitive and efficient 3D ground

truth labeling. This cutting-edge technology

paves the way for advanced 3D Deep

Learning analysis, opening new possibilities in

understanding complex biological structures

and processes.

Machine Learning for object

classification in ZEISS arivis Pro

The segmentation and analysis of images in

ZEISS arivis Pro result in the quantification of

various parameters for all detected objects

in the dataset. These parameters range

from morphological characteristics, such

as area, volume, and sphericity, to intensity

measurements from various channels in the

dataset. The Machine Learning trainer for

object classification in ZEISS arivis Pro uses

these parameters as input features to train

a Random Forest algorithm. Conventional

Machine Learning is well suited to this task,

given the limited number of features required

for training. The trained Machine Learning

model can then be used to classify all

segmented objects within a dataset.
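In open-source terms, this corresponds to fitting a Random Forest on a small table of per-object measurements. The sketch below uses scikit-learn with made-up feature values; the feature set follows the example discussed next (mean intensity, sphericity, voxel count).

```python
# Illustrative object classification from per-object measurements with a
# Random Forest (scikit-learn); all feature values below are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per object: [mean_intensity, sphericity, voxel_count]
X_train = np.array([
    [180.0, 0.92, 1500],   # an object the user labeled "Spherical"
    [ 95.0, 0.41, 4200],   # an object the user labeled "Non-Spherical"
    [172.0, 0.88, 1350],
    [101.0, 0.47, 3900],
])
y_train = ["Spherical", "Non-Spherical", "Spherical", "Non-Spherical"]

clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

# X_all would hold the same measurements for every segmented object:
X_all = np.array([[150.0, 0.80, 1600], [99.0, 0.50, 4100]])
print(clf.predict(X_all))
```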

Figure 21 illustrates the ZEISS arivis Pro interface

for Machine Learning-based Object Training.

The objects displayed on the screen depict

the Nissl-stained neuronal soma segmented

using conventional Machine Learning shown


— — — — — — — — — — — — — — —


Page 36


Figure 21: Machine Learning-based Object Training interface in ZEISS arivis Pro, showing segmented neuronal soma

objects classified into Spherical (blue) and Non-Spherical (orange) classes based on selected features like Mean intensity,

Sphericity, and VoxelCount.

earlier in Figure 11. It is worth noting that any

segmentation approach, including semantic

or instance Deep Learning methods discussed

earlier, could be used to generate objects for

this training process.

To initiate object classification training,

individual objects were manually selected

by clicking and assigned to one of two

classes: Spherical and Non-Spherical in this

example. Various morphological and intensity

measurements can be chosen as features to

train the Machine Learning object classifier.

In the provided example, “Mean intensity,”

“Sphericity,” and “VoxelCount” were selected

as the features for training. The training

process typically occurs in real-time, within a

second. Clicking the “Run” button applies the

trained model to the entire dataset, providing

valuable visual feedback for any necessary

modifications. In this example, all objects in

blue correspond to the “Spherical” class, while

the orange objects represent the “Non-Spherical” class, as defined by the user. It is

important to note that this is a simple example

featuring two classes, but multiple classes and

multiple features can be selected for other

object classification tasks.

This object classification approach is highly

beneficial for classifying objects that cannot

be easily categorized using a single parameter,

especially when classifying into more than

two classes. Typical applications range from

classifying contaminant particles on filter paper

to categorizing cells undergoing mitosis into

various stages.

Deep Learning for denoising multi-

dimensional datasets

Fluorescence microscopy of biological

specimens, especially for live-cell imaging,

often exhibits noise due to the low signal-to-

noise ratio. This arises from the need to limit

the amount of excitation energy to avoid

phototoxicity or damage to the live cells,


— — — — — — — — — — — — — — —


Page 37



Figure 22: Denoising of 4D fluorescence microscopy data. (a) Raw, noisy image of a live cell undergoing mitosis, showing

microtubules (green) and separating chromatids (red). (b) Denoised image after applying a Noise2Void model trained in

ZEN and imported into ZEISS arivis Pro, revealing clear microtubule structures and cellular features.

resulting in a relatively weak fluorescence

signal. While techniques like signal averaging

or slower scanning can improve the signal-

to-noise ratio during image acquisition, the

resulting images still contain noise, requiring

the use of denoising algorithms to enhance the

quality of the acquired data.

The advent of Deep Learning in the mid-2010s

has led to numerous proposed Deep Learning-

based denoising algorithms, offering more

robust and efficient solutions. Among these,

the Noise2Void approach has emerged as

the preferred algorithm for scientific image

denoising [4,5]. It is capable of learning directly

from noisy images and effectively removing

noise while preserving important image

features and details. Noise2Void models can

be easily trained using ZEN, as will be discussed

in detail in the next chapter. These models

can also be trained using custom code and

saved in the CZANN format using the czmodel

library [6]. The trained CZANN denoising model

can then be imported into ZEISS arivis Pro

to denoise images, including large multi-

dimensional datasets.
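The core Noise2Void trick [4] can be sketched in a few lines of NumPy: hide random pixels, replace each with a random neighbor, and compute the training loss only at those positions, so no clean ground truth is needed. This shows the masking scheme only, not the ZEN training code.

```python
# Illustrative NumPy sketch of the Noise2Void masking scheme [4].
import numpy as np

def n2v_mask(patch, n_pixels=64, rng=None):
    """Return (network_input, target, mask) for one training patch."""
    rng = rng or np.random.default_rng()
    inp, target = patch.copy(), patch.copy()
    mask = np.zeros(patch.shape, dtype=bool)
    h, w = patch.shape
    ys = rng.integers(1, h - 1, n_pixels)
    xs = rng.integers(1, w - 1, n_pixels)
    for y, x in zip(ys, xs):
        dy, dx = rng.integers(-1, 2, size=2)   # random neighbor offset
        inp[y, x] = patch[y + dy, x + dx]      # hide the original value
        mask[y, x] = True                      # loss is evaluated here only
    return inp, target, mask
```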

Figure 22 shows a denoising example of 3D

time-series data (4D). Figure 22a shows the

raw, noisy fluorescence microscopy image of

a live cell undergoing mitosis. As discussed earlier, the noise arises from the necessity to

image the sample under gentle conditions to

avoid disturbing the mitotic process. The green

channel shows the microtubules, and the red

channel shows the separating chromatids.

Individual microtubules are indiscernible in

this noisy image. To denoise the data, a single

plane was extracted from this dataset and

used to train a Noise2Void model in ZEN. This

trained model was then imported into ZEISS

arivis Pro and applied to denoise the entire

4D dataset. The denoised image in Figure 22b

clearly reveals the microtubules, along with

other features in the image. This level of clarity

allows researchers to extract insights from the

dataset to further scientific understanding of

the process under study.

As discussed, ZEISS arivis Pro offers various

AI-powered tools to enable automated

image analysis via customized pipelines.

These pipelines can be executed to process

multiple datasets in batches. The computation

fully relies on the local resources available

on the workstation. However, for very large

datasets, particularly in applications such as

3D high-content analysis, scalable processing

infrastructure is required to obtain timely

results. ZEISS arivis Hub is specifically designed

to address this need.


— — — — — — — — — — — — — — —


Page 38


AI in ZEISS arivis Hub for scalable image

analysis

ZEISS arivis Hub enables the design and

execution of large-scale experiments via

parallelized processing using multiple

computational workers on on-premises or

cloud-based servers. It is designed to execute

the AI-powered image analysis pipelines

customized in ZEISS arivis Pro at scale by

parallelizing the computation across local or

cloud-based server resources.

Applications that fall under high-content

analysis (HCA), where numerous samples

in multiwell plates are analyzed, can greatly

benefit from such scaled analysis capabilities.

AI becomes crucial for these applications,

even for 2D analysis. For example, assays

that rely on unlabeled samples can leverage

Deep Learning-based segmentation of cells

imaged under brightfield illumination. Even

for labeled samples where cellular and nuclear

segmentation is challenging, custom Deep

Learning models trained on ZEISS arivis Cloud

or pre-trained open-source models such as

Cellpose can be employed to segment cellular

and nuclear structures accurately.

Figure 23: Example of a 3D organoid analysis pipeline in ZEISS arivis Pro, using blob detection for nuclei and conventional

Machine Learning segmentation for the overall organoid structure.

While traditional HCA has focused primarily

on 2D cell cultures, there is a growing

recognition of the limitations associated with

this approach. 2D cell cultures often fail to

accurately replicate the complex 3D structure

and microenvironment of tissues found in the

human body, leading to potential discrepancies

between in vitro and in vivo results. In contrast,

3D cell cultures offer a more physiologically

relevant model for studying cellular responses.

They enable the investigation of cell–cell and

cell–matrix interactions, nutrient gradients,

and other factors that are not present in 2D

cultures. As a result, there is an increasing

demand for 3D HCA in drug discovery and

other biomedical research fields.

Transitioning from 2D to 3D HCA poses

significant challenges due to the inherent

complexity and heterogeneity of 3D cell

cultures compared to their 2D counterparts,

making image acquisition and analysis more

difficult. A major bottleneck for most HCA

analysis software lies in their limited capability

to handle large 3D datasets, which can contain

terabytes of data. The ZEISS arivis software

architecture addresses this limitation by


— — — — — — — — — — — — — — —


Page 39


Visit Organoid Analysis Case Study

www.zeiss.com/3d-organoid-analysis

seamlessly processing 2D and 3D images in the

terabyte range. In addition, the scalability of

ZEISS arivis Hub, coupled with its AI-powered

analysis capabilities, enables the handling and

analysis of growing image and experiment

sizes for a wide range of applications, including

phenotypic screening, organoid analysis, and

other 2D, 3D, and 4D cellular studies.

AI-Powered 3D analysis of organoid

multiwell plates on ZEISS arivis Hub

3D analyses of organoids in multiwell plates

illustrate the power of automated scalable

analysis in ZEISS arivis Hub. The process

begins in ZEISS arivis Pro, where users

construct an image analysis pipeline with

real-time 3D feedback during the definition

phase. This interactive approach allows for

experimentation with various tools and parameter optimization. For segmentation

tasks, users may start by evaluating traditional

methods like thresholding or blob detection.

If these prove unsuitable for specific features,

they can use conventional Machine Learning

or Deep Learning models. ZEISS arivis Pro

enables the creation of complex image

analysis pipelines tailored to extract desired

insights from the image data. Figure 23 shows

one such pipeline for 3D organoid analysis,

employing Deep Learning (Cellpose) for nuclei

segmentation and conventional Machine

Learning segmentation (Random Forest) for

the overall organoid structure, complemented

by additional image processing operations

like region growing. The pipeline follows the

workflow illustrated in Figure 24.

Once validated on a few datasets, the pipeline

is imported into ZEISS arivis Hub for scalable

analysis across multiple datasets. Users can

apply the pipeline as individual jobs or larger

workflows to analyze numerous images stored

Figure 24: Workflow chart for the 3D organoid analysis pipeline.


— — — — — — — — — — — — — — —


Page 40



Figure 25: ZEISS arivis Hub viewer showing the multiwell plate layout in the top left, featuring an image of a single

organoid in the selected well B7 on the right.

as datasets within ZEISS arivis Hub. Figure 25

shows an image of a single organoid in well B7

from a multiwell plate, shown in the top left

of the figure. The multiwell plate dataset has

been analyzed using the pipeline constructed

in ZEISS arivis Pro, with the workflow results

screen displayed in Figure 26 .

This image shows a heat map of the selected

metric, which in this case is the mean intensity

of nuclei within the organoid. Clicking on a

specific well, such as B7, reveals additional

result details alongside the processed image

in an interactive viewer on the bottom left.

Results can be exported in various formats,

including CSV outputs for further analysis. A

detailed study of this use case can be found via

the link below.

While this example focuses on a single

multiwell plate, users benefit significantly

from this AI-powered automated and scaled

analysis when applied to multiple multiwell

plates, where data is processed in parallel using

multiple analysis worker processors (see

Figure 27). Processing speed scales seamlessly based

on the number of analysis workers subscribed

to local or cloud servers.

As the microscopy field continues to evolve, so

must the tools and techniques. By embracing

AI-powered solutions, researchers can continue

to analyze microscopy data even when it grows

in size and complexity.


— — — — — — — — — — — — — — —


Page 41



Figure 26: Workflow results screen in ZEISS arivis Hub, displaying a heat map of the mean intensity of nuclei in the

organoid with additional result details and an interactive viewer for the processed image.

Figure 27: An illustration highlighting the scalable analysis capabilities of ZEISS arivis Hub, depicting parallel processing

of multiple multiwell plate datasets. Multiple analysis worker processors are shown concurrently processing different

organoids, demonstrating the ability of the platform to efficiently analyze large volumes of data through seamless scaling.


— — — — — — — — — — — — — — —


Page 42


References

1. YouTube. A Chat with Andrew on MLOps: From Model-centric to Data-centric AI. URL: https://www.youtube.com/watch?v=06-AZXmwHjo&ab_channel=DeepLearningAI (accessed 02 September 2024).

2. Stringer C, Wang T, Michaelos M, et al. Cellpose: a generalist algorithm for cellular segmentation. Nat Methods. (2021) 18:100–106. doi: 10.1038/s41592-020-01018-x.

3. Kaltenecker D, Al-Maskari R, Negwer M, et al. Virtual reality-empowered deep-learning analysis of brain cells. Nat Methods. (2024). doi: 10.1038/s41592-024-02245-2.

4. Krull A, Buchholz T-O, Jug F. Noise2Void - Learning Denoising from Single Noisy Images. (2018) arXiv:1811.10980. doi: 10.48550/arXiv.1811.10980.

5. Höck E, Buchholz T-O, Brachmann A, Jug F, Freytag A. N2V2 - Fixing Noise2Void Checkerboard Artifacts with Modified Sampling Strategies and a Tweaked Network Architecture.

6. PyPI. czmodel. URL: https://pypi.org/project/czmodel/ (accessed 02 September 2024).


— — — — — — — — — — — — — — —


Page 43




— — — — — — — — — — — — — — —


Page 44


AI in ZEN and ZEN core imaging and analysis platform

Figure 1: A schematic representation of a typical microscopy imaging workflow, illustrating the sequential steps from

image acquisition to preprocessing, image analysis, classification, and result and report generation.

Figure 2: Microscopy imaging workflow with examples of AI techniques applied in preprocessing (denoising), image

analysis (segmentation), and classification.

ZEN and ZEN core are robust microscopy

software packages that offer a broad range

of image analysis and processing tools

tailored to support the standard workflow

of microscopists (see Figure 1 ). From image

acquisition to preprocessing, analysis, and the

final result presentation, these tools guide users

through every step.

The ZEN software packages offer dedicated

analysis tools alongside a versatile image

analysis toolkit, incorporating powerful

Machine Learning algorithms for different

phases of the workflow (see Figure 2). For

example, the Noise2Void algorithm facilitates

image denoising, while semantic and instance

segmentation methods are available for image

segmentation tasks. Additionally, the software

supports Machine Learning-based object

classification.

These solutions build upon established and

widely recognized tools and frameworks like

PyTorch, TensorFlow, and ONNX, and are fine-

tuned for simplicity and seamless integration

with imaging workflows. They can be readily

used within preconfigured workflows in ZEN

and ZEN core.

Preconfigured workflows in ZEN and

ZEN core

Preconfigured workflows in ZEN and ZEN core

simplify common image analysis tasks and

are organized into Material Apps (for tasks


— — — — — — — — — — — — — — —


Page 45


Figure 3: Machine Learning segmentation in ZEN facilitates training classical Machine Learning models for image

segmentation tasks. (a) Image of an organoid that requires segmentation. (b) Annotated organoid demonstrating how

only a few annotations are required for training. (c) Organoid image segmented into cell layer (orange), lumen (red),

and background (cyan) classes. (d) The initial prediction along with annotations. Panel (c) illustrates the model’s final

segmentation predictions, assigning every pixel to one of the three classes. Additional annotations can refine the model,

enabling segmentation of the entire 3D organoid stack.

like Grain Size Analysis and layer thickness

measurement) and Bio Apps (for tasks such as

cell counting and gene expression analysis). The

following modules are available:

Bio Apps

■Cell Counting.

■Gene- and Protein Expression.

■Translocation.

■Confluency.

■Automated Spot Detection.

Material Apps

■Grain Size Analysis.

■Multiphase Analysis.

■Cast Iron Analysis.

■Layer Thickness.

■Technical Cleanliness Analysis.

The integration of AI within the ZEN

software has revolutionized microscopy,

enhancing speed, efficiency, and accuracy to

unprecedented levels.

AI-based image segmentation in ZEN and

ZEN core

Image segmentation is a critical step in the

automated analysis process of microscope

images. It involves the precise and reliable

identification and separation of regions of

interest (ROI) from the background. ZEN

and ZEN core offer a diverse range of image

segmentation options, including classical

methods such as thresholding, variance-based

segmentation, and dynamic thresholding.

Machine Learning

In recent years, Machine Learning-based

techniques like Random Forest pixel classifiers

and Deep Neural Networks (DNNs) have

been developed and successfully applied to

enhance image segmentation. ZEN and ZEN

core provide the capability to directly train a

Machine Learning model based on a Random

Forest pixel classifier within the software or

to use prior-trained Deep Learning networks

for segmentation. This flexibility enables users


— — — — — — — — — — — — — — —


Page 46


Figure 4: Examples of image segmentation via classical Machine Learning across life sciences and materials sciences

disciplines. The figure displays original microscopy images acquired through techniques such as (a) X-ray microscopy, (b)

and (c) fluorescence microscopy, (d) and (e) brightfield microscopy, and (f) electron microscopy. Image pairs are presented

with the original image on the left and the segmentation results obtained using conventional Machine Learning algorithms

on the right.

to choose the most suitable method for their

specific application.

Users can train the Random Forest pixel

classifier within ZEN, where they can load

images, create multiple classes as required,

annotate the images, and train the model.

Figure 3 illustrates the training user interface,

demonstrating an example of training a

Machine Learning model to segment the cell

layer and lumen of an organoid.

A variety of examples from both life sciences

and materials sciences are illustrated in Figure

4, where the image on the left-hand side in

each column depicts the original image, and

the corresponding image on the right-hand

side displays the segmentation result using

a conventional Machine Learning model

trained in ZEN. These examples reflect the

agnostic nature of these algorithms regarding

the microscope that generated the images,

encompassing brightfield and fluorescent

light microscopy images, electron microscopy

images, and even three-dimensional

(3D) volumetric data collected on X-ray

microscopes.

Deep Learning

While conventional Machine Learning provides

robust segmentation for many applications,

users may opt for Deep Learning to achieve

enhanced segmentation of complex images.

ZEN offers various interfaces for importing

externally trained Machine Learning and

Deep Learning models. For example, Deep

Learning models trained on ZEISS arivis Cloud

can be imported to segment images as part

of image analysis pipelines. ZEN software has

also expanded its AI capabilities by enabling

users to import instance segmentation

models trained on ZEISS arivis Cloud.

These models excel in scenarios involving

touching and/or overlapping objects, which

are common in scientific images and pose

significant challenges for traditional pixel-level

segmentation methods.

The openness of AI interfaces in ZEN (see

Figure 5 ) empowers users to import models

trained elsewhere, such as those developed

using their own Python code in a Jupyter

notebook. These external models can be

seamlessly imported using the czmodel [1]

open-source Python package. By integrating


— — — — — — — — — — — — — — —


Page 47


Figure 5: Flowchart illustrating the integration of AI models trained from various sources into ZEN and ZEN core software

for image analysis tasks. Conventional Machine Learning models can be trained using the Intellesis Training UI within

ZEN. Deep Learning models for semantic or instance segmentation can be trained on the ZEISS arivis Cloud platform,

while semantic segmentation or denoising models can be developed through custom Jupyter notebooks or Python code.

Regardless of where the models are trained, they can be imported and used within the ZEN and ZEN core environments for

diverse image analysis applications.

various segmentation algorithms into one

analysis, users can adaptively address the

specific requirements of their samples.

Advanced AI tools for image analysis

beyond segmentation

With its diverse segmentation options, ZEN

is invaluable to researchers in biology and

materials sciences. Yet, segmentation is just

one facet of image analysis workflows where

AI can make significant contributions. For

example, segmented objects can undergo

further classification using Machine Learning

algorithms. Additionally, AI-driven denoising

enhances image quality, which is particularly

beneficial for sensitive samples yielding images

with low signal-to-noise ratios. This section

explores these additional aspects to shed light

on the transformative potential of AI beyond

segmentation.

Object classification

Various methods ranging from simple

thresholding to advanced Deep Learning

techniques can segment objects within images. However, certain applications demand

additional classification of these objects based

on shape, size, morphological parameters,

and pixel intensity values from one or more

channels. While traditional cluster analysis can

achieve this, Machine Learning offers a more

robust approach to object classification.

ZEN and ZEN core object classification

solutions

ZEN and ZEN core provide a user-friendly

interface for training Machine Learning-based

object classifi cation models, leveraging various

morphological and intensity parameters

calculated automatically by the software.

For model training, users can use one or

more images that have been previously

segmented using any segmentation method

(e.g., thresholding or Deep Learning) via the

image analysis tools in ZEN or ZEN core. Users

can classify segmented objects into as many

different classes as needed, simply by visually

identifying objects belonging to specific classes

and assigning them a label by clicking

on them with a computer mouse.


— — — — — — — — — — — — — — —


Page 48


Figure 6: The Object Classification Training interface in ZEN core facilitates training classical Machine Learning models for

particle classification tasks. (a) All particles are segmented via simple thresholding and shown with a light gray outline. (b)

A selection of particles from each of the three classes: metallic (orange), non-metallic (cyan), and fiber (red) are selected

from the thresholded particles to train the classification model. (c) The final model predictions with every particle classified

into one of the three classes. (d) Classification results for the entire image based on the training selections. This trained

model could be applied to classify particles in other images.

Only a few labeled objects are needed to

initiate training, during which the Machine

Learning model learns from the parameters

extracted from these objects. The training

process occurs in near real-time, allowing users

to dynamically adjust the selection of objects,

add or remove labels, or reassign labels based

on the evolving results.

Figure 6 shows the ZEN core training interface,

demonstrating a filter sample with various

particles; fibers labeled in red, metallic particles

in orange, and non-metallic particles in

cyan. Once trained, the model can automate

the object classification process as part of

end-to-end image analysis workflows for future

images.

Denoising

Microscope image quality can be compromised

by a multitude of imperfections originating

from various sources. For example, electronic

and thermal sources often introduce noise into images, making it challenging to

distinguish signal from noise (see Figure 7).

Noise is particularly problematic and affects

image quality across different microscopy

modalities, including fluorescence and electron

microscopy. It obscures the signal of interest,

complicating the differentiation between

genuine signal and noise.

Noise in fluorescence microscopy images

Fluorescence microscopy of biological

specimens, especially live-cell images, often

suffer from a low signal-to-noise ratio due to

several factors. Live cells are highly sensitive

to external stimuli, such as intense light

or chemicals, which limits the amount of

excitation energy that can be used to induce

fluorescence without causing phototoxicity

or other damage to the cells. Consequently,

the fluorescence signal emitted by the

fluorophores in the sample is relatively weak,

making it challenging to distinguish from the

background noise.


— — — — — — — — — — — — — — —


Page 49


Figure 7: Examples of various imaging imperfections and noise sources that degrade image quality. The top row shows

noise-free images as well as artifacts like diffraction limit, uneven background, and imaging errors. The bottom row

displays corresponding line profiles, illustrating how different noise types, such as shot noise and detector noise, affect the

detected intensity profiles compared to the noise-free case.

For images collected at low laser power,

approaches such as averaging the signal

over multiple frames or slowing the scanning

speed can be employed to improve the

signal-to-noise ratio. However, these may still

result in noisy images, necessitating the use

of noise-removing techniques to enhance the

quality of the acquired data.

Noise in electron microscopy images

Electron microscopy is also susceptible to

noise arising from various factors, including

low electron dose, specimen drift, and

detector noise. While metallic samples and

other conductors can be imaged under higher

voltages and currents to achieve higher

resolution, non-conductive materials such as

ceramics, polymers, and biological specimens

must be imaged at ultra-low voltages and

currents to prevent beam charging effects and

sample damage. This reduced-dose approach

results in images in which the underlying

structure of interest is obscured by the noise.

Therefore, denoising techniques become

essential for extracting meaningful information

from these low-dose electron microscopy

images.

ZEN and ZEN core denoising solutions

Traditional denoising algorithms, while

effective, often come with trade-offs. For

example, applying a Gaussian filter is a

straightforward way to remove noise, but

it also reduces image sharpness due to the

blurring operation at every pixel. Non-local

means filtering [2,3] and block-matching and

3D filtering (BM3D) [4] are reliable denoising

approaches and are widely used, especially in

the fields of computed tomography and MRI

imaging. However, since the advent of Deep

Learning, numerous Deep Learning-based

algorithms that offer more robust and efficient

denoising solutions have been proposed.

Among these, the Noise2Void approach [5–7]

has become a popular choice, as it can directly

learn from noisy images and remove noise

effectively while preserving important features

and details.


— — — — — — — — — — — — — — —


Page 50


Figure 8: Application of Noise2Void in ZEN for denoising an electron microscopy image. (a) The original image of a

tobacco leaf, displaying significant noise due to low voltage and current settings chosen for the sensitive nature of the

sample. (b) The image after denoising with Noise2Void in ZEN, revealing improved clarity and reduced noise artifacts.

Real-time AI-assisted denoising

Noise2Void models can be easily trained

and deployed using ZEN and ZEN core,

allowing users to denoise images from any

imaging source, including light and electron

microscopy (see Figure 8 ). A key advantage of

the Noise2Void algorithm is that it does not

require corresponding clean images (ground

truth) for training, unlike typical Deep Learning

approaches. This makes the process of training

a Noise2Void model relatively straightforward.

Users simply need to load their noisy images,

train the model, and then apply it to denoise

images. The trained model can even be applied

during live sample navigation (see Figure 9 ),

reducing the need for higher light intensities,

thus allowing for gentler imaging with less

photobleaching and phototoxicity.

This real-time denoising capability of

Noise2Void models in ZEN offers a powerful

solution for enhancing image quality while

minimizing potential damage to sensitive

biological samples during the setup of live-cell

imaging experiments.

Harnessing AI in automated image

analysis workflows

While the AI tools discussed earlier for

segmentation, classification, and denoising are

valuable for analyzing individual datasets, their

full potential is realized when integrated into

end-to-end image analysis workflows tailored

to specific tasks. For example, tasks such as

grain size distribution analysis in materials and

geosciences or automated gene expression

workflows in biological research can benefit

immensely from the automation provided by

AI-trained models. These models streamline

processes that would otherwise be labor-

intensive and time-consuming, enabling users

to focus on higher-level analysis.

AI models can also facilitate guided image

acquisition workflows, allowing for the

efficient collection of high-resolution multi-

dimensional images from specific ROI. In the

following sections, a concise overview of

these applications is provided, exploring how

the integration of AI into the pre-defined

applications in ZEN, known as Material

Apps and Bio Apps, as well as guided image

acquisition workflows, enhances efficiency and

productivity in microscopy.


— — — — — — — — — — — — — — —


Page 51


Integration of AI into Material Apps

ZEN core Material Apps offer standardized

workflow-driven solutions tailored for materials

and production labs, adhering to industry

standards. These solutions, including Grain

Sizing, Multiphase Analysis, Cast Iron Analysis,

Layer Thickness measurements, and Technical

Cleanliness Analysis, are designed to support

the specific requirements of each application.

Each Material App comprises pre-configured

workflows encompassing all stages from

acquisition and analysis to result display and

reporting. For challenging samples, several

workflows incorporate the use of Machine

Learning models for image segmentation

during analysis.

Using Material Apps for Grain Size

Analysis

Grain Size Analysis plays a crucial role as the

size and distribution of grains directly impact

material properties. This analysis enables the

quantification of the crystallographic structure

of metallographic samples in accordance

with international standards. Segmenting

grain structures in microscope images has

traditionally been challenging due to various factors. The methods used often fall short

in accurately identifying individual grains,

leading to the adoption of alternate manual or

semi-automated approaches like the intercept

method. However, these tend to yield less

accurate results as they do not sample all grains

in their entirety to calculate grain size. Even

conventional Machine Learning and Deep

Learning techniques for semantic segmentation

struggle to properly segment and separate

grains.

In response to these challenges, Deep

Learning-based instance segmentation

techniques have emerged as the preferred

solution. This approach excels in detecting

touching and overlapping objects, making

it highly suitable for grain size distribution

analysis. Instance segmentation models

trained on ZEISS arivis Cloud can be seamlessly

imported into ZEN core Material Apps, offering

a streamlined analysis process that ensures

accurate and reliable results for materials

characterization.

Figure 10 displays the ZEN core Grain Size

Analysis results screen, featuring both the

original image and the analyzed image

Figure 9: Live denoising during sample navigation. (a) Raw image captured during sample navigation in a widefield

microscope, showcasing a fluorescently labeled sample. (b) The same image with live denoising switched on demonstrates

improved clarity, highlighting the effectiveness of denoising even on images collected during navigation.


— — — — — — — — — — — — — — —


Page 52


Figure 10: Grain Size Analysis in ZEN core showing the ZEN core interface for Grain Size Analysis. An instance

segmentation model trained on ZEISS arivis Cloud is imported into ZEN core to segment individual grains within an

aluminum Barker etched sample. This segmentation process enables the extraction of the size distribution of individual

grains, facilitating comprehensive analysis.

with clearly separated grains. These grains

have been automatically segmented using

an instance segmentation model trained

on ZEISS arivis Cloud. In addition, Figure

10 includes a grain size distribution plot,

demonstrating the effectiveness of AI-powered

instance segmentation models in providing

single-click solutions for grain size distribution

analysis. Such automation not only enhances

analysis throughput but also ensures result

reproducibility, regardless of who conducts the

image analysis.
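Once an instance segmentation model has produced a label image, deriving the size distribution is straightforward. The sketch below uses scikit-image measurements as an open-source illustration, not the ZEN core implementation.

```python
# Illustrative: grain size distribution from an instance-segmented label
# image using scikit-image measurements.
import numpy as np
from skimage.measure import regionprops

def grain_diameters_um(label_img, pixel_size_um=1.0):
    """Equivalent circular diameter, in µm, for every segmented grain."""
    return np.array([
        r.equivalent_diameter * pixel_size_um
        for r in regionprops(label_img)
    ])

# A histogram of these diameters gives the grain size distribution plot.
```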

Integration of AI into Bio Apps

Bio Apps comprise a streamlined suite of image

analysis tools specifically tailored to common

tasks in cell biology and cancer research. These

tools provide specialized solutions for tasks

such as cell counting, cellular gene expression

analysis, and nuclear translocation studies.

Using Bio Apps to quantify gene

expression

Gene expression assays are powerful

techniques employed by researchers to investigate the complex patterns of gene

activity within cells or tissues. These assays

enable the quantification of specific messenger

RNA (mRNA) molecules, or the corresponding

proteins produced to provide invaluable

insights into the dynamic processes that govern

cellular function and behavior. One widely

used approach in gene expression studies

involves the use of fluorescent proteins, such

as mCherry [8], which can be genetically

engineered to serve as reporters for the

expression of genes of interest [9].

To accurately quantify gene expression at the

single-cell level, it is crucial to precisely segment

and identify individual cells within the sample.

While multiple fluorescent markers can be used to mark various cellular and sub-cellular regions, researchers often prefer working with unlabeled samples for cost-effectiveness and

to allow for gentle imaging conditions. Oblique

contrast microscopy of such unlabeled samples

provides the necessary contrast to visually

discriminate individual cells. To computationally

automate the cellular segmentation process


— — — — — — — — — — — — — — —


Page 53


and accurately quantify gene expression at

the single-cell level, advanced Deep Learning

techniques are required. For example, when

cells are confluent or even overlapping,

instance segmentation approaches designed to

segment touching or overlapping objects are

necessary.

Figure 11a depicts the Gene Expression Bio

App interface within the ZEN software. The

cells displayed in the figure were imaged using

a ZEISS Celldiscoverer 7 microscope, employing

oblique contrast imaging to enhance cellular

morphology and facilitate better segmentation.

The channel corresponding to mCherry

fluorescence is colored pink and displayed

alongside the cell channel, as shown in the top

right corner of Figure 11a . A Deep Learning

model for instance segmentation, trained on

the ZEISS arivis Cloud platform, was imported

into the “Gene- and Protein Expression”

module in ZEN Bio Apps to perform automated

cellular segmentation.

This automated application segments the

image to identify individual cells, calculates the

mCherry fluorescence intensity within each

segmented cell, and classifies cells as positive

or negative based on a predefined intensity

threshold.

Figure 11: Gene Expression Bio App in ZEN. (a) The setup of the Gene- and Protein Expression Bio App within the ZEN interface. An instance segmentation model trained on ZEISS arivis Cloud is used for the segmentation of cells in the oblique channel. Following cell segmentation, mCherry-positive cells are identified to evaluate the expression rate. (b) The result of analyzing a multiwell plate using the Gene- and Protein Expression Bio App, presenting the percentage of positive cells per well at the current time point of the time series and offering insights into gene expression dynamics.

The bottom right corner of Figure 11a shows the segmented cells, with mCherry-positive cells colored in green, negative cells in white, and cell boundaries and background in gray. This visual representation allows users to easily distinguish between cells expressing the gene of interest (positive) and those without detectable expression (negative).
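The per-cell classification step can be pictured with a short sketch. The snippet below is a minimal Python illustration of the idea, not the Bio App implementation; the cell label image, the mCherry channel, and the threshold value are assumed inputs.

```python
# Minimal sketch: classify segmented cells as mCherry-positive/-negative
# by mean intensity. The threshold is an assumed placeholder value.
import numpy as np
from skimage.measure import regionprops

MCHERRY_THRESHOLD = 500.0  # assumed intensity cutoff

def classify_cells(cell_labels: np.ndarray, mcherry: np.ndarray):
    """Return (positive_ids, negative_ids) from mean mCherry intensity."""
    positive, negative = [], []
    for cell in regionprops(cell_labels, intensity_image=mcherry):
        (positive if cell.mean_intensity > MCHERRY_THRESHOLD
         else negative).append(cell.label)
    return positive, negative

# Expression rate per image:
# rate = len(positive) / (len(positive) + len(negative))
```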

Figure 11b extends the analysis to a multiwell

plate, providing insights into transfection

efficiency. A heat map in the top right image

visualizes transfection efficiency across wells,

while a detailed table in the bottom right

summarizes analytical information from the

multiwell timeseries dataset.

This application showcases the power of

AI-driven automation in cellular analysis. It

leverages the Gene- and Protein Expression

Bio App and Deep Learning models trained

on the ZEISS arivis Cloud platform for efficient

processing of multiwell datasets across

various imaging modalities and experimental

conditions.

Integration of AI into guided acquisition

workflows

Biological imaging often involves multi-step

procedures, especially when identifying

and examining rare events or ROIs at

high resolution. For instance, to capture

multichannel fluorescence images of rare

events for further analysis, users must first

identify the specific ROIs and then zoom in


— — — — — — — — — — — — — — —


Page 54



Figure 12: Workflow for guided acquisition in ZEN.

The diagram illustrates the guided acquisition process,

starting with an overview scan to capture a large area for

high-throughput imaging. Image analysis assisted by AI is

performed to identify specific ROIs for detailed acquisition.

Finally, high-resolution multi-dimensional images are

acquired, facilitating comprehensive characterization of

the identified ROIs.

to capture high-resolution images, possibly

under various illumination conditions. Another

example is when users need to sample a

predefined number of objects for statistical

purposes, such as selecting a specific number

of random organoids in each well of a

multiwell plate for further imaging.

Historically, manual ROI identification relied on

researcher expertise and was time-intensive,

susceptible to human error, and inefficient,

wasting valuable researcher time in searching

for rare events or sampling a statistically

relevant number of objects for analysis.

Simplified ROI identification with guided

acquisition in ZEN

ZEN addresses this challenge by enabling

automated guided acquisition workflows

(see Figure 12 ). In addition to conventional

segmentation methods, ZEN can also leverage

trained AI models to detect rare events or

other ROIs during an initial overview scan. The

coordinates of these identified ROIs are then

used to automatically guide the microscope

to image those specific regions at higher

resolution and under pre-defined experimental

conditions, enabling efficient multidimensional

image acquisition. This automated guided

acquisition approach streamlines the imaging

process, minimizes human error, and optimizes

the use of valuable microscope time.
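Conceptually, the workflow reduces to a simple loop. The sketch below illustrates this with hypothetical placeholder callables; it is not the ZEN API, which drives these steps internally through its Guided Acquisition module.

```python
# Conceptual sketch of a guided-acquisition loop. All callables are
# hypothetical placeholders standing in for microscope control and
# AI-assisted ROI detection.
from typing import Callable, Iterable, Tuple

Coordinate = Tuple[float, float]

def guided_acquisition(
    acquire_overview: Callable[[], object],                  # low-mag scan
    find_rois: Callable[[object], Iterable[Coordinate]],     # AI/threshold analysis
    acquire_detail: Callable[[float, float], None],          # high-res z-stack at (x, y)
) -> None:
    """Overview scan -> ROI detection -> targeted high-resolution imaging."""
    overview = acquire_overview()
    for x, y in find_rois(overview):
        acquire_detail(x, y)
```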

The power of guided acquisition in ZEN is

exemplified in the imaging of mouse Lgr5+ gut

organoids mounted in a 3D matrix (Matrigel).

As shown in Figure 13a, the initial low-

magnification overview scan of a single well

from a 24-well plate reveals multiple organoids.

Individual organoids are precisely identified

and segmented using a simple thresholding

method, leveraging the contrast difference

between the organoids and the background.

Because these objects are easy to segment, this example demonstrates that Deep Learning is not always required for segmentation.

However, in scenarios where organoids are clustered together or situated in the shadows

of the wells near the edges, a Deep Learning-

based approach may be required for accurate

segmentation.

With the coordinates of these target

organoids located, ZEN automatically

guides the microscope to acquire detailed,

high-resolution z-stack images of only the

identified organoids, capturing multi-channel

fluorescence information. Figures 13b and 13c

show 2D sections of these high-resolution 3D

scans, highlighting nuclei (blue) and E-cadherin

(green), respectively. Figure 13d is a composite

image combining both channels. This targeted

imaging approach ensures efficient use of

microscope time while enabling comprehensive

multi-dimensional characterization of the

organoids of interest within a complex 3D

culture system.


— — — — — — — — — — — — — — —


Page 55


Figure 13: Guided acquisition of mouse Lgr5+ gut organoids in ZEN. (a) The initial low-magnification overview scan of a single well from a 24-well plate, revealing multiple organoids. The three images on the right display high-resolution two-dimensional (2D) sections of a select organoid from panel (a), highlighting nuclei (blue) in panel (b), E-cadherin (green) in panel (c), and both channels combined in panel (d). The multi-dimensional characterization of organoids within a complex 3D system is showcased. The organoids were mounted in a 3D matrix (Matrigel) and imaged on a Celldiscoverer 7. Sample courtesy of Dr. M. Lutolf, EPFL, Switzerland.

Conclusion

The integration of AI into ZEN and ZEN core has ushered in a new era of AI-assisted microscopy, empowering users with unprecedented speed, efficiency, and accuracy in image analysis. From segmentation and classification to denoising and guided acquisition, AI-driven solutions have revolutionized the way users approach microscopy workflows within these platforms.

As integral components of the ZEISS software

ecosystem, ZEN and ZEN core seamlessly

interface with the ZEISS arivis suite of

software for scalable image analysis. This

software ecosystem empowers users with the

combined power of AI, facilitating automated

image acquisition and the analysis of large

multidimensional datasets.


— — — — — — — — — — — — — — —


Page 56


References

1. Pypi. czmodel. URL: https://pypi.org/project/czmodel/ (accessed 06 April 2024).

2. Buades A, Coll B, Morel JM. A review of image denoising algorithms, with a new one.

Multiscale Model Simul. (2005) 4:490–530. doi: 10.1137/040616024.

3. Buades A, Coll B, Morel JM. Nonlocal image and movie denoising. Int J Comput Vis. (2008)

76:123–139. doi: 10.1007/s11263-007-0052-1.

4. Dabov K, Foi A, Katkovnik V, Egiazarian K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans Image Process. (2007) 16(8):2080–2095. doi: 10.1109/TIP.2007.901238.

5. Krull A, Buchholz T-O, Jug F. Noise2Void - Learning Denoising from Single Noisy Images. (2018)

arXiv:1811.10980. doi: 10.48550/arXiv.1811.10980.

6. Höck E, Buchholz T-O, Brachmann A, Jug F, and Freytag A. N2V2 - Fixing Noise2Void

Checkerboard Artifacts with Modified Sampling Strategies and a Tweaked Network Architecture.

(2022). URL: https://openreview.net/forum?id=IZfQYb4lHVq (accessed 06 April 2024).

7. GitHub. Juglab/n2v. URL: https://github.com/juglab/n2v?tab=readme-ov-file (accessed 06 April

2024).

8. Shaner NC, Campbell RE, Steinbach PA, Giepmans BN, Palmer AE, Tsien RY. Improved monomeric red, orange and yellow fluorescent proteins derived from Discosoma sp. red fluorescent protein. Nat Biotechnol. (2004) 22(12):1567–1572. doi: 10.1038/nbt1037.

9. Ransom EM, Ellermeier CD, Weiss DS. Use of mCherry Red Fluorescent Protein for Studies of Protein Localization and Gene Expression in Clostridium difficile. Appl Environ Microbiol. (2015) 81(5):1652–1660. doi: 10.1128/AEM.03446-14.


— — — — — — — — — — — — — — —


Page 57




— — — — — — — — — — — — — — —


Page 58


AI for routine image analysis using ZEISS Labscope

Life science encompasses diverse disciplines—

from systematic zoology to human anatomy

and protein interactions at the molecular

level. Equally diverse is the application of

microscopy in these branches of science.

Microscopes are capable of much more than

resolving smaller and smaller structures. The

microscope is perhaps the best multitool in the

laboratory, with uses in medical diagnostics,

biotechnology, and the pharmaceutical sector.

Analysis and monitoring are two critical

applications of microscopes. For example,

tissue and blood samples are routinely analyzed

for atypical cells and cell morphologies, and

eukaryotic cells in cell cultures are checked for

their health and physiological behavior (see

Figure 1 ). Furthermore, these applications are

routine and repetitive, and the resulting images

can answer crucial questions, such as:

■Are my cells healthy?

■Is there a detectable pathogen?

■Was the gene successfully inserted into my

cells?

Figure 1: Cell cultures need regular monitoring to check their health and behavior.

While reliability and reproducibility are

always critical, time is also important because

microscopy experiments can produce a lot of

data, all of which must be analyzed with care for the results to remain valid.

The potential role of AI tools in routine

image analysis

AI tools can assist with repetitive and time-

consuming microscopy tasks to save time and

eliminate human error (see Figure 2 ).

Artificial neural networks can identify

processes, patterns, and states in organisms,

tissues, and cells that humans may find difficult

to detect even with advanced microscopy

techniques.

These AI tools can also link vast amounts of

data and learn from accumulated experience

to refine specified processes. Manual work that

may have taken hours, days, or weeks can now

be performed automatically with ease, and

results are delivered in real time. Plus, the ability

of AI to detect and analyze properties that

would be difficult for humans to detect enables

the fascinating prospect of revolutionary

discoveries.


— — — — — — — — — — — — — — —


Page 59


Like the human brain, AI algorithms constantly

learn and improve. Features are detected,

interpreted, and compared, and decisions

and predictions are made. The accuracy of

predictions and decisions improves with larger

datasets, and with every new input or inquiry,

the network learns to adapt to new structures.

Overcoming limitations of AI tools

While AI tools for lab applications are

sophisticated, their wider use may be limited

because they can be difficult to adapt to new

applications, require enormous amounts of

computing power, or require advanced IT skills.

Ideally, AI tools should be accessible to as many

people as possible, adaptable to different areas

of interest, and work on inexpensive hardware.

The AI modules for the ZEISS Labscope imaging

app offer these advantages and assist with

performing time-consuming yet important lab

tasks.

Figure 2: Counting cells and determining their confluency manually can become cumbersome.

“While AI tools for lab applications are

sophisticated, their wider use may be

limited.”By combining Deep Learning methods with

large training datasets, the modules can adapt

to various cell types and morphologies on

which they were not initially trained and can

handle images of varying quality.

The versatility and the ability to collect reliable

and reproducible data with minimal input and

expertise required from the user make the

Labscope AI modules from ZEISS an essential

product for microscopists in life science,

medicine, and biotechnology.

The role of AI tools for determining cell

confluency

Cell confluency refers to the extent to which

a layer of cells in a culture dish or flask has

grown and spread to cover the surface area. It

describes how densely packed the cells are and

is typically expressed as a percentage of the

total surface area covered by cells.
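Expressed as a computation, confluency is simply the covered fraction of the imaged area. A minimal sketch, assuming a binary cell mask from any segmentation method (this is illustrative, not the Labscope module itself):

```python
# Minimal sketch: confluency from a binary cell mask
# (True = pixel covered by cells). The mask is an assumed input.
import numpy as np

def confluency_percent(cell_mask: np.ndarray) -> float:
    """Confluency = covered area / total area, as a percentage."""
    return 100.0 * np.count_nonzero(cell_mask) / cell_mask.size

# A field of view with 55% of its pixels classified as cells would
# report 55% confluency, as in the Figure 4 example later in this chapter.
```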

In general, cells are seeded into a culture dish

or flask at a low density and allowed to grow

until they reach a desired level of confluency

(see Figure 3 ). At low confluency, cells are

often actively dividing and may be used for

experiments that require actively proliferating


— — — — — — — — — — — — — — —


Page 60



cells. At higher confluency, cells may become more quiescent and may exhibit different behaviors or responses to stimuli.

Figure 3: Cells can be seeded in Petri dishes, flasks or even cell factories.

Cell confluency is a fundamental parameter

in cell culture experiments, as it can impact

cell behavior and experimental outcomes.

Monitoring cell confluency is routine for every

cell culture, as it determines when cultures

need to be transferred to a new cell culture

vessel. This step may dictate whether an

experiment can be carried out or not and thus

has a significant impact on the laboratory

workflow.

Challenges of measuring cell confluency

Traditionally, cell confluency is assessed by

looking at the layer of cells under a microscope

and estimating the degree of surface area

coverage. However, relying on individual

estimates of cell confluency has several

disadvantages in cell culture experiments.

These include:

■Lack of reproducibility.

■Inaccuracy.

■Lack of standardization between

laboratories.

These issues can be caused and exacerbated

by different individuals making the confluency measurements and by variability in how the cells were seeded.

AI tools can improve reliability

and reproducibility of confluency

measurements

AI tools like ZEISS Labscope AI Cell Confluency

address these issues, enabling reproducible

and accurate measurements with the click of a

button.

The AI-trained algorithm recognizes cells in

culture vessels based on transmitted light

microscopy images, regardless of cell type and

magnification of the image, and provides a

specific value for confluency in the respective

frame. The algorithm also provides an average

of all acquired data points in the culture vessel

(see Figure 4 ). Also, users can retrospectively

analyze already stored image data for

confluency.

“The AI-trained algorithm recognizes

cells in culture vessels based on

transmitted light microscopy

images, regardless of cell type

and magnification of the image.”


— — — — — — — — — — — — — — —


Page 61



Figure 4: Screenshot showing ZEISS Labscope AI Cell Confluency measurement for HeLa cells. The module shows the confluency for the current field of view (55%) and the average of the already acquired fields of view (55%).

The ability to examine any number of sections

of the culture vessel enables a statistical

determination of cell density. Furthermore, the

accumulated confluency data can be easily

exported and further analyzed in statistical

analysis software.

Given these advantages, the Labscope AI Cell

Confluency module significantly enhances

the efficiency and accuracy of cell confluency

measurements, ultimately improving the

reliability of experimental outcomes.

How AI can help with cell counting

Cell counting is another essential task in

cell biology laboratories, enabling the

determination of the number of cells in a

culture vessel or experiment setup. This

information is crucial for planning experiments

and ensuring the available number of cells is

sufficient.


Challenges associated with traditional

cell counting

The traditional method for cell counting

is to detach cells from the surface of the

culture vessel using trypsin, transfer them to a counting chamber, and count them using

phase contrast microscopy and a manual hand

counter.

However, manual cell counting is a time-

consuming and labor-intensive process,

especially when large numbers of samples

need to be counted. This can slow research

progress and increase the likelihood of errors

due to fatigue. It also relies on the observer’s

ability to visually distinguish between cells

and debris, and to accurately count the cells

in each grid, which can introduce significant

subjectivity into the results, as different

observers may count cells differently.

In addition, manually counting cells can

increase the risk of contamination and

impact cell viability. The results can be hard

to reproduce since they differ across different

observers, labs, and experiments. In cases

when there are not enough cells for an

experiment after manual counting, valuable

time is lost both by the measurement itself and

while the cells settle down and reattach to the

culture vessel so they can continue to grow.


— — — — — — — — — — — — — — —


Page 62



Figure 5: ZEISS Axiovert 5 digital is an all-in-one cell imaging system based on AI.

Learn more about microscopy solutions for cell culture: www.zeiss.com/microscopy/cell-culture

AI tools can help simplify cell counting

The AI Cell Counting module for Labscope

overcomes these challenges by recognizing and counting cells in a field of view at the touch of a button. The AI algorithm can detect and differentiate cells regardless of their type or morphology. Moreover, the algorithm's reliability and reproducibility provide consistent and accurate results.

Like the Cell Confluency module, users can process and analyze existing images. In addition to the number for the cell count, a graphical representation of the detection process allows users to check the algorithm's functionality at any time. Results can be exported in common file formats for further processing in statistical tools such as Microsoft Excel.
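The counting step itself can be illustrated with connected-component labeling on a binary mask. This is a minimal sketch of the idea only; the Labscope AI module uses its own trained detector rather than this code.

```python
# Minimal sketch: count cells as connected foreground objects in a
# binary segmentation mask (an assumed input).
import numpy as np
from scipy import ndimage

def count_cells(cell_mask: np.ndarray) -> int:
    """Count connected foreground objects in a binary mask."""
    labels, num_objects = ndimage.label(cell_mask)
    return num_objects

# Counts can then be written out for statistics, e.g. with
# pandas.DataFrame({"count": [count_cells(mask)]}).to_csv("counts.csv")
```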

The benefits of AI in routine image analysis

Using AI in daily laboratory work promises to optimize routine workflows and improve productivity.

Learn more about Labscope, the easy-to-use imaging app for connected microscopes: www.zeiss.com/labscope

AI combined with microscopy will continue to be one of the game changers in

everyday laboratory life. Routine microscopes

like ZEISS Axiovert 5 digital are already compatible with the AI modules for Labscope and offer all the advantages of automatic cell counting and automatic confluency measurement (see Figure 5). While the human factor remains essential in ensuring the accuracy and reliability of results, AI enriches microscopy examinations with tools that reduce errors and provide greater efficiency by eliminating the need to perform repetitive and time-consuming tasks.


— — — — — — — — — — — — — — —


Page 63




— — — — — — — — — — — — — — —


Page 64


AI for X-ray microscopy with Deep Learning-based reconstruction

Traditionally, microscopy studies aimed

at examining the volumetric structure of

samples have relied on two-dimensional (2D)

slice-by-slice imaging using light or electron

microscopy, followed by the reconstruction of

the three-dimensional (3D) volume through

the registration of individual 2D slices. This 2D

slice-by-slice approach comes with significant

challenges, notably the risk of damaging

delicate structures or altering sensitive features

due to mechanical sectioning. Moreover, it

may introduce mechanical cutting or surface

ablation artifacts, expose internal structures to

the atmosphere, and cause damage through

physical cutting tools, FIB-SEM, or laser

exposure.

Drawbacks of generating 3D

reconstructions from 2D sample sections

Relying solely on 2D images to infer 3D

conclusions has proven problematic. Although

stereography has provided quantifiable

results in ideal conditions, research indicates

that extrapolating 2D images to 3D metrics

can be highly inaccurate, particularly for

heterogeneous or anisotropic real-world

materials. To overcome these limitations,

novel techniques for 3D characterization have

emerged, including the extension of optical

and electron microscopy to 2D-based serial

sectioning microanalysis [1].

Despite their potential, these methods

involve the repetitive slicing of samples while

capturing 2D surface images, which are

then used for 3D reconstruction. Although

this approach brought researchers closer to

achieving comprehensive 3D characterization,

its dependence on slicing frequently results in

restricted depth resolution, voxel shapes that

deviate significantly from cubes, and persistent

issues related to damage caused by 2D interior

sectioning. Ultimately, this method consumes

the sample during the imaging process, eliminating the possibility of subsequent

re-imaging for four-dimensional (4D) studies

(3D imaging over time) or conducting multi-

length scale analyses using alternative imaging

modalities.

High-resolution 3D X-ray microscopes (XRM)

provide a solution to these challenges by

facilitating non-destructive 3D imaging at

comparable length scales [2]. The deep

penetration of X-rays eliminates or significantly

reduces the necessity for extensive sample

preparation. Additionally, full X-ray tomography

avoids altering the sample, which remains

unaffected by mechanical sectioning artifacts

and avoids non-cubic voxels. Consequently,

this approach offers superior visualization and

quantification of 3D microstructures.

X-ray microCT: A versatile tool for non-

destructive 3D characterization across

scientific domains

X-ray microCT (micro-computed tomography)

is a non-destructive imaging technique that

uses X-rays to generate 3D representations of

internal structures within objects at the micron

scale. This method is based on the principles of

computed tomography (like medical CT scans,

but with significantly higher spatial resolution).

Achieving these enhanced resolutions

necessitates fundamentally different instrument

architecture: the detector and X-ray source

remain fixed while the sample undergoes

rotation. In contrast, medical CT instruments

require stationary patients for obvious reasons,

with synchronized rotations of both the source

and detector.

The basics of X-ray microCT and its

applications

In X-ray microCT, the object under examination

is positioned in the path of an X-ray beam and

a sequence of X-ray projections is captured

from multiple angles around the object. These


— — — — — — — — — — — — — — —


Page 65


projections are then used to reconstruct a 3D

image of the internal structure of the object.

This process allows the non-destructive

visualization of internal features, including

pores, cracks, and other details.

X-ray microCT has many applications across

diverse scientific and industrial domains,

spanning materials science, biology, geology,

and paleontology. It enables researchers to

explore the internal structure of samples

without the need for physical sectioning

to provide invaluable insights into their

microstructure. These applications are

demonstrated in 3D renderings of various

samples (see Figure 1 ).

How XRM surpasses traditional microCT

by using dual-stage magnification

Most microCT instruments rely on large

pixel (~100 μm) flat panel detectors and

primarily use small spot size and geometric

magnification (larger apparent size of an object

when it’s closer to the source) to achieve high

resolution. However, this approach results in a

rapid deterioration of resolution as the working

distance (the distance between the detector

and the sample) increases, which can be

problematic for large samples.

In contrast, XRM architecture integrates a

patented detector system rooted in ZEISS’

synchrotron heritage. This system features

small pixels (<0.5 µm) facilitated by scintillators

coupled with visible light optics. The optical

magnification within the detector diminishes the dependence on geometric magnification, thus maintaining high resolution (small voxels) even at long working distances. Consequently, large samples of approximately 100 mm in size, or samples contained within in situ devices, can be imaged at submicron resolution.

Figure 1: Diverse applications of X-ray microCT demonstrated using 3D renderings of samples from different fields. (a) Materials science (21700 battery). (b) Geology (meteorite). (c) Life science (pig eye). (d) Electronics (camera module of a smartphone).

Advantages of XRM

There are significant advantages of XRM over

microCT, including enhanced contrast and

higher resolution when imaging large samples

and conducting in situ studies, preserving the

physiological or environmental context of the

sample being studied. As illustrated in Figure 2 ,

the dual-stage magnification of XRM eliminates

the need for sample destruction to achieve

high-resolution imaging on large samples.

Unlike microCT, where samples must be cut to

bring the region of interest as close to the X-ray

source as possible for higher magnification,

XRM uses a combination of geometric and

optical magnifications to achieve the same

resolution without damaging the sample.
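The benefit of the second, optical magnification stage can be seen with back-of-the-envelope arithmetic. The numbers below are assumed for illustration only, not instrument specifications:

```python
# Illustrative sketch of dual-stage magnification (assumed numbers):
# the achievable voxel size scales with the detector pixel divided by
# the product of geometric and optical magnification.
detector_pixel_um = 13.0   # assumed physical detector pixel size
geometric_mag = 2.0        # modest source/sample geometry (long working distance)
optical_mag = 20.0         # scintillator-coupled optics inside the detector

voxel_size_um = detector_pixel_um / (geometric_mag * optical_mag)
print(voxel_size_um)       # 0.325 um: submicron voxels despite the
                           # modest geometric magnification
```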


Advancements in CT reconstruction:

Harnessing Deep Learning for

enhanced imaging

The traditional method for reconstructing a 3D

volume from a series of sequentially acquired

2D X-ray projections is known as “filtered back

projection” in cone beam CT geometry and is

commonly referred to as FDK reconstruction

[3]. This technique involves weighting and

filtering projections before distributing them

across the image volume along their projection

directions.
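For readers who want to experiment, scikit-image ships a parallel-beam filtered back projection that serves as a simplified stand-in for cone-beam FDK (FDK adds cone-beam weighting to the same filter-and-backproject idea). A minimal sketch, where the sinogram is an assumed input:

```python
# Minimal sketch: parallel-beam filtered back projection as a simplified
# analogue of FDK. "sinogram" is an assumed array of shape
# (n_detector_pixels, n_angles); each column is one projection.
import numpy as np
from skimage.transform import iradon

angles = np.linspace(0.0, 180.0, 400, endpoint=False)  # 400 projections
# reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
```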


— — — — — — — — — — — — — — —


Page 66


Figure 2: Comparison of imaging techniques demonstrated using an example of non-destructive imaging of apple seeds within the fruit. (a) Traditional microCT imaging requires the extraction of an apple seed for high-resolution

imaging, where the seed is positioned close to the X-ray source for optimal magnification. (b) Because of the dual-stage

magnification in XRM, the full apple is imaged non-destructively. The image of the seed is first geometrically magnified on

a scintillator and further magnified using ZEISS proprietary optics before being detected by a CCD detector. This enables

high-resolution imaging without sample destruction.

The challenges of generating accurate

3D CT reconstructions

Achieving an accurate representation of the

3D volume of the sample necessitates a large

number of projections (ideally thousands).

However, this technique relies on the

assumption that the total projection dataset

contains sufficient projections spaced at small

angular intervals (i.e., that the data is “well

sampled”) and is free of significant noise. In

practice, to increase throughput and reduce

total tomography acquisition time, the total

projection dataset is often not well sampled,

leading to errors in the reconstructed image.

This challenge is particularly pronounced in

in situ experiments requiring higher temporal

resolution or industrial applications where the

effective cost per sample must be minimized.

Such errors can result in inaccuracies in

segmentation and any subsequent analysis

derived from the data.

Deep Learning overcomes challenges in

3D CT reconstruction

Deep Learning-based algorithms offer

promising solutions to the challenges

encountered in CT reconstruction, with

the potential to enhance image quality and

decrease throughput time for high-resolution

3D X-ray microscopes [4]. This innovative approach involves using trained neural

networks positioned between the X-ray

projections and the final reconstructed volume.

Deep Learning-based CT reconstruction

techniques can effectively reduce

noise in 3D XRM data and mitigate CT

reconstruction artifacts, such as aliasing

artifacts (shadow bands, dark streaks, or

noise-like distortions), which may arise when

insufficient X-ray projection data is available.

While Machine Learning applications in the

field have predominantly concentrated on

post-reconstruction tasks such as image

segmentation, feature classification, and

object recognition, the integration of Deep

Learning-based techniques within the complex

workflow of 3D XRM has only recently begun

to be extensively explored.

Enhancing 3D CT reconstruction with

ZEISS DeepRecon Pro

A Deep Learning-based reconstruction

workflow developed by ZEISS, known as

ZEISS DeepRecon Pro, greatly assists the CT

reconstruction phase of XRM measurement. It

is part of the Advanced Reconstruction Toolbox

(ART), which offers image reconstruction

technologies on ZEISS X-ray microscopes to

enhance X-ray system performance.


— — — — — — — — — — — — — — —


Page 67



Figure 3: Integrating a pre-trained neural network between 2D X-ray projections (radiographic data) and 3D CT

reconstructed volume.

This Deep Learning-based reconstruction

workflow features a user-friendly software interface that minimizes user input, requiring only the specification of the desired application result, such as improved image quality or reduced throughput time. ZEISS DeepRecon Pro employs trained convolutional neural networks positioned between the X-ray projections and the final reconstructed volume (see Figure 3). This streamlined workflow enables XRM image processing, interpretation, and retrieval using an on-demand trainable neural network. Consequently, high-quality reconstructed data can be obtained even with a reduced number of projections (Np).

ZEISS DeepRecon Pro uses ZEISS proprietary

cost functions and training protocols to

generate image reconstructions from datasets

obtained with a low Np as the training

input [4]. This is achieved by using an FDK-

reconstructed image produced with a large Np

as the reference ground truth training target

data. The Deep Learning network training is

customized to specifi ed XRM data acquisition

settings and a particular sample class, defi ned

as a group of samples with similar X-ray

attenuation, and scan recipe parameters.

Once trained, the network can eff ectively

process datasets belonging to the same sample

class. If there are diff erences in the sample

class or modifi cations in the XRM acquisition

parameters, retraining of the network is

necessary.Training a Deep Learning network with

ZEISS DeepRecon Pro doesn’t require prior

knowledge of the sample type, meaning

users can create custom networks for various

applications without Machine Learning

expertise. This automated training scheme is

seamlessly integrated into a software interface,

offering users a selection of options through an

intuitive drop-down menu.
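The training idea can be sketched in a few lines of PyTorch. This toy example only illustrates the pairing described above, where a low-projection-count (low-Np) reconstruction is the input and the high-Np FDK reconstruction is the ground truth; the actual ZEISS DeepRecon Pro networks, cost functions, and protocols are proprietary and not shown here.

```python
# Conceptual sketch: learn a mapping from sparsely sampled (low-Np)
# slices to well-sampled (high-Np) FDK slices. Toy network, not the
# proprietary ZEISS DeepRecon Pro architecture.
import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny denoising CNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(low_np_slice: torch.Tensor,
                  high_np_slice: torch.Tensor) -> float:
    """One step; both tensors have shape (batch, 1, height, width)."""
    optimizer.zero_grad()
    loss = loss_fn(model(low_np_slice), high_np_slice)
    loss.backward()
    optimizer.step()
    return loss.item()
```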

Comparing FDK and ZEISS DeepRecon

Pro 3D reconstructions

Figure 4 provides a compelling illustration of

the benefits offered by Deep Learning-based reconstruction methods. The figure compares

the reconstruction of a 21700 lithium-ion

battery using both traditional FDK and ZEISS

DeepRecon Pro.

The 2D section shown in Figure 4a was

reconstructed using the FDK algorithm from

a dataset comprising 3,200 projections

acquired over 11 hours. This extensive

projection dataset is typically required to

capture the necessary details when using

standard FDK reconstruction. Figure 4b shows

the same region reconstructed using FDK,

but from a significantly reduced number of

projections, with only 400 collected over

84 minutes. This dramatic reduction in the

number of projections results in various

artifacts, particularly evident in the region

highlighted by the red ellipse. However,

when the same 400-projection dataset was


— — — — — — — — — — — — — — —


Page 68


Figure 4: Comparison of FDK and ZEISS DeepRecon Pro reconstruction techniques for a 21700 lithium-ion battery sample.

(a) 2D section from a volume reconstructed using FDK from a dataset of 3,200 projections acquired over 11 hours. (b) 2D

section from a volume reconstructed using FDK from a reduced dataset of only 400 projections collected over 84 minutes,

exhibiting various artifacts, particularly in the region highlighted by the red ellipse. (c) 2D section from a 3D volume

reconstructed using the ZEISS DeepRecon Pro Deep Learning-based approach from the same 400-projection dataset,

demonstrating a clean, artifact-free image comparable to the high-quality FDK reconstruction in (a) despite the 8-fold

reduction in acquisition time and Np.

reconstructed using the ZEISS DeepRecon Pro

Deep Learning-based algorithm, the resulting

image (see Figure 4c ) is clean, artifact-free,

and comparable to the high-quality FDK

reconstruction from the comprehensive

3,200-projection dataset.

This remarkable 8-fold improvement in

throughput without compromising image

quality underscores the practical benefits of

the ZEISS DeepRecon Pro Deep Learning-based

reconstruction. The next section provides

additional examples further illustrating the

advantages of this Deep Learning-powered

technique.

Demonstrating the impact of Deep

Learning with example applications

Improving graphite contrast in battery

materials

Battery analysis represents a compelling

application for X-ray microscopy due to the

sealed nature of these devices. Batteries are

complex systems consisting of not just a single

material, but rather a functional composite of

multiple materials arranged precisely, as shown

in Figure 1a .

Battery analysis encompasses a wide range of

tasks, including inspection and measurement,

defect inspection, material evaluation, in

situ monitoring of cycling behaviors, and high-resolution imaging to provide input for

performance models. AI-based reconstruction

algorithms such as ZEISS DeepRecon Pro can

significantly enhance the value of these tasks.

Beyond inspection tasks, researchers often

seek to integrate the results of 3D imaging

experiments into computer simulation

packages to model the performance of various

microstructural and chemical arrangements in

batteries. Achieving the best possible image

quality is essential for accurately segmenting

different phases and generating suitable inputs

for these models. For instance, in lithium-

ion pouch cell batteries, obtaining good

contrast within the graphite anode region

can be challenging due to its low density

and immersion in a liquid electrolyte. ZEISS

DeepRecon Pro offers capabilities that surpass

those of standard reconstruction techniques,

particularly in applications of this nature.

Figure 5 presents a comparison of FDK and

ZEISS DeepRecon Pro reconstructed images

collected from a cell imaged for 24 hours on

the ZEISS Versa XRM at high magnification.

Figure 5a depicts a 2D section from the

volume reconstructed using standard FDK

reconstruction, whereas Figure 5b

shows the same section from a volume

reconstructed using ZEISS DeepRecon Pro.

While Figure 5a (FDK) exhibits excellent

contrast between the graphite anode and


— — — — — — — — — — — — — — —


Page 69


Figure 5: Comparison of reconstruction techniques

in a lithium-ion pouch cell battery. (a) 2D section

from a volume reconstructed using standard FDK

from a dataset acquired over 24 hours showing

good contrast but lacking detail within the anode

region. (b) The same section from a volume

reconstructed using ZEISS DeepRecon Pro from

the same dataset acquired over 24 hours. This

ZEISS DeepRecon Pro reconstruction demonstrates

improved visualization of fine details in the anode

structure allowing more accurate segmentation.

Note the contrast has been enhanced in both

(a) and (b) for better visualization of the darker

graphite regions. (c) 2D section from a volume

reconstructed using FDK from a quarter of the

projections of the original 24-hour dataset,

simulating a 6-hour acquisition. The data quality

is notably degraded compared to the 24-hour

dataset, with increased noise evident in the

image inset. (d) The same section from a volume

reconstructed using ZEISS DeepRecon Pro from the

simulated 6-hour dataset, demonstrating a 4-fold

improvement in quality without compromising

image information, as evidenced by the clean inset

image.

other cell components, it lacks the fine detail

within the anode region (dark gray) necessary

for accurate segmentation.

Conversely, Figure 5b (ZEISS DeepRecon

Pro) clearly depicts fine details in the anode

structure, facilitating improved segmentation

of these regions [5]. Note that contrast has

been enhanced in both panel (a) and (b) for

better visualization of the darker graphite

regions (see Figure 5 ). By employing ZEISS

DeepRecon Pro, much of the noise present in

standard FDK reconstruction can be eliminated

while preserving the features and sharpness

required for visualizing structures within the

sample. This capability enables researchers to

segment the anode layer microstructure more

easily and provides more accurate inputs for

modeling and expediting research objectives.

To further explore the benefits of ZEISS

DeepRecon Pro, a quarter of the projections

from the 24-hour dataset depicted in Figure 5a

and Figure 5b were used to simulate a 6-hour

acquisition and reconstructed using FDK and

ZEISS DeepRecon Pro, respectively. Figure

5c illustrates a 2D section from the volume

reconstructed using the FDK algorithm, where the quality is notably impacted compared

to the 24-hour dataset. This degradation is

particularly evident in the noisy inset image,

which displays a zoomed-in region from the 2D

slice. Conversely, Figure 5d shows the same 2D

section, reconstructed using ZEISS DeepRecon

Pro. The clean inset image demonstrates a

remarkable 4-fold improvement in quality

achievable with ZEISS DeepRecon Pro without

compromising the information contained in the

images.

Enhancing Inconel imaging in additive

manufacturing

Inconel, a nickel-based superalloy, has

emerged as a workhorse material for additive

manufacturing (AM) applications. Inconel has

unique properties, including high strength,

crack and corrosion resistance, and excellent

performance under harsh conditions, that

make it a popular choice for a wide range of

AM parts. In recent years, non-destructive

tomography techniques like microCT and

X-ray microscopy have become established

methods for testing and analyzing additively

manufactured Inconel components. These

advanced imaging techniques have proven

especially useful and accurate for dimensional


— — — — — — — — — — — — — — —


Page 70


Figure 6: Comparison of FDK and ZEISS DeepRecon Pro reconstruction techniques for high-resolution imaging of an

Inconel alloy sample. (a) FDK reconstruction from 1601 projections, representing an optimal scanning recipe with longer

exposure times. (b) FDK reconstruction from only 401 projections, showing a significant decline in image quality and

the obscuring of smaller voids. (c) ZEISS DeepRecon Pro reconstruction from the 1601-projection dataset, delivering the

cleanest image with the best signal-to-noise ratio. (d) ZEISS DeepRecon Pro reconstruction from the 401-projection dataset,

maintaining the same level of detail as in panel (c), despite a 4-fold reduction in acquisition time.

measurement and porosity analysis of AM

parts. High-resolution scanning is often

required for these applications to detect tiny

defects and pores within the internal structures

of Inconel AM parts.

Dense metal samples, such as Inconel, can

pose significant challenges for high-resolution

interior tomography scans. Dense materials

often require extremely long exposure

times to achieve acceptable noise levels

in the reconstructed images. Figure 6

shows a comparison of Inconel alloy scans

reconstructed using both standard FDK

algorithms and the Deep Learning-based ZEISS

DeepRecon Pro approach.

The FDK-reconstructed slice from 1601

projections (see Figure 6a ) demonstrates

the image quality that can be achieved with

an optimal scanning recipe that consists of

longer scans. However, reducing the number

of projections to 401 results in a noticeable

decline in quality, where smaller voids become

obscured, as seen in Figure 6b .

In contrast, the ZEISS DeepRecon Pro-

reconstructed slices maintain exceptional image

quality, with the 1601-projection dataset in

Figure 6c showing the cleanest image with the

best signal-to-noise ratio. The 401-projection

ZEISS DeepRecon Pro reconstruction in Figure

6d captures the same level of detail as the 1601-projection ZEISS DeepRecon Pro result,

despite the 4-fold reduction in acquisition time.

This highlights the powerful capabilities of the

Deep Learning-based reconstruction, which

can deliver high-quality images without the

need for long scanning times.

Advancing PCB inspection with XRM and

Deep Learning reconstruction

The relentless push for miniaturization in the

semiconductor industry has introduced new

quality control challenges. X-ray microCT

has become a widely adopted technique to

quickly identify design issues, discrepancies,

and internal defects within printed circuit

boards (PCBs). The need for even higher image

resolution to detect smaller defects in large

PCB samples is addressed by the two-stage

magnification capabilities of XRM systems.

Until recently, the sensitivity of classic

scintillator materials to high-energy X-rays

has limited the application of XRM for PCB

inspection. The introduction of the “resolution

performance” feature in the ZEISS Xradia 630

Versa XRM system has been a game-changer,

enabling high-resolution imaging at the high

X-ray energies required to penetrate large PCB

samples. While this technological advancement

has significantly expanded the usefulness of

XRM for PCB analysis, long acquisition times

are often still necessary, and high noise levels

can obscure small defects. Deep Learning-


— — — — — — — — — — — — — — —


Page 71


based reconstruction using ZEISS DeepRecon

Pro can generate higher-quality, lower-noise

images in shorter scanning times.

The value of XRM paired with advanced

reconstruction techniques like ZEISS

DeepRecon Pro is exemplified in the analysis

of PCBs. Figure 7 demonstrates a multi-scale

imaging workflow applied to a PCB sample,

highlighting the benefits of the ZEISS

DeepRecon Pro approach.

Figure 7a shows a low-magnification (12 μm

voxel) overview image capturing the full field

of view of the PCB sample. To further inspect

a specific region of interest, a high-resolution

(0.4 μm voxel) scan was performed targeting

the solder bump highlighted in Figure 7a .

The 2D slice from the high-resolution FDK

reconstruction reveals a noisy image, where

some of the larger cracks within the solder

bump are visible (see Figure 7b ). While this image provides good overall detail, the defects

and cracks present within the solder bump are

not clearly discernible. In contrast, the 2D slice

from the ZEISS DeepRecon Pro-reconstructed

volume reveals the critical solder bump

defects much more clearly (see Figure 7c ).

This highlights the importance of the Deep

Learning-based reconstruction for applications

requiring the detection of smaller, more

subtle internal features within complex PCB

structures.

Conclusion

The increasing complexity of X-ray

tomographic microscopy experiments has

made advanced image processing algorithms

an essential component of achieving accurate

and high-quality results. Traditionally, there has

been a trade-off between image quality and

experimental throughput that needed to be

carefully balanced. However, Deep Learning,

specifically convolutional neural networks,

Figure 7: Multi-scale XRM and Deep Learning-based reconstruction for printed circuit board (PCB) analysis. (a)

Low-magnification (12 μm voxel) overview image of the PCB sample, with a region of interest (solder bump) highlighted. (b)

2D slice from the high-resolution (0.4 μm voxel) FDK reconstruction of the solder bump region, where defects and cracks

are not clearly visible. (c) 2D slice from the high-resolution volume reconstructed using the ZEISS DeepRecon Pro Deep

Learning-based approach, revealing the critical solder bump defects much more clearly. The ZEISS DeepRecon Pro method

demonstrates the ability to detect subtle internal features within complex PCB structures that are obscured in standard FDK

reconstructions.


— — — — — — — — — — — — — — —


Page 72


holds the potential to revolutionize this field by

overcoming this persistent challenge.

To address the obstacles associated with

implementing Deep Learning for X-ray imaging applications, a new technology called ZEISS DeepRecon Pro has been developed. ZEISS DeepRecon Pro enables fully automated training of high-performance neural networks for image reconstruction, with minimal user input required beyond specifying the desired outcome, such as improved image quality or increased throughput. This streamlined approach is effective in removing various imaging artifacts, including sparse sampling issues and random noise, resulting in higher-quality reconstructions with lower error.

The effectiveness of the ZEISS DeepRecon Pro Deep Learning-based reconstruction has been demonstrated across a range of application examples, encompassing both full-field and interior tomography. Both qualitative and quantitative analyses have shown the ability of the network to produce high-quality results within its specific training dataset and for a broad range of samples and imaging conditions.

References

1. Lidke DS, Lidke KA. Advances in high-resolution imaging – techniques for three-dimensional imaging of cellular structures. J Cell Sci. (2012) 125(11):2571–2580. doi: 10.1242/jcs.090027.

2. ZEISS Microscopy. What is 3D X-ray microscopy? Technical Note. URL: https://pages.zeiss.com/rs/896-XMS-794/images/Ebook_3D-X-ray-Microscopy-Second-Edition.pdf (accessed 23 April 2024).

3. Andrew M, Sanapala R. Advanced reconstruction technologies. Technical Note. URL: https://zeiss.widen.net/s/zjqgbrsqsf/en_journal-article_eptc-2021_package-fa-with-correlated-xrm-laserfib_viswanathan-jiao-hartfield (accessed 23 April 2024).

4. Villarraga-Gómez H, et al. Improving throughput and image quality of high-resolution 3D X-ray microscopes using deep learning reconstruction techniques. 11th Conference on Industrial Computed Tomography (iCT), Wels, Austria. (2022) 8-11 Feb. e-Journal of Nondestructive Testing 27(3). doi: 10.58286/26644.

5. Allen G, et al. Accelerate your 3D X-ray failure analysis by deep learning high resolution reconstruction. ISTFA 2021: Conference Proceedings from the 47th International Symposium for Testing and Failure Analysis, Phoenix, Arizona, USA. (2021) 291–295. doi: 10.31399/asm.cp.istfa2021p0291.


— — — — — — — — — — — — — — —


Page 73




— — — — — — — — — — — — — — —


Page 74


Case studies: Examples from Life Sciences

To truly understand and appreciate the power of AI for image analysis, practical applications are key. This chapter shares various case studies demonstrating the diverse and practical ways in which AI can aid image analysis. Through these examples, you'll see the potential impact and benefits that AI can bring to your imaging.

Microscopy and Deep Learning for neurological disease research

Microscopy is one of the primary methods

used to understand neurological diseases,

such as Parkinson’s disease, by studying

neural circuits. By examining the cellular

mechanisms that drive synapse formation

and regulate synapse composition,

researchers can identify patterns and rules

necessary for establishing neural circuits.

Mouse models are often used to investigate

the generation and function of these

circuits, which are relevant to various human

diseases.

This analysis involves examining dendritic

spines and neuronal projections to

understand neural circuits.

Figure 1: A four-channel microscopy image of a mouse brain with fluorescence of various labels. (a) Full image, and (b) a single-channel image of tdTomato that highlights the neuron structure requiring segmentation for dendritic spines and neuronal projections. (c) Zoomed-in view of a selected region from panel (b), where yellow arrows indicate some dendritic spines, which are small protrusions from the neuronal projections.

The sample used for this study was provided

by R. Thomas and D. L. Benson from Icahn

School of Medicine at Mount Sinai, New York,

USA. Primary neurons expressing tdTomato

were isolated from the mouse brain and plated

in a 96-well plate for microscope imaging.

3D z-stack images were captured using a

ZEISS Celldiscoverer 7 microscope with LSM

900 and Airyscan 2, equipped with a 50x/1.2

water objective and 0.5x Tube lens. A 3D

z-stack image from one of the wells clearly

displays the reddish-yellow-colored neuronal

projections and dendritic spines that need to

be segmented (see Figure 1).


— — — — — — — — — — — — — — —


Page 75


Separating dendritic spines and neuronal

projections with Deep Learning

A Deep Learning model must be trained to

separate spines and neuronal projections. Deep

Learning is superior to conventional Machine

Learning when dealing with complex images,

as is the case here, where spines and neuronal

projections appear similar in images.


A Deep Learning-based semantic segmentation

model was trained on ZEISS arivis Cloud. The

objective was to recognize two classes, namely

dendritic spines and neuronal projections, in

addition to the background. To create a ground

truth for each of the three classes, twelve

random slices were selected from the z-stack

and partially annotated.

The annotation process involved using a

digital paintbrush of different colors to mark

respective pixels for each class. In this case,

neuronal projections were painted in yellow,

dendritic spines in green, and the background

in dotted purple (see Figure 2 ).

Figure 2: The arivis AI training interface on the ZEISS arivis Cloud with three defined classes for segmentation: projections

(yellow), spines (green), and background (dotted purple). The inset image showcases a zoomed-in area with labeled classes

representing each category’s ground truth. It is important to note that the image is partially labeled, focusing on regions

that provide useful information for the Deep Learning model.

To refine the trained model, initial results

were visually inspected and annotations

added to indicate areas where the model was

unsuccessful. This iterative process is crucial

in data-centric model training, where the

expert’s input is a vital part of the workflow.

The iterative training process continued until

the subject matter expert was content with

the result. The model was then downloaded

and integrated into an image analysis pipeline

that involves segmentation followed by object

analysis, utilizing the 3D toolkit in ZEN. Figure 3

shows the segmented dendritic spines overlaid

on the tdTomato fluorescence image.
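Applying such a model at inference time amounts to taking, for each pixel, the class with the highest predicted probability. A minimal sketch, assuming the trained model outputs a (3, H, W) probability map for background, projections, and spines; this is illustrative only, not the arivis Cloud or ZEN interface:

```python
# Minimal sketch: turn a per-pixel class probability map into boolean
# masks. "probs" with shape (3, H, W) is an assumed model output.
import numpy as np

BACKGROUND, PROJECTION, SPINE = 0, 1, 2  # assumed class order

def masks_from_probabilities(probs: np.ndarray):
    """Return (projection_mask, spine_mask) from class probabilities."""
    class_map = np.argmax(probs, axis=0)  # winning class per pixel
    return class_map == PROJECTION, class_map == SPINE
```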

How microscopy and Deep Learning can

aid neurological research

Microscopy and Deep Learning are valuable

tools in Parkinson’s research, allowing

researchers to study neural circuits and

understand the cellular mechanisms that

regulate synapse formation and composition.

A Deep Learning-based semantic segmentation

model was trained to separate dendritic spines


— — — — — — — — — — — — — — —


Page 76


Figure 3: (a) Single-channel image of tdTomato highlighting neuron structure; same as Figure 1b. (b) Dendritic spines

segmented in blue and overlaid on the image in panel a. (c) Inset image zooms in on a region from panel b to show clear

segmentation of spines.

and neuronal projections using 3D z-stack

images captured from a ZEISS Celldiscoverer 7

microscope. An iterative process involving data-

centric model training was employed to refine

the model before integrating it into an image

analysis pipeline utilizing the 3D toolkit in

ZEN. The successful segmentation of dendritic

spines using the trained model demonstrates

the effectiveness of Deep Learning in complex

image analysis and its potential to contribute to

future neurological disease research.

Organoid analysis

Organoids are artificial three-dimensional

model systems that can imitate the cellular

composition and tissue architecture of organs

while being easier to maintain and manipulate

experimentally, making them ideal tools for

developmental biology research.

Intestinal (gut) organoids are indispensable

tools for studying both normal gut

development and the mechanisms that lead to

morbidities (e.g., inflammatory bowel disease).

The Wnt pathway is a well-known signaling

pathway regulating intestine development and

maintenance. The functions and effects of Wnt are very intricate and context-dependent, with

Wnt contributing to maintaining healthy tissue

stem cells and the transition and differentiation

of stem cells into mature enterocytes (intestinal

tissue cells). However, excessive Wnt activity

(e.g., by genetic mutations) contributes to

intestinal cancer.

Investigation of Wnt inhibition on

organoid formation

To study the effect of Wnt inhibition, intestinal

stem cells equipped with fluorescent proteins

Histone2B-RFP and Mem9-GFP to mark cell

nuclei and membranes were allowed to grow

to organoids for 5 days in the presence or

absence of Wnt signaling pathway inhibitor

IWP-2. Organoids were then fixed and

antibody-stained for aldolase B, a marker for

differentiated enterocytes, and counterstained

with DAPI (for nucleus detection).

Image acquisition was performed using a

confocal ZEISS Celldiscoverer 7 that combines

widefield and confocal imaging modes. Single

organoids were acquired at 20x magnification

with image stacks spanning the complete

organoid depth.


— — — — — — — — — — — — — — —


Page 77


Figure 4: Imaging of Organoids. (a) Overview scan of organoids (widefield). (b) Identification of areas of interest. (c)

Detailed confocal scan using Airyscan detector. The overview scan was performed with a 2.5x magnification in camera-

based widefield mode. For detailed scans (20x magnification), image stacks spanning the complete organoid depth were

captured in confocal mode using the Airyscan detector.

The ZEISS ZEN (blue edition) module ‘Guided

Acquisition’ was used to acquire many

individual organoids. This is an automated

imaging workflow consisting of three parts: a large overview scan at low magnification (Figure 4a), an image analysis pipeline to identify areas of interest, in this case individual organoids on the overview image (Figure 4b), and a detailed scan of all identified positions (Figure 4c).

Leveraging multiple segmentation tools in ZEISS arivis Pro

The images were analyzed using ZEISS arivis

Pro with Machine Learning segmentation

performed to segment the outer organoid

cell layer. Next, the organoid lumen was

determined by filling inclusions in the

organoid cell layer segmentation. Nuclei were

segmented with the blob finder function from

H2B-RFP and DAPI channels. Nuclei within the

organoid cell layer and the organoid lumen

were separated into two object groups based

on object distances to the organoid lumen.

The cell bodies were segmented via region growing from nuclei objects within the

organoid cell layer. Finally, all object groups

were stratified for single organoids to enable

better statistical analysis.
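This sequence of operations can be approximated with open-source tools. The sketch below is illustrative only, not the arivis Pro pipeline itself: file names and parameters are hypothetical, Laplacian-of-Gaussian blob detection stands in for the blob finder, and a simple membership test replaces the object-distance relationship used in the actual analysis.

```python
# Minimal sketch with scikit-image/SciPy stand-ins; file names and
# parameters are hypothetical, not the arivis Pro pipeline itself.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import blob_log
from skimage.segmentation import watershed

cell_layer = np.load("cell_layer_mask.npy")   # 3D bool mask from the ML segmenter
nuc = np.load("nuclei_channel.npy")           # 3D H2B-RFP/DAPI intensities, float

# Lumen: fill the cavity enclosed by the cell layer, then remove the layer itself.
lumen = ndi.binary_fill_holes(cell_layer) & ~cell_layer

# Nuclei via Laplacian-of-Gaussian blob detection (the 'blob finder' idea).
blobs = blob_log(nuc, min_sigma=2, max_sigma=6, threshold=0.05)
seeds = np.zeros(nuc.shape, dtype=np.int32)
for i, (z, y, x, _sigma) in enumerate(blobs, start=1):
    seeds[int(z), int(y), int(x)] = i

# Group nuclei: a simple membership test stands in for the object-distance
# relationship to the lumen used in the real analysis.
in_layer = np.array([cell_layer[int(z), int(y), int(x)] for z, y, x, _s in blobs])

# Cell bodies: region growing (watershed) from cell-layer nuclei,
# restricted to the organoid cell layer.
layer_seeds = np.where(np.isin(seeds, np.flatnonzero(in_layer) + 1), seeds, 0)
cell_bodies = watershed(-nuc, layer_seeds, mask=cell_layer)
```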

The validity and quality of the different

segmentations applied during the analysis were

checked. The organoid cell layer and organoid

lumen were segmented with the Machine

Learning segmenter. Employing Machine

Learning leads to superior segmentation results

compared to conventional threshold-based

segmentation, allowing discrimination

between cells in the cell layer (included in the

objects) and lumen (excluded from the objects)

based on complex image texture (see Figure

5a).

Cell nuclei were segmented with blob finder

segmentation, allowing high-quality separation

of nuclei despite them being densely packed

in 3D and despite intensity variations. By

setting up relationships between the organoid

Figure 5: Organoid cell layer and lumen segmentation. (a) The cell layer overlay is shown in green, and the lumen overlay

in yellow. (b) Nuclei in organoid cell layer and lumen. Cell layer nuclei are shown in red, and luminal nuclei in yellow. (c)

Cell bodies in the organoid cell layer. Cell layer nuclei are shown in red, and cell layer cell bodies are shown in green.


— — — — — — — — — — — — — — —


Page 78


Figure 6: Wnt inhibition impacts morphology of organoids. Overview images of organoids treated without (a) and with

(b) Wnt inhibitor. The images show that Wnt inhibition changes the morphology of the organoids, including size and

shape. Control-treated organoids are larger and have an irregular shape. (c) The roundness of full organoids. Single data

points, mean, and standard deviation are depicted. p-value from statistical t-test is shown.

cell layer and lumen object, nuclei were then

further separated into cell layer nuclei and

luminal nuclei (see Figure 5b). Cell bodies were segmented by region growing from cell layer nuclei. By object filtering, they were restricted to the organoid cell layer (see Figure 5c).

Wnt inhibition affects the morphology of

organoids

Analysis of organoid morphology showed

a trend for larger volumes and particularly a

larger spread of volumes in the control group,

suggesting that Wnt inhibition interferes

with the proper growth of the spheroids (see

Figure 6). However, none of these trends were

significant in a statistical t-test.

The control-treated organoids formed more

amorphous shapes, while organoids treated

with Wnt inhibitor remained spherical.

ZEISS arivis Pro offers several morphological

parameters to analyze such observations.

Statistical analysis of ‘roundness’ showed a

significant drop in control-treated samples

(see Figure 6c). Thus, Wnt inhibition indeed interferes with the formation of amorphous organoid shapes.

Cell numbers in different organoid

compartments

The number of cells in the different organoid

compartments was analyzed based on

nucleus object counts. There was a significant

increase in cell numbers for control-treated

organoids compared to organoids exposed

to Wnt inhibition (p < 0.05 each in statistical t-tests), indicating that Wnt inhibition interferes

with proper organoid outgrowth.

Aldolase B is a marker for enterocyte

differentiation and mainly localizes to the

cytosol, making the cell body objects the

best suited for analysis (see Figure 7a ). Using

ZEISS arivis Pro to extract channel intensities

from different hierarchical layers, aldolase B

expression was measured for the complete

organoid (see Figure 7b ), and the single-cell

mean aldolase B intensities measured

independently on every cell (see Figure 7c ). In

both cases, there is a strong and significant

increase (p < 0.001 in statistical t-tests) in

organoids that were mock-treated compared

to organoids treated with Wnt inhibitor, adding

further evidence that Wnt inhibition interferes

with organoid maturation.

Determining aldolase B-positive cells as

an alternative readout

More realistically, cells are either ‘positive’ or

‘negative’ for aldolase B, as can be observed

in a typical organoid cross section (see Figure

8). Therefore, a more suitable analysis strategy

stratifies cells into aldolase B-positive and

-negative groups, then evaluates the fraction of

positive cells within an organoid.

Using a mean pixel intensity of 15 as a

threshold for aldolase B-positive cells, positive

and negative cells were generated that match

well with the visual impression of aldolase B


— — — — — — — — — — — — — — —


Page 79


Figure 7: Localization of aldolase B expression in the organoids. (a) Aldolase B expression (gray) is localized to entire

cell bodies (green) rather than the nuclei (red). (b) Total organoid aldolase B expression. Single data points, mean, and

standard deviation are depicted. p-value from statistical t-test is shown. (c) Average cellular mean aldolase B intensity.

Single data points, mean, and standard deviation are depicted. p-value from statistical t-test is shown.

Figure 8: Determining aldolase positivity. (a) Localization of aldolase B expression in the organoids. Aldolase B expression

(gray) is localized to the entire cell bodies (green) rather than the nuclei (red). (b) Number of aldolase B-positive cells per

organoid. Single data points, mean, and standard deviation are depicted. p-value from statistical t-test is shown. (c)

Percentage of aldolase B-positive cells per organoid. Single data points, mean, and standard deviation are depicted.

p-value from statistical t-test is shown.

distribution in the example cross section (see

Figure 8). Results are shown as total positive cells per organoid (see Figure 8b) and as the percentage of positive cells per organoid (see Figure 8c). Again, control-treated organoids

had significantly more aldolase B-positive cells,

indicating better organoid maturation.
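This stratification is easy to reproduce on exported measurements. A minimal sketch follows, assuming per-cell mean intensities have been exported from ZEISS arivis Pro to a CSV file; the file and column names are hypothetical, while the threshold of 15 is the one used in the text.

```python
# Minimal sketch; file and column names are hypothetical, the intensity
# threshold of 15 is the one used in the text.
import pandas as pd

cells = pd.read_csv("organoid_cells.csv")     # one row per segmented cell body
cells["aldob_positive"] = cells["aldob_mean_intensity"] > 15

# Count and fraction of positive cells per organoid (cf. Figure 8b/c).
per_organoid = cells.groupby("organoid_id")["aldob_positive"].agg(
    n_positive="sum",
    pct_positive=lambda s: 100.0 * s.mean(),
)
print(per_organoid.head())
```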

Summary

This study highlights how combining a ZEISS

Celldiscoverer 7 and ZEISS arivis Pro for image

analysis allows easy analysis of organoids

and can help uncover biological insights,

such as the role of Wnt signaling in intestinal

organogenesis. Only 30 organoids per sample

were analyzed, which is insufficient for a formal study and for statistically robust conclusions. Nevertheless, this kind of ‘real-world’ use case helps users learn about image analysis strategies they can apply to their own data.


— — — — — — — — — — — — — — —


Page 80


Figure 9: (a) Phase contrast image of HeLa cells captured at 10x magnification. (b) Entropy-filtered image revealing

subtle variations in texture and tone from panel (a). (c) Segmented regions containing cells against the background

after applying a threshold to the image in (b). Note that while the cellular region is segmented, individual cells are not

separated.

Enhancing single-cell analysis with instance segmentation in phase contrast microscopy images

Cell tracking is a commonly used assay in

biotech research, as it provides valuable

insights into a wide range of diseases and

conditions. For example, it can be used to

monitor the behavior of cancer cells, including

their proliferation, migration, and invasion,

thus helping researchers to develop new

cancer therapies and evaluate the effectiveness

of existing treatments. While fluorescent

labeling facilitates cell segmentation and

tracking, researchers often choose to image

cells in brightfield or phase contrast. This is

because these imaging techniques can provide

valuable information about cell morphology

and structure, including the size, shape, and

texture of the cell. Also, they do not require

any additional preparation of the cells, such as

labeling or staining, which means that the cells

can be imaged directly in their natural state,

without being altered by the labeling process.

This is particularly important for studying

certain cellular processes or phenomena, as

adding fluorescent labels may interfere with or

alter the behavior of the cells.

The benefits of object-based

segmentation in biomedical applications

Both conventional Machine Learning and

Deep Learning techniques (such as the use of

U-net [1]) share a similar limitation: they cannot separate individual cells, which is essential

for accurate tracking algorithms. While these

methods may produce satisfactory results by

defining an additional border class, a more

reliable approach is to use object-based

segmentation algorithms, also known as

‘instance segmentation’ in the AI community.

This method is more effective in accurately

segmenting individual cells, allowing for more

precise tracking and analysis of their behavior.

Instance segmentation is a computer vision

technique used for identifying and outlining

individual objects within an image. Unlike

semantic segmentation, which assigns a

single label to each pixel in an image, instance

segmentation identifies and separates objects

based on their unique characteristics, such

as shape, size, and color. It is particularly

useful for biomedical applications, such as cell

segmentation in brightfield and phase contrast

microscopy images.

The challenges of segmenting brightfield

micrographs

However, segmenting cells in brightfield and

phase contrast images can be challenging,

primarily because the average gray level of the

cells is often equal to the average gray level

of the background. This makes it impossible


— — — — — — — — — — — — — — —


Page 81


Figure 10: A screenshot of the annotation interface from arivis AI displaying a partially annotated training image of HeLa

cells. The cells are clearly labeled in red, while the background is labeled in dotted purple.

to segment cells using conventional threshold techniques. One solution is to apply digital filters to generate filtered images that can then be segmented using threshold techniques. For example, an entropy filter can highlight regions of high texture (see Figure 9b), which can help separate cells from the background.

However, this approach fails at properly separating cells from each other (see Figure 9c). Watershed-based separation is often used to address this issue, but it can lead to inconsistent results between frames, potentially making cell tracking discontinuous between
frames.
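A minimal sketch of this conventional pipeline, using scikit-image and SciPy as stand-ins (the file name and parameters are hypothetical), illustrates both the approach and its weakness: the watershed seeds are derived from the distance map, so they can shift from frame to frame.

```python
# Minimal sketch (not the arivis implementation): entropy filter,
# global threshold, then watershed separation on a phase contrast frame.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, measure, util
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.segmentation import watershed

img = util.img_as_ubyte(io.imread("hela_phase.tif", as_gray=True))  # hypothetical file

# 1) Entropy filter highlights textured (cellular) regions (cf. Figure 9b).
ent = entropy(img, disk(5))

# 2) Thresholding the entropy image separates cells from background (cf. Figure 9c).
mask = ent > threshold_otsu(ent)

# 3) Watershed tries to split touching cells, seeded at distance-map peaks.
dist = ndi.distance_transform_edt(mask)
peaks = peak_local_max(dist, min_distance=15, labels=measure.label(mask))
markers = np.zeros(img.shape, dtype=np.int32)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
cells = watershed(-dist, markers, mask=mask)

# The seeds depend on the distance map, so labels can change between
# frames -- the inconsistency mentioned in the text.
print(f"{cells.max()} candidate cells")
```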

Instance segmentation of HeLa cells to

track their movement, shape and size

In this case study, HeLa cells grown over time

in a multi-well plate were imaged under phase

contrast mode using a ZEISS Celldiscoverer 7

microscope with a Plan-Apochromat 20X/0.95

objective and a 0.5x tube lens, yielding an effective magnification of 10x. To study the

cells at a single-cell level, including tracking

over time, they were segmented using the

instance segmentation approach. ZEISS arivis

Cloud platform was used to annotate the

training images (see Figure 10).

ZEISS arivis Cloud offers tools for both semantic segmentation and instance segmentation. A semantic model was initially trained only for demonstration purposes to illustrate the differences between the semantic and instance approaches. Figure 11 shows the input image and its corresponding semantic segmented images. The segmentation successfully distinguished the cellular region and the background, but failed to separate the cells. Semantic segmentation is adequate if only the area fraction of the cellular region is required, but instance segmentation is the appropriate tool for tracking and extracting individual cellular information.

There are various Deep Learning-based algorithms available for instance (object-based) segmentation, such as a modified version of U-net, but the most widely known algorithms are Mask R-CNN [2] and Mask2Former [3]. arivis AI uses a Mask2Former approach, which has been adapted to work with microscopy data and is capable of segmenting images with multiple input channels. The loss function is also customized for training with partial annotations, further improving the efficiency and accuracy of the training process. The annotations shown in Figure 10 were used to train the initial instance model, and further


— — — — — — — — — — — — — — —


Page 82



Figure 11: (a) Phase contrast

image of HeLa cells captured at

10x magnification. (b) Semantic

segmentation result using a U-net-

based Deep Learning architecture.

The pink area represents the

cellular region, which has been

successfully segmented from the

background. However, it should

be noted that individual cells are

not separated by this approach.

Figure 12: (a) Phase contrast

image of HeLa cells captured at

10x magnification. (b) Result of

instance segmentation using the

Mask2Former Deep Learning

method, clearly separating

individual cells and enabling

direct use of the result in

applications such as cell tracking.

annotations were added based on the results

to better segment regions where the model

encountered difficulty, primarily the regions

with high density of cells. This data-centric

approach saves time by focusing on annotating

challenging areas instead of wasting time

on simple ones. Figure 12 illustrates the

results of instance segmentation on the same

input image as in Figure 11. The instance

segmentation effectively separated individual

cells, allowing for the tracking of cells in the

time series image dataset.

All images from the time series underwent

segmentation using the trained model. The

resulting masks were imported into ZEISS

arivis Pro for further analysis, where cells were

tracked and followed individually throughout

the time course. Tracking was made easy

by the well-separated, segmented masks

generated through instance segmentation.

Even cell division events were detectable in

tracking, with daughter cells retaining their

tracking identity. Figure 13c displays the first

image in the time series with tracks overlaid to

show the cell center positions at each

time point.

While this particular use case focused on the

use of instance segmentation for cell tracking,

the instance segmentation approach can

provide insights from images in many other

ways. For example, the size and shape of cells

can provide crucial information about their

state and behavior, enabling the monitoring

of the effects of various treatments on cells.

As instance segmentation separates individual

cells, they can be sorted based on size (see

Figure 14b) or shape (see Figure 14c) with ease.

Summary

In summary, instance segmentation enables

the easy segmentation and separation of

individual objects, allowing for various insights

to be extracted through object tracking and

sorting based on size and shape, among other

methods. arivis AI’s data-centric approach

saves time and ensures efficient annotation of

complex features for instance segmentation

model training. The resulting trained model can

then be used in end-to-end applications such

as cell tracking.
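To illustrate why well-separated instance masks make the linking step straightforward, the sketch below implements a greedy nearest-neighbor tracker over per-frame label images. It is a simplified stand-in for the ZEISS arivis Pro tracking used in this case study; for example, it starts new tracks for daughter cells instead of propagating the parent identity.

```python
# Minimal greedy nearest-neighbor tracker; a simplified stand-in for the
# ZEISS arivis Pro tracking (e.g., daughter cells get new IDs here).
import numpy as np
from scipy.spatial.distance import cdist
from skimage.measure import regionprops

def link_frames(masks, max_jump=25.0):
    """Link instance labels across frames by closest centroid.
    masks: list of 2D integer label images, one per time point.
    Returns (track_id, frame, y, x) records."""
    records, prev_ids, prev_cents, next_id = [], [], np.empty((0, 2)), 1
    for t, mask in enumerate(masks):
        cents = np.array([r.centroid for r in regionprops(mask)]).reshape(-1, 2)
        ids = [None] * len(cents)
        if len(prev_ids) and len(cents):
            d = cdist(cents, prev_cents)
            for i in np.argsort(d.min(axis=1)):     # closest matches first
                j = int(d[i].argmin())
                if d[i, j] <= max_jump:
                    ids[i] = prev_ids[j]
                    d[:, j] = np.inf                # each predecessor used once
        for i, c in enumerate(cents):
            if ids[i] is None:                      # unmatched: start a new track
                ids[i], next_id = next_id, next_id + 1
            records.append((ids[i], t, float(c[0]), float(c[1])))
        prev_ids, prev_cents = ids, cents
    return records
```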


— — — — — — — — — — — — — — —


Page 83


Figure 13: (a) Phase contrast image of HeLa cells captured at 10x magnification. (b) Result of instance segmentation

using the Mask2Former Deep Learning method. (c) Result of the tracking algorithm showing cell tracks overlaid on the

original image from (a). The tracking analysis was performed using ZEISS arivis Pro.

Figure 14: (a) Phase contrast image of HeLa cells captured at 10x magnification. (b) Cells are color-coded by size, with

smaller cells in green and larger cells in pink. (c) Cells color-coded by shape, with rounded cells in green, less rounded cells

in purple, and cells with medium sphericity in cyan.

References

1. Ronneberger O, Fischer P and Brox T. (2015). U-Net: Convolutional Networks for Biomedical

Image Segmentation. In: Navab N, Hornegger J, Wells W, and Frangi A. (eds) Medical Image

Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in

Computer Science (Vol. 9351, pp. 234-241). Springer, Cham. doi:10.1007/978-3-319-24574-4_28.

2. He K, Gkioxari G, Dollár P, and Girshick R. Mask R-CNN. (2018) arXiv:1703.06870. doi: 10.48550/arXiv.1703.06870.

3. Cheng B, Misra I, Schwing AG, Kirillov A, and Girdhar R. Masked-attention Mask Transformer for Universal Image Segmentation. (2021) arXiv:2112.01527.


— — — — — — — — — — — — — — —


Page 84


Analysis of FIB-SEM volume electron microscopy data

Focused ion beam scanning electron

microscopy (FIB-SEM) is a powerful imaging

tool that achieves resolutions of under 10

nm and produces highly detailed 3D image

volumes. FIB-SEM highlights the entirety of

the cell, generating images dense with cellular

features, structural edges, and varying pixel

combinations. The complexity of these images

makes it difficult to use standard image

processing segmentation algorithms to detect

many cellular structures of interest. Therefore,

quantitative analysis of FIB-SEM data often

relies on the tedious and time-consuming

manual drawing of features of interest on 2D

slices of a 3D image volume.

AI-assisted volume EM (vEM) analysis using

Deep Learning approaches offers a way to

move beyond reliance on manual annotation

for segmenting cellular structures [1]. Such an

approach was used to develop a cell-profiling

workflow using neural network training and

image analysis tools that are readily accessible

to researchers and do not require coding.

The first step was training the Deep Learning

model. Using the ZEISS arivis Cloud platform,

subsets of organelles (mitochondria and

nucleus) within a FIB-SEM image of a HeLa

cell (see Figure 15) were manually drawn

and used to train neural network models to

identify these large organelles successfully

(see Figure 16). These arivis AI-trained Deep

Learning models were initially used to infer

mitochondria and the nucleus in ZEISS arivis Pro

before analysis pipelines were built to filter and

improve the initial inferences into usable 3D

segments.

Segmentation and measurements of

organelles

The neural network models developed from

the arivis AI training allowed the automated

measurement of organelle volume (see Figure

17). ZEISS arivis Pro computes the volume

Figure 15: Overview of HeLa cell image set. The image

set was collected using a ZEISS Auriga Crossbeam FIB-SEM.

(a) nm-resolution image volume of the HeLa cell. (b) Pixel

intensities were inverted to achieve positive signals in a

dark background. (c and d) 3D volumetric renderings of the

image volume, which are only meaningful with a positive signal on a dark background.

for all 3D objects, making it easy to calculate

the percentage of total cell volume occupied

by each organelle (see Figure 17c). The

profiling results were consistent with previous

measurements, showing that mitochondrial

volume is ~10% of the cytoplasm volume

within HeLa cells [2].
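A minimal sketch of this volume-fraction computation, assuming per-object volumes exported from ZEISS arivis Pro to a CSV file; the file and column names are illustrative.

```python
# Minimal sketch; file and column names are hypothetical.
import pandas as pd

objs = pd.read_csv("organelle_volumes.csv")       # per-object export from arivis Pro
cell_volume = objs.loc[objs["class"] == "cell", "volume_um3"].sum()
percent = 100 * objs.groupby("class")["volume_um3"].sum() / cell_volume
print(percent.round(1))   # e.g., mitochondria near ~10% in HeLa cells [2]
```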

Mitochondrial characterization and

spatial classification

Once the organelles were segmented, their

distribution and surface-to-volume ratios were

characterized (see Figure 18). Analysis pipelines


— — — — — — — — — — — — — — —


Page 85



Figure 16: Generation of Deep Learning models for organelles using the ZEISS arivis Cloud platform. Mitochondria and

the nucleus were painted as individual classes for training.

in ZEISS arivis Pro computed the distances of mitochondria to cellular structures. While the distance of each mitochondrion's center of geometry to the nuclear membrane (see Figure 18c) or to the plasma membrane (see Figure 18d) showed no significant correlation with the surface-to-volume ratio, the minimum distance of each mitochondrial center of geometry to either membrane did show a significant correlation (see Figure 18e).

This method can be used with any cell

structures that have been segmented and can

measure distances between object surfaces

or centers of geometry. It is also possible to

scale this method using the ZEISS arivis Hub to

allow the analysis of multiple cell image sets in

parallel and produce automated, high-quality

profiles.

Figure 17: Segmentation results from a Deep Learning trained model can predict the percent of cell volume for organelles.


— — — — — — — — — — — — — — —


Page 86


Initial 3D segmentation of nuclear pore

complex regions

3D segmentation of nuclear pore complex

(NPC) regions was limited by the image resolution (100–150 voxels per pore) and by the 3D structure of each pore, which is uniquely oriented to the curvature of the nuclear membrane.

Extremely tedious annotation of the NPCs in

all possible orientations would be required

to segment and measure the nuclear pores.

Instead, the relatively large (~400–2000 voxels)

pockets under the pores were analyzed.

The under-NPC objects were used to derive

objects representing the actual pores to create

ground truths for a new 3D-aware Deep

Learning neural network that can segment the

NPCs directly (see Figure 19).

Once the segmentation of the NPCs

was complete, the image stack and the

corresponding NPC mask were rotated 30°,

60°, and 90° on the X and Y axes, and the

resulting stacks were resampled to provide 3D-aware augmented images for the 2D Deep

Learning algorithm on the ZEISS arivis Cloud

platform.
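A minimal sketch of this augmentation, assuming the image stack and mask are NumPy volumes in (z, y, x) order; SciPy's rotate performs the resampling, and nearest-neighbor interpolation keeps the mask binary.

```python
# Minimal sketch; assumes (z, y, x) NumPy volumes. order=0 keeps the mask binary.
import numpy as np
from scipy.ndimage import rotate

def augment(stack: np.ndarray, mask: np.ndarray):
    """Yield rotated, resampled copies of the image stack and its NPC mask."""
    for axes in ((0, 1), (0, 2)):        # rotation planes about the X and Y axes
        for angle in (30, 60, 90):
            yield (rotate(stack, angle, axes=axes, order=1, reshape=True),
                   rotate(mask, angle, axes=axes, order=0, reshape=True))
```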

The trained model was used to segment

the nuclear pores on the entire nucleus

to characterize their spatial distributions

(see Figure 20). Approximately 80% of the

total NPCs in the nucleus were successfully

segmented.

Distribution and density analysis of

nuclear pores

The segmented NPCs were used to view

and quantify the 3D distribution of NPCs

throughout the nuclear membrane using two

approaches: (1) the ZEISS arivis Pro Distances

operator and (2) the ZEISS arivis Pro Python

application program interface (API) (see

Figure 21). Both the ZEISS arivis Pro Distance

operator and the kernel density Python script

were capable of consistently identifying clusters

of pores. Further characterization of the NPC

distribution across the nuclear membrane

Figure 18: Mitochondrial surface area-to-volume ratios are negatively correlated with the distance to membranes.


— — — — — — — — — — — — — — —


Page 87



Figure 19: NPCs have variable density distribution across areas of the nucleus. Several processing steps were done to

create masks of NPCs from the pocket objects. Taking the pocket objects (a), a binary masked image was generated (b),

followed by a closing operation of the pockets to the nuclear membrane (c). Next, the nuclear membrane and pockets

were used to mask the white space shown in panel c (d). These objects were then dilated (e). Masking using these objects

enhances the visualization of NPCs (f).

found that NPC density is higher within the

smaller nucleus section with higher curvature

(see Figure 21d). In contrast, the larger section

with a lower degree of curvature has more

low-density regions for nuclear pores.
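The kernel density scoring lends itself to a short stand-alone sketch, assuming NPC centroids (in micrometers) and section labels have been exported; the file names are hypothetical, while the 0.1 µm Gaussian kernel and the two-tailed t-test follow the analysis summarized in Figure 21.

```python
# Minimal sketch; file names are hypothetical, the 0.1 µm kernel and the
# two-tailed t-test follow the analysis described in the text.
import numpy as np
from scipy.stats import ttest_ind

pts = np.loadtxt("npc_centroids_um.csv", delimiter=",")        # (N, 3) in µm
section = np.loadtxt("npc_section_labels.csv", dtype=int)      # 0 = large, 1 = small

# Gaussian kernel density score for each pore from pairwise distances.
sigma = 0.1  # kernel radius in µm
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
scores = np.exp(-d2 / (2 * sigma**2)).sum(axis=1)

# Compare kernel density scores between the two nucleus sections.
t_stat, p_value = ttest_ind(scores[section == 0], scores[section == 1])
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```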

The benefits of Deep Learning for

analysis of FIB-SEM imaging

The combination of traditional and Deep

Learning algorithms with prior biological

knowledge can produce powerful workflows,

Figure 20: Training a 3D-aware neural

network for nuclear pore segmentation.

Several processing steps were done to

create masks of NPCs from the pocket

objects. Taking the pocket objects, a

binary masked image was generated,

followed by the 3D-aware resampling

in preparation for arivis AI training

(a). The resulting CZANN model was

used to create the probability map in

ZEISS arivis Pro with the Deep Learning

Reconstruction operator (b). This 3D

stack was filtered using the ‘Preserve

bright particles’ operator, and the

objects were segmented using the

Watershed algorithm with a strict

threshold (c). In the following step,

the smaller subset of the particles was

expanded by region-growing, while the

largest particles were split and filtered

with the segment feature filter (d).

as demonstrated in this chapter. By generating

objects in the vicinity of NPCs, we can more

accurately identify nuclear pores in 3D regions,

which may not be clearly visible through 2D

analysis alone. These 3D objects, representing

nuclear pores, can then serve as ground truths

for neural network training in Deep Learning.

Overall, this approach can lead to more

precise and comprehensive analyses

of cellular structures.


— — — — — — — — — — — — — — —


Page 88


Figure 21: NPCs have variable density distribution across areas of the nucleus. (a) The average distance of each nuclear

pore object to the nearest eight nuclear pore objects was measured using the Distance operator in ZEISS arivis Pro. The

nuclear pore objects were then color-coded according to these distance measurements to represent the density of nuclear

pores across the nuclear membrane. (b) As an alternative method of analyzing the distribution of the pore objects, the

densities of NPCs were determined by taking the 3D centroid of each NPC object and calculating a Gaussian kernel density,

with a kernel radius of 0.1 µm, using a custom Python script. (c) The density distribution of NPCs is significantly different

across separate areas of the nucleus. Sectioning the nucleus into two sections, a larger and a smaller section, based on

the nuclear cleavage furrow, reveals significant differences in kernel density scores. (d) Two-tailed t-test was performed to

calculate the significance of differences between the kernel density scores in these two sections of the nucleus.

References

1. Parlakgül G, Arruda AP, Pang S, et al. Regulation of liver subcellular architecture controls metabolic homeostasis. Nature (2022) 603(7902):736–742. doi: 10.1038/s41586-022-04488-5.

2. Posakony JW, England JM and Attardi G. Mitochondrial growth and division during the cell cycle in HeLa cells. J Cell Biol. (1977) 74(2):468–491. doi: 10.1083/jcb.74.2.468.


— — — — — — — — — — — — — — —


Page 89




— — — — — — — — — — — — — — —


Page 90



Figure 22: Manual annotation of control and swollen

mitochondria phenotypes (in yellow) of TEM images of

hippocampus tissue sections to create ground truths for

training the Deep Learning model. Original imaging data

was kindly provided by Dr. Wendy Bautista, MD PhD,

Barrow Neurological Institute, Phoenix Children’s Hospital.

Analysis of mitochondria using Deep Learning

To understand the effects of hypoxic

conditions on mitochondria in brain tissue,

researchers from the Barrow Neurological

Institute, Phoenix Children’s Hospital used

the ZEISS arivis Pro pre-trained Deep Learning

model to segment all the mitochondria objects

on the hippocampal tissue section. Exposure to

hypoxic conditions means the mitochondria in

these tissue samples have varying morphology:

some appear normal, and some have

‘swollen’ morphology. Creating one Deep

Learning model to recognize all mitochondria

phenotypes in a single step posed an additional

challenge.

Training the Deep Learning model

30 TEM serial sections were used with 309

mitochondria objects, annotated manually

with the ZEISS arivis Pro drawing tool to create

ground truths for training the Deep Learning

model (see Figure 22). The U-net model,

with architecture very similar to the original

publication [1], was used.

Using Deep Learning to segment and

classify mitochondria

The Deep Learning model was applied to the

whole dataset in ZEISS arivis Pro for automated

segmentation (see Figure 23a). ZEISS arivis Pro

has an extensive list of quantitative features

that characterize each object. In addition,

it is possible to create custom features or

import them from external sources. A custom

object feature that computes the ratio of the

mean intensity of each object to its volume

was created to classify the objects into the

‘control’ and ‘swollen’ groups. For visualization

purposes, each object was color-coded

according to the value of the mitochondria

phenotype custom feature (see Figure 23b).
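A minimal sketch of such a custom feature, assuming per-object measurements exported from ZEISS arivis Pro to a CSV file; the file name, column names, and the cutoff value are illustrative, not the values used in the study.

```python
# Minimal sketch; file name, column names, and cutoff are illustrative.
import numpy as np
import pandas as pd

objs = pd.read_csv("mitochondria_objects.csv")    # per-object export from arivis Pro
objs["phenotype_score"] = objs["mean_intensity"] / objs["volume"]

# Swollen mitochondria are larger and paler, giving lower scores; in
# practice the cutoff would be tuned against control samples.
cutoff = objs["phenotype_score"].median()         # placeholder value
objs["phenotype"] = np.where(objs["phenotype_score"] < cutoff, "swollen", "control")
print(objs["phenotype"].value_counts())
```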

Comparing the Deep Learning segmentation

with the manual segmentation (see Figure

23) shows the accuracy of the Deep Learning

model for segmenting mitochondria and how

this segmentation, combined with the ability

to create custom object features, can be used

to classify individual mitochondrial phenotypes,

simplifying the investigation of the effects of

hypoxic conditions on mitochondria in brain

tissue.

Segmenting mouse muscle 3D

ultrastructure

Unraveling the architecture of muscle fibers

is crucial for understanding their functional

properties and underlying physiological

processes. Electron microscopy (EM) imaging

plays a pivotal role in this effort, enabling

researchers to visualize the detailed subcellular

organization within muscle tissue. However, the

complexity of muscle samples poses significant

challenges for accurate segmentation and

analysis of EM data.

To address these challenges, an approach

leveraging advanced AI-powered segmentation

techniques was employed to study the

ultrastructure of mouse muscle samples,


— — — — — — — — — — — — — — —


Page 91


Figure 23: Deep Learning segmentation and classification of mitochondria objects. Left image shows manually

segmented mitochondria (yellow objects) and the Deep Learning inference results (cyan objects) overlaid to illustrate the

accuracy of the predictions. Right image shows the spectrum of the mitochondria phenotypes, which is reflected in the

color of the corresponding objects [purple (normal) to red (extremely swollen)]. The phenotype is quantified as the mean

intensity of the object divided by its volume and stored in the custom feature value. Original imaging data was kindly

provided by Dr. Wendy Bautista, MD PhD, Barrow Neurological Institute, Phoenix Children’s Hospital.

Using AI to Overcome Challenging Segmentation
www.zeiss.com/microscopy/ai-mitochondria

unlocking unprecedented insights into the organization of muscle fibers and their cellular components.

Challenges of imaging muscle samples

using EM

Muscle samples present several unique challenges. Firstly, capturing the complex details of muscle fibers at high resolution demands specialized equipment and meticulous sample preparation. Additionally, preserving the delicate ultrastructure of the sample during fixation and embedding is crucial to avoid artifacts in the final images.

Another challenge posed by muscle samples is the large size of muscle fibers, which makes it difficult to capture a comprehensive overview while maintaining the high resolution necessary to image individual filaments within the tissue. This requires custom imaging

strategies and advanced equipment capable of

handling such large samples.

In this case study, three-dimensional (3D)

volumetric data of the mouse muscle samples

was collected using the ZEISS Crossbeam 550

FIB-SEM system at room temperature.

The samples were prepared using the rOTO

(reduced osmium-thiocarbohydrazide-osmium)

protocol, a technique that helps preserve the ultrastructural details of muscle tissue.

Segmentation challenges inhibit EM analysis of muscles

One of the most significant hurdles in the analysis of EM data for muscle samples is the segmentation of cellular components. Unlike fluorescence light microscopy images, which allow for the separation of different labeled targets, EM images show various objects of

interest, all in a single grayscale image. The

mean gray values for these objects frequently

overlap. This makes it particularly challenging


— — — — — — — — — — — — — — —


Page 92


Figure 24: Analysis of mouse muscle ultrastructure. (a) 2D slice from a volumetric FIB-SEM dataset of mouse muscle

ultrastructure acquired using a ZEISS Crossbeam 550 FIB-SEM. (b) Segmentation result of the image in panel (a), showing

various components of the muscle tissue in different colors: filaments (green), capillary (yellow), myofibrils (blue),

mitochondria (cyan), and sarcoplasmic reticulum (pink). The segmentation was performed using a multiclass Deep

Learning model trained on ZEISS arivis Cloud and implemented in ZEISS arivis Pro software.

to accurately delineate their boundaries.

Muscle fibers consist of a dense network

of myofibrils, mitochondria, nuclei, and

other subcellular structures. This complexity

exacerbates the segmentation difficulties.

Furthermore, traditional segmentation

algorithms often fail to cope with the high

complexity and minimal contrast within and

between these intricate structures.

AI-powered segmentation:

A transformative approach

To overcome these challenges, a combination

of Deep Learning models and cloud-based

processing was employed to tackle the

segmentation of key cellular components,

including the cell, mitochondria, myofibrils, and

filaments.


The AI-assisted segmentation process involved

partially annotating a handful of images

from the volumetric data by painting regions

of interest for each cellular component in

different colors. The annotation and Deep

Learning training for semantic segmentation

were performed using the ZEISS arivis Cloud

software. The trained model was downloaded

to the ZEISS arivis Pro software to segment the

entire volume, render the 3D reconstruction,

and perform further analysis.

The volumetric data was accurately segmented

down to the pixel level by leveraging the

powerful texture recognition capabilities

of Deep Learning, despite the high image

complexity and minimal contrast. Using both

local and cloud-based processing solutions

allowed for efficient handling of the large

dataset sizes, further enhancing the accuracy

and speed of the segmentation process.

The segmentation results are shown in Figure

24, with panel (a) depicting a 2D slice from the

original FIB-SEM data and panel (b) depicting

the segmented components, including

filaments, capillary, myofibrils, mitochondria,

and sarcoplasmic reticulum, in different colors.

The successful implementation of AI-powered

segmentation enabled the acquisition of

unprecedented insights into the organization

of mouse muscle fibers. Figure 25 presents

a 3D volume rendering of the segmented

components from Figure 24b overlaid on

the original FIB-SEM data, allowing for a

comprehensive visualization of the muscle

ultrastructure in its native context.

This level of detailed visualization and

quantification of muscle ultrastructure

is essential for understanding muscle


— — — — — — — — — — — — — — —


Page 93


Figure 25: 3D volume rendering of segmented mouse muscle ultrastructure overlaid on the original EM data. The

rendering visualizes the segmented components of the muscle tissue, including filaments (green), capillary (yellow),

myofibrils (blue), mitochondria (cyan), and sarcoplasmic reticulum (pink), in their native 3D context within the EM volume,

which was acquired using a ZEISS Crossbeam 550 FIB-SEM. The segmentation was performed using a multiclass Deep

Learning model trained on ZEISS arivis Cloud and implemented in ZEISS arivis Pro software.

development, identifying pathological alterations in muscle diseases, and designing targeted therapeutic interventions. The findings have the potential to significantly advance the field of muscle biology research and pave the way for groundbreaking discoveries.

Summary

The challenges posed by EM imaging of muscle samples are formidable, but the application of AI-powered segmentation has demonstrated transformative potential in overcoming these barriers. By leveraging Deep Learning and cloud-based processing, the complex cellular components within mouse muscle fibers were accurately segmented, unlocking a new level of insights into muscle ultrastructure and function.

This case study highlights the transformative impact of AI in the field of microscopy image analysis, showcasing how cutting-edge technology can empower researchers to unravel the complexities of biological systems and drive scientific progress.

References

1. Ronneberger O, Fischer P and Brox T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells W, and Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science (Vol. 9351, pp. 234–241). Springer, Cham. doi:10.1007/978-3-319-24574-4_28.


— — — — — — — — — — — — — — —


Page 94


Enhancing the utility of zebrafish models to study infectious diseases using Deep Learning

Using zebrafish as a model for biomedical

research is well established. This case study

explores how zebrafish are used for in vivo

research of Shigella and other bacterial

pathogens. We will review the challenges

of sample throughput demands, and how

they were resolved using AI-driven solutions

for more efficiency, higher microscope

performance, and enhanced image analysis

capabilities.

The importance of zebrafish for

biomedical research

Zebrafish are highly amenable to laboratory

research, producing hundreds of embryos per

day, and they are considered a close model of

the human immune system. Their genome is

fully sequenced and can be easily manipulated.

Combined with their optical accessibility,

this makes them useful for quantitative

microscopy approaches and drug screening.

This is why they serve as a great model for

disease characterization, researching biological

processes in depth in vivo , and identifying

Figure 26: Imaging workflow overview. (a) Flow chart describing the AI-based workflow. (b) Whole zebrafish is segmented

in a well plate. (c) AGM region segmented in a well plate.

new treatment methods. Moreover, their rapid

growth makes it easy to observe diverse effects

over time.

The Mostowy Lab at the Department of

Infection Biology, London School of Hygiene

and Tropical Medicine, led by Dr. Serge

Mostowy, aims to deepen our understanding

of cellular immunity and illuminate innovative

therapeutic approaches using zebrafish models

[1]. Their current focus is on deciphering the

molecular and cellular mechanisms underlying

host defense against Shigella , an important

human pathogen, which today lacks an

effective vaccine.

The challenges of in vivo research

High numbers of zebrafish embryos are

readily available for laboratory study, but

the bottleneck lies in processing samples

for quantitative microscopy. In addition, it is

challenging to study the whole animal while

it is alive. This calls for higher throughput in

image acquisition and analysis over time.


— — — — — — — — — — — — — — —


Page 95


Figure 27: AI-based predictions of zebrafish embryos in well plates, as seen in ZEISS arivis Cloud. The AI model overcomes

diverse shapes and positions, even in low-resolution images.

Acquiring the necessary images from an entire

96-well plate of zebrafish at high resolution

is too time- and resource-consuming. To

overcome this, the lab must first find the

embryos in the wells and segment regions of

interest (ROI) in low-resolution images, before

investing time in high-resolution imaging of

the identified regions. Furthermore, traditional

methods of image segmentation are often

insufficient due to low contrast between the

zebrafish and its surroundings, leading to

time-consuming manual annotation. Fully

manual approaches are susceptible to human

bias and frequently take too much time.

This is why the lab sought a more efficient

solution, focusing on automation and AI. The

lab was looking for an enhanced workflow

that not only automated image acquisition, but

also eliminated the need for tedious manual

drawing.

How AI-based automation helps

To enable high-resolution imaging to be

targeted specifically at the zebrafish and its

ROIs, the ZEISS Solutions Lab collaborated with the Mostowy Lab to develop a customized,

AI-driven automated solution that can detect

these ROIs within an entire well (see Figure 26).

The joint efforts resulted in the fully automated

acquisition of a time series of high-resolution

z-stacks of both zebrafish and specific ROIs.

The structure of the new workflow is:

■ Acquire an image of the entire well at low resolution.

■ Segment the fish and the ROI using an AI-trained model on ZEISS arivis Cloud (Figure 27).

■ Integrate the AI models into ZEN and analyze the images to recognize the ROIs.

■ Use automated guided acquisition for high-resolution ROI imaging.

■ Automatically trigger the guided acquisition to acquire multiple images over time (e.g., 2–4 days).


— — — — — — — — — — — — — — —


Page 96



Figure 28: The ZEN software displaying the zebrafish larvae in the wells of a 96-well plate.

The customized solution saves time on fast

imaging of the zebrafish larvae in 96-well

plates (Figure 28).

The Deep Learning model automatically

segments both the entire zebrafish and the

aorta-gonad mesonephros (AGM) region.

High-resolution imaging is then targeted only

to the recognized ROIs. The imaging occurs

automatically over time at defined time points.

The custom Deep Learning model is capable

of detecting zebrafish embryos at different

developing stages, from 1 to 4 days post-

fertilization, which is critical to the research

application. The new workflow results in

high-quality z-stacks of multiple channels

and time series images and reduces human

involvement in the acquisition process. This

enables non-biased image acquisition and

analysis. The entire process is user-friendly and

faster than traditional methods.

Historically, the lab would rely on imaging

3–20 larvae for most experiments. With the

Celldiscoverer 7 microscope (see Figure 29) for

automated, AI-enhanced well plate imaging

and analysis, they can now study significantly

more samples. This has added much more depth to the research performed in the

Mostowy Lab by transforming the methods of

using the zebrafish model.

Results with the AI-based workflow

The Mostowy Lab has already published

two papers working with the automated,

AI-enhanced workflow.

In one study, they tracked Shigella infection

and tested the role of diverse antibiotics on

various Shigella strains [2,3]. The team could

observe the infection over time and test the

synergies between antibiotics and the immune

system when combating it.

In a subsequent study, the lab used the AI

workflow to test if mutations in the septin

cytoskeletal protein family affect zebrafish

larvae development, as well as a means of

developing host-directed therapies to control

Shigella infection [4].

AI enables new zebrafish research

ambitions

The AI-enhanced workflow has transformed

ambitions for using the zebrafish model,

according to Dr. Mostowy. The automated

workflow allows the team to work in 96-well


— — — — — — — — — — — — — — —


Page 97


Figure 29: View of the ZEN software used to control the Celldiscoverer 7 microscope.

plates, generating faster results that enable

applications such as screening pharmacological

compounds.

The new tools will enable monitoring both Shigella infection over time and the zebrafish immune system in more detail. This means not only studying the zebrafish as a whole animal, but performing analysis at the single-cell level in vivo, to capture infection events and study single cells over time.

Watch the Video to Learn More
www.zeiss.com/zebrafish-shigella

Possible research advancements

Overall, the advancements mentioned in the

previous section could potentially lead to:

■ Faster results for a more immediate clinical impact.

■ Drug discovery and genetic screening.

■ Diverse infection research (not just Shigella).

■ A new level of detail for in vivo infection research (cellular and subcellular level).

■ Enhanced models for studying zebrafish development and underlying mechanisms.


— — — — — — — — — — — — — — —


Page 98


References

1. Mostowy Lab. Department of Infection Biology. London School of Hygiene and Tropical

Medicine. URL: https://themostowylab.org/research/ (accessed 03 May 2024).

2. Lensen A, Gomes MC, López-Jiménez AT, and Mostowy S. An automated microscopy workflow

to study Shigella–neutrophil interactions and antibiotic efficacy in vivo. Dis Model Mech. (2023)

16(6):dmm049908. doi: 10.1242/dmm.049908.

3. First person – Arthur Lensen and Margarida C. Gomes. URL: https://journals.biologists.com/

dmm/article/16/6/dmm050255/308934/First-person-Arthur-Lensen-and-Margarida-C-Gomes

(accessed 03 May 2024).

4. Torraca V, Bielecka MK, Gomes MC, Brokatzky D, Busch-Nentwich EM, Mostowy S. Zebrafish

null mutants of Sept6 and Sept15 are viable but more susceptible to Shigella infection.

Cytoskeleton. (2023) 80:266–274. doi: 10.1002/cm.21750.

Acknowledgment

This case study and images used are courtesy of Dr. Serge Mostowy and Dr. Margarida C. Gomes

from the Mostowy Lab at the Department of Infection Biology, London School of Hygiene and

Tropical Medicine.


— — — — — — — — — — — — — — —


Page 99




— — — — — — — — — — — — — — —


Page 100


Exploring mouse embryo development with microCT and AI

Mouse models are valuable tools in genetic

research since they closely resemble humans

in terms of physiology and genetics. These

qualities render them indispensable for

researching human diseases, developmental

biology, genetic abnormalities, and toxicity. All

these research fields benefit from a thorough

understanding of mouse embryo development

and the impact and function of different genes

and proteins in this process.

Comparing the phenotype of mouse

embryos from different genetic lines enables

researchers to examine the consequences

of targeted gene alterations. This enhances

comprehension of gene function, genetic

disorders, developmental processes, and

prospective therapeutic targets [1]. Capturing a

digital record of the observable characteristics

of the internal structure of mouse embryos

provides a unique way of comparing these

different genetic lines. Scientists can use the 3D

datasets from different stages of development

to discern phenotypic patterns, genetic

aberrations, and their associations with human

illnesses [2].

Figure 30: Iodine contrasted E15.5 mouse embryo imaged using Zeiss Xradia Context microCT. (a) 3D rendering of the

reconstructed dataset. (b) Digital section through the 3D rendered dataset to show the internal embryo components in the

chosen embryo cross-section. Sample courtesy of Chih-Wei Logan Hsu, Baylor College of Medicine.

MicroCT imaging of mouse embryos:

Non-destructive insights into

developmental anatomy

Micro-computed tomography (microCT) is

an ideal imaging technology to analyze the

physical characteristics of mouse embryos.

MicroCT is a non-destructive method, allowing

the capture of both exterior and interior

structures without needing to physically

section the sample (see Figure 30).

For optimal microCT imaging of mouse

embryos, fixation and staining procedures

are necessary to improve tissue contrast.

Hydrogel can be used to provide stabilization

and support to maintain tissue shape during

imaging [3]. The microCT scan generates

a precise and accurate 3D depiction of the

specimen, facilitating the visualization of

intricate anatomical features. A comprehensive

view of embryonic progression can be

obtained by examining several embryos at

various developmental stages [4].


— — — — — — — — — — — — — — —


Page 101


Harnessing AI for microCT analysis

Despite the remarkable contrast achieved by

microCT in biological samples, analyzing 3D

volumes remains a complex task due to the

high degree of tissue similarity. This challenge is

particularly evident when managing numerous

specimens that require consistent analysis

results.

AI algorithms, particularly those based on Deep

Learning, provide robust methods to facilitate

the examination of mouse embryo microCT

datasets by automating and streamlining the

segmentation process for organs and tissues.

They accurately distinguish between different

structures, even when they share similar

characteristics.

Figure 30b presents a digital cross-section of a

3D rendering showcasing an iodine-contrasted

E15.5 mouse embryo. In this representation,

internal components such as the liver, heart,

and eyes are clearly discernible. However,

the image lacks sufficient grayscale contrast

between these organs, hindering their

segmentation through traditional histogram

thresholding-based approaches.

Furthermore, these regions exhibit minimal texture differences at the pixel level, rendering conventional feature engineering-based Machine Learning approaches ineffective. Crafting specific features for this task may prove time-consuming, even for seasoned Machine Learning experts. Deep Learning, with its millions of tunable parameters, is particularly suited to accurately model the intricate distinctions among these regions.

For organ segmentation, we employed a U-net

based semantic segmentation approach on

the ZEISS arivis Cloud platform that facilitates

data-driven training of Deep Learning models

designed explicitly for image segmentation.

The semantic segmentation method on ZEISS arivis Cloud employs a modified U-net with an EfficientNet encoder, providing adaptability across various applications. In addition, it integrates Focal Loss to address challenges related to class imbalance and the segmentation of difficult classes versus easy classes. Figure 31a showcases the results, illustrating clear segmentation of the brain and spinal cord, heart, liver, kidney, and eyes, each depicted in distinct colors. Figure 31b displays the same regions using the original pixel values, essentially offering a digital extraction of these organs from the encompassing 3D dataset.
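The building blocks named above also exist in open-source form. The sketch below uses the segmentation_models_pytorch package as a stand-in for the ZEISS arivis Cloud implementation, whose internals are not published here; the encoder variant and class count are assumptions.

```python
# Minimal sketch (not the arivis Cloud code): a U-net with an EfficientNet
# encoder trained with Focal Loss, via segmentation_models_pytorch.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="efficientnet-b0",   # encoder variant is an assumption
    in_channels=1,                    # single-channel microCT slices
    classes=6,                        # background + 5 organ classes (assumed)
)
loss_fn = smp.losses.FocalLoss(mode="multiclass")  # down-weights easy pixels

x = torch.randn(4, 1, 256, 256)            # a batch of microCT patches
y = torch.randint(0, 6, (4, 256, 256))     # ground-truth organ labels
loss = loss_fn(model(x), y)
loss.backward()                            # one illustrative training step
```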

3D visualization of mouse

embryo segmentation

www.zeiss.com/mouse-embryo

Summary

The integration of microCT technology and

advanced artifi cial intelligence methodologies

can enhance the exploration of the complex

landscape of mouse embryo development. The

U-net based semantic segmentation approach

was instrumental in overcoming the challenges

posed by complex anatomical structures in

the 3D volumetric dataset. As technology

continues to evolve, the synergy between

imaging technologies and AI promises to

further enhance our understanding of mouse

embryo development and the infl uence of

genes and external factors on this process.


— — — — — — — — — — — — — — —


Page 102


References

1. Dickinson ME, Flenniken A, Ji X, et al. High-throughput discovery of novel developmental

phenotypes. (2016) Nature 537(7621): 508–514 doi: 10.1038/nature19356.

2. Hsu CW, Wong L, Rasmussen TL, Kalaga S, McElwee ML, Keith LC, Bohat R, Seavitt RJ, Beaudet

AL, and Dickinson ME. Three-dimensional microCT imaging of mouse development from early

post-implantation to early postnatal stages. (2016) Dev Biol. 419(2):229–236. doi: 10.1016/j.ydbio.2016.09.011.

3. Wong MD, Spring S, and Henkelman MR. Structural Stabilization of Tissue for Embryo

Phenotyping Using Micro-CT with Iodine Staining. (2013) PLoS ONE 8(12):e84321 doi: 10.1371/

journal.pone.0084321.

4. Hsu CW, Kalaga S, Akoma U, Rasmussen TL, Christiansen AE, and Dickinson ME. High

resolution imaging of mouse embryos and neonates with X-ray micro-computed tomography.

(2019) Curr Protoc Mouse Biol. 9:e63 doi: 10.1002/cpmo.63.

Figure 31: Iodine contrasted E15.5 mouse embryo imaged using Zeiss Xradia Context microCT. (a) Segmentation of

internal organs was performed for the brain and spinal cord (turquoise), heart (red), liver (yellow), kidney (purple) and

eyes (blue). The image segmentation process involved training a Deep Learning model on ZEISS arivis Cloud. Subsequently,

the trained model was applied in ZEISS arivis Pro software to perform segmentation and visualize the complete volume,

providing comprehensive insights into the segmented structures. (b) The segmented organs were subsequently digitally

extracted from the whole dataset for separate visualization. Sample courtesy of Chih-Wei Logan Hsu, Baylor College of

Medicine.


— — — — — — — — — — — — — — —


Page 103




— — — — — — — — — — — — — — —


Page 104


Case studies
Examples from Materials Sciences

Improving microstructure analysis of aluminum oxide with Deep Learning

The importance of investigating the

microstructure of aluminum oxide

Aluminum oxide (Al2O3) is a highly versatile

material with excellent mechanical, electrical,

and thermal properties. Its high resistance

to wear, corrosion, and oxidation further

contributes to its widespread use. The

microstructure of aluminum oxide, which

includes the size, shape, and distribution of

its grains, inclusions, and grain boundaries,

can significantly impact its physical and

mechanical properties. For instance, the size

and distribution of the grains can affect the

strength, toughness, and hardness. The grain

boundaries can influence its behavior under

different conditions, such as temperature,

stress, and corrosion. Thus, investigating

the microstructure of aluminum oxide can

help researchers and engineers optimize

its properties for specific applications and

understand its behavior under varying

conditions.

Figure 32: Aluminum oxide grains partially annotated

on the ZEISS arivis Cloud platform for Machine Learning

and Deep Learning training. The green areas indicate the

aluminum oxide grains, the blue outlines correspond to the

grain boundaries, and the red areas represent inclusions

and pores.

Figure 33: Conventional Machine Learning settings

in ZEN for the aluminum oxide grain segmentation

training. ‘Deep Features 64’ setting extracts 64 features

from the training regions, and the ‘Conditional Random

Field’ postprocessing refines the segmentation result by

incorporating contextual information.Case studies

Examples from Materials Sciences

Improving microstructure analysis of aluminum oxide with Deep

Learning

Segmentation of aluminum oxide grains:

Machine Learning vs. Deep Learning

The efficiency of conventional Machine

Learning and Deep Learning approaches for

image segmentation of aluminum oxide grains

were compared using images collected from

a polished aluminum oxide sample (courtesy

of Bernthaler group at Hochschule Aalen).

Images were captured using a ZEISS Crossbeam

550 focused ion beam scanning electron

microscope with a pixel size of 0.03 μm x

0.03 μm and 2048 x 1536 pixels in x and y

dimensions.

A backscattered electron detector provided

the necessary contrast between the aluminum

oxide grains and grain boundaries, where grain

boundaries appear darker than the grains. A single random image from the image stack was selected for training.


— — — — — — — — — — — — — — —


Page 105


103 Case Studies: Examples from Materials Science

Discover further information on the

features used in ZEN

www.zeiss.com/zen-intellisis-feature-extractors

Figure 34: (a) Electron microscopy image of aluminum oxide microstructure. (b) Segmentation result of (a) obtained by

applying a conventional Machine Learning model trained using the annotations from Figure 32 . (c) Close-up of the area

outlined by the square in (b). Although conventional Machine Learning methods produce results that appear satisfactory,

upon closer examination, it becomes evident that numerous grain boundaries are not continuous. As a result, any attempt

to measure grain size using this image would result in erroneous findings that are biased toward larger grain sizes. (d)

Segmentation result of (a) obtained by applying a Deep Learning model trained using the annotations from Figure 32 .

(e) Close-up of the area outlined by the square in (d). Deep Learning segmentation resulted in continuous grain boundaries, which will yield more reliable grain size measurements.

The image was partially

annotated on the ZEISS arivis Cloud platform,

where pixels corresponding to the grains, grain

boundaries, and inclusions were painted using

a digital pen to define the ground truth (see

Figure 32 ).

The annotations were used to train a Deep

Learning model on the ZEISS arivis Cloud

platform. arivis Cloud employs the widely

recognized U-net architecture [1] for image

segmentation but with encoder and decoder

modifications to increase speed and accuracy.
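To make the U-Net idea concrete, here is a minimal sketch of an encoder-decoder with a skip connection in PyTorch. It illustrates only the general architecture referenced above, not the modified network used on arivis Cloud; layer sizes and the class count are arbitrary placeholders.

```python
# Minimal U-Net-style sketch: contract, expand, and concatenate skip features.
# Illustrative only; not the arivis Cloud network.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes=3):  # e.g. grain, boundary, inclusion
        super().__init__()
        self.enc1 = conv_block(1, 16)             # encoder, full resolution
        self.enc2 = conv_block(16, 32)            # encoder, half resolution
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)             # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, n_classes, 1)   # per-pixel class scores

    def forward(self, x):
        skip = self.enc1(x)                       # kept for the skip connection
        bottom = self.enc2(self.pool(skip))
        merged = torch.cat([self.up(bottom), skip], dim=1)
        return self.head(self.dec(merged))

logits = TinyUNet()(torch.randn(1, 1, 64, 64))    # -> shape (1, 3, 64, 64)
```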

Additionally, the annotations were exported

to ZEN for use as ground truth labels for

conventional Machine Learning training.

Features from the training regions were

extracted using the ‘Deep Features 64’ setting

(see Figure 33 ). This setting extracts 64 features

by applying ‘layer 1’ from the VGG19 network

[2], pretrained on over 14 million images from the ImageNet database. It’s important to note

that no Deep Learning training occurs during

the Machine Learning training process. Instead,

the pre-trained Deep Learning network is being

used to extract features, which then serve

as input to a conventional Machine Learning

algorithm, Random Forest.
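The following sketch illustrates this hybrid scheme under simple assumptions: the first convolutional layer of a pretrained VGG19 serves as a fixed 64-feature extractor, and a Random Forest classifies each pixel from those features. Variable names and the placeholder data are hypothetical; this is not the ZEN implementation.

```python
# Sketch: pretrained CNN layer as a fixed feature extractor + Random Forest.
import numpy as np
import torch
from torchvision.models import vgg19, VGG19_Weights
from sklearn.ensemble import RandomForestClassifier

# The first conv layer of VGG19 ('layer 1') yields 64 feature maps per image.
layer1 = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:2]  # conv1_1 + ReLU
layer1.eval()

def pixel_features(image_rgb):
    """Return a (H*W, 64) per-pixel feature matrix for an RGB image in [0, 1]."""
    x = torch.from_numpy(image_rgb).float().permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        fmap = layer1(x)[0]                      # (64, H, W); no training happens here
    return fmap.permute(1, 2, 0).reshape(-1, 64).numpy()

image = np.random.rand(256, 256, 3)              # placeholder micrograph
labels = np.zeros((256, 256), dtype=int)         # 0 = unlabeled, 1 = grain,
labels[100:120, 100:120] = 1                     # 2 = boundary, 3 = inclusion
labels[50:60, 50:60] = 2

feats = pixel_features(image)
annotated = labels.reshape(-1) > 0               # train only on painted pixels
rf = RandomForestClassifier(n_estimators=100)
rf.fit(feats[annotated], labels.reshape(-1)[annotated])
segmentation = rf.predict(feats).reshape(image.shape[:2])
```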

Deep Learning outperforms Machine

Learning for grain segmentation

The results from both the Machine Learning

and Deep Learning segmentation, respectively,

for a random image in the dataset are shown

in Figure 34 . Similar to the training annotations,

the segmentation result shows aluminum oxide

grains in green, grain boundaries in blue, and

inclusions in red. While the Machine Learning

segmentation (see Figure 34b ) appears to be


— — — — — — — — — — — — — — —


Page 106


104

Case Studies: Examples from Materials Science

acceptable at first glance, many discontinuous grain boundaries are observed on closer inspection (see Figure 34c). This is due to the inability of the pre-engineered features to properly present the grain boundary features to the Machine Learning algorithm, despite being pre-trained on 14 million images. Any grain analysis using this approach will lead to an overestimated grain size distribution. Feature learning via Deep Learning training helps here, as it can learn the appropriate features needed to represent the grain boundaries accurately. Deep Learning successfully segmented the grain boundaries (see Figure 34d), whereas conventional Machine Learning failed (see Figure 34c).

Figure 35: (a) Electron microscopy image of aluminum oxide microstructure, identical to that shown in Figure 34a. (b) Grain Size Analysis using the image segmented by conventional Machine Learning incorrectly assigns the bulk of the pixels to a single large grain, shown in red. (c) Analysis using the Deep Learning-segmented image demonstrates that the grains are correctly identified, offering more precise grain size distribution data when compared to Machine Learning.

Segmentation is often an intermediate step

in a bigger analysis goal, such as Grain Size

Analysis. Figure 35 shows the results from

Grain Size Analysis using the respective

segmented images from Machine Learning and

Deep Learning approaches. The analysis was

performed using the ZEN software by assigning

all enclosed regions within continuous grain

boundaries to a specific grain.
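A minimal sketch of this assignment step, assuming a binary boundary mask from the segmentation and the 0.03 μm pixel size quoted earlier; every connected region enclosed by boundary pixels receives its own grain label. Names are illustrative.

```python
# Sketch: assign each enclosed region to a grain via connected-component labeling.
import numpy as np
from skimage.measure import label, regionprops

boundary_mask = np.zeros((256, 256), dtype=bool)   # placeholder boundary pixels
boundary_mask[:, 128] = True                       # one continuous vertical boundary

grains = label(~boundary_mask, connectivity=1)     # enclosed regions -> grain ids
areas_um2 = [r.area * 0.03 ** 2 for r in regionprops(grains)]
print(f"{grains.max()} grains, mean area {np.mean(areas_um2):.2f} um^2")
# A broken boundary would merge neighboring regions into one oversized "grain",
# which is exactly the bias described for the Machine Learning result.
```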

The Deep Learning-based segmentation

produces continuous grain boundaries that

accurately represent the true grain structure

in the aluminum oxide micrograph. However,

the porous grain boundaries from the Machine

Learning segmentation resulted in the bulk

of the image being detected as a single grain

(shown as the red region in Figure 35b ). Any subtle changes in image quality can result in

significant differences in quantitative results

if image segmentation is inconsistent. Deep

Learning has better generalization ability

and can tolerate image variability to some

extent, making it ideal for tasks where even

subtle image variability is expected, and for

applications that need highly reproducible

results with minimal human intervention.

Importance of grain size measurement in

aluminum

Measuring grain size in aluminum is critical

to ensuring material quality and performance

in aerospace, automotive, and construction

industries. Aluminum’s widespread use in

various industries is due to its exceptional

mechanical properties, including strength,

ductility, and toughness, all of which are

significantly influenced by grain size.

Enhancing grain size measurement in

aluminum

Smaller grain sizes generally result in higher

strength and improved ductility, while larger

grain sizes tend to have the opposite effect.

Precise measurement of grain size is, therefore,

crucial to ensure the quality and performance

of aluminum materials across diverse

applications.

Optical microscopy coupled with chemical

etching using Barker’s reagent is the traditional

method for measuring grain size in aluminum.

This process involves polishing a sample,


— — — — — — — — — — — — — — —


Page 107


105

Case Studies: Examples from Materials Science

etching it to reveal grain boundaries, and

examining it under a microscope with polarized

light. The grain size is determined by counting

the number of grains per unit area or by

manually measuring the average grain diameter

via point counting or intensity thresholding.

Despite their effectiveness, these approaches are

challenging because accurately segmenting

colorful images obtained from the color

etching process is unreliable and often

necessitates inefficient manual calculations.

Challenges with segmenting color-etched

aluminum images

Image segmentation of color-etched aluminum

samples poses several unique challenges. These

include:

■Polishing artifacts: Aluminum alloys

can be challenging to polish, leading

to the presence of micro scratches and

contamination in samples. These flaws,

combined with other artifacts from

the polishing process, complicate grain

segmentation.

■Non-uniform coloring: The color etching

process may not uniformly color all grains of

the same size, resulting in variations in the

color contrast between adjacent grains.

■Grain boundary interference: The color

contrast between adjacent grains may not

be distinct enough to accurately identify the

grain boundaries.

■Overlapping grains: In some cases, adjacent

grains may overlap or appear connected,

making it difficult to accurately distinguish

their boundaries.

■Anisotropy: The color etching process may

reveal different colors depending on the

crystallographic orientation of the grains,

resulting in anisotropy in the color contrast.

These challenges underscore the need for

advanced automated techniques.

Limitations of intensity-based

segmentation

Traditional segmentation methods based on

intensity analysis typically involve dividing

images with bimodal intensity profiles into

distinct regions using pixel intensity values. For

example, a predetermined threshold is applied

to differentiate between pixels representing

grain boundaries, which typically exhibit

lower intensity values, and those representing

grains, characterized by higher intensity values.

However, this approach is inadequate for

accurately segmenting images obtained from

Barker-etched aluminum, even when they are

of high quality.
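A minimal sketch of such an intensity-threshold approach, here using Otsu's method from scikit-image on a placeholder grayscale micrograph; it also notes where the approach breaks down:

```python
# Sketch: threshold-based boundary segmentation on a grayscale micrograph.
import numpy as np
from skimage.filters import threshold_otsu

gray = np.random.rand(512, 512)      # placeholder image, boundaries darker than grains
t = threshold_otsu(gray)             # data-driven threshold for bimodal histograms
boundary_mask = gray < t             # every dark pixel becomes "grain boundary"
# Pitfall (compare Figure 36b): dark pits and cracks inside grains also fall
# below the threshold and are misclassified as boundaries.
```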

One notable limitation is its tendency to

misclassify darker areas within grains, such as

pitting and cracks, as grain boundaries. This

misclassification can be observed in Figure 36b,

where, in addition to identifying actual grain

boundaries, all dark pixels within the interior

region of the grains are erroneously classified

as grain boundaries. For accurate segmentation

of these grains, it becomes imperative to

account for additional features beyond pixel

intensities alone. Machine and Deep Learning

approaches can use multiple image attributes

to train algorithms that effectively detect and

delineate grains.


— — — — — — — — — — — — — — —


Page 108


106

Case Studies: Examples from Materials Science

Figure 36: Limitations of threshold-based segmentation. (a) Original image of an aluminum sample etched with Barker’s

reagent and imaged under a microscope with polarized light. (b) Segmentation of grain boundaries using a threshold-

based approach on pixel intensity. Dark regions within grains, such as pitting and cracks, are erroneously segmented as

grain boundaries, shown in yellow.

Figure 37: Limitations of conventional Machine Learning. (a) Original image of an aluminum sample etched with Barker’s

reagent and imaged under a microscope with polarized light. (b) Segmented image illustrating different grains in various

colors. The segmentation method is suboptimal as large regions are incorrectly identified as single grains.

Integrating AI into segmentation

Conventional Machine Learning techniques

have proven efficient at image segmentation

tasks. The process involves the extraction of

diverse features from images through the

application of digital image filters. These

extracted features are subsequently input

into Machine Learning algorithms, such as

Random Forest, to facilitate segmentation.

However, even with these methods, achieving

satisfactory results for grain segmentation in

aluminum alloys treated with Barker’s solution

remains a challenge, as depicted in Figure 37b. The inadequate performance of conventional

Machine Learning can be attributed to the

challenges outlined earlier. The limited set of

image attributes used in conventional Machine

Learning fails to adequately address the

complexity inherent in the segmentation task.

Harnessing Deep Learning for

segmentation of color-etched aluminum

Instance segmentation offers promising

solutions to the challenges encountered in

traditional methodologies. By leveraging Deep

Learning algorithms, this technique enables


— — — — — — — — — — — — — — —


Page 109


107

Case Studies: Examples from Materials Science

accurate detection and segmentation of

individual grains in color-etched aluminum

samples.

Unlike conventional methods, instance

segmentation can effectively handle irregular

shapes, overlapping grains, and anisotropic

color contrasts. Moreover, its automation

capability ensures consistent and objective

grain size measurements, minimizing human

error and enhancing overall efficiency.

Figure 38: Accurate grain segmentation through instance segmentation. (a) Original image of an aluminum sample

etched with Barker’s reagent and imaged under a microscope with polarized light. (b) Segmented image resulting from

instance segmentation, accurately delineating grains and grain boundaries. Grains and grain boundaries are depicted in

random colors for visualization purposes. The instance segmentation model used for this segmentation was trained on

ZEISS arivis Cloud.

The benefits of instance segmentation

Adopting instance segmentation brings

several benefits to grain size measurement in

aluminum:

■Accurate detection of individual grains, even

in complex or irregular structures.

■Precise measurement of grain size and

shape, enhancing data accuracy.

■Clear identification of grain boundaries,

facilitating accurate segmentation.

Figure 39: Enhanced grain segmentation in a challenging aluminum sample. (a) Original image displaying numerous

artifacts, including polishing streaks, contamination, and blurred grain boundaries. (b) The result from instance

segmentation showcasing precise separation of grains and grain boundaries. Grains are depicted in random colors for

visualization purposes. The instance segmentation model used for this segmentation was trained on ZEISS arivis Cloud.


— — — — — — — — — — — — — — —


Page 110


108

Case Studies: Examples from Materials Science

■Automation of the measurement process, saving time and reducing the frequency of errors.

■Consistent and objective measurements, ensuring reliability and reproducibility.

It yields reliable results suitable for further downstream analysis, such as evaluating grain size distribution. Figure 38b shows the grains from the original image accurately segmented, with clearly defined boundaries between them.

Training custom instance segmentation models using ZEISS arivis Cloud

A custom model tailored for grain segmentation in Barker-etched aluminum polarized images has been trained on ZEISS arivis Cloud. This trained model was then applied to segment a challenging image from a sample showing various artifacts, including polishing streaks, contamination, and blurred grain boundaries. Figure 39b demonstrates the excellent grain segmentation achieved using this approach and highlights the superiority of instance segmentation compared to other methods.

Summary

In summary, incorporating instance segmentation into the grain size measurement process for aluminum offers a transformative approach, addressing the limitations of traditional methodologies. AI solutions such as ZEISS arivis Cloud provide accessible tools for creating customized instance segmentation models without the need for coding expertise and seamlessly integrate AI segmentation within the image acquisition and analysis pipeline, streamlining the AI segmentation process and making advanced image analysis techniques more accessible. All images used in this case study are courtesy of IMFAA Hochschule Aalen.

References

1. Ronneberger O, Fischer P, and Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. (2015) arXiv:1505.04597 doi: 10.48550/arXiv.1505.04597.

2. Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. (2014) arXiv:1409.1556v6 doi: 10.48550/arXiv.1409.1556.


— — — — — — — — — — — — — — —


Page 111


109

Case Studies: Examples from Materials Science


— — — — — — — — — — — — — — —


Page 112


110

Case Studies: Examples from Materials Science

Instance segmentation in C45 steel analysis: Improving microstructural insights with AI

C45 steel, also known as AISI 1045 steel

or S45C steel, holds significant importance

in various industries due to its exceptional

properties and versatile applications:

■Shaft manufacturing: C45 steel is

chosen for shafts due to its high tensile

strength and fatigue resistance, crucial for

enduring mechanical forces and prolonging

operational lifespan.

■Gears and sprockets: The wear resistance

of C45 steel suits gears and sprockets, vital

for power transmission systems, enduring

constant friction and abrasion to maintain

efficiency.

■Machine parts: The superior machinability

of C45 steel makes it ideal for various

machine parts like bolts, nuts, and studs,

enabling easy shaping and machining.

■Automotive parts: C45 steel is favored in

automotive components such as crankshafts

and axles for its high tensile strength

and toughness, ensuring reliable engine

performance and longevity.

■Construction machinery: The strength

and durability of C45 steel make it a

top choice for construction machinery

components like excavators and cranes,

capable of withstanding heavy loads and

harsh environments.

This case study explores the challenges of

segmenting ferrite and pearlite phases in

C45 steel microstructures, which is crucial

for understanding its mechanical properties

and performance. We discuss the importance

of grain size measurement in steel, common

methods used, and how AI-driven instance

segmentation emerges as a solution to the challenges encountered in accurately segmenting ferrite and pearlite grains.

Importance of grain size measurement

in steel

C45 steels consist of various grains belonging

to primarily two phases: ferrite and pearlite.

Nital etching is a common practice to reveal the

grain structure in these alloys. Measuring the

grain size of C45 steel is vital as it significantly

influences its mechanical properties, including

strength, toughness, ductility, and fatigue

resistance. Grain size also affects machinability

and weldability.

By measuring grain size, manufacturers can

optimize the manufacturing process and ensure

materials meet required specifications, aiding in

quality control and failure analysis.

Common grain size measurement

methods

Metallography employs various methods for

grain size measurement, including:

Comparison chart method

This method relies on comparing the

microstructure of the sample with a standard

chart or image containing known grain

sizes. By visually matching the sample’s

microstructure to the closest standard, the

grain size can be estimated. While relatively

straightforward, this method is subjective

and depends heavily on the observer’s

interpretation.

Linear intercept method

In this method, a line is drawn across the

microstructure, intercepting a specified

number of grains. The length of each intercept

is measured, and the average grain size is

calculated based on these measurements.

Although it provides statistical data, this


— — — — — — — — — — — — — — —


Page 113


111

Case Studies: Examples from Materials Science

method may not account for grains that lie

outside the intercept lines, potentially leading

to inaccuracies.
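A minimal sketch of the intercept measurement under simple assumptions: a grain-labeled image and horizontal test lines, with run lengths along each line taken as intercept lengths. The helper is illustrative, not a standardized implementation:

```python
# Sketch: linear intercept method on a labeled grain image.
import numpy as np

def mean_intercept_length(grains, rows, um_per_px):
    """Average intercept length (in um) along the given image rows."""
    lengths = []
    for r in rows:
        line = grains[r]
        changes = np.flatnonzero(np.diff(line)) + 1            # grain-id changes
        runs = np.diff(np.concatenate(([0], changes, [line.size])))
        lengths.extend(runs)                                   # run length = one intercept
    return float(np.mean(lengths) * um_per_px)

# Placeholder: eight vertical grains, each 64 pixels wide.
grains = np.repeat(np.arange(8), 64)[None, :].repeat(512, axis=0)
print(mean_intercept_length(grains, rows=[128, 256, 384], um_per_px=0.5))  # 32.0
```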

Planimetric method

This method involves

measuring the area of a specified number of

grains within the microstructure. By dividing

the total area by the number of grains, the

average grain size can be determined. While

offering a more comprehensive assessment

of grain size distribution, this method can be

time-consuming and may require advanced

image analysis techniques for accurate

segmentation.

Each method has its strengths and limitations,

and the choice depends on factors such as the

complexity of the microstructure, the desired

level of detail, and the available resources for

image analysis.

Challenges in ferrite and pearlite

segmentation

Segmenting ferrite and pearlite in steel

microstructures poses several challenges:

■Similar appearance under optical

microscopy.

■Complex morphologies and orientations.

■Interference from other constituents like

carbides and martensite.

■Image quality issues such as poor lighting

and low contrast.

These challenges necessitate advanced image

analysis techniques for accurate segmentation.

AI-based segmentation techniques

Artificial intelligence has revolutionized

image analysis. Deep Learning, in particular,

has demonstrated remarkable capabilities

in achieving precise and reproducible

segmentation results across diverse datasets and is particularly suited to measuring grain

size in steel.


Semantic segmentation

Semantic segmentation, a key approach

within Deep Learning-based segmentation,

involves classifying each pixel in an image.

This technique enables the segmentation of

contiguous pixels representing distinct phases

or regions within the image.

Semantic segmentation is invaluable for

accurately delineating structural elements

such as ferrite and pearlite phases in C45 steel

microstructures. This approach is appropriate

for area fraction measurements, where details

down to the grain level are not required.

Instance segmentation

Instance segmentation represents a further

refinement of semantic segmentation, as it

not only classifies pixels but also identifies

and delineates individual grains or objects.

This approach provides detailed insights into

microstructural characteristics, facilitating

advanced analysis such as grain size distribution

for various phases in the material.
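The distinction can be made concrete with a small sketch: a semantic result stores one class id per pixel, which suffices for area fractions, while per-grain statistics need instance labels. Here connected-component labeling stands in for a true instance segmentation model, and all arrays are placeholders:

```python
# Sketch: semantic mask vs. instance labels.
import numpy as np
from skimage.measure import label

semantic = np.zeros((128, 128), dtype=int)   # 0 = boundary/background,
semantic[10:50, 10:50] = 1                   # 1 = ferrite (two separate grains)
semantic[10:50, 60:100] = 1
semantic[70:110, 10:100] = 2                 # 2 = pearlite

ferrite_fraction = (semantic == 1).mean()    # area fraction: semantic is enough

# Per-grain analysis needs instances; connected components stand in here.
ferrite_instances = label(semantic == 1)
print(f"ferrite fraction {ferrite_fraction:.2f}, {ferrite_instances.max()} ferrite grains")
```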

Analysis of Nital-etched polished C45

steel samples

In this study, images of Nital-etched polished

C45 steel samples were examined. These

images were captured using a light microscope

under brightfield imaging. The images exhibit

three distinctive regions:

1. Bright areas with higher pixel values.

2. Dark areas with lower pixel values.

3. Regions with pixel values between bright

and dark.

Bright regions correspond to ferrite, while

darker regions represent pearlite. Additionally,

the intermediate regions with medium


— — — — — — — — — — — — — — —


Page 114


112

Case Studies: Examples from Materials Science

intensities were attributed to the pearlite phase

for the purposes of this study.

Segmentation of grains in these images

presents challenges even for human observers

due to the ambiguity in assigning regions to

specific phases based on intensity, especially

in areas with medium intensities. Furthermore,

identifying grain boundaries that separate

grains can be challenging, particularly for

pearlite, where the contrast around grain

boundaries is not discernible against the busy

texture of pearlite.

C45 steel grain analysis results using

AI-based methods

To overcome these challenges, an instance

segmentation model was trained on ZEISS

arivis Cloud. The model was trained using

annotations of a handful of random grains

from a selection of images, providing ground

truth for both ferrite and pearlite grains

separately.

This approach ensures that the phases

are segmented accurately and that the corresponding grains are identified, allowing

for grain size distribution analysis of the

respective phases.

The trained model was then imported into

ZEISS arivis Pro and used to segment and

analyze multiple images, including the one

shown in Figure 40a. It is important to note

that ZEISS arivis Cloud-trained models can

be seamlessly imported into various ZEISS

software packages, including ZEN, ZEN

core, and ZEISS arivis Pro. In this study, ZEISS

arivis Pro was chosen for its capability to

accommodate and automate customized

downstream image analysis routines.

Figure 40b illustrates the instance

segmentation result, with ferrite and pearlite

phases clearly separated and overlaid on the

original image in blue and yellow, respectively.

This image closely resembles the result

obtained from semantic segmentation, which

aims to segment individual phases, thus

allowing the quantification of area fractions for

each phase, as illustrated in Figure 41a .
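A minimal sketch of the area-fraction computation behind a result like Figure 41a, assuming a hypothetical per-pixel phase map produced by the segmentation:

```python
# Sketch: phase area fractions from a per-pixel phase map.
import numpy as np

phase = np.random.choice([1, 2], size=(512, 512), p=[0.4, 0.6])  # placeholder map
fractions = {name: np.count_nonzero(phase == pid) / phase.size
             for name, pid in (("ferrite", 1), ("pearlite", 2))}
print(fractions)   # roughly {'ferrite': 0.4, 'pearlite': 0.6}
```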

Figure 40: Instance segmentation results. (a) Original image of Nital-etched polished C45 steel sample captured under an

optical microscope using brightfield illumination. (b) Instance segmentation result overlaying ferrite and pearlite phases on

the original image. Ferrite is depicted in blue, while pearlite is shown in yellow. (c) Color-coded visualization of all instance-

segmented grains based on their size, with blue indicating small grains, red representing large grains, and intermediate

sizes represented by colors spanning the spectrum between blue and red. (d) Visualization similar to panel (c) but focusing

solely on grains belonging to the ferrite phase.


— — — — — — — — — — — — — — —


Page 115


113

Case Studies: Examples from Materials Science

Summary

In summary, training an AI model on ZEISS

arivis Cloud and importing it to ZEISS arivis Pro

enables efficient segmentation and analysis of

multiple images at scale. This approach offers

comprehensive insights into the microstructural

features of Nital-etched polished C45 steel

samples, facilitating accurate analysis through

an automated pipeline.

Figure 41: Analysis of segmented phases. (a) Pie chart illustrating the area fractions of ferrite and pearlite phases

calculated from the instance segmentation result (Figure 40b). (b) Scatter plot demonstrating the relationship between

grain areas and mean intensities. The plot showcases distinctions in mean intensities between pearlite (yellow data points)

and ferrite (blue data points) grains.

In addition to determining area fractions, our

instance segmentation approach provides

information down to the grain level, facilitating

grain distribution analysis for all grains, both

phases combined, and individually. Figure 40c

displays all grains color coded according to

their size, with colors ranging from blue for

small grains to red for large grains. Figure 40d

presents a similar visualization but only for

grains belonging to the ferrite phase.

Beyond visualization, the grain data enables

further analysis, as demonstrated in Figure

41b, where grain areas are plotted against

the corresponding mean intensities. The plot

reveals differences in mean intensities between

pearlite (yellow data points) and ferrite (blue

data points) grains.
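A minimal sketch of the per-grain measurement behind a plot like Figure 41b, assuming an instance-labeled grain image and the original grayscale intensities; all data here are placeholders:

```python
# Sketch: grain area vs. mean intensity, one point per segmented grain.
import numpy as np
import matplotlib.pyplot as plt
from skimage.measure import regionprops

gray = np.random.rand(256, 256)                    # placeholder intensities
col = np.arange(256) // 64                         # placeholder 4x4 grid of "grains"
grains = col[:, None] * 4 + col[None, :] + 1       # instance labels 1..16

props = regionprops(grains, intensity_image=gray)
areas = [p.area for p in props]
means = [p.intensity_mean for p in props]

plt.scatter(areas, means)
plt.xlabel("grain area (px)")
plt.ylabel("mean intensity")
plt.show()
```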


— — — — — — — — — — — — — — —


Page 116


Summary 114

Summary

This book provided a comprehensive overview

of the importance of AI in image analysis,

presenting a diverse array of use cases and

demonstrating how to leverage this technology

effectively.

The first chapter introduced readers to the

concept of AI and its growing significance

in research, particularly in image analysis.

It explained the distinctions between AI,

Machine Learning, and Deep Learning,

emphasizing Deep Learning’s suitability for

challenging image analysis tasks. The chapter

also introduced ZEISS software products that

make AI accessible to a wide range of users.

Chapter two focused on image segmentation,

offering a historical perspective on various

approaches, from Otsu thresholding to

Deep Learning. This chapter provided the

necessary background about AI-based image

segmentation, laying the groundwork for

subsequent chapters where readers would

learn about the use of this technology for

diverse applications using various ZEISS

software packages.

The third chapter, new to this edition, explored

ZEISS arivis software for AI-powered image

analysis. It explained how ZEISS arivis Cloud

simplifies the training of custom image

segmentation models, which can be imported

into ZEISS arivis Pro for automated analysis of

multi-dimensional large images. The chapter

also discussed scaling image analysis using

multiple processors on ZEISS arivis Hub and the

ability to create ground truth labels in 3D using

the immersive ZEISS arivis Pro VR environment.

Chapter four, another new addition, focused

on AI integration in ZEN and ZEN core

software. It explored how AI can guide image

acquisition, enabling smart microscopy. The

chapter detailed various AI-powered tools

within these software packages, including AI-based denoising, object classification,

and pre-packaged applications in BioApps

and Material Apps for tasks ranging from cell

counting to Grain Size Analysis.

In the fifth chapter, the book discussed

how AI tools could be used in routine

image analysis applications. Integration of

AI was demonstrated using examples from

microscopy, such as tissue and blood sample

analysis for atypical cells and cell morphologies.

Furthermore, it showed how AI tools help

with repetitive and time-consuming tasks and

eliminate human error. The chapter reviewed

the ZEISS Labscope imaging app and showed

how its AI modules benefit these applications.

Chapter six, a new addition, explored the

use of Deep Learning for X-ray microscopy

reconstruction. It covered X-ray microscopy

basics, including how dual-stage magnification

achieves high resolution from large samples.

The chapter then explained how Deep

Learning-based reconstruction can increase

throughput without compromising resolution

compared to traditional FDK reconstruction.

Since the use of AI technology has become

increasingly significant in science and industry,

the seventh and final chapter of the book

centered on an expanded collection of case

studies. These case studies highlighted how AI-

enabled analysis of microscope image datasets

provided new and faster answers to research or

engineering problems. One of the case studies

demonstrated the potential application of AI

tools in segmenting and measuring organelles,

characterizing mitochondria, and classifying

the spatial distribution of nuclear pores using

a volumetric FIB-SEM dataset. Another case

study demonstrated how AI image analysis

assisted in understanding Wnt inhibition

in organoid formation. New case studies

included examples such as segmenting mouse

muscle 3D ultrastructure, enhancing grain size


— — — — — — — — — — — — — — —


Page 117


Summary 115

measurement in aluminum, and segmenting

phases in C45 steels, among others.

For readers looking to apply AI technology to

their own image analysis, here are a few

additional tips and best practices to keep in

mind.

■Carefully consider the problem and

determine whether AI is the appropriate tool

to use.

■Have a clear understanding of the data and

ensure that there is sufficient high-quality

data to train the models.

■Select the AI tools carefully, as different

algorithms may be better suited to

different types of data and analysis tasks.

For example, the instance segmentation

algorithm is better suited to segment

individual cells separately, while the semantic

segmentation algorithm is more appropriate

when cells need to be segmented collectively

from the background.

■Continually evaluate and validate the models

and incorporate feedback from domain

experts to ensure accurate and meaningful

results.

In conclusion, this updated edition offered

readers an in-depth exploration of AI’s

capabilities in image analysis, inspiring them

to further investigate these techniques in their

own research and work. As AI continues to

evolve rapidly, this book serves as a valuable

resource for unlocking new insights and

capabilities across various scientific disciplines.

Thank you for your time and interest in this

book.

Dr. Sreenivas Bhattiprolu


— — — — — — — — — — — — — — —


Page 118


116

Our software is powerful, flexible, and easy to use, making it simple to get started with your image analysis. It is the perfect solution for researchers, engineers, and scientists.

The ZEISS arivis image analysis platform offers

scalable software tools on a desktop, server,

and in the cloud. With the ZEISS arivis product

portfolio, researchers can easily perform

advanced image analysis to extract information

from image data, regardless of its complexity.

No matter the source and format of the image,

our products are highly integrated, providing

users in academia and across varied industries

with a streamlined image processing and

analysis process for enhanced efficiencies due

to automation and user-friendliness.

The ZEISS arivis family of products

www.zeiss.com/arivis

ZEISS arivis family of products

ZEISS arivis Pro

With ZEISS arivis Pro, you can unlock the

full potential of your scientific images. Our powerful tools help you create seamless analysis pipelines, effortlessly process massive

multidimensional datasets, and get the insights

you need to make better decisions.

Here are some of the features of ZEISS arivis

Pro:

■Automated end-to-end image analysis

pipelines, created with just a few clicks.

■Multi-dimensional image analysis made easy with an intuitive interface.

■Numerous AI-powered tools for automated

image analysis.

■Efficient handling of large quantities of data,

with the capability to load millions of objects

seamlessly.

■Optional VR toolkit for an even more

immersive experience.

ZEISS arivis Pro

www.zeiss.com/arivis-pro

ZEISS Microscopy Software Solutions

Visit our website to learn more:


— — — — — — — — — — — — — — —


Page 119


117

The ZEISS arivis family of products

www.zeiss.com/arivis

Designed for biotech, pharma, materials

science, electronics, and more.

Upgrade your image analysis capabilities

with ZEISS arivis Cloud. Collaborate and train

custom Deep Learning models from anywhere

with ease. Get reproducible and reliable results

faster.

ZEISS arivis Cloud

ZEISS arivis Cloud provides the tools necessary

to train custom Deep Learning models for

semantic (pixel-level) and instance (object-level)

segmentation. These models can then be

used as part of image analysis pipelines in

ZEN and ZEISS arivis Pro to power automated

smart image acquisition and analysis of large

datasets.

Key features:

■Customizable Deep Learning models for

pixel and object segmentation in images.

■Export models to automate image analysis

using ZEN and ZEISS arivis software.

■Easy portability and collaboration.

■No coding required!

ZEISS arivis Hub

ZEISS arivis Hub has got you covered when

you want to scale up your image analysis. This

powerful platform enables you to optimize

your computing resources, import and organize

your datasets, and manage your data access

and identification with ease, making it ideal

for 2D and 3D High Content Analysis (HCA)

applications.

Key capabilities include the ability to:

■Parallelize your computations for enhanced

scalability.

■Easily create workflows with one or multiple

pipelines for connecting various analysis

tasks into one streamlined process.

■View your spatially resolved results directly

on your raw datasets, saving you time and

increasing cost efficiency.

ZEISS arivis Cloud

www.zeiss.com/arivis-cloud

ZEISS arivis Hub

www.zeiss.com/arivis-hub

ZEISS Microscopy Software Solutions

Whether your images are already stored or

currently being generated, ZEISS arivis Hub

onboards them and schedules analysis jobs for

optimized and maximized throughput.


— — — — — — — — — — — — — — —


Page 120


118

ZEISS provides end-to-end microscopy

software solutions that are fully integrated

with every imaging system from ZEISS. No

matter the complexity of your imaging needs

or application, ZEISS will find the hardware and software solution you need.

ZEISS ZEN family of products

ZEISS ZEN

www.zeiss.com/zen

ZEISS Microscopy Software Solutions

ZEN microscopy software

ZEN is your complete solution from sample to

knowledge. Whether you’re a beginner or an

expert, ZEN has everything you need to get the

most out of your microscopy experiments.

ZEN is the universal user interface on every

ZEISS imaging system. It provides intuitive

tools and modules to assist you with all your

microscopy tasks. Whether you need to:

■Quickly and easily acquire high-quality

images using smart automation.

■Process images using scientifically proven

algorithms.

■Visualize big data with a GPU-powered 3D

engine.

■Analyze images using Machine Learning-

based tools.

■Correlate between light and electron

microscopes to gain a deeper understanding

of your samples.

With ZEN, you can design multi-dimensional workflows exactly the way you want. ZEN’s intuitive tools and modules make it easy to accomplish simple tasks, while still offering the flexibility to tackle even the most complex

research experiments.

ZEISS Light Microscopy Software

www.zeiss.com/light-microscopy-software


— — — — — — — — — — — — — — —


Page 121


119

ZEISS ZEN core

www.zeiss.com/zen-core

ZEISS ZEN core

ZEISS ZEN core is your ultimate software suite

for connected microscopy from materials lab

to production. It offers a range of imaging,

segmentation, analysis, and data connectivity

tools that make it the most comprehensive

solution for multi-modal microscopy in

connected material laboratories.

With ZEN core, you get:

■An adaptive user interface that’s easy to

configure and use.

■Advanced imaging and automated analysis

tools.

■Data connectivity features that are designed

to work seamlessly across all your connected

devices and equipment.

ZEISS Microscopy Software Solutions

ZEN core is the perfect software suite for

anyone who needs comprehensive microscopy

capabilities, from materials lab researchers to

production teams. With ZEN core, you can take

your microscopy experiments to the next level

and get the insights you need to make better

decisions.


— — — — — — — — — — — — — — —


Page 122


120

Other software solutions

ZEISS Labscope

ZEISS Labscope is your easy-to-use imaging

app. With it you can connect all the

microscopes in your lab or classroom to a

digital network and display their live images

simultaneously from anywhere in the room.

Getting reproducible results faster has never

been easier or more fun.

Here’s how Labscope can help you:

■Effortlessly observe and share images in

real-time in your digital network.

■Snap images, record videos, and measure

samples with a push of a button. Increase

efficiency with dedicated features that are

targeted at routine tasks.

■Collaborate and teach with ease as you

observe your students in real-time. Switch

easily between microscopes in the lab

and in class, turning each lesson into a

demonstration.

ZEISS Microscopy Software Solutions

ZEISS Labscope is the perfect solution for

connecting and managing all your microscopes

in one place. Say goodbye to manual juggling

and hello to easy digital networking, fast

results, and collaborative teaching with

Labscope.

ZEISS Labscope

www.zeiss.com/labscope


— — — — — — — — — — — — — — —


Page 123


121

ZEISS DeepRecon Pro

Part of the Advanced Reconstruction Toolbox

(ART), ZEISS DeepRecon Pro leverages AI

to tackle complex imaging challenges with

innovative solutions. As the first commercially

available Deep Learning reconstruction

technology for X-ray microscopes (XRM), it

transforms how you handle big data.

Key benefits of ZEISS DeepRecon Pro:

■Unlock the potential of big data generated

by your XRM.

■Increase throughput by up to 10× without

compromising resolution or image quality.

■Intuitive interface, enabling even novice

users to operate with ease.

■Support for diverse sample types, sizes, and

shapes.

ZEISS DeepRecon Pro

www.zeiss.com/art

Experience the robust and continuously

evolving innovations from ZEISS X-ray

Microscopy with the Advanced Reconstruction

Toolbox.

ZEISS Microscopy Software Solutions


— — — — — — — — — — — — — — —


Page 124


Contributors 122

Contributors

Chapter 1: What is AI and why does it matter?

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Ofra Kleinberger-Riedrich, Sr. Content & Product Marketing Manager, Carl Zeiss Microscopy

GmbH

Chapter 2: How to train custom AI models for image segmentation

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Ofra Kleinberger-Riedrich, Sr. Content & Product Marketing Manager, Carl Zeiss Microscopy

GmbH

■Dr. Simon Franchini, Technical Lead Machine Learning, Carl Zeiss Microscopy GmbH

Chapter 3: AI in ZEISS arivis software for scalable automated analysis

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Maria Marosvoelgyi, Product Manager, Carl Zeiss Microscopy Software Center Rostock GmbH

Chapter 4: AI in ZEN and ZEN core imaging and analysis platform

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Dr. Marion Lang, Product Manager, Carl Zeiss Microscopy GmbH

■Dr. Sebastian Rhode, Software Architect - AI Solutions, Carl Zeiss Microscopy GmbH

Chapter 5: AI for routine image analysis using ZEISS Labscope

■Anke Koenen, Marketing Specialist, Carl Zeiss Microscopy GmbH

■Dr. Michael Gögler, Market Sector Manager, Carl Zeiss Microscopy GmbH

■Dr. Benjamin Schwarz, Market Sector Manager, Carl Zeiss CMP GmbH


— — — — — — — — — — — — — — —


Page 125


Contributors 123

Chapter 6: AI for X-ray microscopy with Deep Learning-based reconstruction

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Dr. Nicolas Gueninchault, Product Marketing Manager, Carl Zeiss X-ray Microscopy, Inc.

Chapter 7: Case studies: Examples from Life sciences


Microscopy and Deep Learning for Neurological Disease Research

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Dr. Kevin O’Keefe, Senior Software Sales Biotech Pharma, Carl Zeiss Microscopy, LLC

■Dr. Amita Gorur, Senior Applications Scientist, Carl Zeiss Microscopy, LLC

■Dr. Christopher Zugates, Head of Customer Success, Carl Zeiss Microscopy, LLC

■Dr. Andy Schaber, Product Application Sales Specialist, Carl Zeiss Microscopy, LLC

Organoid analysis

■Dr. Philipp Seidel, Product Marketing Manager Life Sciences Software, Carl Zeiss Microscopy

GmbH

■Dr. Volker Doering, Application Development Engineer, Life Sciences Automation, Carl Zeiss

Microscopy GmbH

Enhancing single-cell analysis with instance segmentation in phase contrast

microscopy images

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Dr. Sandra Lemke, Product Owner - AI and Applications, Carl Zeiss Microscopy GmbH

■Dr. Frank Vogler, Applications Specialist, Carl Zeiss Microscopy Deutschland GmbH

■Dr. Marion Lang, Product Manager, Carl Zeiss Microscopy GmbH


— — — — — — — — — — — — — — —


Page 126


Contributors 124

Analysis of FIB-SEM volume electron microscopy data

■Dr. Mariia Burdyniuk, Customer Success Specialist, Carl Zeiss Microscopy, LLC

■Dr. Christopher Zugates, Head of Customer Success, Carl Zeiss Microscopy, LLC

Analysis of Mitochondria Using Deep Learning

■Dr. Mariia Burdyniuk, Customer Success Specialist, Carl Zeiss Microscopy, LLC

■Dr. Wendy Bautista, Physician Scientist, National Cancer Institute (NCI)

■Dr. Mones Abu Asab, Senior Ultrastructural Scientist, National Eye Institute, NIH

Segmenting mouse muscle 3D ultrastructure

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Joy James Costa, Application Engineer, Carl Zeiss Microscopy Software Center Rostock GmbH

■Dr. Federico Ribaudo, Product Manager arivis Pro, Carl Zeiss Microscopy Software Center

Rostock GmbH

Enhancing the utility of zebrafish models to study infectious diseases using Deep

Learning

■Dr. Serge Mostowy, Department of Infection Biology, London School of Hygiene and Tropical

Medicine

■Dr. Margarida C. Gomes, Mostowy Lab, Department of Infection Biology, London School of

Hygiene & Tropical Medicine

■Ofra Kleinberger-Riedrich, Sr. Content & Product Marketing Manager, Carl Zeiss Microscopy

GmbH

Exploring mouse embryo development with microCT and AI

■Joy James Costa, Application Engineer, Carl Zeiss Microscopy Software Center Rostock GmbH

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Rachna Parwani, Product Applications Development Engineer, Carl Zeiss X-ray Microscopy, Inc.

■Dr. Rosy Manser, Solution Manager X-Ray Microscopy, Life Science Sector, Carl Zeiss Limited, UK


— — — — — — — — — — — — — — —


Page 127


Contributors 125

Case studies: Examples from Materials science


Improving microstructure analysis of aluminum oxide with Deep Learning

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Tim Schubert, Materials Scientist, Institut für Materialforschung (IMFAA)

Enhancing grain size measurement in aluminum

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Torben Wulff, Business Sector Manager, Materials Science, Carl Zeiss Microscopy GmbH

Instance segmentation in C45 steel analysis: Improving microstructural insights

with AI

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.

■Torben Wulff, Business Sector Manager, Materials Science, Carl Zeiss Microscopy GmbH

Summary

■Dr. Sreenivas Bhattiprolu, Director, Digital Solutions, Carl Zeiss X-ray Microscopy, Inc.


— — — — — — — — — — — — — — —


Page 128


_(No extractable text on this page.)_


— — — — — — — — — — — — — — —


Page 129


_(No extractable text on this page.)_


— — — — — — — — — — — — — — —


Page 130


Follow us on social media:

Carl Zeiss Microscopy GmbH

07745 Jena, Germany

microscopy@zeiss.com

www.zeiss.com/microscopy

Not for therapeutic use, treatment or medical diagnostic evidence. Not all products are available in every country. Contact your local ZEISS representative for more information.

EN_41_012_303 | Release 2.0 | CZ 09/2024 | Design, scope of delivery, and technical progress subject to change without notice. | © Carl Zeiss Microscopy GmbH
