6.1. Registration Intro

6.1.1. Learning Objectives

It is essential to understand the key registration methods used in CAS, and to be familiar with some of the large body of work on quantifying the likely error of a registration.

Upon completion of this section, the student will be able to:

  • Understand what a coordinate system is

  • Sketch the main coordinate systems in a CAS system

  • Implement coordinate conversions using 4x4 homogeneous transformations

  • Recall the main registration methods

  • Understand some of the challenges when registering data to physical space

6.1.2. Introduction

Registration is the process of aligning two Coordinate Systems.

6.1.2.1. Medical Image Computing

This example may be more familiar to you if you have done the IPMI course. In medical imaging terms, registration is often performed to align image volumes, e.g. aligning MR to CT.

An example is shown in a video on the AnalyzeDirect channel on YouTube.

Most medical image viewers provide similar functionality to align 3D volumes.

6.1.2.2. Computer Assisted Surgery

In CAS, the problem also exists in intra-device, inter-device or image-device scenarios:

  • image-device: registering a pre-operative volume image to a tracker.

  • inter-device: registering a camera coordinate system to a tracker.

  • intra-device: registering two poses of a camera at subsequent time points.

For example:

  • image-device: registering pre-operative data (CT/MR) scans to patient (tracker/world) space, to display the physical location of the tip of a tracked pointer in the MR/CT scan (see the sketch after this list).

  • inter-device: registering pre-operative data (CT/MR) scans to a laparoscopic video feed. This can be done directly, by matching the CT/MR coordinates to the video camera coordinates [Espinel2020], or indirectly by registering CT/MR to tracker space, and then using tracking and calibration information to work out where the camera is, and hence where the CT/MR is relative to the camera [Thompson2015].

  • intra-device: registering feature points in one video frame to the next, and working out the change in camera position, which would enable triangulating those points.
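
As a concrete illustration of the image-device case, here is a minimal Python sketch (using numpy) of mapping a tracked pointer tip into CT space. The registration matrix and all coordinates are hypothetical; in practice the matrix would be estimated by one of the methods described in the following sections.

    # A minimal sketch: all values are hypothetical, and the registration
    # matrix would normally be estimated by one of the methods below.
    import numpy as np

    # Hypothetical 4x4 homogeneous transform: tracker space -> CT space.
    tracker_to_ct = np.array([[0.0, -1.0, 0.0,  20.0],
                              [1.0,  0.0, 0.0, -15.0],
                              [0.0,  0.0, 1.0, 100.0],
                              [0.0,  0.0, 0.0,   1.0]])

    # Pointer tip, as reported by the tracking system, in tracker space,
    # expressed in homogeneous coordinates (x, y, z, 1).
    tip_in_tracker = np.array([10.0, 5.0, 250.0, 1.0])

    # The same point in CT space, ready to be displayed on the scan.
    tip_in_ct = tracker_to_ct @ tip_in_tracker
    print(tip_in_ct[:3])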

6.1.3. Methods

Typically, methods in CAS are sub-divided (e.g. in Peters et al., “Mixed and Augmented Reality in Medicine”) into:

  • Manual

  • Point-based

  • Surface-based (also called Shape-based)

  • Volume-based (i.e. intra-op CT to pre-op CT; not covered here, see [Octay2013]).

  • Calibration-based, examples of which were covered earlier [Feuerstein2008], [Kang2014].

These are covered in the next sections.

6.1.4. A Note on Coordinate Systems and Rotations

A brief introduction to coordinate transformations is provided in the accompanying Jupyter Notebooks.

In 3D space, we typically consider 6 degrees-of-freedom (DOF):

  • Translations along the x, y, z Cartesian axes = 3 DOF

  • Rotations about the x, y, z Cartesian axes = 3 DOF

So, registration, and converting coordinates from one coordinate system to another, require an understanding of how these work.

However:

  • There are several rotation formulations.

  • Euler angles get confusing when you consider extrinsic versus intrinsic rotations.

  • Euler angles, quaternions and the Rodrigues (axis-angle) representation (see the links above) can be converted between each other, and to a 3x3 rotation matrix.

  • Rotation matrix multiplication is not commutative: the order in which rotations are composed matters (see the sketch after this list).

  • Preferences around the ordering of rotation matrices, especially when discussing Euler angles, are software/application/community/culture specific.

  • Note that the underlying graphics system may use a different convention to a higher-level software API.

  • Assume NOTHING. Every time you implement these things, start with a very clear definition of what you are meant to be implementing.
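
The points above about interchangeable representations and non-commutativity can be checked numerically. The sketch below uses scipy.spatial.transform.Rotation, which is an assumption about available tooling rather than part of any particular CAS system; the chosen Euler sequence is arbitrary.

    # A minimal sketch, assuming scipy is available: one rotation expressed
    # in several representations, and a demonstration that composition
    # order matters.  The 'XYZ' (intrinsic) sequence is an arbitrary choice.
    import numpy as np
    from scipy.spatial.transform import Rotation

    rot = Rotation.from_euler("XYZ", [30.0, 20.0, 10.0], degrees=True)
    print(rot.as_matrix())   # 3x3 rotation matrix
    print(rot.as_quat())     # quaternion, in scipy's (x, y, z, w) order
    print(rot.as_rotvec())   # Rodrigues / axis-angle vector

    # Composition order matters: Rx * Rz is not the same as Rz * Rx.
    rx = Rotation.from_euler("x", 90, degrees=True).as_matrix()
    rz = Rotation.from_euler("z", 90, degrees=True).as_matrix()
    print(np.allclose(rx @ rz, rz @ rx))   # False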

6.1.5. A Note on VTK Coordinate Systems

  • Several pieces of software, including Slicer, MITK, PLUS, NifTK and SciKit-Surgery, use VTK.

  • Look in vtkProp3D, and at SetOrientation(), whose documentation says “Orientation is specified as X,Y and Z rotations in that order, but they are performed as RotateZ, RotateX, and finally RotateY”.

  • vtkProp3D therefore suggests that VTK uses “Tait–Bryan angles”, specifically the z-x-y option, which are intrinsic rotations, meaning that the rotation axes move with the object being rotated (see the sketch at the end of this section).

This has been implemented in the SciKit-Surgery platform.

In addition:

  • In vtkTransform, there is a method RotateWXYZ(), which specifies a rotation as an angle about a world axis. Internally, this uses quaternions, and converts the axis and angle to a homogeneous matrix. This is an extrinsic rotation.
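
The documented order of operations can be reproduced outside VTK. The numpy sketch below composes elementary rotations so that RotateZ is applied first, then RotateX, then RotateY (for column vectors this is Ry @ Rx @ Rz). This is one reading of the vtkProp3D documentation, written as an assumption to be verified against VTK itself (e.g. by comparing with vtkActor.GetMatrix()) before relying on it.

    # A minimal sketch of one reading of the vtkProp3D documentation:
    # rotations "performed as RotateZ, RotateX, and finally RotateY".
    # Verify against VTK before relying on this convention.
    import numpy as np

    def rot_x(deg):
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(deg):
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(deg):
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def orientation_to_matrix(rx_deg, ry_deg, rz_deg):
        """Composite rotation for SetOrientation(rx, ry, rz): RotateZ is
        applied first, then RotateX, and finally RotateY."""
        return rot_y(ry_deg) @ rot_x(rx_deg) @ rot_z(rz_deg)

    print(orientation_to_matrix(30.0, 20.0, 10.0))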

6.1.6. A Note on Homogeneous Coordinate Conventions

As is common (e.g. euclideanspace.com, brainvoyager, opengl), we represent (see the sketch after this list):

  • rotations as the upper-left 3x3 matrix in a 4x4 homogeneous transformation matrix.

  • translation as the right-most 3x1 vector in a 4x4 homogeneous transformation matrix.
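
As a minimal illustration of this convention, the sketch below packs an arbitrary rotation and translation into a 4x4 matrix, applies it to a point, and checks the closed-form inverse of a rigid transform against numpy.

    # A minimal sketch of the convention above: upper-left 3x3 = rotation,
    # right-most column = translation.  R and t are arbitrary examples.
    import numpy as np

    rotation = np.array([[0.0, -1.0, 0.0],
                         [1.0,  0.0, 0.0],
                         [0.0,  0.0, 1.0]])   # 90 degrees about z
    translation = np.array([10.0, 20.0, 30.0])

    transform = np.eye(4)
    transform[:3, :3] = rotation
    transform[:3, 3] = translation

    # Apply to a point in homogeneous coordinates (x, y, z, 1).
    point = np.array([1.0, 2.0, 3.0, 1.0])
    print(transform @ point)

    # The inverse of a rigid transform is [R^T | -R^T t].
    inverse = np.eye(4)
    inverse[:3, :3] = rotation.T
    inverse[:3, 3] = -rotation.T @ translation
    print(np.allclose(inverse, np.linalg.inv(transform)))   # True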

Note the comment on the tutorial on the opengl website: “This is the single most important tutorial in the whole set. Be sure to read it at least 8 times”.

This is not being facetious. It is good advice.