MRI Visualizations in Blender

Introduction

On 2nd March 2023, I underwent an MRI Brain / IAM scan as part of an investigation into chronic BPPV (Benign Paroxysmal Positional Vertigo) and VM (Vestibular Migraine) episodes, recounted in a video on the main YouTube channel.

Though the radiologist's report was generated quickly, the follow-up appointment with the neurologist, and the subsequent process of obtaining the DICOM scan data, took considerably longer due to communication lags and bouncing between departments.

Data Policy Change

At the particular hospital where I had the scan performed, I learnt that there had been a policy change in the process for providing patient data on request.
Previously, the data would be handed to the patient on the same day on a CD. Now, the patient must contact the medical records department (there was no contact information online for this particular hospital, as it had changed hands and the old information had not been updated), request a data release form, and fill it out. The form requires a signature from a witness who has known the patient for an extended period of time (measured in years) but is not related to them through family or marriage. A form of photo identification must also be provided to confirm identity.

I am not sure whether this was a nationwide policy change relating to data protection legislation, or a change made by the company managing the hospital (this was a private medical investigation, not performed through the NHS).

The primary reason for going private for this investigation was a simple matter of time investment. NHS waiting times have been awful in recent years, which is no secret; it's a point of political tension. I had a throat issue marked as urgent last year (2022) as it was interfering with my ability to record for work. The earliest first consultation with an ENT they could offer was a year away (August 2023, from the time the appointment was made).

Exploration and Experimentation

Upon obtaining the MRI scan data, I sat with Ben (Cartesian Caramel) to look over the image slices and discuss methods of 3D visualization. Following on from Ben's previous tests with CT scan data obtained online, we decided to take a volumetric shader approach: presenting the data as a three-dimensional volume with modifiable parameters such as variable density and color ranges to restrict values (useful for isolating certain data in the image slices), which can also be easily sliced / culled by restricting the geometry of the shaded mesh volume.

Using the same principle, it is also possible to voxelize the volumes into physical meshes by opting to use the geometry nodes features in Blender instead of shader nodes. Having creative node systems like this can provide a massive amount of control for the artistic interpretation of the scan data, but may have limited utility for diagnosis, which I'll talk a bit more about in 'The Limitations of Volumes' section.

This experimentation has been recounted in another video on the main YouTube channel, in which I provide a general breakdown, showing eye-catching examples, while sharing recordings of discussions with Ben as the project evolved.

Image Tiles

Though there are several viable approaches to representing scan slices as a volume inside of Blender, we opted for a method that converts the slices of each scan phase into a single tiled image, providing the shader editor in Blender with all of the relevant slices in one go.

A collection of parameters created on a bespoke mapping node group would allow the user to indicate how many rows / columns the shader should expect inside of the 1:1 aspect ratio tile image.
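
For anyone curious about what that mapping has to work out, here is a minimal Python sketch of the underlying arithmetic (an illustration of the idea, not Ben's actual node group): given an in-slice UV coordinate and a normalized position along the scan axis, it returns the lookup coordinate inside the tiled image, assuming tiles packed row by row from the top-left and a V coordinate that runs bottom-to-top.

    def tiled_uv(u, v, z, rows, cols):
        """Map an in-slice (u, v) and a normalized scan-axis position z (0..1)
        to a (u, v) coordinate inside the single tiled image.

        Assumes tiles are packed row by row, top-left first, in a 1:1 image.
        """
        total = rows * cols
        # Pick which slice (tile) the z position falls into.
        index = min(int(z * total), total - 1)
        col = index % cols
        row = index // cols
        # Offset and scale the in-slice UV into that tile's cell.
        tile_u = (col + u) / cols
        tile_v = 1.0 - (row + (1.0 - v)) / rows  # flip so row 0 sits at the top
        return tile_u, tile_v

    # Example: a 4 x 4 grid, sampling halfway through the scan.
    print(tiled_uv(0.5, 0.5, 0.5, rows=4, cols=4))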

Even though a variety of software packages already exist to convert multiple image slices into a single tiled image, Ben opted to go for an in-house approach and create a new geometry-nodes tool that would take any arbitrary number of images imported into the Blender 3D view (stored in a collection called ‘Images’), and pack them into a 1:1 grid in front of a camera, ready to be rendered out.
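
For reference, the same packing step can also be done outside of Blender. The sketch below is not Ben's tool; it is a minimal Python alternative using Pillow, assuming the slices have already been exported from the DICOM series as equally sized greyscale PNGs into a hypothetical 'slices' folder:

    import math
    from pathlib import Path
    from PIL import Image

    # Assumption: slices already exported from the DICOM series as same-size PNGs.
    slice_paths = sorted(Path("slices").glob("*.png"))
    if not slice_paths:
        raise SystemExit("No slice images found in ./slices")

    w, h = Image.open(slice_paths[0]).size

    # rows == cols keeps the tiled image 1:1 (for square slices); spare cells stay black.
    grid = math.ceil(math.sqrt(len(slice_paths)))

    tiled = Image.new("L", (grid * w, grid * h), color=0)
    for i, path in enumerate(slice_paths):
        col, row = i % grid, i // grid
        tiled.paste(Image.open(path).convert("L"), (col * w, row * h))

    tiled.save("tiled_slices.png")
    print(f"Packed {len(slice_paths)} slices into a {grid} x {grid} grid.")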

Making It Public

Since experimenting with my scan data, Ben has decided to clean up the technique and make a tool file available for free on their Gumroad account. This means that if you have an intermediate understanding of Blender and want to visualize your own scan slices, the tool is now available for you to try. Remember, you will need a collection of image slices from a suitable scan, typically an MRI or CT scan, either of which will highlight body material of interest. If you're reading this you likely already know that CT scans employ X-rays whereas MRI scans employ magnetism and radio waves, meaning that even though the resulting data can be structured very similarly, they will highlight different types of tissues, bones, organs, etc.

Update: I have since released a video on the main channel demonstrating the resource.

My Visualization File

With the scan data and visualization method ready, I hooked up all of the generated tile images to a single blend file and had fun choosing the best volume densities, camera framing and color ramp combinations. This artistic process was more informative than expected because when playing with color ramps (which can be used to restrict and re-color data), it became clear that isolating certain scan regions and tissues would be relatively simple.

In the accompanying image you can see a simple color ramp plugged into a Principled Volume shader. The ramp is plugged into the Color input, while the node group generating the volume data (off screen) is plugged into the Density input. The lower (darker) greyscale values of the slices have been recolored to a dark brown, whereas the brighter values have been recolored with a blue-to-white gradient.
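
For anyone who prefers scripting such a setup, here is a rough bpy sketch of that node arrangement. It uses ramp colors in the spirit of the ones described above; the density input is left as a placeholder Value node standing in for the off-screen tile-sampling node group:

    import bpy

    # Build a volume material: Color Ramp -> Principled Volume -> Material Output.
    mat = bpy.data.materials.new("MRI_Volume")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()

    output = nodes.new("ShaderNodeOutputMaterial")
    volume = nodes.new("ShaderNodeVolumePrincipled")
    ramp = nodes.new("ShaderNodeValToRGB")    # the Color Ramp node
    density = nodes.new("ShaderNodeValue")    # placeholder for the tile-sampling group

    # Dark brown for the low (dark) values, blue-to-white for the bright ones (illustrative colors).
    ramp.color_ramp.elements[0].color = (0.05, 0.03, 0.02, 1.0)
    ramp.color_ramp.elements[1].color = (1.0, 1.0, 1.0, 1.0)
    mid = ramp.color_ramp.elements.new(0.6)
    mid.color = (0.1, 0.3, 0.9, 1.0)

    density.outputs[0].default_value = 1.0

    # In practice, the tile-sampling group's greyscale output would also drive ramp.inputs["Fac"].
    links.new(ramp.outputs["Color"], volume.inputs["Color"])
    links.new(density.outputs["Value"], volume.inputs["Density"])
    links.new(volume.outputs["Volume"], output.inputs["Volume"])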

Restricting the values even further on a back-to-front axis scan reveals surprisingly detailed internal structures. The blue-to-white gradient contributes nicely to the readability of the data.

Being a mathematical system, the shader nodes allow us to extract ranges of values (what we would call 'masks') and use them to create even more complex artistic effects.
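
Conceptually, those masks are just a remapping of a value range, much like chaining Map Range / Color Ramp nodes. A tiny numpy sketch of the idea (illustrative, not the actual node setup):

    import numpy as np

    def band_mask(values, lo, hi, soft=0.05):
        """Soft mask that is ~1 for values inside [lo, hi] and falls off over `soft`."""
        rise = np.clip((values - (lo - soft)) / soft, 0.0, 1.0)
        fall = np.clip(((hi + soft) - values) / soft, 0.0, 1.0)
        return rise * fall

    # Example: isolate the brighter values of a (fake) greyscale slice.
    slice_values = np.random.rand(8, 8)   # stand-in for real scan data
    bright_mask = band_mask(slice_values, 0.6, 0.8)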

The Limitations of Volumes

There are a couple of important factors to consider when using volume visualizations of scan slices. Namely: interpolation (blending between the slices), the scan axis, and accuracy.

The Scan Axis

In the accompanying image, you can see a limitation of the method: it represents highly accurate information from one angle, but a 'blurred' interpretation from a side angle. This depends on the scan axis. I have no idea whether this term is used in the medical field, but it's what I've been using to describe the direction along which the scan slices are taken. For the left model we are looking along the scan axis; for the right model we are looking tangential to it.

Viewed tangentially, there is far less pixel data available to work from, since the resolution in that direction is limited to one sample per slice.

An important variable that can affect the readability of tangential angles is the total number of scan slices per tiled image. When looking through the DICOM files downloaded from my scans, we noticed that the naming convention of the files indicated that not all scan slices had been packaged and sent: they were named incrementally, but skipped numbers between every slice.

Neither Ben nor myself are medical experts or radiologists, but that made us wonder whether we had received a limited form of the total data package, perhaps to conserve storage space.

Volume Density

The next significant limitation is volume density, and it highlights why 3D volume visualizations only have situational benefit for the diagnosis of medical conditions, and are less effective for subtle or ‘small scale’ conditions where every pixel on a scan slice is significant.

In the first image, I have extracted the inner ear labyrinth / semicircular canals. This is an orthographic view, meaning that perspective scaling from the camera view is irrelevant.

In the context of canal dehiscence, the 3D models can provide a clearer, more immediate visual reading of the distance from the posterior semicircular canal to the cranial fossa. However (and it's a big 'however'), as you can see, the variable density of the volume can produce different visual interpretations of the distance between the two structures.

Notice in the second image how the cranial fossa and canal are practically touching in the higher density model, but seem further apart in the lower density model.

So the important question is: which of these two densities is the most representative of reality?

This is also a matter of accuracy with the original scan data. Imagine a gradient from black to white. Low density visualizations only represent the lighter section of the gradient, whereas higher density visualizations represent both the dark and light sections. If the original scan data is 'messy', meaning darker greyscale pixels (where tissue may not exist) sit next to white pixels (where tissue does exist), then the volume will make the tissue appear thicker than it actually is. Higher density visualizations effectively amplify every pixel that holds any value at all, making darker greyscale values render as if they were white.

That may have been a confusingly-worded paragraph, but hopefully it makes sense.
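
To put a rough number on that intuition: volume renderers typically convert density into opacity with an exponential (Beer-Lambert style) falloff, so scaling density up pushes even faint pixels towards full opacity. A small numpy sketch with made-up figures:

    import numpy as np

    def opacity(pixel_value, density_scale, thickness=1.0):
        """Approximate voxel opacity under Beer-Lambert absorption:
        1 - exp(-density * path length). Figures here are purely illustrative."""
        return 1.0 - np.exp(-pixel_value * density_scale * thickness)

    faint, bright = 0.2, 1.0   # a 'messy' dark-grey pixel vs. a true tissue pixel

    for scale in (1.0, 5.0, 25.0):
        print(f"density x{scale:>4}: faint pixel -> {opacity(faint, scale):.2f}, "
              f"bright pixel -> {opacity(bright, scale):.2f}")

With these made-up figures, the faint pixel is mostly transparent at low density, but at twenty-five times the density both pixels render as practically solid, which is exactly why boundaries look thicker in the denser model.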

This got me thinking that volumetric visualizations may not have a standard for density. I haven't done any research into this area, so I'm not sure whether another group has already identified this as an issue and attempted to tackle it. It may not even be sensible to create a standard until higher-resolution and 'messy-pixel-resistant' scan methods exist. Another complication is that there is no single algorithm / method / shader for representing volumes on a screen, so creating a 'density standard' would likely require a new agreement to be in place.

In Blender, we use the Principled Volume shader for creating volumes (alternatives are available), which goes hand-in-hand with the Principled BSDF shader for surface materials. There is room for improvement with both of these, but they are effectively our most 'standardized' shaders. This is fine for the shader space, but becomes more complicated when creating mesh content from scan data in geometry nodes, where there are many possible ways to produce 'voxelized' content depending on the combination of nodes used.

Taking It Further

This project is an interesting starting point for future development, with many areas that can be explored. Areas of particular interest for me are:

  • Increasing accuracy for diagnosis - seeing if there’s a way to ‘standardize’ volumetric visualizations while also maintaining artistic control.

  • More range control - seeing if there are ways to get more creative procedural range control to refine the ability to isolate tissue regions.

  • Resource building - there are resources available for bringing MRI data into Blender, but as a resource developer, it would be enjoyable for me to come up with a collection of visualization styles that are visually appealing, good for communication, and can be used easily by medical visualization artists.

  • Multi-Axis consolidation - combining scan data from multiple axes into one variable density model. This would require a method for localizing each scan in relation to the others accurately inside of the shader nodes.

  • Artwork - I imagine there is something purposeful and perhaps ‘spiritual’ about creating artwork using medical data. It would be fun to explore.

Hopefully this post has been interesting to you. If you would like to get into contact, feel free to message me on social media, or through the form on my contact page.
