Latest revision as of 20:10, 17 January 2010
3D Visualization of 2D Brain Scans
CSC400 Independent Study
This page details the components of my special studies in Spring 2009. The proposed problem, in the words of Prof D. Bickar, 'I am unable to easily show the brain regions that are affected during different stages of Parkinson's Disease. I would like to follow the progress of this disease, and in particular, observe which brain regions are involved at different times during the course of the disease.' The following programs are an attempt at simplifying the process of marking affected brain areas.
The first section details the parts of the different programs and how to make them work, and the second section details how they work. Finally, the last section contains links to websites I referenced frequently while working.
DrawPoints.py contains the class that records the mouse clicks and draws the points on the scan.
ScanPrompt.py contains all the QtDesigner-generated code for the window and elements.
start.py launches the GUI, records the clicked points, controls which image is being displayed, and saves all the points into a file called 'scandata.txt' in the directory of your scans (see #Directory Structure).
translate.py translates the points that were defined in the GUI to points in the 3D space of MRICron.
File Naming & Directory Structure
Each brain scan should be named as follows: <axis of scan>_<level of scan>.<extension>
There can be a prefix of 'scan_', or a suffix of '_C' (before the extension). The prefix is meaningless. The suffix of '_C' is important, however, because it denotes a 'center' scan. This scan is identified (by the user) to be the one which generates the bounding box, which is used to translate the coordinates from the image system into the MRICron system (see #Coordinate Translation).
The 'axis of scan' is 'horizontal', 'sagittal', or 'coronal', and should be all lowercase (though it would be simple to make this case-insensitive).
The 'level of scan' is the coordinate along the third dimension (in the horizontal images, this is the z direction).
An example filename is sagittal_0152_C.jpg.
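As a sketch, the naming convention could be parsed like this (parse_scan_name is a hypothetical helper for illustration, not part of the actual program):

```python
import os

def parse_scan_name(filename):
    """Split a scan filename into (axis, level, is_center),
    following the <axis>_<level>[_C].<extension> convention."""
    base, _ext = os.path.splitext(os.path.basename(filename))
    if base.startswith('scan_'):        # optional, meaningless prefix
        base = base[len('scan_'):]
    is_center = base.endswith('_C')     # '_C' marks a 'center' scan
    if is_center:
        base = base[:-len('_C')]
    axis, level = base.split('_')
    return axis, int(level), is_center

print(parse_scan_name('sagittal_0152_C.jpg'))   # ('sagittal', 152, True)
```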
The only requirement on directory structure is that all of your scan images be in the same folder; that folder can be placed anywhere. The output, scandata.txt, will be stored in that directory by default, but the NIfTI file(s) will be output to the same directory as the translate.py program.
How to Run
Download files in .zip here: Program.zip
Extract to your computer, make sure you have the required libraries, then use the instructions below to run the programs.
- PyQt4 to display the GUI.
- PyNIfTI (Python bindings for the niftilib C library) to read and write the NIfTI files.
Run the GUI
The first part of this project is a graphical user interface to identify points in a set of 2D scan images. This outputs a text file of points.
To launch, open a terminal, navigate to the directory with the program code, and type the following.
$ python start.py <scan directory>/
This will launch the GUI, which should look like below:
If the program launches but you still don't see a scan, you may have entered the wrong directory. Don't forget the trailing slash.
Assuming it launched correctly, the first thing you see is the center sagittal scan. You should first identify the 'bounding box': one point on each of the four edges of the scan. These will be used later to scale the points to the model brain. You can then enter any number of points to convert for MRIcron display.
When you are finished with the scan, you can click 'Next' to save your current points and work on the next image. The first three images displayed are those defined as 'center' ('_C'). After that, the images are not shown in any particular order.
Clicking 'Exit' at any time saves your clicked points to <directory>/scandata.txt and exits.
Generate MRICron Cubes
The second part of this project reads a file of scan names and points and generates NIfTI (Neuroimaging Informatics Technology Initiative) overlay files. To use this, open a terminal and run the program with:
$ python translate.py <filename>
This will generate a number of 'cube*.nii' files, one for each point identified through the GUI. Each of these cube files can be imported through MRICron in Overlay > Add Overlay.
After adding all the overlays, you should get a cube at each location you clicked. For the example scandata.txt, the overlays look like below (you can only see two of the three due to the viewpoint).
Format of Scandata.txt
Below is an example of the output file scandata.txt.
sagittal_0152_C.jpg
[88, 353]
[559, 32]
[943, 301]
[667, 632]
[181, 168]
coronal_2240_C.jpg
[88, 450]
[657, 11]
[949, 328]
[556, 693]
[255, 270]
horizontal_1000_C.jpg
[125, 415]
[563, 60]
[973, 416]
[552, 676]
[542, 416]
The first line is the filename, the next four are the bounding-box points, and the rest (before the next filename) are the points to be displayed in MRICron. If just this were input into translate.py, it would output three cube*.nii files, each containing a cube centered at one of the points: (152,181,168), (255,2240,270), and (542,416,1000).
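A hypothetical parser for this layout (illustration only; translate.py's actual reader may differ):

```python
def parse_scandata(lines):
    """Group scandata.txt lines into {filename: (bbox, points)}:
    a filename line starts a new scan, and each '[x, y]' line
    adds a point to the current one."""
    raw = {}
    current = None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith('['):               # a point, e.g. "[88, 353]"
            x, y = line.strip('[]').split(',')
            raw[current].append((int(x), int(y)))
        else:                                  # a new scan filename
            current = line
            raw[current] = []
    # the first four points are the bounding box, the rest are markers
    return {f: (pts[:4], pts[4:]) for f, pts in raw.items()}

example = """sagittal_0152_C.jpg
[88, 353]
[559, 32]
[943, 301]
[667, 632]
[181, 168]"""
bbox, points = parse_scandata(example.splitlines())['sagittal_0152_C.jpg']
print(bbox, points)
```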
The two coordinate systems (image vs. MRICron) point in opposite directions: the origin in the scan images is at the top middle front, while the origin in MRICron is at the bottom left back. (Note: it's actually offset from this by a vector; see the code.)
This was fixed by manipulating the points in translate.points(). As you can see in the image below, we are given the location of the red point in the general image coordinate system (origin at top left). We have also defined the bounding box around the brain by clicking the four points, b0-3 (shown here in blue). Using these points we've saved information about the height and width of the bounding box, and also a point p0 showing the offset, from the origin, of the box. This all happens in translate.points.setRefs().
We want the red point in terms of the green coordinate system (origin at the bottom right of the bounding box). To do this, we compute the following:
newx = refx - (x - p0x)
Where refx is the width of the bounding box, x is the x-value of the point in top-left coordinates, and p0x is the x-value of p0. Doing this to the y- and z-values as well will translate the point into MRICron coordinates.
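This translation can be sketched as a small Python helper (a hypothetical function for illustration, not the actual code in translate.points()):

```python
def flip(x, refx, p0x):
    """Move one coordinate from the image system (origin at the
    top left) into the bounding-box system on the opposite edge:
    subtract the box offset, then mirror across the box width."""
    return refx - (x - p0x)

# The sagittal example's bounding box spans x = 88..943 (width 855),
# so the clicked point at x = 181 becomes:
print(flip(181, 855, 88))   # 762
```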
Now that the directions are correct, the point still needs to be scaled to MRICron's values. We find the ratio of MRICron bounding box to the image box, multiply that by the image point, and add the offset mentioned earlier.
ImgPoint * (MRICronBox / RefBox) + offset
Currently the MRICron bounding box is defined in the program itself, but in the future it could be detected automatically, perhaps by using the NiftiImage.bbox property.
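The scaling step can be sketched the same way (the MRICron reference size and offset below are placeholder numbers, not the values hardcoded in translate.py):

```python
def scale_to_mricron(img_val, img_ref, mri_ref, offset):
    """Scale one flipped image coordinate into MRICron space:
    multiply by the ratio of the MRICron bounding box to the
    image bounding box, then add the MRICron origin offset."""
    return img_val * (mri_ref / float(img_ref)) + offset

# e.g. an 855-pixel-wide image box mapped onto a (made-up)
# 140-unit MRICron box whose origin is offset by -70:
print(scale_to_mricron(762, 855, 140.0, -70.0))
```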
The NIfTI file format is actually two parts: a header and an image. These can be two separate files (*.hdr and *.img) or one (*.nii). I chose to use the latter. The header has all the information about positioning, dimensions (NIfTI allows up to 7), and any other information you might need.
The image section allows you to define an array of 'voxels' (a portmanteau of 'volume' and 'pixel') using a 3D numpy array. The values in this array can be any number; I chose to keep it simple by defining a 10x10x10 array of '1's. By default, this point is placed at the origin of the MNI coordinate system (MRICron displays two systems, xyz and MNI). The points generated from translate.points.scale() are also in MNI format. To get the cube in the correct position, you just need to translate it using the point as a displacement vector.
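The voxel data itself is just a numpy block of ones; a minimal sketch (wrapping it in a NIfTI image and saving it as cube*.nii is omitted here):

```python
import numpy as np

# One cube overlay: a 10x10x10 block of '1' voxels. In the actual
# program this array would be handed to PyNIfTI to be written out
# as a .nii file; here we only build the data.
cube = np.ones((10, 10, 10), dtype=np.int16)
print(cube.shape, int(cube.sum()))   # (10, 10, 10) 1000
```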
The header contains two ways of translating points, the qform and sform matrices. This page has a comprehensive overview of which to use for various image manipulations. We are in situation 5, and so can use either:
Apply an arbitrary affine transformation to the image
On one level this is straightforward - the affine transformation needs to be expressed as a change in voxel coordinates, A, and then the above formula applied to the qform and/or sform. [...] If the reference image's sform matrix is known and the intention is to align the two coordinate systems then the reference sform should be copied.
Since the sform is a simple transformation matrix, whereas the qform is slightly more complicated, I chose to use the sform. The translate program assumes that you have a copy of the model (ch2better.nii) in the same directory. It opens that file and copies the sform (the 'reference image sform' mentioned in the quote), then changes the last column of data to the displacement vector (the desired point), making it look like:
[ A11 | A12 | A13 | px ]
[ A21 | A22 | A23 | py ]
[ A31 | A32 | A33 | pz ]
[ 0 | 0 | 0 | 1 ]
When applied to the voxels in the cube file (which start at the origin), this will displace the x-value by px, the y-value by py, and the z-value by pz. More about affine transformations.
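Building that matrix can be sketched with numpy (displaced_sform is a hypothetical helper; the real program copies the reference sform out of ch2better.nii, e.g. via PyNIfTI's getSForm):

```python
import numpy as np

def displaced_sform(ref_sform, point):
    """Copy a reference sform and overwrite its last column with
    the displacement vector (px, py, pz), as shown above."""
    s = np.array(ref_sform, dtype=float)   # copy, don't modify the ref
    s[:3, 3] = point                       # px, py, pz into the last column
    s[3] = [0, 0, 0, 1]                    # bottom row of any affine matrix
    return s

# With an identity reference sform, only the displacement remains:
print(displaced_sform(np.eye(4), (5, -10, 20)))
```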
Outstanding Issues & Other Improvements
Some of the following issues are easy to fix, but due to time constraints I couldn't get to them. For those, I've suggested potential solutions.
- The MRICron bounding box is hardcoded, but could potentially be fixed using the NiftiImage.bbox property translated back through the sform matrix.
- You still need to click four times on each image, even though that information is only used for the 'C' scans. This could be solved by adding an extra case to translate.py so that it skips the first four points only when the file is a 'center' file.
- The bounding box is used to scale points, which may not match up correctly with actual brain anatomy. I'm not sure how to solve this, as the model brain doesn't have any anatomical markers.
- Currently doesn't indicate any kind of temporal information. The nifti format does support this information, so it would be possible to implement. The page detailing which matrix to use for different image manipulations also mentions how to combine a series of images over time.
- Perhaps the program could identify the problem areas itself somehow, if the area appears different than the rest of the brain.