A neuroinformatics resource to store, visualize, and mine anatomical and functional data


Web-browsers and settings

Most browsers support our programs to one extent or another, with the notable exception of Microsoft Internet Explorer.  They work on most reasonably recent versions of Firefox, Google Chrome (under Mac OS X but not under Linux), and Safari, with a slight bit of finagling.  We suggest using Chrome.

If you're running Safari, you'll need to enable WebGL, since it isn't enabled by default.  To do so, go to the "Safari" menu and select "Preferences."

Choose the "Advanced" tab on the "Preferences" window and tick "Show Develop in menu bar."

The "Develop" menu should show up.

Pull it down and select "Enable WebGL."


Examples (preliminary)

A VERY simpleminded volume-visualization test:

http://tirebiter.ucsd.edu/salkdemo

The interaction tools available in the viewer are listed here:

https://github.com/xtk/X/wiki/X%3AInteraction


1) Rough segmentation: I imported the original DICOM images into Web Image Browser (WIB) and, using the manual tracing tools, performed a rough manual segmentation, separating the brain from the surrounding tissue:

http://tirebiter.ucsd.edu/brain/combined.html

http://tirebiter.ucsd.edu/brain/volume.html


2) I then ran the Marching Cubes algorithm over the segmented volume to extract a surface:

http://tirebiter.ucsd.edu/brain/surface.html
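
For illustration, here is a minimal Python sketch of this step using scikit-image's implementation of Marching Cubes; the file name and voxel spacing are placeholders, not the actual data.

    # Minimal Marching Cubes sketch (scikit-image); file name and voxel
    # spacing below are placeholders for the segmented DICOM volume.
    import numpy as np
    from skimage import measure

    volume = np.load("brain_segmentation.npy")   # binary mask, shape (z, y, x)

    # Extract an isosurface at the mask boundary; "spacing" should match
    # the scanner's voxel size so the mesh comes out in millimeters.
    verts, faces, normals, values = measure.marching_cubes(
        volume.astype(np.float32), level=0.5, spacing=(1.0, 0.5, 0.5))

    print(len(verts), "vertices,", len(faces), "triangles")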


3) As you can see, because of the relative coarseness of the volume, there are a considerable number of artifacts.  I used the free interactive tool MeshLab

http://meshlab.sourceforge.net/

to smooth the surface and, using a custom-developed tool, I "colored" the brain based upon the intensities of the voxels in the neighborhood of each vertex, producing

http://tirebiter.ucsd.edu/brain/mesh.html
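
The "coloring" step amounts to looking up a smoothed MRI intensity around each mesh vertex and turning it into a gray value.  A rough sketch (file names are hypothetical, and the vertices are assumed to be in voxel coordinates):

    import numpy as np
    from scipy import ndimage

    mri = np.load("brain_volume.npy")        # original intensity volume
    verts = np.load("brain_verts.npy")       # mesh vertices, voxel coordinates

    # Mean-filter the volume once, then sample it at each (rounded) vertex.
    smoothed = ndimage.uniform_filter(mri.astype(np.float32), size=3)
    idx = np.clip(np.round(verts).astype(int), 0, np.array(mri.shape) - 1)
    intensity = smoothed[idx[:, 0], idx[:, 1], idx[:, 2]]

    # Normalize to 0-255 gray levels to drive the per-vertex mesh color.
    gray = (255 * (intensity - intensity.min()) / np.ptp(intensity)).astype(np.uint8)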


4) I imported the CAD model of the chamber (not to scale) and roughly positioned it by manually estimating the offsets and rotation angle of the chamber relative to the brain:

http://tirebiter.ucsd.edu/brain/nmesh.html
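
Numerically, the rough placement is just a rigid transform applied to the chamber mesh.  A sketch using the trimesh library (file names, angle, and offsets are made-up values):

    import numpy as np
    import trimesh

    chamber = trimesh.load("chamber.stl")

    # Rotate about the vertical axis by the estimated angle, then translate
    # by the estimated offsets of the chamber relative to the brain.
    rotation = trimesh.transformations.rotation_matrix(np.radians(25.0), [0, 0, 1])
    chamber.apply_transform(rotation)
    chamber.apply_translation([12.0, -30.0, 45.0])   # mm, estimated by eye

    chamber.export("chamber_positioned.stl")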


5) I resliced the mesh and imported the contours into WIB,

http://tirebiter.ucsd.edu/WebImageBrowser/?plugin=SLASH&datasetID=10784357&modelID=4

where the contours were edited to more closely conform to the brain's morphology.
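
The reslicing can be done by cutting the smoothed surface with a stack of parallel planes; each cut yields the closed contours that are then imported into WIB.  A sketch with trimesh (file name and slice spacing hypothetical):

    import numpy as np
    import trimesh

    surface = trimesh.load("smoothed_brain.stl")

    z_min, z_max = surface.bounds[:, 2]
    contours = []
    for z in np.arange(z_min, z_max, 1.0):           # 1 mm slice spacing
        section = surface.section(plane_origin=[0, 0, z],
                                  plane_normal=[0, 0, 1])
        if section is None:
            continue
        planar, _ = section.to_planar()              # flatten the cut to 2-D
        contours.append((z, planar.polygons_full))   # closed polygons per slice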

If you haven't used the Web Image Browser, please refer to the online help through the pulldown menu or at

http://tirebiter.ucsd.edu/WebImageBrowser/WIBHelp.html


6) The contours were extracted from the database, reprocessed through Marching Cubes to produce a surface, smoothed through MeshLab, and regenerated as a mesh (chamber not to scale):

http://tirebiter.ucsd.edu/brain/nbmesh.html

The revised STL mesh may be retrieved at

http://tirebiter.ucsd.edu/brain/newmesh.stl

It is useful to note at this point that some of these steps are redundant or extraneous; they describe the process of data exploration I followed to arrive at the model in step 6.  Furthermore, in a "production" workflow, automatic or semiautomatic segmentation tools could be used or adapted to perform most of the data reduction.  In particular, there are well-known tools for segmentation of MRI brain images (e.g., FSL's BET or FreeSurfer) that could be used to perform the initial segmentations.


With respect to the further work discussed above, probably the easiest way to precisely position the chamber with respect to the brain is to use the free tool Blender.  Since Blender is highly extensible through Python scripting, it might also be used to perform spatially-based queries against the database.
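
As a sketch of what that might look like, the chamber's pose can be set numerically from Blender's Python console; the object names and numbers below are hypothetical and assume both meshes have already been imported into the scene.

    # Run inside Blender (the bpy module ships with it).
    import math
    import bpy

    chamber = bpy.data.objects["Chamber"]
    brain = bpy.data.objects["Brain"]

    # Set the chamber pose explicitly instead of dragging it by hand.
    chamber.location = (12.0, -30.0, 45.0)
    chamber.rotation_euler = (0.0, 0.0, math.radians(25.0))

    print("Chamber offset from brain origin:", tuple(chamber.location - brain.location))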

A web-based tool could also be built by writing a small custom application based on XTK

http://www.goXTK.com

and the Google Web Toolkit (GWT)

http://www.gwtproject.org/

Nick Chernov (Vanderbilt) is using:

http://www.artofillusion.org/

http://en.wikipedia.org/wiki/Vector_Markup_Language

However, there is no apparent way to connect Art of Illusion with the database.




Magnetic Resonance Imaging

Identification of cortical lamination in awake monkeys by high resolution magnetic resonance imaging.  Gang Chen, Feng Wang, John C. Gore, Anna W. Roe:

http://www.sciencedirect.com/science/article/pii/S1053811911012432




Database

Will be based on PostGIS:

http://postgis.net/
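
To give a flavor of the kind of query this enables, here is a sketch using psycopg2; the table, columns, and coordinates are hypothetical placeholders for whatever schema we settle on.

    import psycopg2

    conn = psycopg2.connect("dbname=neuro user=wib")
    cur = conn.cursor()

    # Find all virus injection sites within 2 mm of a point on an
    # electrode trajectory (coordinates in the MRI reference frame).
    cur.execute("""
        SELECT injection_id, virus, ST_AsText(location)
        FROM injections
        WHERE ST_3DDWithin(location,
                           ST_SetSRID(ST_MakePoint(%s, %s, %s), 0),
                           2.0)
    """, (12.4, -30.1, 45.7))

    for row in cur.fetchall():
        print(row)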





User workflow


Reconstruct a 3-D volume and surface of the head, skull and brain, using the MRI sections from the subject

Segment skull and brain surface.  We have a functional prototype based on Web Image Browser, MeshLab, and custom tools (Fig. 2A)  –  Other options?

The reconstruction would be made in successive approximations, with each pass improving the contours of the segmentation.


Select a "reference" or "master" image showing the surface of the brain inside the implant

Choose the best image of the brain and chamber in the set (e.g. Fig. 1A of the NI2013 poster).  Rotate the image to the standard anatomical view (say, rostral at the top).  Correct the image for contrast and luminance.  Extract the green channel and produce a new black-and-white image from it; this improves the contrast between blood vessels and tissue (Fig. 1G).

If the image is not isotropic (e.g. a circle appears as an oval), correct it by rescaling the appropriate dimension.  Crop the image to remove elements outside the chamber.
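
A sketch of these image-preparation steps in Python (Pillow/NumPy); the file name, anisotropy factor, and crop box are placeholders, and an RGB photo is assumed.

    import numpy as np
    from PIL import Image

    photo = Image.open("chamber_photo.tif").rotate(90, expand=True)

    # The green channel gives the best vessel/tissue contrast.
    master = Image.fromarray(np.asarray(photo)[:, :, 1])

    # If a circular chamber appears as an oval, stretch the short axis.
    w, h = master.size
    master = master.resize((w, int(h * 1.08)))

    # Crop to the chamber interior (left, upper, right, lower, in pixels).
    master = master.crop((120, 80, 920, 880))
    master.save("master_green.png")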


Register the "master" photo on the 3-D MRI brain surface reconstruction

Use an interactive tool to position the master photo on the brain surface, aligning the photo to the sulci and blood vessels shown on the surface reconstruction.


Register an Optical-Imaging map or epifluorescence photos to the master photo

I see two alternatives for doing this: (1) place each new image on the 3-D reconstruction, registering it to the master photo, or (2) register each new image to the master photo outside the 3-D reconstruction, and then calculate the transformation to place it on the 3-D reconstruction.  –  Other options? Which solution would be better?
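
As a sketch of alternative (2), a 2-D transform can be estimated from a few landmark pairs (e.g. vessel bifurcations picked in both images) and used to warp the new image into the master photo's pixel frame; the landmarks, file names, and output size below are hypothetical.

    import numpy as np
    from skimage import io, transform

    new_image = io.imread("fluorescence_day12.png")

    # Matching landmarks: (x, y) in the new image and in the master photo.
    src = np.array([[105, 212], [340, 188], [287, 402]], dtype=float)
    dst = np.array([[118, 230], [352, 205], [300, 421]], dtype=float)

    tform = transform.SimilarityTransform()
    tform.estimate(src, dst)

    # warp() wants the inverse mapping (master -> new-image coordinates).
    registered = transform.warp(new_image, tform.inverse, output_shape=(960, 960))
    io.imsave("fluorescence_day12_registered.png", (255 * registered).astype(np.uint8))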


Produce a 3-D model of the chamber, and place this model on the 3-D reconstruction

This will make it possible to locate the optimal position for a new chamber to be implanted over a specific brain structure, or to model a chamber already in place. The model will enable calculation of pipette and electrode trajectories inside the brain.
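
The trajectory calculation itself is straightforward once the chamber's pose is known: an electrode track is the chamber-grid entry point plus a depth along the chamber axis.  A sketch with made-up numbers:

    import numpy as np

    chamber_center = np.array([12.0, -30.0, 45.0])   # mm, MRI space
    chamber_axis = np.array([0.0, 0.26, -0.97])      # points down into the brain
    chamber_axis /= np.linalg.norm(chamber_axis)

    grid_offset = np.array([1.5, -0.5, 0.0])         # electrode position in the grid
    depths = np.linspace(0.0, 8.0, 81)               # sample every 0.1 mm

    entry = chamber_center + grid_offset
    trajectory = entry + depths[:, None] * chamber_axis   # (81, 3) points in MRI space
    print("Tip position at 8 mm:", trajectory[-1])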


Add the location of virus injections and electrode penetrations to the spatial database

Define a coordinate system on the exposed brain surface, based on key bifurcations of the surface blood vessels.

For each micropipette and electrode penetration, locate its insertion point in the master photo. Measure the coordinates of this point and store them in the database.
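
A sketch of such a vessel-based coordinate system: two key bifurcations define the origin and the x-axis, and each insertion point is expressed in that frame before being stored (pixel values hypothetical).

    import numpy as np

    bif_a = np.array([233.0, 411.0])     # origin: a prominent bifurcation (px)
    bif_b = np.array([518.0, 395.0])     # a second bifurcation fixes the x-axis

    x_axis = (bif_b - bif_a) / np.linalg.norm(bif_b - bif_a)
    y_axis = np.array([-x_axis[1], x_axis[0]])       # perpendicular axis

    insertion_px = np.array([310.0, 472.0])          # clicked on the master photo
    rel = insertion_px - bif_a
    insertion_local = np.array([rel @ x_axis, rel @ y_axis])
    print("Insertion point in vessel coordinates:", insertion_local)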





Queries to the system

Reconstruct the trajectory of a micropipette or electrode penetration, in the MRI volume reconstruction
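
One way to realize this query: sample the MRI volume along the stored track to get an intensity profile of the structures the electrode passed through.  A sketch, assuming the trajectory is already expressed in voxel coordinates (file name and endpoints hypothetical):

    import numpy as np
    from scipy import ndimage

    mri = np.load("brain_volume.npy")

    # 200 points from the entry point to the tip, in voxel coordinates.
    trajectory_vox = np.linspace([40, 60, 10], [44, 75, 90], num=200)

    profile = ndimage.map_coordinates(mri.astype(np.float32),
                                      trajectory_vox.T, order=1)
    print("MRI intensity along the penetration:", profile[:5], "...")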


Show intensity and spread of epifluorescence over time

Show, one after another and on repeat, the fluorescence images captured during the life of the implant.
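
A sketch of how the intensity and spread could be quantified per session, assuming the registered images are stored as files (paths and threshold hypothetical):

    import glob
    import numpy as np
    from skimage import io

    for path in sorted(glob.glob("registered/fluorescence_day*.png")):
        img = io.imread(path, as_gray=True)
        roi = img[80:880, 120:920]                   # chamber interior
        spread_px = np.count_nonzero(roi > 0.3)      # pixels above threshold
        print(path, "mean:", roi.mean(), "spread (px):", spread_px)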


Injections or electrode penetrations on epifluorescence images

Show the virus injection sites, electrode penetrations, or optical imaging results on any of the registered images.
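
A sketch of the overlay itself, plotting stored site coordinates on top of a registered image (file name and coordinates hypothetical):

    import matplotlib.pyplot as plt
    from skimage import io

    img = io.imread("registered/fluorescence_day12_registered.png")
    injections = [(310, 472), (355, 390)]            # (x, y) in master-photo pixels
    penetrations = [(288, 430)]

    plt.imshow(img, cmap="gray")
    plt.scatter(*zip(*injections), c="lime", marker="o", label="injections")
    plt.scatter(*zip(*penetrations), c="red", marker="x", label="penetrations")
    plt.legend()
    plt.show()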



Link each spot of the exposed brain to neural and behavioral data stored in the database






... To be continued ...