An introduction to Nilearn

This notebook is about the amazing nilearn Python package for applying statistical learning techniques (from GLMs to multivariate “decoding” and connectivity techniques) to neuroimaging data. In addition, it features all kinds of neat functionality like automatic fetching of publicly available data, (interactive) visualization of brain images, and easy image operations.

In this tutorial, we’ll walk you through the basics of the package’s functionality in a step-by-step fashion. Notably, this notebook contains several exercises (which we call “ToDos”), which are meant to make this tutorial more interactive! Also, this tutorial is merely an introduction to (parts of) the Nilearn package. We strongly recommend checking out the excellent user guide and example gallery on the Nilearn website if you want to delve deeper into the package’s (more advanced) features.

Contents

  1. What is Nilearn?

  2. Data formats

  3. Data visualization

  4. Image manipulation

  5. Region extraction

  6. Connectome/connectivity analyses

Estimated time needed to complete: 1-3 hours (depending on your experience with Python)
Credits: if you end up using nilearn in your work, please cite the corresponding article.

# Let's see whether Nilearn is installed
try:
    import nilearn
except ImportError:
    # if not, install it using pip
    !pip install nilearn

What is Nilearn?

Nilearn is one of the packages in the growing “nipy” ecosystem of Python packages for neuroimaging analysis (see also MNE, nistats, nipype, nibabel, and dipy). Specifically, Nilearn provides tools for analysis techniques like functional connectivity and multivariate (machine-learning based) “decoding”, as well as more “basic” tools for image manipulation and visualization.

ToDo: Go through the Nilearn website to get an idea of what functionality the package offers. Also, for more information, check out this article about "machine learning for neuroimaging with scikit-learn", which discusses some of Nilearn's functionality in more detail.

On Nilearn’s website, you can see that the package contains several modules (such as connectome, datasets, decoding, etc.). In the following sections, we will discuss some of them.

Data formats

Nilearn’s functionality assumes that your MRI data is stored in nifti images. Many functions in Nilearn accept either strings pointing towards the path of a nifti file (or a list with multiple paths) or a Nifti1Image object from the nibabel package. Together, these two types of inputs (filenames pointing to nifti files and nibabel Nifti1Images) are often referred to as “niimgs” (or “niimg-like”) by Nilearn — a term you’ll see a lot in Nilearn’s documentation (for more info about data formats in Nilearn, see this explainer).

Before we go on, let’s actually download some example nifti files to work with. Fortunately, Nilearn has an entire module dedicated to fetching example data and other useful files (e.g., atlases): nilearn.datasets. We’ll import it below:

from nilearn import datasets

Now, we will download some example data from the famous Haxby et al. (2001) study. This can be done using the fetch_haxby function from the datasets module. We’ll download data from only a single subject for now.

# This might take a while, depending on your internet speed
data = datasets.fetch_haxby(
    data_dir=None,
    subjects=1,
    fetch_stimuli=False,
    verbose=1
)
Dataset created in /home/runner/nilearn_data/haxby2001

Downloading data from https://www.nitrc.org/frs/download.php/7868/mask.nii.gz ...
Downloading data from http://data.pymvpa.org/datasets/haxby2001/MD5SUMS ...
Downloading data from http://data.pymvpa.org/datasets/haxby2001/subj1-2010.01.14.tar.gz ...
Note: throughout this tutorial, you might see several warnings (in red) after running cells. The code is most likely executing perfectly fine (warnings are not errors!); they are often caused by other packages that Nilearn uses internally.

The fetch_haxby function returns a dictionary with the location of the downloaded files and some metadata:

from pprint import pprint  # useful function to "pretty print" dictionaries
pprint(data)
{'anat': ['/home/runner/nilearn_data/haxby2001/subj1/anat.nii.gz'],
 'description': 'Haxby 2001 results\n'
                '\n'
                '\n'
                'Notes\n'
                '-----\n'
                'Results from a classical fMRI study that investigated the '
                'differences between\n'
                'the neural correlates of face versus object processing in the '
                'ventral visual\n'
                'stream. Face and object stimuli showed widely distributed and '
                'overlapping\n'
                'response patterns.\n'
                '\n'
                'Content\n'
                '-------\n'
                'The "simple" dataset includes\n'
                "    :'func': Nifti images with bold data\n"
                "    :'session_target': Text file containing session data\n"
                "    :'mask': Nifti images with employed mask\n"
                "    :'session': Text file with condition labels\n"
                '\n'
                '\n'
                'The full dataset additionally includes\n'
                "    :'anat': Nifti images with anatomical image\n"
                "    :'func': Nifti images with bold data\n"
                "    :'mask_vt': Nifti images with mask for ventral "
                'visual/temporal cortex\n'
                "    :'mask_face': Nifti images with face-reponsive brain "
                'regions\n'
                "    :'mask_house': Nifti images with house-reponsive brain "
                'regions\n'
                "    :'mask_face_little': Spatially more constrained version "
                'of the above\n'
                "    :'mask_house_little': Spatially more constrained version "
                'of the above\n'
                '\n'
                '\n'
                'References\n'
                '----------\n'
                'For more information see:\n'
                'PyMVPA provides a tutorial using this dataset :\n'
                'http://www.pymvpa.org/tutorial.html\n'
                '\n'
                'More information about its structure :\n'
                'http://dev.pymvpa.org/datadb/haxby2001.html\n'
                '\n'
                '\n'
                '`Haxby, J., Gobbini, M., Furey, M., Ishai, A., Schouten, J.,\n'
                'and Pietrini, P. (2001). Distributed and overlapping '
                'representations of\n'
                'faces and objects in ventral temporal cortex. Science 293, '
                '2425-2430.`\n'
                '\n'
                '\n'
                'Licence: unknown.\n',
 'func': ['/home/runner/nilearn_data/haxby2001/subj1/bold.nii.gz'],
 'mask': '/home/runner/nilearn_data/haxby2001/mask.nii.gz',
 'mask_face': ['/home/runner/nilearn_data/haxby2001/subj1/mask8b_face_vt.nii.gz'],
 'mask_face_little': ['/home/runner/nilearn_data/haxby2001/subj1/mask8_face_vt.nii.gz'],
 'mask_house': ['/home/runner/nilearn_data/haxby2001/subj1/mask8b_house_vt.nii.gz'],
 'mask_house_little': ['/home/runner/nilearn_data/haxby2001/subj1/mask8_house_vt.nii.gz'],
 'mask_vt': ['/home/runner/nilearn_data/haxby2001/subj1/mask4_vt.nii.gz'],
 'session_target': ['/home/runner/nilearn_data/haxby2001/subj1/labels.txt']}

Let’s inspect the "description" entry in more detail, which describes the contents of the dictionary:

print(data['description'])
Haxby 2001 results


Notes
-----
Results from a classical fMRI study that investigated the differences between
the neural correlates of face versus object processing in the ventral visual
stream. Face and object stimuli showed widely distributed and overlapping
response patterns.

Content
-------
The "simple" dataset includes
    :'func': Nifti images with bold data
    :'session_target': Text file containing session data
    :'mask': Nifti images with employed mask
    :'session': Text file with condition labels


The full dataset additionally includes
    :'anat': Nifti images with anatomical image
    :'func': Nifti images with bold data
    :'mask_vt': Nifti images with mask for ventral visual/temporal cortex
    :'mask_face': Nifti images with face-reponsive brain regions
    :'mask_house': Nifti images with house-reponsive brain regions
    :'mask_face_little': Spatially more constrained version of the above
    :'mask_house_little': Spatially more constrained version of the above


References
----------
For more information see:
PyMVPA provides a tutorial using this dataset :
http://www.pymvpa.org/tutorial.html

More information about its structure :
http://dev.pymvpa.org/datadb/haxby2001.html


`Haxby, J., Gobbini, M., Furey, M., Ishai, A., Schouten, J.,
and Pietrini, P. (2001). Distributed and overlapping representations of
faces and objects in ventral temporal cortex. Science 293, 2425-2430.`


Licence: unknown.

Alright, now we have some data to work with. With the image module in Nilearn, we can load in and perform many different operations on nifti images. We’ll import it below:

from nilearn import image

And let’s load in the anatomical image from the Haxby dataset that we downloaded using the load_img function from the image module:

anat_img = image.load_img(data['anat'])
print("The variable anat_img is an instance of the %s class!" % type(anat_img).__name__)
The variable anat_img is an instance of the Nifti1Image class!

Note that load_img is basically the same as the nibabel function load, but with some extra functionality (like loading in a list of files using wildcards) and checks (like whether it’s really a nifti image).

Also, note that anat_img is an object instantiated from nibabel’s Nifti1Image class.

print(type(anat_img))
<class 'nibabel.nifti1.Nifti1Image'>
ToDo: In the subj1 subdirectory of the download directory, there are multiple nifti files with "masks" outlining functional regions for this particular subject. Can you load them all at once using the load_img function with a wildcard? Store the result in a variable named all_mask_imgs, which should be a 4D Nifti1Image object. Don't forget to remove the NotImplementedError!
# Try it below!

# YOUR CODE HERE
raise NotImplementedError()
''' Tests the ToDo above. '''
# Check shape (should be 5 volumes)
assert(all_mask_imgs.shape == (40, 64, 64, 5))
print("Well done!")

In general, Nilearn contains several functions that can be seen as “wrappers” around common operations that you’d normally use nibabel and/or numpy for, such as creating new Nifti images from numpy arrays (image.new_img_like), indexing (4D) images (image.index_img), and averaging 4D images across time (image.mean_img). For example, suppose we have a 3D numpy array containing, e.g., \(\hat{\beta}\) values from a GLM analysis with the same shape as our anatomical scan:

import numpy as np
arr_3d = np.random.normal(0, 1, size=anat_img.shape)
print("The variable arr_3d is an instance of the %s class!" % type(arr_3d).__name__)
The variable arr_3d is an instance of the ndarray class!

Now, suppose we want to convert this data to a nibabel Nifti1Image object, so that we can perform other operations on it or visualize it (using Nilearn). We can use the new_img_like function for this:

arr_3d_img = image.new_img_like(ref_niimg=anat_img, data=arr_3d)
print("The variable arr_3d_img is an instance of the %s class!" % type(arr_3d_img).__name__)
The variable arr_3d_img is an instance of the Nifti1Image class!

Before we go into more detail, let’s make it a little bit more interesting by focusing on the data visualization tools within Nilearn.

Data visualization

The data visualization tools in Nilearn are grouped in the plotting module. The plotting module, in our opinion, is one of Nilearn’s most useful features!

ToDo: Before going on, browse through the Nilearn documentation of the plotting module to get a feel for what's possible.

Alright, let’s start by importing the module (and telling Jupyter to plot our figures inside the notebook using %matplotlib inline):

from nilearn import plotting
%matplotlib inline

We can try plotting our anat_img (high-resolution T1-weighted image) using the plot_anat function. In the most basic approach, you can simply call it with your image (a Nifti1Image or path to a nifti file) as the first argument:

display = plotting.plot_anat(anat_img)

Note that the variable display is an instance of a custom Nilearn class (OrthoSlicer) which contains some nifty (pun intended) features as well (which we won’t discuss here, but check out Nilearn’s plotting reference).

Nilearn’s plotting functions accept many (optional) arguments that you can use to customize your plot. For example, the display_mode argument allows you to plot the image along one (or more) particular dimensions (e.g., 'x', which is usually sagittal) and the cut_coords argument allows you to specify either the number of slices/cuts (if an integer) or their exact locations (if a list):

display = plotting.plot_anat(anat_img, display_mode='x', cut_coords=[-40, -20, 0, 20, 40])
ToDo: Read through the documentation for the plot_anat function. Now, try to make the following plot of the anat_img image: 8 cuts in the coronal direction, thresholded at 50, a dimming factor of -1, and a title "T1-weighted image".
# Implement your ToDo here
# YOUR CODE HERE
raise NotImplementedError()

There are many other plotting functions besides plot_anat, which we’ll highlight when relevant in the next sections of this tutorial.

Image manipulation

Nilearn contains a lot of functionality to easily manipulate images. Note that, again, all of this could be implemented with numpy (in fact, Nilearn uses numpy “under the hood” for many of its operations); just see the Nilearn functions as “wrappers” around common numpy routines for nifti-based arrays (which also handle the updating of image affines, e.g. after resampling an image, appropriately).

To showcase some nilearn functions, we’ll use the functional MRI data we downloaded (bold.nii.gz).

func_img = image.load_img(data['func'][0])
print("Shape of functional MRI image: %s" % (func_img.shape,))
Shape of functional MRI image: (40, 64, 64, 1452)

This file, however, is quite large (~400 MB), as it contains concatenated data across 12 runs. To limit the amount of data that we need to load into RAM, we’ll only load in the data from the first run. We can find out which volumes belong to which run in the “session_target” information, which we’ll load in below as a pandas dataframe:

import pandas as pd
metadata = pd.read_csv(data['session_target'][0], sep=' ')
print("Shape of metadata dataframe: %s" % (metadata.shape,), end='\n\n')
metadata.head(20)
Shape of metadata dataframe: (1452, 2)
labels chunks
0 rest 0
1 rest 0
2 rest 0
3 rest 0
4 rest 0
5 rest 0
6 scissors 0
7 scissors 0
8 scissors 0
9 scissors 0
10 scissors 0
11 scissors 0
12 scissors 0
13 scissors 0
14 scissors 0
15 rest 0
16 rest 0
17 rest 0
18 rest 0
19 rest 0

Here, “chunks” refer to the specific run index and “labels” refers to the timepoint-by-timepoint condition (i.e., the condition of the active block at each moment in time, e.g., images of scissors, images of chairs, rest blocks, etc.). Let’s compute the number of timepoints in the first “chunk” (run):

nvol_run_1 = np.sum(metadata['chunks'] == 0)
print("Number of volumes in run 1: %i" % nvol_run_1)
Number of volumes in run 1: 121

Alright, now we can use Nilearn’s index_img function from the image module to index our func_img object.

to_index = np.arange(nvol_run_1, dtype=int)
func_img_run1 = image.index_img(func_img, to_index)
print("Shape of func_img_run1: %s" % (func_img_run1.shape,))
Shape of func_img_run1: (40, 64, 64, 121)
ToDo: With the information in the header of func_img_run1, can you determine how long (in seconds) the first run lasted? Store the answer in a variable named length_run1.
# Implement the ToDo here
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
assert(length_run1 == 302.5)

Okay, that was the boring part. Let’s do some more interesting things!

Image mathematics

Nilearn provides some functions to make your life easier when doing array mathematics on 3D or 4D images. For example, the mean_img function from the image module computes the mean across time for every voxel in a 4D image. (Note that, again, this could also be done in numpy.)

ToDo: Use the mean_img function from the image module to average across the volumes of our func_img_run1 image and store it in a new variable named mean_func.
# Implement the ToDo here. 

# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
assert(mean_func.shape == (40, 64, 64))
print("Well done!")
ToDo: Now, plot the mean_func image using the plot_epi function from the plotting module. Use whatever arguments to make the plot as pretty as you like.
# YOUR CODE HERE
raise NotImplementedError()

Another useful function from the image module is math_img. This function allows you to perform more complex mathematical operations to entire images at once. For example, suppose you want to mean-center the time series of every voxel \(v\) in an image (i.e., subtract the mean across time from each time point).

We can do that as follows using math_img:

mean_centered_img = image.math_img('img - np.mean(img, axis=3, keepdims=True)', img=func_img_run1)

As you can see, the function math_img takes a string indicating a particular (numpy) operation on the variable “img”, which is given as an argument to the function. You can give as many extra arguments (associated with particular images) to the function as you’d like. For example, the above operation can also be performed as follows:

# Recompute mean_func because of ToDo
mean_func = image.mean_img(func_img_run1)
mean_centered_img = image.math_img('img_4d - img_mean[:, :, :, np.newaxis]', img_4d=func_img_run1, img_mean=mean_func)
ToDo: Compute the voxelwise TSNR (mean across time divided by standard deviation across time for each voxel) of the func_img_run1 image using math_img and store it in a variable tsnr_func. Then, plot the image using plot_epi.
# Implement the ToDo here

# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
np.testing.assert_almost_equal(
    tsnr_func.get_fdata()[20, 30, 30],
    64.76949,
    decimal=5
)
print("Well done!")

Image preprocessing

Nilearn also provides some functionality for basic preprocessing of (functional) images. Note the adjective “basic” — most preprocessing steps (such as image registration, motion correction, susceptibility distortion correction, etc.) should be done using other packages (for which we strongly recommend Fmriprep). That said, Nilearn does provide some preprocessing functionality, such as smoothing (with image.smooth_img):

# Recompute tsnr_func because of ToDo
tsnr_func = image.math_img('img.mean(axis=3) / img.std(axis=3)', img=func_img_run1)

tsnr_func_smooth = image.smooth_img(tsnr_func, fwhm=10)
display = plotting.plot_epi(tsnr_func_smooth);

Hmm, perhaps it would be nicer to plot a thresholded version of this map on the subject’s high-resolution T1-weighted scan … Of course, Nilearn has a function for that: plot_stat_map, which takes both a “stat_map” (which can be anything, as long as it’s a 3D image) and a background image, as well as an optional threshold argument:

display = plotting.plot_stat_map(tsnr_func_smooth, bg_img=anat_img, threshold=150)

Note that you can even create interactive image viewers using, for example, the view_img function. This works especially well in Jupyter notebooks. Importantly (at least in Jupyter notebooks), you should call this plotting function at the end of the code cell and not assign its output to a variable; otherwise the viewer won’t render.

# Do not add code after this line, otherwise it won't work
plotting.view_img(tsnr_func_smooth, bg_img=anat_img, threshold=150)