\n",
"**ToDo** (2 points):\n",
" \n",
"*Note*: this is a difficult ToDo! \n",
"\n",
"Below, we load in new data (`slice_data`) from a very short experiment (60 seconds, TR = 3). The data contain one voxel from each of the 32 slices. Implement slice-time correction by resampling each slice to the onsets of the middle slice (slice no. 16). Use \"cubic\" interpolation (by setting `kind=\"cubic\"` when initializing your resampler). Please store the results in the pre-allocated array `stc_data`.\n",
"\n",
"Tip 1: define the TR, number of volumes, length of experiment, and `dt`.\n",
"\n",
"Tip 2: before looping over the different slices, define the onsets of the reference slice (remember: Python is 0-indexed).\n",
"\n",
"Tip 3: you have to initialize your resampler again for every iteration of your loop!\n",
"\n",
"Tip 4: when initializing your resampler, set `fill_value='extrapolate'`.\n",
"\n",
"Tip 5: plot your data before and after slice-time correction to see whether it worked properly (it should show you more-or-less aligned timeseries after STC, which may still show some differences in amplitude). Not part of the ToDo, but a good way to check your implementation.\n",
"\n",
"Note: this is a ToDo which requires a little more code than usual!\n",
""
]
},
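To make the resampling idea concrete, here is a sketch of cubic slice-time resampling on *synthetic* data (not the graded `slice_data` — all names and shapes below are illustrative choices that merely mirror the ToDo's setup):

```python
import numpy as np
from scipy.interpolate import interp1d

TR, n_vols, n_slices = 3, 20, 32
rng = np.random.default_rng(0)
data = rng.standard_normal((n_vols, n_slices))  # synthetic (time x slices) data

slice_dt = TR / n_slices           # time between consecutive slice acquisitions
ref_slice = 16                     # resample everything to this slice's onsets
ref_onsets = np.arange(n_vols) * TR + ref_slice * slice_dt

stc = np.zeros_like(data)
for s in range(n_slices):
    onsets = np.arange(n_vols) * TR + s * slice_dt
    # a fresh resampler per slice; cubic interpolation, extrapolating at the edges
    resampler = interp1d(onsets, data[:, s], kind="cubic", fill_value="extrapolate")
    stc[:, s] = resampler(ref_onsets)
```

Note that the reference slice itself is unchanged by this procedure, since interpolation passes exactly through the original sample points.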
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "9b1db3082ee09d212624efb18d66c171",
"grade": false,
"grade_id": "cell-3cf95c7cc4ec7e18",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"slice_data = np.load('stc_data.npy')\n",
"print(\"Slice data is of shape (time x slices): %s\" % (slice_data.shape,))\n",
"\n",
"stc_data = np.zeros(slice_data.shape)\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "e4e0c4c6a609e9a8cf07c3ccdd63380e",
"grade": true,
"grade_id": "cell-13dc95bb058ac9b1",
"locked": true,
"points": 2,
"schema_version": 3,
"solution": false,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"\"\"\" Tests the above ToDo. \"\"\"\n",
"from niedu.tests.nii.week_4 import test_stc_todo\n",
"test_stc_todo(slice_data, stc_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### A short primer on the frequency domain and the Fourier transform\n",
"Now, before we delve into important temporal preprocessing operations, let's discuss how we can represent time series in the frequency domain using the Fourier transform.\n",
"\n",
"Thus far, we've always looked at our fMRI-signal as activity that varies across **time**. In other words, we're always looking at the signal in the *time domain*. However, there is also a way to look at a signal in the *frequency domain* (also called 'spectral domain') through transforming the signal using the *Fourier transform*. \n",
"\n",
"Basically, the Fourier transform calculates the degree to which sine waves of different frequencies are present in your signal. If a sine wave of a certain frequency (let's say 2 hertz) is (relatively) strongly present in your signal, it will have a (relatively) high *power* in the frequency domain. Thus, looking at the frequency domain of a signal can tell you something about the frequencies of the (different) sources underlying your signal.\n",
"\n",
"This may sound quite abstract, so let's look at some examples."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# start with importing the python packages we'll need \n",
"from niedu.utils.nii import create_sine_wave\n",
"from numpy.linalg import inv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sine waves are oscillating signals that have (for our purposes) two important characteristics: their *frequency* and their *amplitude*. Frequency reflects how fast a signal is oscillating (how many cycles it completes in a given time period) and the amplitude is the (absolute) height of the peaks and troughs of the signal. To illustrate this, we generate a couple of sine-waves (with a sampling rate of 500 Hz, i.e., 500 samples per second) with different amplitudes and frequencies, which we plot below:"
]
},
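`create_sine_wave` is a course helper from `niedu`; presumably it does little more than the following sketch (the exact signature is an assumption, inferred from how it is called below):

```python
import numpy as np

def create_sine_wave(timepoints, frequency=1, amplitude=1):
    """Sine wave with `frequency` (Hz) and `amplitude`, sampled at `timepoints` (sec)."""
    return amplitude * np.sin(2 * np.pi * frequency * timepoints)

t = np.arange(0, 5, 1.0 / 500)   # 5 seconds at a 500 Hz sampling rate
wave = create_sine_wave(t, frequency=2, amplitude=3)
```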
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"max_time = 5\n",
"sampling_rate = 500\n",
"timepoints = np.arange(0, max_time, 1.0 / sampling_rate)\n",
"\n",
"amplitudes = np.arange(1, 4)\n",
"frequencies = np.arange(1, 4)\n",
"sines = []\n",
"\n",
"fig, axes = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(13, 8))\n",
"for i, amp in enumerate(amplitudes):\n",
" \n",
" for ii, freq in enumerate(frequencies):\n",
" this_ax = axes[i, ii]\n",
" \n",
" if ii == 0:\n",
" this_ax.set_ylabel('Activity (A.U.)', fontsize=14)\n",
" \n",
" if i == 2:\n",
" this_ax.set_xlabel('Time (seconds)', fontsize=14)\n",
" \n",
" sine = create_sine_wave(timepoints, frequency=freq, amplitude=amp) \n",
" sines.append((sine, amp, freq))\n",
" this_ax.plot(timepoints, sine)\n",
" this_ax.set_xlim(0, 5)\n",
" this_ax.set_title('Amp = %i, freq = %i' % (amp, freq), fontsize=18)\n",
" this_ax.set_ylim(-3.5, 3.5)\n",
" this_ax.grid()\n",
"\n",
"fig.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the signals vary in their amplitude (from 1 to 3) and their frequency (from 1 to 3). Make sure you understand these characteristics! Now, we are going to use the fast Fourier transform (FFT) to plot the same signals in the *frequency domain*. We're not going to compute the FFT ourselves, but instead use a function that computes the \"power spectral density\" directly (which makes life a little bit easier): the `periodogram` function from `scipy.signal`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy.signal import periodogram"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, the `periodogram` function takes two arguments, the signal and the sampling frequency (the sampling rate in Hz with which you recorded the signal), and returns both the reconstructed frequencies and their associated power values. An example:\n",
"\n",
"```python\n",
"freqs, power = periodogram(some_signal, 1000) # sampling_rate = 1000 Hz\n",
"```\n",
"\n",
"We'll use the `periodogram` function to plot the 9 sine-waves (from the previous plot) again, but this time in the frequency domain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(13, 8))\n",
"\n",
"for i, ax in enumerate(axes.flatten()):\n",
" sine, amp, freq = sines[i]\n",
" title = 'Sine with amp = %i and freq = %i' % (amp, freq)\n",
" freq, power = periodogram(sine, sampling_rate)\n",
" ax.plot(freq, power)\n",
" ax.set_xlim(0, 4)\n",
" ax.set_xticks(np.arange(5))\n",
" ax.set_ylim(0, 25)\n",
" \n",
" if i > 5:\n",
" ax.set_xlabel('Frequency (Hz)', fontsize=15)\n",
" \n",
" if i % 3 == 0:\n",
" ax.set_ylabel('Power', fontsize=15)\n",
"\n",
" ax.set_title(title, fontsize=15)\n",
" ax.grid()\n",
" \n",
"fig.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the frequency domain correctly 'identifies' the amplitudes and frequencies from the signals. But the real 'power' of Fourier transforms is that they can decompose a signal into *multiple underlying oscillatory sources*. Let's see how that works. We're going to load in a time series, recorded for 5 seconds, of which we don't know the underlying oscillatory sources. First, we'll plot the signal in the time domain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"mystery_signal = np.load('mystery_signal.npy')\n",
"plt.figure(figsize=(15, 5))\n",
"plt.plot(np.arange(0, 5, 0.001), mystery_signal)\n",
"plt.title('Time domain', fontsize=25)\n",
"plt.xlim(0, 5)\n",
"plt.xlabel('Time (sec.)', fontsize=20)\n",
"plt.ylabel('Activity (A.U.)', fontsize=20)\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToDo** (2 points): It's hard to see which frequencies (and corresponding amplitudes) are present in this 'mystery signal'. Get the frequencies and power of the signal using the `periodogram` function (you have to deduce the sampling rate of the signal yourself! It is *not* the variable `sampling_rate` from before). Set the x-limit of the x-axis to (0, 8) (`plt.xlim(0, 8)`). Also, give the plot appropriate labels for the axes.\n",
""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "430da1112b529d08922661cb1167f6b2",
"grade": true,
"grade_id": "cell-031446c144829664",
"locked": false,
"points": 2,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Implement your ToDo here\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you know that you can use a visualization of the signal in the frequency domain to help you understand which underlying frequencies your signal is built from. Unfortunately, real fMRI data is not as 'clean' as the simulated sine waves we have used here, but the frequency representation of the fMRI signal can still tell us a lot about the nature and contributions of different (noise- and signal-related) sources!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Frequency characteristics of fMRI data\n",
"Now, we will load a (much noisier) voxel signal and the corresponding design-matrix (which has just one predictor apart from the intercept). The signal was measured with a TR of 2 seconds and contains 300 volumes (timepoints), so the duration was 600 seconds. The predictor reflects an experiment in which we showed 15 stimuli in intervals of 40 seconds (i.e., one stimulus every 40 seconds).\n",
"\n",
"We'll plot both the signal ($y$) and the design-matrix ($X$; without intercept):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from niedu.utils.nii import simulate_signal\n",
"\n",
"onsets = np.arange(0, 600, 40)\n",
"sig, X = simulate_signal(\n",
" onsets,\n",
" ['stim'] * onsets.size,\n",
" duration=600,\n",
" TR=2,\n",
" icept=0,\n",
" params_canon=[10],\n",
" std_noise=5,\n",
" rnd_seed=29,\n",
" phi=0.95,\n",
" plot=False\n",
")\n",
"X = X[:, :-1] # trim off the temporal derivative\n",
"\n",
"print(\"Shape of X: %s\" % (X.shape,))\n",
"print(\"Shape of y (sig): %s\" % (sig.shape,))\n",
"\n",
"plt.figure(figsize=(15, 8))\n",
"plt.subplot(2, 1, 1)\n",
"plt.plot(sig)\n",
"plt.xlim(0, sig.size)\n",
"plt.title('Signal in time domain', fontsize=25)\n",
"plt.ylabel('Activity (a.u.)', fontsize=20)\n",
"plt.grid()\n",
"\n",
"plt.subplot(2, 1, 2)\n",
"plt.plot(np.arange(sig.size), X[:, 1], c='tab:orange')\n",
"plt.title('Predictor in time domain', fontsize=25)\n",
"plt.xlabel('Time (TR)', fontsize=15)\n",
"plt.xlim(0, sig.size)\n",
"plt.ylabel('Activity (a.u.)', fontsize=20)\n",
"plt.ylim(-0.5, 1.5)\n",
"plt.tight_layout()\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToDo** (2 points): Run linear regression using the variable `X` (which already contains an intercept) to explain the variable `sig`. Calculate the model's MSE, and store this in a variable named `mse_no_filtering`. Then, in the next code-cell, plot the signal and the predicted signal ($\\hat{y}$) in a single figure. Give the axes sensible labels and add a legend.\n",
"\n",
"**Tip** (feel free to ignore): In this tutorial, you'll be asked to compute t-values, R-squared, and MSE for several models quite a few times. To make your life easier, you could (but certainly don't have to!) write a function that runs, for example, linear regression and returns the MSE, given a design (X) and signal (y). For example, this function could look like:\n",
"\n",
"```python\n",
"def compute_mse(X, y):\n",
"    # you implement the code here (run lstsq, calculate yhat, etc.)\n",
"    # ...\n",
"    # and finally, after you've computed the model's MSE, return it\n",
"    return mse\n",
"```\n",
"\n",
"If you're ambitious, you can even write a single function that calculates t-values, MSE, and R-squared. This could look something like this:\n",
"\n",
"```python\n",
"def compute_all_statistics(X, y, cvec):\n",
" # Implement everything you want to know and return it\n",
" # ...\n",
" return t_value, MSE, r_squared # and whatever else you've computed!\n",
"```\n",
"\n",
"Doing this will save you a lot of time and may prevent you from making unnecessary mistakes (like overwriting variables, typos, etc.). Lazy programmers are the best programmers!\n",
"\n",
"(Note: writing these functions is *optional*!)"
]
},
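If you do go the function route, a minimal runnable version of the tip's `compute_mse` could look like the sketch below (one possibility among many; it assumes `X` already includes the intercept):

```python
import numpy as np

def compute_mse(X, y):
    # least-squares fit, then the mean of the squared residuals
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ betas
    return np.mean((y - y_hat) ** 2)

# quick check on a noiseless toy problem: MSE should be (near) zero
X_toy = np.column_stack([np.ones(10), np.arange(10)])
y_toy = X_toy @ np.array([2.0, 3.0])
print(compute_mse(X_toy, y_toy))
```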
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "4a35cdfede995e21a70a9c28e52af2d6",
"grade": false,
"grade_id": "cell-dadea5c142374e31",
"locked": false,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Implement the linear regression part of the ToDo here:\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "d58580f2d92156d354b3822cc1087d19",
"grade": true,
"grade_id": "cell-20561dbb73c2f8ba",
"locked": true,
"points": 1,
"schema_version": 3,
"solution": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the above linear regression ToDo. '''\n",
"from niedu.tests.nii.week_4 import test_mse_no_filtering\n",
"test_mse_no_filtering(X, sig, mse_no_filtering)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "c24d8fc5e579579c8924a29d2b44e1fa",
"grade": true,
"grade_id": "cell-117a97afa6bbd6ac",
"locked": false,
"points": 1,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Implement your y/yhat plot here (this is manually graded)\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToThink** (1 point): In your plot above, you should see that the fit of your model is \"off\" due to some low frequency drift. Name two potential causes of drift.\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "0def768ee55a4bbda525a8cf5e4bc311",
"grade": true,
"grade_id": "cell-4e3ee9534090d6cb",
"locked": false,
"points": 1,
"schema_version": 3,
"solution": true
}
},
"source": [
"YOUR ANSWER HERE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Two approaches to temporal preprocessing\n",
"\n",
"Basically, there are *two* ways to preprocess your data:\n",
"1. Manipulating the signal ($\\mathbf{y}$) **directly** *before* fitting your GLM-model;\n",
"2. Including \"noise predictors\" in your design ($\\mathbf{X}$) when fitting your model;\n",
"\n",
"Often, preprocessing steps can be done both by method 1 (manipulating the signal directly) and by method 2 (including noise predictors). For example, one of the videos showed that you could apply a high-pass filter by applying a \"Gaussian weighted running line smoother\" (the method FSL employs) *directly* to the signal (method 1) **or** you could add \"low-frequency (drift) predictors\" to the design matrix (method 2; in the video they used a 'discrete cosine basis set', the SPM method). In practice, both methods often yield very similar results. The most important thing to understand is that both methods are trying to accomplish the same goal: reducing the noise term of the model.\n",
"\n",
"First, we will discuss how temporal and spatial filtering can *directly* filter the signal (method 1) to reduce error. Later in the tutorial, we will discuss adding outlier predictors and motion predictors to the design to reduce noise (method 2). "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### High-pass filtering of fMRI data (option 1)\n",
"From the previous ToDo, you probably noticed that the fit of the model to the data was not very good. The cause is the slow 'drift' — a low-frequency signal — that prevents a good fit. Using a high-pass filter — meaning that you *remove* the low-frequency signals and thus *pass only the high frequencies* — can, for this reason, improve the model fit. But before we go on with actually high-pass filtering the signal, let's take a look at the frequency domain representation of our voxel signal: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(17, 5))\n",
"TR = 2\n",
"sampling_frequency = 1 / TR  # our sampling rate is 0.5 Hz, because our TR is 2 sec!\n",
"freq, power = periodogram(sig, fs=sampling_frequency)\n",
"plt.plot(freq, power)\n",
"plt.xlim(0, freq.max())\n",
"plt.xlabel('Frequency (Hz)', fontsize=15)\n",
"plt.ylabel('Power (dB)', fontsize=15)\n",
"plt.axvline(x=0.01,color='r',ls='dashed', lw=2)\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the frequency-domain plot above, you can clearly see a low-frequency drift component at frequencies approximately below 0.01 Hz (i.e., left of the dashed red line)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToThink** (1 point): Apart from the low frequency drift component around 0.01 Hz, there is also a component visible at 0.025 Hz (and its harmonics at 0.05, 0.075, 0.1, etc.). What does this component represent? Please explain (concisely).\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "03631a5da5728b09ddd15c324a25fc50",
"grade": true,
"grade_id": "cell-e983db4008407af2",
"locked": false,
"points": 1,
"schema_version": 3,
"solution": true
}
},
"source": [
"YOUR ANSWER HERE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's get rid of that pesky low-frequency drift that messes up our model! There is no strict guideline for choosing the cutoff of your high-pass filter, but a cutoff of around 100 seconds (i.e., 0.01 Hz) is commonly recommended. This means that any oscillation slower than 100 seconds (one cycle per 100 seconds) is removed from your signal.\n",
"\n",
"Anyway, as you've seen in the videos, there are many different ways to high-pass filter your signal (e.g., frequency-based filtering methods vs. time-based filtering methods). Here, we demonstrate a time-based 'Gaussian running line smoother', which is used in FSL. As you've seen in the videos, this high-pass filter is estimated by convolving a Gaussian \"kernel\" with the signal (taking the element-wise product and summing the values) across time, which is schematically visualized in the image below:\n",
"\n",
"![](https://docs.google.com/drawings/d/e/2PACX-1vRZTMvXJDBj3HGhrMZxQy1_6T1yF7bVinpBpeIQBgVUPAM_igGXrMonQskFP_Mymy-NVvGJnsvbDhiv/pub?w=934&h=649)\n",
"\n",
"One implementation of this filter is included in the scipy \"ndimage\" subpackage. Let's import it\\*:\n",
"\n",
"---\n",
"\\***Note**: if you're going to restart your kernel during the lab for whatever reason, make sure to re-import this package to avoid `NameErrors` (i.e., the error that you get when you call a function that isn't imported)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy.ndimage import gaussian_filter"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `gaussian_filter` function takes two mandatory inputs: some kind of (n-dimensional) signal and a cutoff, \"sigma\", that refers to the width of the Gaussian filter in standard deviations. \"What? We decided to define our cutoff in seconds (or, equivalently, Hz), right?\", you might think. For some reason, neuroimaging packages tend to define the cutoff of their temporal filters in **seconds**, while more 'low-level' filter implementations (such as in scipy) define the cutoff of Gaussian filters as **the width of the Gaussian filter**, i.e., **sigma**. Fortunately, there is an easy way to convert a cutoff in seconds to a cutoff in sigma, given a particular TR (in seconds):\n",
"\n",
"\\begin{align}\n",
"\\sigma = \\frac{\\mathrm{cutoff}_{sec}}{\\sqrt{8\\ln{2}} \\cdot \\mathrm{TR}_{sec}}\n",
"\\end{align}\n",
"\n",
"where $\\ln{2}$ refers to the natural logarithm (i.e., log with base $e$)."
]
},
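In code, that conversion is a one-liner. A sketch (the function name `sec2sigma` is just an illustrative choice; the example uses the common 100-second cutoff at TR = 2, not the values from the ToDo below):

```python
import numpy as np

def sec2sigma(cutoff_sec, tr_sec):
    # sigma (in volumes) for a high-pass cutoff given in seconds
    return cutoff_sec / (np.sqrt(8 * np.log(2)) * tr_sec)

print(sec2sigma(100, 2))  # ≈ 21.23
```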
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToDo** (1 point): Suppose I acquire some fMRI data (200 volumes) with a sampling frequency of 0.25 Hz and I would like to apply a high-pass filter of 80 seconds. What sigma should I choose? Calculate sigma and store it in a variable named `sigma_todo`. You can use the function `np.log(some_number)` to evaluate the natural logarithm of a number.\n",
""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "07547af9d3ceed4c05cafebc4c135b87",
"grade": false,
"grade_id": "cell-cc9371327535b16b",
"locked": false,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Implement your ToDo here\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "b9d799f0948a7c41e69429fe56e13718",
"grade": true,
"grade_id": "cell-bb07c42af9a395ef",
"locked": true,
"points": 1,
"schema_version": 3,
"solution": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the above ToDo.'''\n",
"from niedu.tests.nii.week_4 import test_sec2sigma\n",
"test_sec2sigma(sec=80, ans=sigma_todo)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Importantly, `gaussian_filter` does not return the filtered signal itself, but the estimated low-frequency component of the data. As such, we have to subtract this low-frequency component from the original signal to get the filtered signal! \n",
"\n",
"Below, we estimate the low-frequency component using the high-pass filter first and plot it together with the original signal, which shows that it accurately captures the low-frequency drift (upper plot). Then, we subtract the low-frequency component from the original signal to create the filtered signal, and plot it together with the original signal to highlight the effect of filtering (lower plot):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"filt = gaussian_filter(sig, 8.5)\n",
"\n",
"plt.figure(figsize=(17, 10))\n",
"\n",
"plt.subplot(2, 1, 1)\n",
"plt.plot(sig, lw=2)\n",
"plt.plot(filt, lw=4)\n",
"plt.xlim(0, sig.size)\n",
"plt.legend(['Original signal', 'Low-freq component'], fontsize=20)\n",
"plt.title(\"Estimated low-frequency component using HP-filter\", fontsize=25)\n",
"plt.ylabel(\"Activation (A.U.)\", fontsize=20)\n",
"plt.grid()\n",
"\n",
"# IMPORTANT: subtract filter from signal\n",
"filt_sig = sig - filt\n",
"\n",
"plt.subplot(2, 1, 2)\n",
"plt.plot(sig, lw=2)\n",
"plt.plot(filt_sig, lw=3, c='tab:green')\n",
"plt.xlim(0, sig.size)\n",
"plt.legend(['Original signal', 'Filtered signal'], fontsize=20)\n",
"plt.title(\"Effect of high-pass filtering\", fontsize=25)\n",
"plt.xlabel(\"Time (TR)\", fontsize=20)\n",
"plt.ylabel(\"Activation (A.U.)\", fontsize=20)\n",
"plt.grid()\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The signal looks much better, i.e., it doesn't display much drift anymore. But let's check this by plotting the original and filtered signal in the frequency domain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(17, 5))\n",
"\n",
"freq, power = periodogram(sig, fs=0.5)\n",
"plt.plot(freq, power, lw=2)\n",
"\n",
"freq, power = periodogram(filt_sig, fs=0.5)\n",
"plt.plot(freq, power, lw=2)\n",
"\n",
"plt.xlim(0, freq.max())\n",
"plt.ylabel('Power (dB)', fontsize=15)\n",
"plt.xlabel('Frequency (Hz)', fontsize=15)\n",
"plt.title(\"The effect of high-pass filtering in the frequency domain\", fontsize=20)\n",
"plt.legend([\"Original signal\", \"Filtered signal\"], fontsize=15)\n",
"plt.grid()\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sweet! It seems that the high-pass filtering worked as expected! But does it really improve our model fit?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToDo** (1 point): We've claimed several times that high-pass filtering improves model fit, but is that really the case in our case? To find out, fit the same design (variable `X`) on the filtered signal (variable `filt_sig`) using linear regression. Calculate MSE and store it in the variable `mse_with_filter`.\n",
""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "54297a09edca887487d560636bace95b",
"grade": false,
"grade_id": "cell-623fa8547acd57c2",
"locked": false,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Implement linear regression of X on filt_sig here!\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "cd46e6e5d50aa08131933941de8d3665",
"grade": true,
"grade_id": "cell-cea5cd70b866a178",
"locked": true,
"points": 1,
"schema_version": 3,
"solution": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the above ToDo. '''\n",
"from niedu.tests.nii.week_4 import test_mse_with_filter\n",
"test_mse_with_filter(X, filt_sig, mse_with_filter)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToDo** (2 points)\n",
" \n",
"So far, we've filtered only a single (simulated) voxel timeseries. Normally, you want to temporally filter *all* your voxels in your 4D fMRI data, of course. Below, we load in such a 4D fMRI file (`data_4d`), which has $50$ timepoints and ($80 \\cdot 80 \\cdot 44 = $) $281600$ voxels.\n",
"\n",
"For this assignment, you need to apply the high-pass filter (i.e., the `gaussian_filter` function; use `sigma=25`) to each and every voxel separately, which means that you need to loop over all voxels (which amounts to three nested for-loops across the three spatial dimensions). Below, we've already loaded the data. Now it's up to you to write the loops (across spatial dimensions), filter the signal in the innermost loop, and store it in the pre-allocated `data_4d_filt` variable (the loop may, if implemented correctly, take about 20 seconds!).\n",
""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "a820568dc8790a4432234e52d21dabb1",
"grade": false,
"grade_id": "cell-a0c2db1cc9a5d1fb",
"locked": false,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"import os.path as op\n",
"import nibabel as nib\n",
"\n",
"f = op.join(op.dirname(op.abspath('')), 'week_1', 'func.nii.gz')\n",
"print(\"Loading data from %s ...\" % f)\n",
"data_4d = nib.load(f).get_fdata()\n",
"\n",
"print(\"Shape of the original 4D fMRI scan: %s\" % (data_4d.shape,))\n",
"\n",
"# Here, we pre-allocate a matrix of the same shape as data_4d, in which\n",
"# you need to store the filtered timeseries\n",
"data_4d_filt = np.zeros(data_4d.shape)\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "b49640a2aa60c06c051cdc0bd85811cf",
"grade": true,
"grade_id": "cell-de2da1b458164588",
"locked": true,
"points": 2,
"schema_version": 3,
"solution": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the above ToDo. '''\n",
"# Test some random indices\n",
"np.testing.assert_almost_equal(data_4d_filt[20, 20, 20, 20], -1366.2907714, decimal=3)\n",
"np.testing.assert_almost_equal(data_4d_filt[30, 30, 30, 30], -381.6953125, decimal=3)\n",
"np.testing.assert_almost_equal(data_4d_filt[40, 40, 40, 40], -269.359375, decimal=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **Note**: when temporally filtering your fMRI data ($\\mathbf{y}$), it is important to apply the *same filter* to your design matrix ($\\mathbf{X}$)! This makes sure that your design matrix does not contain any \"information\" that the filter removes from the fMRI time series anyway. Note that most neuroimaging software packages (including FSL) do this automatically.\n",
" \n",
"Technically, by filtering each column (predictor) in your design matrix as well, you're orthogonalizing your design matrix with respect to your temporal high-pass filter. If you want to know more about this, check out this excellent paper.\n",
""
]
},
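A sketch of what "filtering the design too" could look like (a hypothetical helper, not FSL's actual implementation; it leaves the intercept alone and subtracts each column's Gaussian-smoothed low-frequency component, mirroring what we did to `sig`):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hp_filter_design(X, sigma, has_icept=True):
    # high-pass filter every predictor the same way as the signal
    X_filt = X.astype(float).copy()
    start = 1 if has_icept else 0   # don't filter the intercept column
    for col in range(start, X.shape[1]):
        X_filt[:, col] = X[:, col] - gaussian_filter(X[:, col], sigma)
    return X_filt

# toy design: intercept + a predictor with a slow linear drift mixed in
X_toy = np.column_stack([np.ones(300),
                         np.sin(np.arange(300) / 3) + np.linspace(0, 5, 300)])
X_hp = hp_filter_design(X_toy, sigma=8.5)
```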
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### High-pass filtering of fMRI data (option 2)\n",
"As we've seen, high-pass filtering using a \"running line smoother\" is an operation that is applied to the signal directly (before model fitting) to reduce the noise term. Another way to reduce the noise term is to include *noise regressors* (also called 'nuisance variables/regressors') in the design matrix. As such, we can subdivide our design matrix into \"predictors of interest\" (which are included to model the task/stimuli) and \"noise predictors\" (which aim to model the thus-far unmodelled variance). These \"noise predictors\" are also sometimes called \"nuisance\" predictors/regressors/covariates. We can now slightly reformulate our linear regression equation by dividing our design into two components, $\\mathbf{X}_{\\mathrm{interest}}$ and $\\mathbf{X}_{\\mathrm{noise}}$:\n",
"\n",
"\\begin{align}\n",
"y = \\mathbf{X}_{\\mathrm{interest}}\\beta_{\\mathrm{interest}} + \\mathbf{X}_{\\mathrm{noise}}\\beta_{\\mathrm{noise}} + \\epsilon\n",
"\\end{align}\n",
"\n",
"Importantly, the difference between $\\mathbf{X}_{\\mathrm{noise}}$ and $\\epsilon$ is that the $\\mathbf{X}_{\\mathrm{noise}}$ term refers to noise-related activity that you *are able to model* while the $\\epsilon$ term refers to the noise that you *can't model* (this is often called the \"irreducible noise/error\" term). \n",
"\n",
"We can use this technique, which we'll call \"nuisance regression\" (and which we'll discuss in more detail later), as an alternative to directly high-pass filtering the signal ($\\mathbf{y}$). One example of this (with respect to high-pass filtering) is including a series of cosines with varying frequencies in your design, which has the same effect as a high-pass filter. This type of filter is called a \"discrete cosine (basis) set\". Basically, for any given high-pass cutoff (in hertz), the \"discrete cosine transform\" (DCT) will yield a set of cosine regressors that is sufficient to filter out any frequency slower than your cutoff.\n",
"\n",
"Fortunately, the `nilearn` package contains a function to calculate discrete cosine sets:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nilearn.glm.first_level.design_matrix import _cosine_drift as discrete_cosine_transform"
]
},
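For intuition about what this function computes, here is a rough numpy sketch of how such a cosine basis can be constructed. This is an illustration only: `cosine_basis` is a hypothetical name, and the exact normalization, ordering, and intercept handling in `nilearn` may differ.

```python
import numpy as np

def cosine_basis(high_pass, frame_times):
    """Illustrative DCT-II style cosine basis (a sketch, not the nilearn code)."""
    n = frame_times.size
    dt = frame_times[1] - frame_times[0]           # assumes regular spacing
    n_reg = int(np.floor(2 * n * dt * high_pass))  # only cosines slower than the cutoff
    t = np.arange(n)
    basis = np.zeros((n, n_reg))
    for k in range(1, n_reg + 1):
        # the k-th cosine completes k half-cycles over the run
        basis[:, k - 1] = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t + 1) * k / (2 * n))
    return basis

ft = np.linspace(0, 600, 300, endpoint=False)  # 300 volumes, TR = 2
print(cosine_basis(0.01, ft).shape)  # (300, 12)
```

With a 600-second run and a 0.01 Hz cutoff this yields 12 regressors, matching (in shape) the intercept-stripped output we get from `_cosine_drift` below.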
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function takes two arguments: `high_pass` (in hertz) and `frame_times` (an array with volume onsets). The signal from the previous examples (`sig`) was from an experiment lasting 600 seconds and with a TR of 2, so the volume onsets can be defined as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"frame_times = np.linspace(0, 600, 300, endpoint=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToThink** (0 points): Here, we assumed that each volume's onset is at the start of its TR. But technically, the exact definition/computation of our \"frame times\" depends on a preprocessing step that we discussed previously. Which one?\n",
"

"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's compute a discrete cosine set for a high-pass cutoff of 100 seconds (i.e., 0.01 hertz):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dc_set = discrete_cosine_transform(high_pass=0.01, frame_times=frame_times)\n",
"dc_set = dc_set[:, :-1] # remove the (extra) intercept\n",
"print(sig.shape)\n",
"print(dc_set.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the function returns a numpy array with the same number of timepoints as our signal (300) and 12 predictors (we removed the last one, because that's an intercept). Note that it's not super important to know how, mathematically, a discrete cosine set is created; it's more important to understand the idea of adding these low-frequency cosine predictors to your design matrix in order to account for (\"explain\") the low-frequency parts of your data.\n",
"\n",
"Let's plot the discrete cosine set that we created (note: we only plot the first 6 for clarity):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(15, 5))\n",
"plt.plot(dc_set[:, :6], lw=3)\n",
"plt.xlim(0, sig.size)\n",
"plt.grid()\n",
"plt.title(\"Discrete cosine set for a high-pass filter of 0.01 Hz\", fontsize=25)\n",
"plt.xlabel(\"Time (TRs)\", fontsize=20)\n",
"plt.ylabel(\"Activation (A.U)\", fontsize=20)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToDo** (2 points): Add the discrete cosine set as predictors to the design (`X`) and store it in a new variable named `X_dct` (do *not* overwrite the `X` variable). Make sure the first two columns of `X_dct` are the original predictors from `X` followed by the DCT set. Then, run linear regression. Save the parameters (\"betas\") in a variable named `betas_dct`. Plot the predicted signal ($\\hat{\\mathbf{y}}$) and the signal (`sig`) in the same plot. Name the axis labels appropriately.\n",
"

"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "4dfe98bc7456d1fb80440fa031077ef1",
"grade": true,
"grade_id": "cell-fe0f074f077b93fc",
"locked": false,
"points": 1,
"schema_version": 3,
"solution": true,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"\"\"\" Implement your ToDo here. \"\"\"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "27d29e22695f6fbdc51ae1ef41f2ffb3",
"grade": true,
"grade_id": "cell-5a0c55935e0371e4",
"locked": true,
"points": 1,
"schema_version": 3,
"solution": false,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"from niedu.tests.nii.week_4 import test_dct_betas\n",
"test_dct_betas(X, dc_set, sig, betas_dct)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you've seen in the previous ToDos, it doesn't really matter which strategy you choose (filtering the signal directly or adding nuisance regressors to the design): both (usually) work equally well."
]
},
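The equivalence of the two strategies can be made concrete with a small synthetic sketch (all variable names are illustrative). By the Frisch-Waugh-Lovell theorem, the beta of the predictor of interest from the full model (predictor plus nuisance regressors) equals the beta you get after first regressing the nuisance set out of both the signal and the predictor:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)                    # predictor of interest
Z_noise = rng.normal(size=(n, 3))         # "nuisance" regressors (e.g., cosines)
y = 2.0 * x + Z_noise @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

icept = np.ones((n, 1))
X_full = np.hstack([icept, x[:, None], Z_noise])   # strategy 1: one big design
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

# Strategy 2: residualize y and x w.r.t. [intercept, nuisance], then fit
Z = np.hstack([icept, Z_noise])
P = Z @ np.linalg.pinv(Z)                          # projection onto nuisance space
y_res, x_res = y - P @ y, x - P @ x
beta_fwl = (x_res @ y_res) / (x_res @ x_res)

print(np.isclose(beta_full[1], beta_fwl))  # True: identical beta of interest
```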
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Autocorrelation and prewhitening \n",
"As you (should) have seen in the previous ToDos, the model fit increases tremendously after high-pass filtering! This surely is the most important reason why you should apply a high-pass filter. But there is another important reason: high-pass filters reduce the signal's autocorrelation! \n",
"\n",
"\"Sure, but why should we care about autocorrelation?\", you might think. Well, this has to do with the estimation of the standard error of our model, i.e., $\\hat{\\sigma}^{2}\\mathbf{c}(\\mathbf{X}^{T}\\mathbf{X})^{-1}\\mathbf{c}^{T}$. As you've seen in the videos, the Gauss-Markov theorem states that in order for OLS to yield valid estimates (including estimates of the parameters' standard errors), *the errors (residuals) must have a mean of 0, have 0 covariance (i.e., be uncorrelated), and have equal variance*. \n",
"\n",
"Let's go through these three assumptions step by step. We'll use the previously filtered signal for this.\n",
"\n",
"### Assumption of zero-mean of the residuals\n",
"First, let's check whether the mean of the residuals is zero:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"b = inv(X.T @ X) @ X.T @ filt_sig\n",
"y_hat = X @ b\n",
"resids = filt_sig - y_hat\n",
"mean_resids = resids.mean()\n",
"print(\"Mean of residuals: %.3f\" % mean_resids)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToThink** (1 point): What component of the design-matrix ($\\mathbf{X}$) ensures that the mean of the residuals is zero? Explain (concisely) why.\n",
"

"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "c70670653a497b0332076553b813e6aa",
"grade": true,
"grade_id": "cell-398fc4a5c6e31dfc",
"locked": false,
"points": 1,
"schema_version": 3,
"solution": true
}
},
"source": [
"YOUR ANSWER HERE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Equal variance of the residuals\n",
"Alright, sweet: the first assumption seems valid for our data. Now, the next two assumptions, about equal variance of the residuals and no covariance between residuals, are trickier to understand and deal with. In the book (and videos), these assumptions are summarized in a single mathematical statement: the covariance matrix of the residuals should not differ substantially from the identity matrix ($\\mathbf{I}$) scaled by the noise term ($\\hat{\\sigma}^{2}$). Or, put in a formula:\n",
"\n",
"\\begin{align}\n",
"\\mathrm{cov}[\\epsilon] = \\hat{\\sigma}^{2}\\mathbf{I}\n",
"\\end{align}\n",
"\n",
"This sounds difficult, so let's break it down. First of all, the covariance matrix of the residuals is always a symmetric matrix of shape $N \\times N$, in which the *diagonal represents the variances* and the *off-diagonal represents the covariances*. For example, at index $[i, i]$, the value represents the variance of the residual at timepoint $i$. At index $[i, j]$, the value represents the covariance between the residuals at timepoints $i$ and $j$. \n",
"\n",
"In OLS, we assume that the covariance matrix of the residuals ($\\mathrm{cov}[\\epsilon]$) matches the \n",
"identity-matrix ($\\mathbf{I}$) times the noise-term ($\\hat{\\sigma}^{2}$). The identity-matrix is simply a matrix with all zeros except for the diagonal, which contains ones. For example, the identity-matrix for a residual-array of length $8$ looks like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identity_mat = np.eye(8) # makes an \"eye\"dentity matrix\n",
"print(identity_mat)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also represent this visually:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(8, 8))\n",
"plt.imshow(identity_mat, cmap='gray', aspect='auto')\n",
"plt.xlabel(\"Time\", fontsize=15)\n",
"plt.ylabel(\"Time\", fontsize=15)\n",
"plt.title(\"Assumed covariance matrix of residuals\", fontsize=20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, suppose we calculated that the noise term of a model explaining this hypothetical signal of length $8$ equals 2.58 ($\\hat{\\sigma}^{2} = 2.58$). Then, OLS *assumes* that the covariance matrix of the residuals looks as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"noise_term = 2.58\n",
"assumed_cov_resid = noise_term * identity_mat\n",
"print(assumed_cov_resid)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In other words, this assumption about the covariance matrix of the residuals states that the *variance across residuals (the diagonal of the matrix) should be equal* and the *covariance between residuals (the off-diagonal values of the matrix) should be 0* (in the population).\n",
"\n",
"Now, we won't explicitly estimate the covariance matrix of the residuals (which is usually estimated using techniques that fall beyond the scope of this course); however, we *do* want you to understand *conceptually* how fMRI data might invalidate the assumptions about the covariance matrix of the residuals and how fMRI analyses deal with this (i.e., using prewhitening, which is explained later). \n",
"\n",
"So, let's check *visually* whether the assumption of equal variance of our residuals roughly holds for our (simulated) fMRI data. Now, when we consider this assumption in the context of our fMRI data, the assumption of \"equal variance of the residuals\" (also called homoskedasticity) means that we assume that the \"error\" in the model is equally big across our timeseries data. In other words, the mis-modelling (error) should be constant over time.\n",
"\n",
"Let's check this for our data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(15, 5))\n",
"plt.plot(resids, marker='.')\n",
"plt.xlim(0, resids.size)\n",
"plt.xlabel(\"Time (TR)\", fontsize=15)\n",
"plt.ylabel(\"Activation (A.U.)\", fontsize=15)\n",
"plt.title(\"Residuals\", fontsize=20)\n",
"plt.axhline(0, ls='--', c='black')\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks quite alright! Sure, there is some variation here and there, but given that our estimates (including the residuals and their variance!) are imperfect, this suffices. \n",
"\n",
"Just to give you some intuition about serious issues with homoskedasticity, check out the (hypothetical) timeseries residuals below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"mfactor = np.linspace(0, 2, sig.size)\n",
"example_resids = resids * mfactor\n",
"\n",
"plt.figure(figsize=(15, 5))\n",
"plt.xlim(0, sig.size)\n",
"plt.xlabel(\"Time (TR)\", fontsize=15)\n",
"plt.ylabel(\"Activation (A.U.)\", fontsize=15)\n",
"plt.title(\"An example of residuals with (problematic) unequal variance\", fontsize=20)\n",
"plt.axhline(0, ls='--', c='black')\n",
"plt.plot(example_resids, marker='.')\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" \n",
"**ToThink** (1 point): What could cause unequal variance in the residuals of an fMRI signal, *given that autocorrelation (e.g., from low-frequency components) is filtered out appropriately*? In other words, can you think of something that might cause larger (or smaller) errors across the duration of an fMRI run?\n",
"

"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "2ea0844f548cf9c9641c8869acdc2dbe",
"grade": true,
"grade_id": "cell-b7a49704abd2c6ec",
"locked": false,
"points": 1,
"schema_version": 3,
"solution": true
}
},
"source": [
"YOUR ANSWER HERE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Zero covariance between residuals\n",
"The last assumption of zero covariance between residuals (corresponding to the assumption of all zeros on the off-diagonal elements of the covariance matrix of the residuals) basically refers to the assumption that *there is no autocorrelation (correlation in time) in the residuals*. In other words, knowing the residual at timepoint $i$ does not tell you anything about the residual at timepoint $i+\\tau$, where $\\tau$ reflects a particular \"lag\" and can be any positive number (up to $N-1$). For example, the \"lag 1\" autocorrelation ($\\tau = 1$) is the correlation between the data at timepoints $i$ and $i+1$. In OLS, we assume that there is no autocorrelation at any lag.\n",
"\n",
"Take for example the residuals of our unfiltered signal from before, which looked like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"b = inv(X.T @ X) @ X.T @ sig\n",
"resids_new = sig - X @ b\n",
"\n",
"plt.figure(figsize=(15, 8))\n",
"plt.subplot(2, 1, 1)\n",
"plt.plot(resids_new, marker='.')\n",
"plt.axhline(0, ls='--', c='black')\n",
"plt.xlim(0, 200)\n",
"plt.xlabel(\"Time (TR)\", fontsize=15)\n",
"plt.title('Residuals (containing unmodelled drift!)', fontsize=20)\n",
"plt.ylabel('Activity (a.u.)', fontsize=15)\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the above plot, there is clear and strong autocorrelation in the residuals. For example, the residuals are overall getting larger across time (\"drift\") and they contain other low-frequency (oscillatory) patterns. As such, we *do* know something about the residual at timepoint $i+1$ (and other lags) given the residual at timepoint $i$: the residual at timepoint $i+1$ is likely __close__ to the residual at timepoint $i$! Therefore, drift is a perfect example of something that (if not modelled) causes autocorrelation in the residuals (i.e., covariance between residuals)! In other words, autocorrelation (e.g., caused by drift) will cause the values of the covariance matrix of the residuals at the indices $[i, i+1]$ to be non-zero, violating the third assumption of the Gauss-Markov theorem!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToDo** (1 point)\n",
" \n",
"We stated that autocorrelation captures the information that you have of the residual at timepoint $i+\\tau$ given that you know the residual at timepoint $i$. Practically, you can compute the autocorrelation (or actually, autocovariance) for a particular lag $\\tau$ by computing the covariance of the residuals with the lag-$\\tau$ shifted version of itself. In general, the autocovariance for the residuals $\\epsilon$ with lag $\\tau$ is calculated as:\n",
"\n",
"\\begin{align}\n",
"\\mathrm{cov}[\\epsilon_{i}, \\epsilon_{i+\\tau}] = \\frac{1}{N-\\tau-1}\\sum_{i=1}^{N-\\tau}(\\epsilon_{i}\\cdot\\epsilon_{i+\\tau})\n",
"\\end{align}\n",
"\n",
"Jeanette Mumford explains how to do this quite clearly in her [video on prewhitening](https://www.youtube.com/watch?v=4VSzZKO0k_w) (around minute 10). For this ToDo, calculate the covariance (with $\\tau = 1$) between the residuals (i.e., using the variable `resids_new`) and store this in a variable named `lag1_cov`.\n",
"

"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "84a90d401ac762182cdbdd0f7f59d551",
"grade": false,
"grade_id": "cell-d0a50982f817bdd4",
"locked": false,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Implement your ToDo here\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "8e1dae5dfad437f168a9307114ab47a2",
"grade": true,
"grade_id": "cell-650ab4e9ce9a3385",
"locked": true,
"points": 1,
"schema_version": 3,
"solution": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the above ToDo. '''\n",
"from niedu.tests.nii.week_4 import test_lag1_cov\n",
"test_lag1_cov(resids_new, lag1_cov)"
]
},
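To build some intuition for what this lag-1 quantity picks up, here is a small simulation (hypothetical data, separate from the ToDo above): we generate AR(1) noise with a known lag-1 correlation and recover it using the autocovariance formula, divided by the variance to turn it into a correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
phi_true, n = 0.8, 10000

# Generate AR(1) noise: each value is phi times the previous one plus white noise
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = phi_true * eps[t - 1] + rng.normal()

def autocov(x, tau):
    # Covariance of a (zero-mean) signal with its lag-tau shifted self
    return np.sum(x[:-tau] * x[tau:]) / (x.size - tau - 1)

# Normalizing by the variance gives the lag-1 autocorrelation, close to phi_true
print(autocov(eps, 1) / eps.var())
```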
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Accounting for unequal variance and autocorrelation: prewhitening\n",
"So, in summary, if the covariance matrix of your residuals appears to deviate substantially from the identity matrix scaled by the noise term ($\\mathrm{cov}[\\epsilon] = \\hat{\\sigma}^{2}\\mathbf{I}$), either due to unequal variance or non-zero covariance, your estimate of the variance of your effects ($\\mathrm{var}[c\\hat{\\beta}]$) will be incorrect. \n",
"\n",
"Unfortunately, even after high-pass filtering (which corrects for *most* but not *all* autocovariance), the covariance matrix of the residuals of fMRI timeseries usually does not conform to the Gauss-Markov assumptions of equal variance and zero covariance. Fortunately, statisticians have developed methods that transform the data such that the OLS assumptions hold again. One such technique is called *prewhitening*. \n",
"\n",
"Prewhitening uses an estimate of the error covariance matrix — usually denoted by $\\mathbf{V}$ — to account for possible unequal variance and/or autocovariance of the residuals. The matrix $\\mathbf{V}$ is an $N \\times N$ matrix ($N$ referring to the number of timepoints of your signal), and may be estimated using different techniques, with names such as \"ARMA\", \"AR(1)\", and \"REML\". In this course, we won't discuss these techniques and instead assume that your software of choice (FSL, AFNI, SPM) has computed an accurate estimate of $\\mathbf{V}$ for you already. But if you're up for a (programming) challenge, you can do the *optional* ToDo below."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToDo** (optional, ungraded)\n",
" \n",
"The \"AR(1)\" method is a relatively \"easy\" way to estimate $\\mathbf{V}$. It computes only a single parameter, the lag-1 correlation (not covariance!). Then, it assumes that the correlation decreases exponentially as a function of lag:\n",
"\n",
"\\begin{align}\n",
"\\mathrm{autocor}_{\\tau} = \\phi^{\\tau}\n",
"\\end{align}\n",
"\n",
"where $\\phi$ is the estimated lag-1 correlation. For example, for $\\phi = 0.9$, $\\mathbf{V}$ would look like:\n",
"\n",
"\\begin{align}\n",
"\\mathbf{V} = \\begin{bmatrix}\n",
" 1.0 & 0.9 & 0.9^{2} & 0.9^{3} & \\dots & 0.9^{N-1} \\\\\n",
" 0.9 & 1.0 & 0.9 & 0.9^{2} & \\dots & 0.9^{N-2} \\\\\n",
" 0.9^{2} & 0.9 & 1.0 & 0.9 & \\dots & 0.9^{N-3} \\\\\n",
" 0.9^{3} & 0.9^{2} & 0.9 & 1.0 & \\dots & 0.9^{N-4} \\\\\n",
" \\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
" 0.9^{N-1} & 0.9^{N-2} & 0.9^{N-3} & 0.9^{N-4} & \\dots & 1.0\n",
" \\end{bmatrix}\n",
"\\end{align}\n",
"\n",
"Compute the lag-1 correlation below and create the corresponding AR(1) matrix of the residuals (`resids_new`) and store this in a variable named `V_ar1` (do not use the `toeplitz` function for this).\n",
"

"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "9df6869dc6c9c223de14be75c8606e91",
"grade": false,
"grade_id": "cell-28c312e7a92208f5",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Implement the (optional) ToDo here.'''\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "6bef8c54c29c5917f09a3ade1466332a",
"grade": true,
"grade_id": "cell-5e0eeaee778dd227",
"locked": true,
"points": 0,
"schema_version": 3,
"solution": false,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the optional ToDo above. '''\n",
"from scipy.linalg import toeplitz\n",
"phi = (t0 - t0.mean()) @ (t1 - t1.mean()) / np.sqrt(np.sum((t0 - t0.mean()) ** 2) * np.sum((t1 - t1.mean()) ** 2))\n",
"V_ans = phi ** toeplitz(np.arange(resids_new.size))\n",
"np.testing.assert_array_almost_equal(V_ans, V_ar1)\n",
"print(\"Well done!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Suppose you have a signal of 20 timepoints (an unrealistically low number, but just ignore that for now) and that you have already estimated the covariance matrix of the residuals of this signal. Now, suppose you take a look at it and you notice that it looks faaaaar from the identity-matrix ($\\mathbf{I}$) that we need for OLS.\n",
"\n",
"For example, you might see this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"N = 20\n",
"phi = 0.7\n",
"V = phi ** toeplitz(np.arange(N))\n",
"# This will increase variance over time\n",
"V[np.diag_indices_from(V)] += np.linspace(0, 1, V.shape[0])\n",
"\n",
"fig, axes = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(15, 8))\n",
"axes[0].imshow(V, vmax=2, cmap='gray', aspect='auto')\n",
"axes[0].set_title(\"V (actual covariance matrix)\", fontsize=25)\n",
"axes[0].set_xlabel('Time (volumes)', fontsize=20)\n",
"axes[0].set_ylabel('Time (volumes)', fontsize=20)\n",
"\n",
"axes[1].imshow(np.eye(N), vmax=2, cmap='gray', aspect='auto')\n",
"axes[1].set_title(\"Identity-matrix (assumed matrix)\", fontsize=25)\n",
"#axes[1].colorbar()\n",
"axes[1].set_xlabel('Time (volumes)', fontsize=20)\n",
"\n",
"fig.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" \n",
" **ToThink** (0 points): In the above cell, the `phi` variable controls the amount of autocorrelation (technically, it is the $\\phi$ parameter of an AR(1) autocorrelation model). Try changing the value of this variable. Do you understand the way the plotted $V$ matrix is changing as a function of $\\phi$? \n",
"

"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Well, shit. We have both unequal variance (different values on the diagonal) *and* non-zero covariance (some non-zero values on the off-diagonal). So, what to do now? Well, we can use the technique of prewhitening to make sure our observed covariance matrix ($\\mathbf{V}$) will be \"converted\" to the identity matrix! Basically, this amounts to plugging some extra terms into the formula for ordinary least squares. As you might have seen in the book/videos, the *original* OLS solution (i.e., how OLS finds the beta-parameters) is as follows:\n",
"\n",
"\\begin{align}\n",
"\\hat{\\beta} = (\\mathbf{X}^{T}\\mathbf{X})^{-1}\\mathbf{X}^{T}y\n",
"\\end{align}\n",
"\n",
"Now, given that we've estimated our covariance matrix of the residuals, $\\mathbf{V}$, we can rewrite the OLS solution such that it prewhitens the data (and thus the covariance matrix of the residuals will approximate $\\hat{\\sigma}^{2}\\mathbf{I}$) as follows:\n",
"\n",
"\\begin{align}\n",
"\\hat{\\beta} = (\\mathbf{X}^{T}\\mathbf{V}^{-1}\\mathbf{X})^{-1}\\mathbf{X}^{T}\\mathbf{V}^{-1}y\n",
"\\end{align}\n",
"\n",
"Then, accordingly, the standard-error of any contrast of the estimated beta-parameters becomes:\n",
"\n",
"\\begin{align}\n",
"SE_{\\mathbf{c}\\hat{\\beta}} = \\sqrt{\\hat{\\sigma}^{2} \\cdot \\mathbf{c}(\\mathbf{X}^{T}\\mathbf{V}^{-1}\\mathbf{X})^{-1}\\mathbf{c}^{T}}\n",
"\\end{align}\n",
"\n",
"This \"modification\" of OLS is also called \"generalized least squares\" (GLS) and is central to most univariate fMRI analyses! You *don't* have to understand how this works mathematically; again, you should only understand *why* prewhitening makes sure that our data behaves according to the assumptions of the Gauss-Markov theorem.\n",
"\n",
"(Fortunately for us, there is usually an option to 'turn on' prewhitening in existing software packages, so we don't have to do it ourselves. But it is important to actually turn it on whenever you want to meaningfully and in an unbiased way interpret your statistics in fMRI analyses!)"
]
},
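To see *why* the $\mathbf{V}^{-1}$ terms do the trick, one common way to think about GLS is as OLS on "whitened" data: any matrix $\mathbf{W}$ with $\mathbf{W}\mathbf{V}\mathbf{W}^{T} = \mathbf{I}$ (for example, the inverse of the Cholesky factor of $\mathbf{V}$) turns noise with covariance $\mathbf{V}$ into white noise. A small simulation sketch (illustrative only):

```python
import numpy as np
from scipy.linalg import toeplitz, cholesky, inv

rng = np.random.default_rng(1)
N, phi = 20, 0.7
V = phi ** toeplitz(np.arange(N))   # AR(1)-style covariance matrix
L = cholesky(V, lower=True)         # V = L @ L.T
W = inv(L)                          # whitening matrix: W @ V @ W.T = I

# Draw many noise vectors with covariance V, whiten them, and check that
# the empirical covariance of the whitened noise approaches the identity
noise = rng.multivariate_normal(np.zeros(N), V, size=50000)
white = noise @ W.T
emp_cov = np.cov(white, rowvar=False)
print(np.round(emp_cov[:3, :3], 2))  # close to the 3 x 3 identity
```

Premultiplying $\mathbf{y}$ and $\mathbf{X}$ by such a $\mathbf{W}$ and then running plain OLS yields exactly the GLS solution above.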
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToDo** (1 point): Given the target signal (`some_sig`), design-matrix (`some_X`), and the (hypothetical) covariance-matrix of the residuals from before (the variable `V`), calculate the beta-parameters using the prewhitened version of OLS (i.e., 'generalized least squares'; the formula above). Also, calculate the $t$-value of the contrast `[0, 1]` given the appropriate (GLS) computation of the standard error. Store your results in the variables `betas_gls` and `tval_gls`, respectively.\n",
"

"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "e922bd77c51acf6f2391959d2062b898",
"grade": false,
"grade_id": "cell-88f856f11c0a35a6",
"locked": false,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Implement your ToDo here!\n",
"some_sig = sig[:20] # y\n",
"some_X = X[:20, :] # X\n",
"c_vec = np.array([0, 1]) # the contrast you should use\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "4fe3912140fd6b631556ee652463bc83",
"grade": true,
"grade_id": "cell-30eabb21fe70e080",
"locked": true,
"points": 2,
"schema_version": 3,
"solution": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the above ToDo. '''\n",
"from niedu.tests.nii.week_4 import test_gls_todo \n",
"test_gls_todo(some_sig, some_X, V, c_vec, betas_gls, tval_gls)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToDo** (*optional!* 0 points)\n",
"\n",
"When it comes to estimating parameters from data with unequal (co)variance, OLS actually still gives you unbiased parameters: on average, they will be correct. However, OLS is no longer the estimator with the least variance, meaning that it is less \"precise\" (it is not the \"Best Linear Unbiased Estimator\" anymore; still unbiased, but not the \"best\"). In fact, with unequal (co)variance, GLS is the best linear unbiased estimator. A good way to build intuition about this is to iteratively generate data with known parameters (the \"true betas\", $\\beta$, \"sigma squared\", $\\sigma^{2}$, and $V$) and to estimate the parameters back from the generated data. Then, you can plot histograms of the estimated parameters and you'll see that, on average, they equal the true parameters.\n",
"\n",
"Below, we set up such a \"simulation\" loop for you. We define the true parameters (`true_betas`, `sigsq`, `V`). Now, if you're up to the challenge, complete the loop by:\n",
"1. Generating some random design matrix (e.g., using `np.random.normal` of size (N, 1));\n",
"2. Stack an intercept (a column of ones);\n",
"3. Generate correlated noise using the `np.random.multivariate_normal` function (with `cov=V`);\n",
"4. Generate the data using the formula $X\\beta + \\mathrm{noise}$;\n",
"5. Estimate the OLS parameters, and store in `betas_ols`;\n",
"6. Estimate the GLS parameters, and store in `betas_gls`;\n",
"7. In the next cell, plot both parameters (`betas_ols[:, 1]` and `betas_gls[:, 1]`) as histograms\n",
"\n",
"Do the histograms look like you expected? Also, try changing the `phi` parameter, which controls the amount of autocorrelation in the data (it's the AR1 parameter which is used to create `V`). What happens to the difference between OLS and GLS?\n",
"

"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "df49910967176b41e544926994e94022",
"grade": false,
"grade_id": "cell-7c2a62f024a1a631",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"true_betas = np.array([0, 1])\n",
"iters = 100\n",
"\n",
"N = 50\n",
"phi = 0.8 # AR1 parameter\n",
"sigsq = 2\n",
"V = sigsq * phi ** toeplitz(np.arange(N))\n",
"\n",
"betas_ols = np.zeros((iters, 2))\n",
"betas_gls = np.zeros((iters, 2))\n",
"\n",
"for i in range(iters):\n",
" # YOUR CODE HERE\n",
" raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "1c9f409feb7afa95dc104b40ae945539",
"grade": true,
"grade_id": "cell-4eef0788eaaeee62",
"locked": false,
"points": 0,
"schema_version": 3,
"solution": true,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## (More on) nuisance regression\n",
"Let's go back to the technique of nuisance regression. We have seen before that this technique can be used to model low-frequency components in our data (effectively functioning as a high-pass filter), but it can, in general, be used to model *any* thus-far unmodelled variance in the signal that would otherwise end up in the noise term. For example, people use this technique to model variance due to physiological processes (such as cardiac- and respiratory-related signals; see e.g. [Glover et al., 2000](https://www.ncbi.nlm.nih.gov/pubmed/10893535)), motion-related variance (which we'll discuss later), and high-intensity \"spikes\". To get a better feel for nuisance regression and its consequences, let's look at this process of removing high-intensity spikes (which is sometimes called \"despiking\").\n",
"\n",
"### Using nuisance regression for despiking\n",
"This technique of adding noise-predictors to the design matrix is sometimes used to model 'gradient artifacts', which are also called 'spikes' (which you've heard about in one of the videos for this week). This technique is also sometimes called \"despiking\". These spikes reflect sudden large intensity increases in the signal across the entire brain that likely reflect scanner instabilities. One way to deal with these artifacts is to \"censor\" bad timepoints (containing the spike) in your signal using a noise predictor.\n",
"\n",
"But what defines a 'spike'/bad timepoint? One way is to compute the \"root mean square successive differences\" (RMSSD), normalize this, and impose some threshold above which a timepoint is marked as a spike (technically, it's a little bit more complex, but we'll ignore that for now).\n",
"\n",
"We'll delve into the details of this computation later. For now, let's take a look at some example data that we're going to use for this section:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with np.load('spike_data.npz') as spike_data:\n",
" all_sig = spike_data['all_sig']\n",
" pred = spike_data['pred']\n",
"\n",
"print(\"Shape of all_sig: %s\" % (all_sig.shape,))\n",
"print(\"Shape of pred: %s\" % (pred.shape,))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The example data `all_sig` is a (simulated) 4D fMRI scan with $10 \\times 10 \\times 10$ voxels and 500 timepoints (assuming a TR of 2, this amounts to a duration of 1000 seconds). The predictor reflects a design in which the participant was shown a stimulus every 100 seconds (50 TRs). Let's plot the predictor:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(15, 5))\n",
"plt.plot(pred)\n",
"plt.grid()\n",
"plt.xlabel('Time (TRs)', fontsize=20)\n",
"plt.ylabel('Activation (A.U.)', fontsize=20)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alright, now let's take a look at how you would calculate the \"root mean square successive differences\" (RMSSD). For every timepoint $t$, this quantity takes the difference between the signal at $t$ and at $t-1$, squares it, averages it across all voxels ($k = 1 \\dots K$), and then takes the square root:\n",
"\n",
"\\begin{align}\n",
"\mathrm{RMSSD}_{t} = \sqrt{\frac{1}{K}\sum_{k=1}^{K}(s_{t, k} - s_{t-1, k})^2}\n",
"\\end{align}\n",
"\n",
"First, let's focus on the \"successive differences\". Suppose we have only one \"signal\" of length 5:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ex_sig = np.array([1, 3, -2, 0, 5])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The \"successive differences\" are the difference between 3 and 1, -2 and 3, 0 and -2, and 5 and 0. Note that the successive difference for the first timepoint ($t=0$) is not defined! In code, we can compute this with:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"succ_diff = ex_sig[1:] - ex_sig[:-1]\n",
"print(succ_diff)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After computing the successive differences, one issue remains. As we noted earlier, the RMSSD is not defined for $t=0$ (because there is no $t-1$ for the first timepoint). For convenience, we can insert a duplicate of the first value at the start, so the result has the same length as the original signal. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"succ_diff = np.insert(succ_diff, 0, succ_diff[0])"
]
},
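{
"cell_type": "markdown",
"metadata": {},
"source": [
"Putting these pieces together on a small toy array (hypothetical values, deliberately *not* the 4D `all_sig` data, so it doesn't give away the ToDo below): the full RMSSD recipe is to take successive differences along time, square them, average across voxels, take the square root, and pad the first timepoint. A minimal sketch:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Toy 'data': 3 voxels by 4 timepoints (hypothetical values)\n",
"toy = np.array([[1., 3., -2., 0.],\n",
"                [2., 2., 4., 1.],\n",
"                [0., 1., 1., 3.]])\n",
"\n",
"# Successive differences along the time axis (the last axis)\n",
"sdiff = toy[:, 1:] - toy[:, :-1]\n",
"\n",
"# Square, average across voxels (axis 0), and take the square root\n",
"rmssd = np.sqrt(np.mean(sdiff ** 2, axis=0))\n",
"\n",
"# Pad t=0 with a duplicate of the first value, as shown above\n",
"rmssd = np.insert(rmssd, 0, rmssd[0])\n",
"print(rmssd.shape)  # one RMSSD value per timepoint\n",
"```\n",
"\n",
"For the real data, you'd do the same along the fourth (time) axis, averaging across all three spatial axes."
]
},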
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToDo** (optional; 0 points): Compute the RMSSD of the data. Note that our data (`all_sig`) has 4 dimensions, with the fourth dimension representing time. Store the result in a variable named `all_sig_rmssd`, which should be a 1D array with one RMSSD value per timepoint (so, 500 values). Also, try to do this without a for-loop by making use of vectorization.\n",
""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "bb4a007271c20f134e43346934b16959",
"grade": false,
"grade_id": "cell-e9aa5fb38ea53895",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Compute the RMSSD here\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "277037b2667647031ef5921ee64763b0",
"grade": true,
"grade_id": "cell-39cad92120209b1c",
"locked": true,
"points": 0,
"schema_version": 3,
"solution": false,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the above ToDo (optional). '''\n",
"from niedu.tests.nii.week_4 import test_rmssd\n",
"test_rmssd(all_sig, all_sig_rmssd)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**ToDo** (optional; 0 points)\n",
" \n",
"Now, in order to identify spikes, we need to find timepoints with RMSSD values that differ more than (let's say) 7 standard deviations from the mean RMSSD value. To do so, you need to 'z-score' the RMSSD values: subtract the mean from each individual value and divide each of the resulting 'demeaned' values by the standard deviation ($\\mathrm{std}$) of the values. In other words, the z-transform of any signal $s$ with mean $\\bar{s}$ is defined as:\n",
"\n",
"\\begin{align}\n",
"z(s) = \\frac{(s - \\bar{s})}{\\mathrm{std}(s)}\n",
"\\end{align}\n",
"\n",
"Implement this z-score transform for the variable `all_sig_rmssd` and store the result in the variable `z_rmssd`. Then, plot the z-scored RMSSD signal (with appropriate axis labels). \n",
""
]
},
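{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of the z-transform on a toy array (hypothetical values, not the RMSSD data):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"s = np.array([2., 4., 6., 8.])\n",
"\n",
"# z-score: subtract the mean, then divide by the standard deviation\n",
"z = (s - s.mean()) / s.std()\n",
"\n",
"# After z-scoring, the values have (approximately) mean 0 and std 1,\n",
"# so a threshold like z > 7 means '7 standard deviations above the mean'\n",
"print(z)\n",
"```"
]
},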
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "c9e9b775f84fc88cb4f64a9a1110f77f",
"grade": false,
"grade_id": "cell-5a1c56ebd42c2704",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Let's load the correct answer from the previous optional ToDo\n",
"# in case you didn't do it\n",
"all_sig_rmssd = test_rmssd(all_sig, None, check=False)\n",
"\n",
"# Implement the z-scoring below\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "3f96e7c625e923c704d37f2934f592ad",
"grade": true,
"grade_id": "cell-52789944709f295b",
"locked": true,
"points": 0,
"schema_version": 3,
"solution": false,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests part 1 of ToDo. '''\n",
"from niedu.tests.nii.week_4 import test_zscore_rmssd\n",
"test_zscore_rmssd(all_sig_rmssd, z_rmssd)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "8b8d9b533455dc80da2b68328ccf60e8",
"grade": true,
"grade_id": "cell-26abd9ec9a7cf085",
"locked": false,
"points": 0,
"schema_version": 3,
"solution": true,
"task": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Now, plot the zscored rmssd signal (z_rmssd)!\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can set a threshold above which we define timepoints as \"spikes\". Let's say we do this for $z > 7$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Load the correct answer from the previous (optional) ToDo\n",
"from niedu.tests.nii.week_4 import test_zscore_rmssd\n",
"z_rmssd = test_zscore_rmssd(all_sig_rmssd, None, check=False)\n",
"\n",
"identified_spikes = z_rmssd > 7 # creates array with True/False\n",
"n_spike = identified_spikes.sum()\n",
"print(\"There are %i spikes in the data!\" % n_spike)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, to remove this influence, we can simply add a nuisance predictor for each spike: a predictor that is zero at every timepoint except for a 1 at the timepoint of that spike."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"spike_pred = np.zeros((pred.size, n_spike))\n",
"t_spikes = np.where(identified_spikes)[0]\n",
"for i, t in enumerate(t_spikes):\n",
" print(\"Creating spike predictor for t = %i\" % t)\n",
" spike_pred[t, i] = 1\n",
"\n",
"plt.figure(figsize=(15, 5))\n",
"plt.plot(spike_pred)\n",
"plt.xlabel(\"Time (TRs)\", fontsize=20)\n",
"plt.ylabel(\"Activation (A.U.)\", fontsize=20)\n",
"plt.grid()\n",
"plt.xlim(0, pred.size)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToThink** (1 point):\n",
" \n",
"Why do you think we do not convolve the spike regressors with an HRF (or basis set)? Write your answer in the text-cell below.\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "309f70d6d532c935d826ac9b0c25b501",
"grade": true,
"grade_id": "cell-a89dbfe92f7f7298",
"locked": false,
"points": 1,
"schema_version": 3,
"solution": true
}
},
"source": [
"YOUR ANSWER HERE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToDo** (2 points): Calculate the *t*-value of the stimulus-predictor-against-baseline contrast in a model with both the stimulus predictor (`pred`) and the spike predictors (`spike_pred`). Also, stack an intercept (a column of ones). Store the *t*-value in the variable `tval_spike_model`. Use `spike_sig` (defined below) as your target, i.e., $y$.\n",
""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "2340fce74dc8024de2a7155ba363680a",
"grade": false,
"grade_id": "cell-9f652d7f8d3de436",
"locked": false,
"schema_version": 3,
"solution": true
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"# Calculate the t-value of the stimulus predictor in a model with an\n",
"# intercept, the stimulus predictor, and the spike predictors\n",
"spike_sig = all_sig[5, 5, 5, :]\n",
"\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "5083dff7f47bd71f5642e83cacc29d5e",
"grade": true,
"grade_id": "cell-3f022b3782558bc9",
"locked": true,
"points": 2,
"schema_version": 3,
"solution": false
},
"tags": [
"raises-exception",
"remove-output"
]
},
"outputs": [],
"source": [
"''' Tests the above ToDo. '''\n",
"from niedu.tests.nii.week_4 import test_spike_model\n",
"test_spike_model(pred, spike_pred, spike_sig, tval_spike_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **ToThink** (1 point): An eager researcher might think that adding more and more (nuisance) predictors will always increase the amount of variance explained and thus improve their chances of finding significant effects (i.e., higher $t$-values). Argue why this is not the case.\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "26fb5f35f8d1cc0136b5d46cf1da8c15",
"grade": true,
"grade_id": "cell-1682e0bc202bae31",
"locked": false,
"points": 1,
"schema_version": 3,
"solution": true,
"task": false
}
},
"source": [
"YOUR ANSWER HERE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" **Tip!**\n",
" Before handing in your notebooks, we recommend restarting your kernel (*Kernel* → *Restart & Clear Output*) and running all your cells again (manually, or by *Cell* → *Run all*). By running all your cells one by one (from \"top\" to \"bottom\" of the notebook), you may spot potential errors that are caused by accidentally overwriting your variables or running your cells out of order (e.g., defining the variable 'x' in cell 28 which you then use in cell 15).\n",
""
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {
"height": "calc(100% - 180px)",
"left": "10px",
"top": "150px",
"width": "361px"
},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 1
}