Tutorial 7 - Event-Related Data Analyses and Deconvolution
Goals
Understand why event-related designs can be useful
Understand the concepts of detection and estimation
Understand how to combine multiple runs of a single subject in a GLM
Understand event-related averaging
Understand how event-related data can be analyzed with either a classic GLM or a deconvolution GLM
Relevant Lecture
Event-related designs
Accompanying Data
Background
Why Use Event-Related Designs?
Block designs have many advantages. They have the best statistical power of all designs and they are quite resilient to differences between the actual and modelled hemodynamic response function (HRF). As such, block designs are the default approach for fMRI. Block designs are excellent for Detection = finding blobs of activation showing differences between conditions.
However, because block designs just measure the aggregate response to a series of trials, we cannot determine the time course of the response to a single trial. Put another way, block designs are poor at Estimation = extracting the time course for a type of event (trial).
Event-related designs are preferable for some types of experiments:
- We may want to look at responses to unpredictable or uncontrollable trials. For example, a memory study may wish to compare responses to familiar vs. unfamiliar items without having familiarity blocked (and thus entirely predictable by the participant).
- We may want to have a large number of conditions, in which case it becomes difficult to match order history across conditions when there are a limited number of blocks.
- We may want to deconvolve the response to individual trial types, i.e., estimate their time courses rather than just their overall amplitudes.
In class, we also talked about the problem of order history and how actual HRFs in individuals can deviate from the standard HRF model. Combined, uncounterbalanced order history and inaccurate HRF models can lead to incorrect estimates of beta weights. One way to reduce this problem is to counterbalance order history, but counterbalancing is often imperfect and becomes impossible when there are many conditions and long blocks. Thus, event-related designs can be preferable for experiments with many conditions.
One approach is to use Slow Event-Related Designs, in which we have long rest periods (intertrial intervals, ITIs) between trials (e.g., 12-30 s ITIs). The benefit is that we have good Estimation of trial time courses. The cost is that we have poor Detection of activation differences because we have so few events in a run or a session.
Ideally we want both good Detection and good Estimation. Good News! A "Goldilocks" design that gives us the best of both worlds is available: Jittered Rapid Event-Related Designs.
What is a Jittered Rapid Event-Related Design?
- Event-related = examines responses to single trials
- Rapid = trials are spaced close together
- Jittered = trial spacing is not entirely regular
We used such a design for our experimental runs. Recall that we had six trial types. These trial types were placed in pseudo-random order (actually counterbalanced for n-1 trial history) with a default stimulus onset asynchrony (SOA = the time between the start of one trial and the next) of 4 s, with occasional gaps of 8 s. In the actual experiment, we had 6 runs per participant, with each run having a different order of trials.
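For illustration only, here is a minimal sketch of how a jittered trial sequence like this could be generated. The 75/25 split between 4-s and 8-s SOAs and the 8 trials per condition are assumptions for the sketch, and a real design would counterbalance n-1 trial history rather than shuffling randomly:

```python
import random

# Illustrative sketch of a jittered rapid event-related sequence
# (not the actual stimulus-generation code used for the course data).
conditions = ["FL", "FC", "FR", "HL", "HC", "HR"]   # 6 trial types
n_reps = 8                                          # assumed trials per condition per run

trials = conditions * n_reps
random.shuffle(trials)          # real designs also counterbalance n-1 trial history

onsets = []
t = 0.0
for trial in trials:
    onsets.append((t, trial))
    soa = 4.0 if random.random() < 0.75 else 8.0    # default 4-s SOA, occasional 8-s gaps
    t += soa

for onset, trial in onsets[:10]:
    print(f"{onset:6.1f} s  {trial}")
```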
How Are Jittered Rapid Event-Related Designs Analyzed?
- We can analyze event-related data using the same Classic GLM Approach that we used for block-design data. As before, we generate predictor functions by convolving square-wave (boxcar) functions with the default HRF, estimate the beta weights, and compute statistical contrasts between conditions (a toy code sketch of this approach follows this list).
- We can also analyze event-related data using a Deconvolution GLM Approach, which enables a more fine-grained analysis of the BOLD signal. Although this approach also uses predictors, beta weights, and contrasts, the key difference is that instead of using one predictor per condition with an assumed Hemodynamic Response Function (HRF), we use a series of "stick predictors" and thus do not have to assume any specific HRF.
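To make the Classic GLM Approach concrete, here is a minimal sketch in Python with toy onsets, toy noise, and a standard double-gamma HRF. None of this is the actual BrainVoyager implementation; it just illustrates the boxcar-convolved-with-HRF predictor and the least-squares beta fit:

```python
import numpy as np
from scipy.stats import gamma

TR = 1.0                       # s, as in the course data
n_vols = 300                   # hypothetical run length in volumes

def hrf(t):
    """A standard double-gamma HRF approximation."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

t = np.arange(0, 30, TR)
h = hrf(t)

# Boxcar (square-wave) for one condition: 2-s events at hypothetical onsets
onsets = [10, 40, 80, 120, 200]                 # s, illustrative only
boxcar = np.zeros(n_vols)
for on in onsets:
    boxcar[int(on / TR):int((on + 2) / TR)] = 1

predictor = np.convolve(boxcar, h)[:n_vols]     # HRF-convolved predictor

# Design matrix: one column per condition plus a constant (only one condition here)
X = np.column_stack([predictor, np.ones(n_vols)])

# Fake data for the sketch: scaled predictor plus noise
y = 2.5 * predictor + np.random.randn(n_vols)

betas, *_ = np.linalg.lstsq(X, y, rcond=None)   # GLM beta estimates
print("beta (condition), beta (constant):", betas)
```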
Here you will see examples of both approaches to analyzing Event-Related data using our experimental data with 6 runs from 1 Subject (P02). We will first explore the Classic GLM Approach and then the Deconvolution GLM Approach.
1. CLASSIC GLM APPROACH
So far we have only examined single runs from a single participant. However, fMRI analyses typically involve statistically analyzing multiple runs from one participant. This matters for this tutorial on rapid event-related designs because statistical power improves with the number of trials, and data from just one run would be quite noisy. Later, we will go one step further and combine data from multiple participants in group analyses. So let's first learn how to combine data across runs.
You will not be executing all these steps because the processes associated with this approach take up a lot of time. Instead, we will take what Jody calls a "Julia Child" approach: when you watch a cooking show, you don't want to see someone spending 10 minutes dicing onions or an hour waiting for a casserole to bake; you just want to get the gist of the key steps. Here we've done the boring/slow stuff and you can review the key steps to combine data across runs for the same participant.
[In case you need to do this yourself for other data, we unpack the missing steps in italicized square brackets.]
1) Load the VMR associated with the Tutorial (P02_Anat-S1_BRAIN_IIHC_MNI.vmr)
2) Under the Analyses tab, Link the VTC (P02_Exp1-S1R1_3DMCTS_MNI_SD3DVSS6.00mm_LTR_THP3c.vtc)
3) Previously we have been examining a single run from a single subject. If you want a refresher on this step, go to the Analyses tab, choose General Linear Model: Single Study, and click Define Preds.
[At this point, you could save the six-predictor model as a file type called .sdm = single-study (i.e., single-run) design matrix but we have already done this for you. If you were actually doing these steps yourself, you would have to do this step for each of the six runs.]
4) Now let's investigate multiple runs of the same subject. Under the Analyses tab, open GLM - Multi Study - Multi Subject and load the .mdm file. You will see 6 predictors (+ 6 constants, which statistically is like normalizing the baseline and concatenating the runs before fitting betas). DO NOT HIT GO! -- this step takes too long and we've done it for you.
[The .mdm file is a multi-study (i.e., multi-run) design matrix, which must be used any time you have data from more than one run, be it multiple runs within a participant or multiple participants. It simply associates each run's .vtc with the appropriate models or .sdms]
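Conceptually, the multi-run (fixed-effects) fit stacks the runs' predictors and gives each run its own constant. A minimal sketch with hypothetical numbers follows; this is not what the .mdm file literally contains, it just illustrates the "6 preds + 6 constants" idea:

```python
import numpy as np

# Sketch: concatenating runs with one constant per run (fixed-effects, as in the .mdm/.glm).
# X_run is a hypothetical (volumes x 6 predictors) matrix for each run.
n_vols_per_run, n_preds, n_runs = 310, 6, 6
rng = np.random.default_rng(0)
X_runs = [rng.standard_normal((n_vols_per_run, n_preds)) for _ in range(n_runs)]
y_runs = [rng.standard_normal(n_vols_per_run) for _ in range(n_runs)]

# Stack the task predictors, and give each run its own constant column
X_task = np.vstack(X_runs)                                          # (6*310, 6)
constants = np.kron(np.eye(n_runs), np.ones((n_vols_per_run, 1)))   # (6*310, 6)
X = np.hstack([X_task, constants])                                  # 6 preds + 6 constants
y = np.concatenate(y_runs)

betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas[:6])    # the 6 condition betas, fit across all runs at once
```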
5) Go to Overlay General Linear Model in the Analyses tab and load 6runs_Exp1-S1R1_VTC_N-6_FFX_PT_ITHR-100_classic.glm. This model contains beta weights for each condition estimated from the data of all six runs.
6) Assign the contrast based on the values below to compare Faces > Hands:
+FL +FC +FR -HL -HC -HR
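In other words, the contrast is just a weight vector applied to the six condition betas at each voxel. A minimal sketch with hypothetical beta values (the predictor order is assumed to match the contrast above):

```python
import numpy as np

# The Faces > Hands contrast as a weight vector over the 6 condition betas
# (order assumed: FL, FC, FR, HL, HC, HR, matching the contrast above).
c = np.array([+1, +1, +1, -1, -1, -1])

betas = np.array([1.2, 1.0, 1.1, 0.4, 0.5, 0.3])   # hypothetical betas for one voxel
contrast_value = c @ betas                         # positive => Faces > Hands
print(contrast_value)
```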
Let's explore the data from one area, the Left LOTChand. One independent way to define a region is to use a web site called neurosynth.org. This site performs meta-analyses of a huge number of published fMRI studies to find which brain regions are most associated with different topics. We did a neurosynth search for "hands" and picked a sphere (15-mm diameter) centered on the hotspot in LOTC, which we can call LOTChand.
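If you ever needed to build a spherical ROI like this yourself outside the software, a minimal sketch follows. The volume dimensions, voxel size, and hotspot coordinates are all hypothetical; this is not how the .voi file used below was actually made:

```python
import numpy as np

# Sketch: a spherical ROI mask around a hypothetical LOTC hotspot (voxel coordinates).
shape = (80, 96, 80)                    # hypothetical volume dimensions (voxels)
center = np.array([60, 40, 35])         # hypothetical hotspot, in voxel indices
voxel_size_mm = 3.0
radius_mm = 7.5                         # 15-mm diameter sphere, as described above

zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
dist_mm = voxel_size_mm * np.sqrt((zz - center[0])**2 +
                                  (yy - center[1])**2 +
                                  (xx - center[2])**2)
mask = dist_mm <= radius_mm
print("voxels in ROI:", mask.sum())
```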
7) Under the Analyses tab, find the Region-of-Interest Analysis tab and load neurosynth_radius_7mm.voi. Select Left LOTChand... and click Show VOI. Load one of the experimental time courses on the right side and click Show time course.
8) Click the small green box in the lower left of the black time course window to expand your options. Now find the Event-Related Averaging box and use Browse to select NFs-6_NCs-6_Pre-2-Post-16_PSC-1-BL-From--2-To-0_Bars-SEM_Data-VTC.avg.
Event-related averaging computes the average time course across a window of time around each event. Here, the time window we selected began 2 s before trial onset and continued for 16 s after. We set y = 0 to be the average of 18 data points (3 time points, from -2 to 0 s, for each of the 6 curves). You can see that the curves for the 6 conditions overlap reasonably well at baseline, suggesting that we did a decent job of counterbalancing order history. As expected from the HRF, the activation peaks about 4 s after the stimuli appeared, and the activation in our data is higher for Hands (in green) than Faces (in pink), as one would expect for LOTChand. Data from this time window were extracted for each trial of a given condition; for example, the darkest green time course is the average time course for 48 trials (8 trials/run x 6 runs) of the Hand Left condition.
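For reference, here is a minimal sketch of what event-related averaging computes for one condition. The ROI time course and onsets are hypothetical; the window and percent-signal-change baseline match the settings described above:

```python
import numpy as np

# Sketch of event-related averaging for one condition (hypothetical time course and onsets).
TR = 1.0
rng = np.random.default_rng(1)
voi_timecourse = rng.standard_normal(310) + 100     # hypothetical ROI signal for one run
onsets_s = [12, 28, 44, 64, 92, 120, 156, 188]      # hypothetical onsets for one condition

pre, post = 2, 16                                   # window: -2 s to +16 s around onset
segments = []
for on in onsets_s:
    i = int(on / TR)
    seg = voi_timecourse[i - pre : i + post + 1].astype(float)
    baseline = seg[:pre + 1].mean()                 # average of the -2, -1, 0 s points
    seg = 100 * (seg - baseline) / baseline         # percent signal change from baseline
    segments.append(seg)

avg = np.mean(segments, axis=0)                     # the event-related average
sem = np.std(segments, axis=0, ddof=1) / np.sqrt(len(segments))
print(avg.round(2))
```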
Question 1: Why do you see a bump every ~4 s in the event-related average? What is the problem with event-related averages when events are densely packed?
DO NOT CLOSE this window, since we will want to compare it later to the output from the deconvolution approach.
2. DECONVOLUTION GLM APPROACH
Now that we've seen how the Classic GLM Approach can be applied, let's look at the Deconvolution GLM Approach. One key benefit of the deconvolution approach is that we do not have to assume an HRF and thus we don't have to worry about how errors in the HRF can interact with trial history to yield inaccurate beta weights. Rather, we can use the many repetitions of a given trial type to estimate the best-fit time course for that event by fitting a series of stick predictors to the data. A second key benefit of the "decon" approach is that we can do a much better job at Estimation of the time course of an event type.
Before we use deconvolution on our course experimental data (a rapid event-related design with 6 conditions), let's start with a simpler example using localizer data (a block design with two conditions). Previously (in Tutorial 2), you worked with these data using the Classic GLM Approach: you fit the data using two beta weights, one for an HRF-convolved Faces predictor and one for an HRF-convolved Hands predictor. Feel free to revisit the third interactive slider section of Tutorial 2 if you need to refresh your memory.
Now we're going to fit the same data with a deconvolution approach. Instead of 2 predictors, one per condition, plus a constant you'll use 50 predictors (25 per condition), plus a constant. Each of those 50 predictors is a "stick predictor" that estimates the magnitude of activation for a single volume at a particular point in the trial. For example, in this data, in which we sampled the brain every second (TR=1), the 5th predictor for Faces estimates the activation 5 s after the face block began.
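Here is a minimal sketch of what such a stick-predictor (finite impulse response) design matrix looks like for the two-condition localizer. The onsets and run length are hypothetical; the point is 25 sticks per condition at TR = 1 s, plus a constant:

```python
import numpy as np

# Sketch of a deconvolution (stick-predictor) design matrix for the localizer example.
TR = 1.0
n_vols = 300
n_sticks = 25
face_onsets = [20, 80, 140, 200]        # hypothetical block onsets (s)
hand_onsets = [50, 110, 170, 230]

def fir_columns(onsets, n_vols, n_sticks):
    """One column per post-onset volume: column k is 1 at (onset + k) volumes, else 0."""
    X = np.zeros((n_vols, n_sticks))
    for on in onsets:
        i0 = int(on / TR)
        for k in range(n_sticks):
            if i0 + k < n_vols:
                X[i0 + k, k] = 1
    return X

X = np.hstack([fir_columns(face_onsets, n_vols, n_sticks),
               fir_columns(hand_onsets, n_vols, n_sticks),
               np.ones((n_vols, 1))])          # 25 + 25 sticks + constant
print(X.shape)                                 # (300, 51)

# With data y, the 50 stick betas trace out the estimated response shape per condition:
# betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```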
Open and play around with the following interactive animation. (Note: if you have trouble opening the animation, please see our troubleshooting page.) Try to adjust the stick predictors to fit the time course. Note that adjusting a beta sets the same magnitude for that time point within every block. Once you realize this is really hard and boring, let the computer do the work for you: click "Optimize GLM". (Note: as this involves quite a lot of processing, the figure might take a minute or two to finish loading.) Observe how the stick predictors are adjusted to fit each condition. Note: for consistency with Tutorial 2, this is raw (unpreprocessed) data, and the quality of our fit would improve with preprocessing (especially linear trend removal).
Question 2: If you exported the 25 beta weights for Faces and the 25 beta weights for Hands, and plotted them in Excel, what would it look like? HINT: Look at the profile of the sliders after optimization.
Deconvolution on Example Data
Now, you should have a better understanding of how deconvolution works so we can apply it to our experimental runs, which used a jittered rapid event-related design.
1) In a new window/tab, open the same VMR (P02_Anat-S1_BRAIN_IIHC_MNI.vmr) as before and link the same VTC file (P02_Exp1-S1R1_3DMCTS_MNI_SD3DVSS6.00mm_LTR_THP3c.vtc) again.
2) To make deconvolution predictors instead of our regular convolved predictors, open GLM Single-Study (Analysis tab), click Options and choose Deconvolution Design, then Define Predictors. Now inspect these predictors by stepping through them in the Predictors section of the window.
We have 20 predictors/condition x 6 conditions = 120 predictors. Why 20/condition? We know that it takes up to ~20 s for the hemodynamic response to return to baseline after a brief stimulus so we want to model the full time span. Here we collected one volume every 1 s (i.e., TR = 1 s) so 20 predictors/condition is appropriate.
Question 3: If we were to repeat the experiment with a TR = 0.5 s, how many predictors in total (for all conditions) would we want?
We already did this procedure for each run and made a .mdm and .glm that you can now examine.
3) In the Analysis tab, go to Overlay General Linear Model... and load the file 6runs_Exp1-S1R1_VTC_N-6_FFX_PT_ITHR-100_decon.glm.
You will now see many predictors for each condition instead of just one per condition. But how are we going to do a contrast with this many predictors? Instead of contrasting all of these predictors with each other, we will contrast only the "peak responses". With a stimulus that lasts about 2 s, we usually expect the peak to occur around predictors 4, 5, or 6, so we will compare only these predictors across conditions (Faces456 - Hands456). We have already set this contrast up for you in a CTR file.
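For the curious, the F456 - H456 contrast amounts to a weight vector over all 120 deconvolution betas, with nonzero entries only at the peak predictors. A minimal sketch follows; the predictor ordering here is an assumption, so check the actual ordering in the .glm:

```python
import numpy as np

# Sketch: the F456 - H456 contrast as weights over the 120 deconvolution betas.
# Assumes betas are ordered as 20 sticks per condition: FL, FC, FR, HL, HC, HR
# (the actual ordering in the .glm may differ).
n_sticks = 20
conditions = ["FL", "FC", "FR", "HL", "HC", "HR"]
peak_sticks = [4, 5, 6]                 # the "peak" predictors 4, 5, and 6

c = np.zeros(len(conditions) * n_sticks)
for ci, cond in enumerate(conditions):
    sign = +1 if cond.startswith("F") else -1
    for k in peak_sticks:
        c[ci * n_sticks + (k - 1)] = sign   # predictor numbering assumed 1-based

print(int(c.sum()), np.flatnonzero(c))      # 0 net weight; 18 nonzero entries
```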
4) In the open window find Load CTR... and load 6runs_F456-H456.ctr
Compare the contrast from the Classic GLM at the beginning of this tutorial (+F -H) to the contrast from the deconvolution GLM (+F456 -H456) by tiling the two windows, toggling between them, and examining the contrast maps at different coordinates. Make sure to first set both maps to the same statistical threshold (e.g., t > 6): under Analysis / Overlay Volume Maps / Statistics tab, select Use statistic value and change the minimum of the confidence range to 6.
Question 4: Why not do a contrast with all 60 Face predictors set to + and all 60 Hand predictors set to -?
Question 5: How similar do the maps look? What differences can you find? In sum, how well does the deconvolution approach work for detecting activation blobs?
Now let's see what the deconvolved time course looks like...
Now we can estimate the time course in the same region that we used earlier, i.e., LOTChand as defined by the neurosynth search for "hands".
In the VMR window that you used for deconvolution (you can now close the VMR window used for the classic approach)...
5) Go to Analysis / Region-of-Interest Analysis and Load the neurosynth_radius_7mm.voi. Select the Left LOTChand... and Show VOI . In the Options menu under the Access Options tab, click Browse Design Matrix and load the file 6runs_Exp1-S1R1_VTC_N-6_FFX_PT_ITHR-100_decon.mdm.
In the VOI GLM tab, select Left LOTChand and create GLM output (table/plot) for the selected VOI. Now click VOI GLM in the same window and a window will pop up. Press Fit GLM and look at the Event-Related Deconvolution Plot.
Question 6: How does the deconvolution plot look different from event-related average? Why?
Question 7: Does this participant have a default HRF shape? What would be the consequences for this plot and for the differences between the classic and deconvolution maps if they had an individual HRF that was very different from the default HRF shape?
Question 8: What are the pros and cons of deconvolution vs. classic GLM approaches for rapid event-related designs? When is deconvolution most worthwhile?