Posts by: behinger

When the H0 distribution of TFCE is not uniform

I wrote about Threshold-Free Cluster Enhancement (TFCE) before; this time I stumbled upon a weird-looking H0 diagram. Let me explain: if you simulate data without any effect, you expect the distribution of p-values under H0 to be uniform, that is, all p-values are equally likely. Here, I define the p-value as the minimal p-value over time that I get from one whole simulation (1000 permutations per simulated dataset) – I simulated clusters only in time, not space (find the notebook here, raw-jl here). When I did this for 100 repetitions, each applying permutation TFCE and calculating the min-p, I got the following histogram of 100 p-values:

TFCE H0 distribution with integration step of 0.4

That does not look uniform at all! What is going on? It turns out that you can get this kind of “clustering” if your integration step size is too large. Indeed, changing the integration step from 0.4 to 0.1 gives:

TFCE H0 distribution with integration step 0.1

Now it looks much more uniform. I should probably use more repetitions (indeed, in the full simulations I use 10x as many) – but this already took 500 s and I am not prepared to wait longer 😉
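
For intuition, here is a minimal sketch (in Julia, not the code from the notebook) of how TFCE integrates over thresholds. The exponents E = 0.5 and H = 2 are the usual defaults; everything else is made up for illustration. With a coarse step dh, the TFCE scores – and therefore the permutation p-values – can only take a few distinct values, which produces exactly this kind of clumpy histogram:

# Minimal TFCE sketch for a 1D statistic (time only); illustrative, not the notebook code.
function tfce_1d(stat::Vector{<:Real}; dh = 0.1, E = 0.5, H = 2.0)
    score = zeros(length(stat))
    for h in dh:dh:maximum(stat)
        above = stat .>= h                       # supra-threshold mask at height h
        i = 1
        while i <= length(above)                 # walk over contiguous clusters in the mask
            if !above[i]
                i += 1
                continue
            end
            j = i
            while j < length(above) && above[j+1]
                j += 1
            end
            extent = j - i + 1                   # cluster extent (in samples) at this height
            score[i:j] .+= extent^E * h^H * dh   # TFCE contribution of this threshold slice
            i = j + 1
        end
    end
    return score
end

stat = abs.(randn(100))                 # fake test statistic over time
tfce_fine   = tfce_1d(stat; dh = 0.1)
tfce_coarse = tfce_1d(stat; dh = 0.4)   # coarser integration -> fewer distinct TFCE values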

Thanks @Olivier Renauld for this explanation!

Estimating travelling speed using only your eyes

Here is a fun trick (I think) invented by Martin Rolfs and Casimir Ludwig.

You are on a train and would like to know how fast it is going – but you have no phone, GPS, or speedometer.

Person pondering how fast she is going – https://tenor.com/JuRb.gif

Here is how to do it:

  1. Stretch out both arms, thumbs up.
  2. Make eye movements from one thumb to the other; focus on the eye movements going in the direction of travel.
  3. Slowly increase/decrease the distance between your thumbs, effectively changing (in a controlled manner) how large your eye movements are.
  4. Notice the rail sleepers of the nearby track – at some point during (3) your eye velocity will perfectly match the angular speed of the passing track, and you will be able to see the sleepers crisp as day – during an eye movement (take that, saccadic suppression! – but seriously, check out that Wikipedia article; it explains step 3 in more words).
  5. Now measure (somehow) how many thumb-widths apart your two thumbs are and multiply by two (1 thumb-width is approximately 2° of visual angle). For our example, let’s say we measured 7°.
  6. Gauge the distance to the neighbouring track – let’s say 2 m.

7. The final ingredient is the log-log relationship between eye-movement size and eye-movement speed: the main sequence of eye movements. A 7° saccade is ~130°/s fast.

Collewijn et al., 1988

8. Let’s solve for the train’s speed: to keep the image stable, the eye’s angular velocity has to match the angular velocity of the passing track, $\omega = v/d$, so $ v = \omega \cdot d = 130°/s \cdot \frac{\pi}{180°} \cdot 2\,m \approx 4.5\frac{m}{s} \approx 16\frac{km}{h} $
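
If you prefer code over mental arithmetic, here is a minimal sketch of the whole calculation in Julia. The linear main-sequence lookup `saccade_velocity` (scaled so that a 7° saccade gives ~130°/s, as read off the figure above) is a crude assumption for illustration, not a fitted model:

# Rough train-speed estimate from the eye-movement "speedometer" described above.
saccade_velocity(amplitude_deg) = amplitude_deg * (130 / 7)   # °/s; crude linear main-sequence assumption

function train_speed(; thumb_widths = 3.5, deg_per_thumb = 2.0, track_distance_m = 2.0)
    amplitude_deg = thumb_widths * deg_per_thumb     # saccade amplitude between the two thumbs
    ω = deg2rad(saccade_velocity(amplitude_deg))     # matching angular velocity in rad/s
    v = ω * track_distance_m                         # linear speed in m/s
    return v, v * 3.6                                # (m/s, km/h)
end

train_speed()   # ≈ (4.5, 16.3) for a 7° saccade and a track 2 m away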

Now remember or print out the main sequence from Collewijn & you will never not know how fast the train is going 🙂

Vision Demo: Purkinje’s Blue Arc Phenomenon

Purkinje’s Blue Arc is a cool and not well-known visual effect. It consists of illusory blue arcs emanating from a (typically) red stimulus. It has been rediscovered at least half a dozen times over the last 200 years and goes back to Purkinje. The exact physiological reason for the blue arcs is still not known. A detailed write-up of the demonstration with more tips can be found in Moreland 1967.

Modern screen technology makes it much easier for you to experience this effect. Just follow these simple instructions!

Purkinje’s Blue Arc Recipe

https://benediktehinger.de/upload/purkinje.gif
Display me in fullscreen on an OLED screen in a dark room! Look at the red dot with the right eye only.
  1. You need an OLED screen – ~50% of modern mobile phone screens are OLED. To check, go to a dark room and open a completely black image. Is the display pitch black (OLED), or is there backlight coming through (LCD)?
  2. Display this gif in fullscreen – you probably need to download it. No other UI elements should be visible.
  3. Go to a reasonably dark room.
  4. Close the left eye and stare at the red dot.
  5. Purkinje’s Blue Arcs should appear.

It is hard to see the blue arcs if you do not know what to look for; therefore, I added a small visualization:

A rough approximation of the Purkinje Blue Arcs

I can’t see it?!

  • Maybe you flipped your phone (or closed the wrong eye ;)) – be sure that the straight line points to the right.
  • Maybe the room is still too bright. Also be sure that the red color of the stimulus doesn’t brighten up the background of the room.
  • I don’t have an OLED screen: the demonstration should also work with just a red dot – maybe you have one on your stereo? Any bright red LED should work; the effect is smaller but still there. Your mileage on an LCD screen will vary…

What’s going on?

The nerve fibers from the photoreceptors are bundled and leave the retina through the optic disc. These nerve bundles are called the raphe. As visible in the next image, Purkinje’s Blue Arcs follow the raphe perfectly.

The raphe – all fibers pointing towards the optic disc (blind spot)

Why exactly such a bright red stimulus activates the blue ganglion cells is currently not known.

Visualization of deconvolution with Pluto.jl

I just started dabbling with Pluto.jl, and it very quickly allows you to build very insightful notebooks.

For example, take this signal:

x-axis in samples, y-axis in “µV”; blue = 1 EEG channel, orange = event onsets

Clearly, the simulated event responses (the event-related potentials) overlap in time (e.g. at ~sample 350). We could do a “naive” regression on all time points relative to the event onset, ignoring any overlap – or we could use linear deconvolution, a.k.a. overlap correction, to correct for the overlap (as the name says ;)).
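
For intuition, here is a minimal sketch of what linear deconvolution does (purely illustrative – this is not the Unfold.jl implementation, and all names are made up): build a time-expanded design matrix with one FIR column per time lag and solve the joint least-squares problem, so overlapping responses are disentangled:

# Minimal linear deconvolution / overlap-correction sketch:
# one FIR predictor column per time lag around each event, solved jointly by least squares.
function deconvolve(y::Vector{<:Real}, onsets::Vector{Int}, lags::UnitRange{Int})
    X = zeros(length(y), length(lags))
    for onset in onsets, (j, lag) in enumerate(lags)
        t = onset + lag
        1 <= t <= length(y) && (X[t, j] = 1.0)   # overlapping events put several 1s into the same row
    end
    return X \ y                                 # overlap-corrected response estimate, one value per lag
end

# Tiny simulation: two overlapping responses plus one isolated response, plus noise
kernel = [0.0, 1.0, 2.0, 1.0, 0.0]
onsets = [10, 12, 30]                            # events at samples 10 and 12 overlap
y = zeros(60)
for o in onsets, (k, v) in enumerate(kernel)
    y[o + k - 1] += v
end
y .+= 0.1 .* randn(60)

deconvolve(y, onsets, 0:4)                       # ≈ kernel, despite the overlap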

What follows is the beauty of Pluto.jl – simple reactive/interactive notebooks. As shown in the following gif, it is very easy to show how the success of the deconvolution depends on window size and noise:

Mass Univariate analysis on the left, deconvolution on the right

Looks pretty robust for this simulation! Cool!

If you want to try it for yourself: here is the notebook, and here is the link to the Unfold.jl toolbox.

Why Robust Statistics?

For my new EEG course in Stuttgart I spent some time making this gif – I couldn’t find a version online. It shows a simple fact: if you calculate the mean, the breakdown point is 0%. That is, every data point counts, whether it is an outlier or not.
Trimmed or winsorized means instead calculate the mean based on the inner X% of the data (e.g. the inner 80% for a 20% trimmed mean, removing the top and bottom 10% of data points); in the case of winsorizing, the 20% most extreme values are not removed but set to the new remaining limits. Therefore they have a breakdown point of X% too, making them robust to outliers.

Fun fact: a 100% trimmed mean is just the median!

As you can see, increasing the number of outliers has a clear influence on the mean but not on the 20% trimmed mean.

One important point: while outlier removal and robust statistics are sometimes very important, and arguably a better default (compared to the mean), you should also always try to understand where the outliers you remove actually come from.

The source code to generate the animation is here:

using Plots
using Random
using Statistics   # for mean
using StatsBase    # for trim / winsor

# Animation: 5 "outlier" points are shifted back and forth between +3 and +20
anim = @animate  for i ∈ [range(3,20,step=0.5)...,range(20,3,step=-0.5)...]
    Random.seed!(1);

    x = randn(50);
    append!(x,randn(5) .+ i); # add the 5 outliers at offset i

    histogram(x,bins=range(-3,20,step=0.25),ylims=(0,9.),legend=false)

    vline!([mean(x)],linewidth=4.)                 # plain mean – follows the outliers
    vline!([mean(trim(x,prop=0.2))],linewidth=4.)  # 20% trimmed mean – barely moves
    #vline!([mean(winsor(x,prop=0.2))])            # same as the trimmed mean in this example

end
gif(anim, "outlier_animation.gif", fps = 4)

Thesis Art Karolis Degutis

The idea of thesis art is to inspire discussion with people who do not have an academic background or who work in a different field. Each student who finishes their thesis with me receives a poster print of this piece. One copy for them, one for me.

The thesis is hidden in a drawer, but the poster is out there on the wall for everyone to see. You can find all past thesis art pieces here.

In his project, Karolis Degutis (@karolisdegutis) tried to replicate two laminar fMRI effects, not at high-field 7T but at 3T. Unfortunately, we failed to replicate these effects – on the one hand, we had to stop acquisition early due to COVID-19; on the other hand, we found anecdotal evidence in favour of H0.

Karolis made use of laminar fMRI, and accordingly, in this thesis art, I used a layerified horizontal slice of brain (BigBrain). The layers are made up entirely of the words of his thesis – overall, ~55,000 characters were used. This was the first time I completed a thesis art in Julia. It was a blast! Not only could I easily extract all the PDF text, but I also used a nice library to solve a large travelling-salesman problem. Finally, using Makie.jl, plotting that many characters took only 0.5 s – and it did not crash at all (compared to my experience with MATLAB/ggplot).
You can find the Julia code here.

New lab in Stuttgart

I will be starting a new lab on Computational Cognitive Science next month at the University of Stuttgart. I will be working on the connection of EEG and eye-tracking, statistics, and methods development. The group is attached to SimTech and VIS in Stuttgart.

SimTech Building, three brains (not to scale) are arranged in the ponzo illusion
SimTech Building in Stuttgart

EEG recording chamber

I recently asked on Twitter whether people could recommend recording chambers to seat subjects in psychological experiments. I had a tough time googling it; terms that could be helpful in case you are searching for the same thing: testing chamber, subject booth, audiology.

I got a lot of answers and for the sake of “google-ability” will summarize them here:

  • Steve Luck recommends a separate chamber but highlights the importance of air conditioning due to sweating artefacts.
  • Aina Puce recommends no chamber, but instead sitting 2–3 m behind the subject and using white-noise generators.

Regarding actual chambers, several commercial vendors were thrown into the ring:

  • Studiobricks*
  • Whisper Rooms*
  • Desone
  • Eckel
  • IAC Acoustics

* No Faraday cage directly available, as far as I know – but check this tweet for a custom solution.

I haven’t asked all vendors for a price estimate, but as far as I can tell, with climate control and lighting, a ~4 m² room costs around 8.000€–12.000€ without a Faraday cage. With a cage, I would guesstimate an additional 10.000–15.000€, but I actually don’t really know.

Comparing manual and atlas-based retinotopies: my journey through fMRI surface land

PS: For this project I moved from EEG to fMRI, and in this post I will sometimes explain terms that might be very basic to fMRI people, but maybe not to EEG people.

I want to investigate cortical area V1, but I don’t want to spend time on retinotopy during my recording session. Thus, I looked a bit into automatic methods to estimate it from segmented brains (segmenting = splitting the voxel MRI into white matter/gray matter, extracting 3D surfaces, and also inflating them). I used the freesurfer/label/lh.V1 labels and the neuropythy/Benson et al. tools. The manual retinotopy was performed by Sam Lawrence using mrVista. And here the trouble begins:

The manual retinotopy was available only as a volume (a voxel file – maybe due to my completely lacking mrVista skills; I should look into whether I can extract the mrVista mesh files somehow), while the other outputs I have as FreeSurfer vertex values, ready to be plotted on the different surfaces FreeSurfer calculated (e.g. white matter, pial (gray matter), inflated). Thus, I had to map the volume to the surface. Sounds easy – something that should be straightforward – or so I thought.

After a lot of trial & error and bugging colleagues at the Donders, I settled on the nipype call to mri_vol2surf from FreeSurfer. But it took me a long time to figure out what the options actually mean. This answer by Doug Greve was helpful (the answer is 12 years old; nobody added it to the help :() (see also this answer):

It should be in the help (reprinted below). Smaller delta is better
but takes longer. With big functional voxels, I would not agonize too
much over making delta real small as you'll just hit the same voxel
multiple times. .25 is probably sufficient.

doug

   --projfrac-avg min max delta
   --projdist-avg min max delta

     Same idea as --projfrac and --projdist, but sample at each of the
     points    between min and max at a spacing of delta. The samples are then
     averaged    together. The idea here is to average along the normal.

The problem is that you have to map each vertex to a voxel. So, in this approach, you take the normal vector of the surface (e.g. of the white-matter surface), check where it hits the gray matter, sample ‘delta’ steps between WM (min) and GM (max), and check which voxels are closest to these steps. The average value of those voxels is then assigned to the vertex.
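
To make this concrete, here is a minimal sketch of that sampling scheme in Julia (purely illustrative – nothing from FreeSurfer’s actual code; the nearest-voxel lookup and all variable names are made up):

# Illustrative --projfrac-avg style sampling: walk along the vertex normal from the
# white-matter surface towards the pial surface, look up the nearest voxel at each
# step, and average the sampled values.
using Statistics: mean

# vertex, normal: 3-vectors in voxel coordinates; thickness: WM-to-pial distance at this vertex
function sample_vertex(vol::Array{<:Real,3}, vertex, normal, thickness;
                       projmin = 0.0, projmax = 1.0, delta = 0.25)
    vals = Float64[]
    for frac in projmin:delta:projmax
        p = vertex .+ frac .* thickness .* normal              # point at this cortical-depth fraction
        ijk = clamp.(round.(Int, p), 1, collect(size(vol)))    # nearest voxel, clamped to the volume
        push!(vals, vol[ijk...])
    end
    return mean(vals)                                          # average along the normal -> vertex value
end

vol = rand(64, 64, 64)                                         # fake functional/label volume
sample_vertex(vol, [20.0, 30.0, 25.0], [0.0, 0.0, 1.0], 3.0)   # one vertex, 3 mm thick cortex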

I will first show a ‘successful’ subject before I dive into some troubles along the way.

red=freesurfer label, orange = benson label restricted to <10deg visual angle, purple = manual based on 10deg retinotopy data

Overall a good match, I would say: generally, Benson and FreeSurfer align well (reasonable), while the manual retinotopy is larger in most subjects. This might also be due to the projection method (see below).

Initially, I tried the projection without smoothing; see the results below. I then changed to smoothing with a 5 mm kernel and subsequent thresholding (there is probably a smarter way).

Without smoothing
With 5mm smoothing (red=freesurfer label, orange = benson label, purple = manual)

It is pretty clear that in this example the fit between the manual and the automatic tools is not very good. My trouble now is that I don’t know whether this is due to an actual difference or due to the projection.

Next steps would be to double-check everything in voxel land, i.e. project the surface labels back to voxels and investigate the voxel-by-voxel ROIs.

EEG/ERP rounding event latencies

22.10.2019 Edit: Thanks to Matt Craddock, I understand the source of the problem better. He mentioned that this should not occur if the amplifiers record the triggers as trigger channels (before converting them to events), and that it could happen through downsampling. Indeed, after checking, the dataset I used had been downsampled from 1024 to 512 Hz. This made many event latencies ~X.50001, which are rounded up by round and down by floor. This gives some context to the problem.
Full discussion on twitter

TL;DR: EEGLAB allows non-integer event latencies (in units of samples). EEGLAB chose floor(latency), while others, e.g. unfold and FieldTrip, choose round(latency) to round the latency to samples. This leads to differences between toolboxes – in my example of up to 1.5 µV (or ~25% of the ERP magnitude). Importantly, this probably does not introduce bias between conditions.

This is an ERP – actually, it’s two ERPs. One is calculated using the unfold toolbox and one using EEGLAB’s pop_epoch function.

Elec: PO8, average reference, 1280 trials, 512Hz, very typical experiment with single stimulus presentation, no task

If you look very closely, you can see that they are not identical, even though they should be. So – what’s the difference?

It turns out that EEGLAB saves event latencies in samples (e.g. the stimulus starts at sample 213), but also allows non-integer latencies (e.g. the stimulus starts between samples 212 and 213 – to be exact, at sample 212.7). This makes sense: if your EEG sampling rate is 100 Hz, you might know your stimulus onset with higher precision than 10 ms bins. But in order to get ERPs, we have to “cut” the signal at the event onset. EEGLAB uses “floor” for this and rounds the stimulus onset from 212.7 down to sample 212. Other toolboxes (unfold/FieldTrip) use “round”; thus the event would start at sample 213.

It turns out that in the example you see above, this introduces a difference between the two ERPs of 0.5 µV (!) – that’s around 8% of the magnitude.

difference “floor” vs “round” for 512Hz data

This is just a random example I stumbled upon. With lower sampling rates, this effect should increase. Indeed, downsampling to 128 Hz gives us a whopping difference of 1.5µV.

Difference “floor”-“round” for 128Hz sampling rate
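
To get some intuition for why lower sampling rates make this worse, here is a toy sketch with a made-up Gaussian “ERP” (nothing to do with the real data above): a one-sample misalignment corresponds to a larger time shift at 128 Hz than at 512 Hz, and therefore to a larger amplitude difference:

# Toy illustration: the amplitude error caused by a one-sample shift grows as the
# sampling rate drops, because the waveform changes more between neighbouring samples.
erp(t) = 5 * exp(-((t - 0.15) / 0.05)^2)       # made-up "ERP": 5 µV peak at 150 ms

function max_shift_error(sfreq)
    t = 0:1/sfreq:0.5
    x_a = erp.(t)                              # epoch cut at one latency
    x_b = erp.(t .+ 1/sfreq)                   # same epoch, shifted by one sample
    return maximum(abs.(x_a .- x_b))           # worst-case difference in µV
end

max_shift_error(512)   # ≈ 0.17 µV
max_shift_error(128)   # ≈ 0.67 µV – roughly 4x larger, qualitatively matching the observation above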

Floor vs. round (vs. others?)

The benefit of floor – at least the one I can think of – is that it never shifts the onset of a stimulus into the future. That is, it is causal. Possibly there are other benefits I am not aware of.

The benefit of round is that it more accurately reflects the actual stimulus onset. Possibly there are other benefits I am not aware of.

Given that we mostly use acausal filtering anyway, I think the causal benefit is not very strong.

There is yet another alternative: a weighted average between samples. We could “split” the event onset over two samples, i.e. if we want the instantaneous stimulus-onset response and the stimulus onset is at sample 12.3, then sample 12 should be weighted by 70% and sample 13 by 30% (proportional to proximity). I have to explore this idea a bit more, but I think it is very easy to implement in unfold and test. But that is for a new blog post.
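
A minimal sketch of that idea in Julia (purely illustrative – this does not exist in unfold; names are made up): distribute each fractional onset onto its two neighbouring samples by linear interpolation before building the design matrix:

# Split each fractional event latency onto the two neighbouring samples,
# weighted by proximity (linear interpolation), instead of using floor() or round().
function split_onsets(latencies::Vector{<:Real})
    samples = Int[]
    weights = Float64[]
    for lat in latencies
        lo = floor(Int, lat)
        w_hi = lat - lo                                   # fractional part
        push!(samples, lo);     push!(weights, 1 - w_hi)  # the sample below gets weight 1 - frac
        push!(samples, lo + 1); push!(weights, w_hi)      # the sample above gets weight frac
    end
    return samples, weights
end

split_onsets([212.7, 12.3])
# -> samples [212, 213, 12, 13], weights [0.3, 0.7, 0.7, 0.3]
# these weights would then enter the FIR design matrix instead of plain 1s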

The big picture

In the fMRI community, there are papers from time to time reporting that different analysis tools (or versions) lead to different results. I am not aware of any such paper in the EEG community (if you know one, please let me know!), but I think it would be nice if somebody did such comparisons.

I currently cannot foresee whether such an event-latency-rounding difference could introduce bias into condition differences. But I do foresee that changing it will be difficult for the EEGLAB developers, as “floor” has been around for a very long time in EEGLAB.

Code

Note that I did not use a simulation here (but could have; it should be straightforward), and I cannot publicly share the data at this point in time.

load('EEG_subj8_inkl_ufresult.mat')
%%
%EEG = pop_resample(EEG,128);
timelimits = [-.3 .5];
EEG = uf_designmat(EEG,'eventtypes','stimulus','formula','y~1');
%deconv based "round" analysis
EEG = uf_timeexpandDesignmat(EEG,'timelimits',timelimits);
EEG = uf_glmfit(EEG,'channel',63);

%eeglab based "floor" analysis
EEGe = pop_epoch(EEG,{'stimulus'},timelimits);

ufresult = uf_condense(EEGe);
%% ERP plot
figure
plot(ufresult.times,mean(EEGe.data(63,:,:),3)) %floor
hold on
plot(ufresult.times,ufresult.beta(63,:,1)) %round
xlim(timelimits)
xlabel('time [s]')
ylabel('ERP [µV]')
box off
export_fig erp.png -m3 -transparent
%% difference plot
figure
title('Diff "floor"-"round"')
plot(ufresult.times,mean(EEGe.data(63,:,:),3)-ufresult.beta(63,:,1)) %floor-round
xlim(timelimits)
xlabel('time [s]')
ylabel('ERP [µV]')
box off
export_fig diff.png -m3 -transparent