Welcome! Have a look at my research!

Thesis Art: Judith Schepers

I was a supervisor for Judith Schepers' Bachelor's thesis.


In this thesis art, I visualized the guided-bubble paradigm used in a recent publication in the Journal of Vision. Judith generalized the paradigm to more than five bubbles; therefore, many more bubbles are visible in the thesis art.

The idea of "thesis art" is to inspire discussion with people who do not have an academic background or who work in a different field. The thesis is hidden in a drawer, but the poster is out there on the wall for everyone to see. You can find all past thesis art pieces here

New Paper: The temporal dynamics of eye movements as an exploitation-exploration dilemma.

We just published a new paper in the Journal of Vision

The temporal dynamics of eye movements as an exploitation-exploration dilemma
Ehinger, Kaufhold & König, 2018

The highlights:

  • We frame eye movements as a decision process between exploiting the current view and exploring more of the scene
  • We use gaze-contingent eye-tracking to control the when and where of eye movements
  • We find large effects of fixation duration on the reaction time to continue exploring
  • We find large effects of the number of possible future target locations (Hick's effect)

Check out the paper at the Journal of Vision
doi:10.1167/18.3.6

Ubuntu 16+: Recover ctrl+alt+bksp to restart X server

Often when developing with Psychtoolbox or PsychoPy/OpenSesame, your program crashes, leaving a full-screen window open that you cannot click away from. I then try to Alt+Tab and execute "sca" (screen close all) in the MATLAB console, with mixed success. Sometimes restarting the computer is the last option. Instead of restarting, a useful shortcut in older Ubuntu versions was: Ctrl + Alt + Backspace => restart the X server (=> restart the GUI).

To activate this again, use:

setxkbmap -option terminate:ctrl_alt_bksp

source on askubuntu
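Note that `setxkbmap` only applies to the current session. A hedged sketch of making it persistent: on Debian/Ubuntu the option can usually be added to `/etc/default/keyboard` (assumption: your system uses this file; check before editing):

```shell
# Temporary (current session only):
setxkbmap -option terminate:ctrl_alt_bksp

# Persistent (Debian/Ubuntu): add the option to /etc/default/keyboard, e.g.
#   XKBOPTIONS="terminate:ctrl_alt_bksp"
# then apply the configuration with:
sudo dpkg-reconfigure keyboard-configuration
```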

 

 

PS: I wrote this blogpost because I looked up this thing multiple times – now I know where to look šŸ˜‰

Stretching the axes: visualizing non-linear linear regression

 

From time to time I explain concepts and ideas to my students.

Background

Often this pops up in a statistical context, when one has a non-linear dependency between the to-be-predicted variable and the predictor variables. By transforming the predictors, relationships can be made linear, i.e. logarithmic (exponential, quadratic, etc.) relationships can be modeled by a **linear** model.

The idea

I have a very visual understanding of basis functions / non-linear transformations of variables in terms of stretching / condensing the basis (the x-axis here). This can also be applied to the generalized linear model (here: logistic regression).

Imagine that the x-axis of a plot is made of some kind of elastic material that you can stretch and condense. Of course you do not need to stretch every part equally: one example would be to stretch parts that are far away from zero exponentially more than parts that are close to zero. If you had an exponential relationship ($y = e^x$), then $y$ would now lie on a straight line.

TLDR;

Imagine you have a non-linear relationship; by stretching the x-axis in accordance with that non-linear relationship, you end up with a linear relationship.

An exemplary non-linear relationship:

We want to fit $y = b_0 + b_1x$, but obviously a straight line does not fit well. We can do something called polynomial expansion, i.e. add more predictors which are simple transformations of the predictor $x$: $y = b_0 + b_1x + b_2x^2 + b_3x^3$
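The polynomial expansion can be sketched in a few lines of MATLAB (the coefficients and noise level are made up for illustration):

```matlab
% Fit a cubic with an ordinary *linear* model by expanding the predictor
x = linspace(-2,2,100)';
y = 1 + 2*x - 0.5*x.^2 + 0.3*x.^3 + randn(100,1)*0.2; % simulated data

X = [ones(size(x)) x x.^2 x.^3]; % design matrix: each column is a basis function
b = X\y;                         % least-squares fit; the model is linear in b
yhat = X*b;                      % fitted curve
```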

The trick comes here: we can interpret the new $x^3$ basis function as a stretching of the x-axis, i.e. the further we move out on the x-axis, the more we need to stretch the parts (by exactly $x^3$).

This can be shown also for other functions:

Exponential

Logarithmic

Note that the logarithm is not defined for negative numbers

Quadratic

Note how the stretching can be negative, i.e. the original negative values are stretched/transformed to positive values

Using the trick on the y-axis

One can interpret **logistic regression** with the same trick:
$$ g^{-1}(y) = b_0 + b_1x \Leftrightarrow y = g(b_0 + b_1x) $$
with $g$ the logistic function (inverse logit) and $g^{-1}$ its inverse, the logit:
$$ g^{-1}(p) = \ln\frac{p}{1-p} \Leftrightarrow g(x) = \frac{1}{1+e^{-x}} $$

Usually we would have some non-linear dependency on a probability of, e.g., success. That means, with a low value of $x$, the chance of success is low. To model this kind of data, one can transform the y-axis using $g^{-1}$ above.
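A minimal sketch of the y-axis version (coefficients made up for illustration): applying the logit $g^{-1}$ to a logistic curve recovers the straight line:

```matlab
b0 = -1; b1 = 3;                 % made-up coefficients
x = linspace(-3,3,100);
p = 1./(1+exp(-(b0+b1*x)));      % y = g(b0+b1*x): an S-shaped curve in [0,1]
logit_p = log(p./(1-p));         % g^-1(y): exactly b0+b1*x, a straight line
```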

Working remote – X11 Forward, Putty, Windows, Gateway

Sometimes I need MATLAB/RStudio/Spyder with access to the university network. One way is to run MATLAB/RStudio/Spyder on the university computers, but get the X (= graphics) display on my local Windows machine.

Because there is a gateway in between, I first need to tunnel through the gateway to a university computer, then use a second PuTTY session to ssh through the tunnel directly to the target computer.

These are the steps I need to do:

– PuTTY: ssh to gateway.university:22; go to SSH → Tunnels and set source port: 2222 (this is the local port your second session will target), destination: remote-pc-that-runs-matlab:22

– PuTTY again: ssh to localhost:2222 with X11 forwarding enabled and "Xming" installed

 

And that's it: working (but sometimes slow) remote X11 forwarding. For the future I want to check out rdb to remotely control the session. This could be quite useful in many cases because my programs are usually running anyway šŸ™‚
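For reference, the same two-hop setup can be sketched with plain OpenSSH (the hostnames come from the steps above; `user` is a placeholder):

```shell
# Step 1: open a tunnel through the gateway;
# local port 2222 forwards to the target machine's SSH port
ssh -L 2222:remote-pc-that-runs-matlab:22 user@gateway.university

# Step 2 (in a second terminal): go through the tunnel with X11 forwarding
ssh -X -p 2222 user@localhost
```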

[matlab] performance for-loops vs. vectorization vs. bsxfun

From time to time I explain certain concepts to my students. To archive those, and as an extended memory, I share them here. We also recently had some discussion on vectorization in our research group, e.g. in Python and MATLAB, with the second link claiming that for-loops in MATLAB perform much better than they used to.

 

Goal

Show that for-loops are still quite slow in MATLAB. Compare bsxfun against vectorized arithmetic expansion.

The contenders

  • good old for-loop: easy to understand, can be found everywhere, slow
  • arithmetic expansion: medium difficulty, should generally be used, fast
  • bsxfun: somewhat difficult to understand, I use it regularly, fast (often)

Comparisons

While demonstrating this to my student, I noticed that subsetting an array has interesting effects on the performance differences. The same is true for different array sizes. Therefore, I decided to systematically compare those.

I subtract one row from either a subset (first 50 rows, dashed line) or all rows of an [n x m] matrix with n = [100, 1000, 10 000] and m = [10, 100, 1000, 10 000]. Mean ± SE.

Three take home messages:

  • the for-loop is very slow
  • vectorization is fastest for a small first dimension, then equally fast as bsxfun
  • bsxfun is fastest if one needs to subset a medium-sized array (n x m > 100 x 1000), but see the update below!

 

Update:

Prompted by Anne Urai, I redid the analysis with multiplication & division. The pattern is the same. I did notice that allocating new matrices before doing the arithmetic expansion (vectorization) results in the same behaviour as bsxfun (but takes more lines of code):

A = data(ix,:); % allocate the subsets first ...
B = data(1,:);
x = A./B;       % ... then expand

 

matlab code

tAll = [];
for dim1 = [100 1000 10000]
    for dim2 = [10 100 1000 10000]
        tStart = tic;
        for subset = [0 1]
            if subset
                ix = 1:50;
            else
                ix = 1:dim1;
            end
            for run = 1:10
                data = rand(dim1,dim2);
                
                % for-loop
                x = data;
                tic
                for k= 1:size(data,2)
                    x(ix,k) = data(ix,k)-data(1,k);
                end
                t = toc;
                tAll = [tAll; table(dim1,dim2,subset,{'for-loop'},t)];
                %vectorized
                tic
                x = data(ix,:)-data(1,:);
                t = toc;
                tAll = [tAll; table(dim1,dim2,subset,{'vectorization'},t)];
                % bsxfun
                
                tic
                x= bsxfun(@minus,data(ix,:),data(1,:));
                t = toc;
                tAll = [tAll; table(dim1,dim2,subset,{'bsxfun'},t)];  
            end
        end
        fprintf('finished dim1=%i,dim2=%i - took me %.2fs\n',dim1,dim2,toc(tStart))
    end
end

% Plotting using the awesome GRAMM-toolbox
% https://github.com/piermorel/gramm
figure
g = gramm('x',log10(tAll.dim2),'y',log10(tAll.t),'color',tAll.Var4,'linestyle',tAll.subset);
g.facet_grid([],tAll.dim1)
g.stat_summary()
g.set_names('x','log10(second dimension [n x *M*])','y','log10(time) [log10(s)]','column','first dimension [ *N* x m]','linestyle','subset 1:50?')
g.draw()

 

Scientific Poster Templates

I got asked about the design of my academic posters. Indeed, I have templates in landscape and portrait and I'm happy to share them. In addition, I can recommend the blog Better Posters, which regularly has features and link roundups on poster-design related things.

In my newest poster (landscape, below) I tried to move as much text as possible to the side, so that people can still understand the poster but the text does not obscure the content. I also really like the 15 s summary: an easy way to see whether you will like the poster, or whether you can simply move on. Maybe it even needs to be a 5 s summary!

These are two example posters based on my template.

Neat Features

Titles’ backgrounds follow along
This is useful because you do not need to manually resize the white background of the text that overlays the borders.

Borders are effects, easy resizing
round corner resizing
The corners are based on Illustrator effects, thus resizing the containers does not change the curvature. Before, I often had very strange curvatures in my boxes. No more!

 

Download here

Portrait Equal Columns (ai-template, 0.3mb)

Portrait Unequal Columns (ai-template, 0.3mb)

Landscape (ai-template, 0.4mb)

The licence is CC-4.0; you can acknowledge me if you want, but there is no need if you don't šŸ™‚

Layman Paper Summary: Humans treat unreliable filled-in percepts as more real than veridical ones

We recently published an article (free to read): "Humans treat unreliable filled-in percepts as more real than veridical ones". Inspired by Selim Onat and many others, I try to explain the main findings in plain language. First, let me give you some background:

To make sense of the world around us, we must combine information from multiple sources while taking into account how reliable they are. When crossing the street, for example, we usually rely more on input from our eyes than our ears. However, we can reassess our reliability estimate: on a foggy day with poor visibility, we might prioritize listening for traffic instead.

The human blind spots

But how do we assess the reliability of information generated within the brain itself? We are able to see because the brain constructs an image based on the patterns of activity of light-sensitive proteins in a part of the eye called the retina. However, there is a point on the retina where the presence of the optic nerve leaves no space for light-sensitive receptors. This means there is a corresponding point in our visual field where the brain receives no visual input from the outside world. To prevent us from perceiving this gap, known as the visual blind spot, the brain fills in the blank space based on the contents of the surrounding areas. While this is usually accurate enough, it means that our perception in the blind spot is objectively unreliable.
You can try it out by using this simple test (click the image to enlarge)

Keep your eyes fixed on the cross in (a). Close the left eye. Depending on the size & resolution of your screen, move your head slowly closer to (or sometimes further away from) the screen while looking at the cross. The dot in (a) should vanish. You can then try the same with the stimulus we used in this study (b). The small inset should vanish and you should perceive a continuous stimulus.

What we wanted to find out

To find out whether we are aware of the unreliable nature of stimuli in the blind spot, we presented volunteers with two striped stimuli, one on each side of the screen. The centers of some of the stimuli were covered by a patch that broke up the stripes. The volunteers' task was to select the stimulus with uninterrupted stripes. The key to the experiment is that if the central patch appears in the blind spot, the brain will fill in the stripes so that they appear to be continuous. This means that the volunteers have to choose between two stimuli that both appear to have continuous stripes.

A study participant chooses between two striped visual images, one ‘real’ and one inset in the blind spot, displayed using shutter glasses (CC-BY 4.0 Ricardo Gameiro)

 

The experimental setup. Only the case where the left stimulus is in the blind spot is shown here.

What we thought we would find

If subjects have no awareness of their blind spot, we might expect them to simply guess. Alternatively, if they are subconsciously aware that the stimulus in the blind spot is unreliable, they should choose the other one.

In reality, exactly the opposite happened:

The volunteers chose the blind spot stimulus more often than not. This suggests that information generated by the brain itself is sometimes treated as more reliable than sensory information from the outside world. Future experiments should examine whether the tendency to favor information generated within the brain over external sensory inputs is unique to the visual blind spot, or whether it also occurs elsewhere.

The results of the first experiment. Four subsequent experiments confirmed this finding.

 

Sources

All images are released under CC-BY 4.0.

Cite as: Ehinger et al., "Humans treat unreliable filled-in percepts as more real than veridical ones", eLife, doi: 10.7554/eLife.21761

 

EEGlab: Gracefully overwrite the default colormap

EEGlab has 'jet' as the default colormap, but jet is pretty terrible:

https://www.reddit.com/r/matlab/comments/1jqk8t/you_should_never_use_the_default_colors_in_matlab/

 

You see structure where there is none (e.g. rings in the third example).

 

The problem:

EEGlab sets the default colormap to 'jet', thus overwriting a system-wide default set, e.g., by


set(0,'DefaultFigureColormap',parula);

It does so by calling icadefs.m in various functions (e.g. topoplot, erpimage) and defining:


DEFAULT_COLORMAP = 'jet'

We want to overwrite this one line but keep it forward compatible, i.e. we do not want to copy the whole icadefs file, but just replace the single line whenever icadefs is called.

Solutions

Overwrite the line in the default icadefs.m

This has the benefit that it will always work, irrespective of your path ordering. The con: you will lose the change if you switch eeglab versions or update eeglab.

Change/create your eeglab eeg_options.txt.

This has the benefit that it will carry over to the next version of eeglab, but it is an extra file you need to keep somewhere completely different from your project folder (your user folder, ~/eeg_options.txt). It is thereby hard to make self-contained code.

Make a new icadefs.m

Make a file called icadefs.m (this script will be called instead of the eeglab icadefs) and add the following code:


run([fileparts(which('eegrej')) filesep 'icadefs.m']);
DEFAULT_COLORMAP = 'parula';

This will call the original icadefs.m (which lives in the same folder as eegrej.m) and then overwrite the eeglab default.

 

Important: The folder containing your icadefs.m must be above eeglab in your path.

Try this: edit('icadefs.m') to see which file comes up. If the eeglab one comes up, you have a path conflict: your own icadefs.m has to be above the eeglab one.

In my project_init.m where I add all paths, I make sure that eeglab is started before adding the path to the new icadefs.m
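A quick sanity check (a sketch using MATLAB's built-in `which`): the `-all` flag lists every copy on the path, so you can verify the shadowing order after startup:

```matlab
% List all icadefs.m files on the path; your own copy should be listed first
which('icadefs','-all')
```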

 

Examples:

ICA – Topoplots of a single subject

A single component of an IC decomposition that included noisy data portions (and thus, I would say, is not usable)

Simple Filter Generation

I sometimes explain concepts to my students; then I forget the code and the next time round I have to redo it. Take this post as extended memory. In this case I showed what filter-ringing artefacts can look like. This is especially important for EEG preprocessing, where filtering is a standard procedure.

A good introduction to filtering can be found in these slides by Andreas Widmann or this paper by Andreas Widmann

Impulse with noise

I simulated a simple impulse with some additive noise. The idea is to show the student that big spikes in the EEG data can result in filter ringing that is quite substantial and extends far away from the spike.

The filter

This is the filter I generated. It is a BAD filter: it shows huge passband ripples. But for educational purposes it suits me nicely. I usually explain what passband, transition band, stopband, ripples and attenuation mean.

The code

fs = 1000;
T = 2;
time = 0:1/fs:T-1/fs;

data = zeros(length(time),1);
% data(end/2:end) = data(end/2:end) + 1;
data(end/2) = data(end/2) + 1;
data = data + rand(size(data))*0.02;

subplot(2,1,1)
plot(data)

filtLow = designfilt('lowpassfir','PassbandFrequency',100, ...
'StopbandFrequency',110,'PassbandRipple',0.5, ...
'StopbandAttenuation',65,'SampleRate',1000);

subplot(2,1,2)

% 0-padding to get the borders right
data = padarray(data,round(filtLow.filtord));

% Filter twice, forwards and backwards, to cancel the phase shift (non-causal)
a = filter(filtLow,data);
b = filter(filtLow,a(end:-1:1));
b = b(end:-1:1); % flip back to restore the original time direction
b = b(round(filtLow.filtord)+1:end - round(filtLow.filtord));
plot(b)

fvtool(filtLow) % to look at the filter
