{"id":409,"date":"2020-05-08T15:06:44","date_gmt":"2020-05-08T13:06:44","guid":{"rendered":"http:\/\/benediktehinger.de\/blog\/science\/?p=409"},"modified":"2020-05-08T15:06:44","modified_gmt":"2020-05-08T13:06:44","slug":"comparing-manual-and-atlas-based-retinotopies-my-journey-through-fmri-surface-land","status":"publish","type":"post","link":"https:\/\/benediktehinger.de\/blog\/science\/comparing-manual-and-atlas-based-retinotopies-my-journey-through-fmri-surface-land\/","title":{"rendered":"Comparing Manual and Atlas-based retinotopies; my journey through fMRI-surface-land"},"content":{"rendered":"\n<p>PS: For this project I moved from EEG to fMRI, and in this post I will sometimes explain terms that might be very basic to fMRI people, but maybe not for EEG people.<\/p>\n\n\n\n<p>I want to investigate cortical area V1. But I don&#8217;t want to spend time on retinotopy during my recording session. Thus I looked a bit into automatic methods to estimate it from segmented brains (segmenting = splitting into white matter\/gray matter, extracting 3D surfaces from the voxel MRI, and inflating them). I used the freesurfer\/label\/lh.V1 labels and the neuropythy\/Benson et al. tools [zotpressInText item=&#8221;{4784278:P2Y9DRY4}&#8221;]. The manual retinotopy was performed by Sam Lawrence using MrVista. And here the trouble begins:<\/p>\n\n\n\n<p>The manual retinotopy was available only as a volume (a voxel file; maybe due to my completely lacking mrVista skills &#8211; I should look into whether I can extract the mrVista mesh files somehow), while the other outputs I have as freesurfer vertex values, ready to be plotted against the different surfaces freesurfer calculated (e.g. white matter, pial (gray matter), inflated). Thus I had to map the volume to the surface. Sounds easy, something straightforward &#8211; or so I thought.<\/p>\n\n\n\n<p>After a lot of trial &amp; error and bugging colleagues at the Donders, I settled on the nipype call to mri_vol2surf from freesurfer. 
But it took me a long time to figure out what the options actually mean. This <a href=\"https:\/\/webcache.googleusercontent.com\/search?q=cache:eRf0TgjxuWsJ:https:\/\/mail.nmr.mgh.harvard.edu\/pipermail\/freesurfer\/2007-July\/005607.html+&amp;cd=1&amp;hl=de&amp;ct=clnk&amp;gl=nl&amp;client=firefox-b-d\">answer<\/a> by <strong>Doug Greve<\/strong> was helpful (the answer is 12 years old; nobody ever added it to the help :() (see also <a href=\"https:\/\/webcache.googleusercontent.com\/search?q=cache:HjNZeXLRG_oJ:https:\/\/www.mail-archive.com\/freesurfer%40nmr.mgh.harvard.edu\/msg33455.html+&amp;cd=8&amp;hl=de&amp;ct=clnk&amp;gl=nl&amp;client=firefox-b-d\">this answer<\/a>):  <\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">It should be in the help (reprinted below). Smaller delta is better\nbut takes longer. With big functional voxels, I would not agonize too\nmuch over making delta real small as you'll just hit the same voxel\nmultiple times. .25 is probably sufficient.\n\ndoug\n\n   --projfrac-avg min max delta\n   --projdist-avg min max delta\n\n     Same idea as --projfrac and --projdist, but sample at each of the\n     points between min and max at a spacing of delta. The samples are then\n     averaged together. The idea here is to average along the normal.<\/pre>\n\n\n\n<p>The problem is that you have to map each vertex to a voxel. So in this approach you take the normal vector of the surface (e.g. the white matter surface), check where it hits the gray matter, sample in steps of &#8216;delta&#8217; between WM (min) and GM (max), and check which voxels are closest to these steps. The average value of those voxels is then assigned to the vertex. 
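<\/p>\n\n\n\n<p>For reference, the command that the nipype wrapper ends up building looks roughly like the following sketch (the file and subject names are placeholders, not my actual data; --regheader assumes the volume is already in the subject&#8217;s anatomical space, otherwise you need --reg with a registration file):<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># sketch of the mri_vol2surf call (placeholder names)\n# --projfrac-avg 0 1 0.25 samples from the WM surface (0) to the pial surface (1)\n# in steps of 0.25 and averages along the normal; --surf-fwhm 5 adds optional 5mm smoothing\nmri_vol2surf --mov manual_retinotopy.nii.gz --regheader subject01 \\\n    --hemi lh --surf white --projfrac-avg 0 1 0.25 --surf-fwhm 5 \\\n    --o lh.manual_retinotopy.mgh<\/pre>\n\n\n\n<p>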
<\/p>\n\n\n\n<p>I will first show a &#8216;successful&#8217; subject before I dive into some troubles along the way.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"453\" height=\"394\" src=\"https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image-2.png\" alt=\"\" class=\"wp-image-412\" srcset=\"https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image-2.png 453w, https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image-2-300x261.png 300w\" sizes=\"auto, (max-width: 453px) 100vw, 453px\" \/><figcaption>red = freesurfer label, orange = Benson label restricted to &lt;10deg visual angle, purple = manual, based on 10deg retinotopy data<br><\/figcaption><\/figure>\n\n\n\n<p>Overall a good match, I would say: the Benson &amp; freesurfer labels generally align well (which is reasonable), while the manual retinotopy is larger in most subjects. This might also be due to the projection method (see below).<\/p>\n\n\n\n<p>Initially I tried the projection without smoothing; see the results below. 
I then changed to smoothing with a 5mm kernel and subsequent thresholding (there is surely a smarter way).<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image.png\" alt=\"\" class=\"wp-image-410\" width=\"295\" height=\"225\" srcset=\"https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image.png 553w, https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image-300x228.png 300w\" sizes=\"auto, (max-width: 295px) 100vw, 295px\" \/><figcaption>Without smoothing<\/figcaption><\/figure><\/div>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignright is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image-1.png\" alt=\"\" class=\"wp-image-411\" width=\"301\" height=\"227\" srcset=\"https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image-1.png 622w, https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2019\/09\/image-1-300x226.png 300w\" sizes=\"auto, (max-width: 301px) 100vw, 301px\" \/><figcaption>With 5mm smoothing (red = freesurfer label, orange = Benson label, purple = manual)<\/figcaption><\/figure><\/div>\n\n\n\n<p>It is pretty clear that in this example the fit between the manual and the automatic labels is not very good. My trouble now is that I don&#8217;t know whether this reflects an actual difference or just the projection.<\/p>\n\n\n\n<p>Next steps would be to double-check everything in voxel land, i.e. project the surface labels back to voxels and investigate the voxel-by-voxel ROIs.<br><\/p>\n","protected":false},"excerpt":{"rendered":"<p>PS: For this project I moved from EEG to fMRI, and in this post I will sometimes explain terms that might be very basic to fMRI people, but maybe not for EEG people. 
I want to investigate cortical area V1. But I don&#8217;t want to spend time on retinotopy during my recording session. Thus I looked a bit into automatic methods to estimate it from segmented brains (segmenting = splitting into white matter\/gray matter, extracting 3D surfaces from the voxel MRI, and inflating them). I used the freesurfer\/label\/lh.V1 labels and the neuropythy\/Benson et al. tools [zotpressInText item=&#8221;{4784278:P2Y9DRY4}&#8221;]. The manual retinotopy was performed by Sam&#8230;<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[],"class_list":["post-409","post","type-post","status-publish","format-standard","hentry","category-blog"],"_links":{"self":[{"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/posts\/409","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/comments?post=409"}],"version-history":[{"count":0,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/posts\/409\/revisions"}],"wp:attachment":[{"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/media?parent=409"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/categories?post=409"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/tags?post=409"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}