{"id":193,"date":"2017-01-17T21:08:12","date_gmt":"2017-01-17T19:08:12","guid":{"rendered":"http:\/\/benediktehinger.de\/blog\/science\/?p=193"},"modified":"2017-01-17T21:08:12","modified_gmt":"2017-01-17T19:08:12","slug":"how-to-use-bimodal-priors-for-bayesian-data-analysis-in-stan","status":"publish","type":"post","link":"https:\/\/benediktehinger.de\/blog\/science\/how-to-use-bimodal-priors-for-bayesian-data-analysis-in-stan\/","title":{"rendered":"How to use bimodal priors for bayesian data analysis in STAN"},"content":{"rendered":"<p>I tried adding a bi-modal prior in STAN for a homework exercise on Bayesian Data Analysis. At first, I thought this could work:<br \/>\n&#8220;`STAN<br \/>\nmodel{<br \/>\n  mu ~ normal(-0.5,0.3) + normal(0.5,0.3);<br \/>\n}<br \/>\n&#8220;`<br \/>\nBut it doesn&#8217;t. I had to dig deeper and I thought I could simply add multiple times to the log-posterior due to <a href=\"http:\/\/stackoverflow.com\/questions\/40289457\/stan-using-target-syntax#comment67846849_40290683\" target=\"_blank\">a sideremark of Bob Carpenter<\/a>:<\/p>\n<p>&#8220;`STAN<br \/>\ntarget += normal_lpdf(mu|.5,0.3);<br \/>\ntarget += normal_lpdf(mu|-.5,0.3);<br \/>\n&#8220;`<br \/>\nWhich also does not work. 
Finally, the solution is akin to the mixture model in the Stan manual:<\/p>\n<pre><code>target += log_sum_exp(normal_lpdf(mu|.5,0.3), normal_lpdf(mu|-.5,0.3));\n<\/code><\/pre>\n<p><code>log_sum_exp(a,b)<\/code> computes <code>log(exp(a)+exp(b))<\/code>, so this adds the log of the <em>sum<\/em> of the two densities, i.e. an equal-weight mixture (the constant mixing factor of 1\/2 is dropped, which does not affect sampling). For unequal weights, add the log weights inside, e.g. <code>log_sum_exp(log(0.3)+normal_lpdf(mu|.5,0.3), log(0.7)+normal_lpdf(mu|-.5,0.3))<\/code>.<\/p>\n<p>This results in a beautiful bimodal prior:<br \/>\n<a href=\"http:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2017\/01\/Rplot01-1.png\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2017\/01\/Rplot01-1.png\" alt=\"Density plot of samples from the bimodal prior\" width=\"960\" height=\"370\" class=\"aligncenter size-full wp-image-195\" srcset=\"https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2017\/01\/Rplot01-1.png 960w, https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2017\/01\/Rplot01-1-300x116.png 300w, https:\/\/benediktehinger.de\/blog\/science\/upload\/sites\/2\/2017\/01\/Rplot01-1-768x296.png 768w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/a><\/p>\n<p>I did not find anything about how to do this on Google or in the manual. If there is a smarter way to do it, please leave a comment.<\/p>\n<p>The full R script:<\/p>\n<pre><code>library(rstan)\n\nmodel <- \"\nparameters {\n  real mu;\n}\nmodel {\n  \/\/ mu ~ normal(10,1);\n  \/\/ mu ~ normal(-10,1);\n  target += log_sum_exp(normal_lpdf(mu|-.5,.3), normal_lpdf(mu|.5,.3));\n}\"\n\nsamples <- stan(model_code = model,\n  iter = 2000,\n  chains = 4,\n  thin = 1\n  # seed = 123  # setting a seed; default is a random seed\n)\n\nggmcmc::ggs_density(ggmcmc::ggs(samples)) + ggplot2::theme_minimal()\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>I tried adding a bimodal prior in Stan for a homework exercise on Bayesian data analysis. At first, I thought this could work: <code>model { mu ~ normal(-0.5,0.3) + normal(0.5,0.3); }<\/code> But it doesn&#8217;t. 
I had to dig deeper, and I thought I could simply add to the log posterior multiple times, following a side remark by Bob Carpenter: <code>target += normal_lpdf(mu|.5,0.3); target += normal_lpdf(mu|-.5,0.3);<\/code> This also does not work. Finally, the solution is akin to the mixture model in the Stan manual: <code>target += log_sum_exp(normal_lpdf(mu|.5,0.3),normal_lpdf(mu|-.5,0.3));<\/code> This results in a beautiful bimodal prior. I did not find&#8230;<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[],"class_list":["post-193","post","type-post","status-publish","format-standard","hentry","category-blog"],"_links":{"self":[{"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/posts\/193","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/comments?post=193"}],"version-history":[{"count":0,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/posts\/193\/revisions"}],"wp:attachment":[{"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/media?parent=193"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/categories?post=193"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/benediktehinger.de\/blog\/science\/wp-json\/wp\/v2\/tags?post=193"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}