tag:blogger.com,1999:blog-80919914484125467052024-03-05T04:43:24.509-05:00Minding the BrainAt the interface of psychology, neuroscience, and neuropsychology with a focus on computational and statistical modeling.Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.comBlogger58125tag:blogger.com,1999:blog-8091991448412546705.post-81550760620187280662018-08-11T14:39:00.000-04:002018-08-11T14:40:17.542-04:00Joining the editorial board of PLOS ONE<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: "trebuchet ms" , sans-serif;">I have joined the <a href="http://journals.plos.org/plosone/static/editorial-board" target="_blank">Editorial Board</a> of <a href="http://journals.plos.org/plosone/" target="_blank">PLOS ONE</a>. There are a few things about PLOS ONE that particularly appeal to me:</span><br />
<ul style="text-align: left;">
<li><span style="font-family: "trebuchet ms" , sans-serif;"><b>Broad scope is great for interdisciplinary research.</b> My own research is primarily driven by experimental psychology, neuroscience, and computer science, as well as linguistics and neuropsychology/neurology. Before writing a manuscript, I often have to decide whether I will be submitting it to a cognitive psychology journal or a clinically-oriented (neuropsychology or neurology) journal or a neuroscience journal. This decision is not always easy and it has a major impact on how the manuscript needs to be written and who will review it. Since the scope of PLOS ONE covers the full range of natural and social sciences as well as medical research, I (you) don't need to worry about that. Just clearly describe the motivation, methods, results, and conclusions of the study and trust that Editors like me will find appropriate reviewers.</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;"><b>Accepts various article types.</b> In addition to standard research articles, PLOS ONE accepts systematic reviews, methods papers (including descriptions of software, databases, and other tools), qualitative research, and negative results. If your manuscript is reporting original research, then it is a viable submission.</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;"><b>Publication decisions based on scientific rigor, not perceived impact</b> (see full <a href="http://journals.plos.org/plosone/s/journal-information#loc-criteria-for-publication" target="_blank">Criteria for Publication</a>). It is difficult to try to guess what kind of impact a paper will have on the field and unnecessary because the field can figure that out on its own. As a reviewer, I focus on scientific rigor and whether the methods and results align with the motivation and conclusion. It's nice that PLOS ONE has the same focus. This emphasis on technical and ethical standards also means that PLOS ONE can publish good replication studies and negative results, which is critical for reducing publication bias and moving our field forward.</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;"><b>Fast decision times.</b> Editors are expected to make decisions within a few days and reviewers are asked to complete their reviews in 10 days. Of course, this is no guarantee that a manuscript will have a fast decision -- it can take a long time to find reviewers and reviewers do not always meet their deadlines. But I think giving reviewers 10 days instead of 4-6 weeks (typical for psychology journals) and expecting editors to make fast decisions is a step in the right direction.</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;"><b>Open access at reasonable cost.</b> This is not the place to discuss the relative merits of the standard reader-pay publication model and the open access author-pay model used by PLOS ONE. Suffice it to say that I like the open access model and I appreciate that PLOS ONE is doing it at a cost ($1595 USD) that is on the low end compared to other established open access journals.</span></li>
</ul>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-74537790227280128152018-04-16T13:57:00.000-04:002018-08-11T14:43:23.830-04:00Correcting for multiple comparisons in lesion-symptom mapping<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: "trebuchet ms" , sans-serif;">We recently wrote a paper about correcting for multiple comparisons in voxel-based lesion-symptom mapping (<a href="http://www.danmirman.org/pdfs/Mirman_etal2018_VLSM-stats.pdf" target="_blank">Mirman et al., in press</a>). Two methods did not perform very well: (1) setting a minimum cluster size based on permutations produced too much spillover beyond the true region, and (2) false discovery rate (FDR) correction produced anti-conservative results for smaller sample sizes (N = 30–60). We developed an alternative solution by generalizing the standard permutation-based family-wise error correction approach, which provides a principled way to balance false positives and false negatives.</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span><span style="font-family: "trebuchet ms" , sans-serif;">For that paper, we focused on standard "mass univariate" VLSM, where the multiple comparisons are a clear problem. The multiple comparisons problem plays out differently in multivariate lesion-symptom mapping methods such as support vector regression LSM (SVR-LSM; <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4213345/" target="_blank">Zhang et al., 2014</a>; a slightly updated version is available from <a href="https://github.com/dmirman/SVR-LSM" target="_blank">our github repo</a>). Multivariate LSM methods consider all voxels simultaneously and there is not a simple relationship between voxel-level test statistics and p-values. In SVR-LSM, the voxel-level statistic is an SVR beta value and the p-values for those betas are calculated by permutation. I've been trying to work out how to deal with multiple comparisons in SVR-LSM.</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"></span><br />
<a name='more'></a><span style="font-family: "trebuchet ms" , sans-serif;">My first (in retrospect, irrationally optimistic) idea was that, since this is a multivariate analysis method that considers all voxels simultaneously, the voxels do not constitute multiple comparisons and therefore no correction is necessary. I was already running permutations to get the p-values, so I tweaked the code to record all of the beta values from all of the permutations, which allowed me to calculate p-values for the original (true) analysis as well as for the permuted (null) analyses. In the permutation analyses, where there was (by definition) no relationship between lesion location and behavioral deficit score, the p-values were literally true: with the null hypothesis true, the proportion of voxels with p-values less than some level alpha was equal to alpha. The histogram below shows the distribution of the proportion of voxels with (permutation-based) p < 0.01 across 210 permutations. In general, about 0.5-1% of the voxels had (uncorrected) p < 0.01, as expected under the null hypothesis. </span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8Y3DhJ4OB8p8um9jBZsO2OT7pbkSXtTNxpKEvht7sRaDEXiy8psgbx5bctJa4-YI0Wx7jOHF4AIc56ZERjK6TgtmkUQk_D6ARX6_b8-mvsqKftyGT2mFt_4ETdsU3CUMCgu5MRavoATw/s1600/PMU_p-valid.png" imageanchor="1"><img border="0" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8Y3DhJ4OB8p8um9jBZsO2OT7pbkSXtTNxpKEvht7sRaDEXiy8psgbx5bctJa4-YI0Wx7jOHF4AIc56ZERjK6TgtmkUQk_D6ARX6_b8-mvsqKftyGT2mFt_4ETdsU3CUMCgu5MRavoATw/s320/PMU_p-valid.png" width="320" /></a></span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: "trebuchet ms" , sans-serif;">When you're testing a very large number of voxels, this is a big problem: when 100,000 voxels are in play, even if the null hypothesis is true, about 500-1000 voxels are going to have (uncorrected) p < 0.01. </span><span style="font-family: "trebuchet ms" , sans-serif;">The red arrow shows that proportion for the true analysis (in which there really was a relationship between lesion location and deficit score), which was 0.0217 (2.17%). It is encouraging that the true analysis had substantially more voxels with p < 0.01, but it would still be hard to interpret a result when nearly half of it could be due to chance.</span><br />
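That null behavior is easy to check with a toy simulation. The sketch below is purely illustrative (Python/numpy, with made-up variable names and sizes, not the SVR-LSM code): under the null, permutation-based p-values are uniform, so about alpha of the voxels come out below alpha.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_perm, alpha = 20_000, 210, 0.01

# Null stand-ins for voxel-wise betas: the "observed" map and the permuted
# maps are drawn from the same distribution, so H0 is true everywhere
obs_beta = rng.normal(size=n_vox)
perm_beta = rng.normal(size=(n_perm, n_vox))

# Permutation p-value per voxel, with the usual +1 correction:
# p = (1 + #{permuted betas >= observed beta}) / (1 + n_perm)
p = (1 + (perm_beta >= obs_beta).sum(axis=0)) / (1 + n_perm)

# Under the null, roughly alpha of the voxels fall below alpha
print((p < alpha).mean())  # close to 0.01
```

Scale that fraction up to 100,000 voxels and you get the several hundred chance-level "significant" voxels described above.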
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span><span style="font-family: "trebuchet ms" , sans-serif;">So some kind of correction would be helpful. Since the p-values are based on permutation, using more permutations to correct for multiple comparisons (that is, a standard permutation-based FWER correction or our generalized version) would be redundant. My second idea was that we could calculate a correction for the SVR beta values in the same way that standard FWER correction works on voxel-level test statistics. But the SVR beta values are not test statistics in the same way that voxel-level t-values are test statistics. Specifically, there is not a unique, one-to-one relationship between beta value and p-value. The figure below shows a scatterplot of voxel-wise beta values (x-axis) and p-values (y-axis). The p<0.05 points are shown in red. </span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8_z4YFyWOiaBc_PJnzY25ze9kYkzFeV-U917cvNozaLpVlkBDCobKsBWfSJJAgcysyYDBUUgWWjLziCWLiACyiwumO5y8P3NRyQKotPLc0sLLm96vq5umG0ccHwpqpcbTFsY2sMTwWkw/s1600/pval-beta.png" imageanchor="1"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8_z4YFyWOiaBc_PJnzY25ze9kYkzFeV-U917cvNozaLpVlkBDCobKsBWfSJJAgcysyYDBUUgWWjLziCWLiACyiwumO5y8P3NRyQKotPLc0sLLm96vq5umG0ccHwpqpcbTFsY2sMTwWkw/s320/pval-beta.png" width="320" /></a></span>
<br />
<span style="font-family: "trebuchet ms" , sans-serif;">In general, larger beta values tend to have smaller p-values, which makes good sense, but there is a huge amount of variability. Some relatively low beta values (3-4) have very low p-values while others have very high p-values; most relatively high beta values (7-9) have low p-values, but some don't. I suspect that locations near the periphery of the lesion territory are more prone to large beta values because fewer patients have damage there, so large beta values there are less meaningful than in other locations, but I haven't checked that. Bottom line: a multiple comparisons correction can't be based on raw beta values.</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span><span style="font-family: "trebuchet ms" , sans-serif;">So what options are left?</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: "trebuchet ms" , sans-serif;">(1) Cluster-based correction. One could set a voxel-wise p-value (e.g., p < 0.01) and use permutations to determine a null distribution of maximal cluster sizes for that p-value, then only consider clusters that are larger than (say) 95% of the null distribution of clusters. We tested this for mass univariate LSM and found </span><span style="font-family: "trebuchet ms" , sans-serif;">that it produces clusters that tend to spill far beyond the true regions. Using this method for SVR-LSM would be</span><span style="font-family: "trebuchet ms" , sans-serif;"> better than doing nothing, but I think it would have the same problem as we identified for mass univariate LSM. Also, since multivariate LSM is especially good for detecting separately contributing regions, focusing on particularly large clusters would undermine that advantage of the method.</span><br />
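To make option (1) concrete, here is a minimal 1-D sketch of how a permutation-based cluster-extent threshold is derived. Run length stands in for 3-D cluster size, and all the numbers are made up for illustration; this is not the lesion-mapping implementation.

```python
import numpy as np

def max_run(mask):
    """Longest run of True values -- a 1-D stand-in for 3-D cluster size."""
    best = cur = 0
    for v in mask:
        cur = cur + 1 if v else 0
        best = max(best, cur)
    return best

rng = np.random.default_rng(2)
n_vox, n_perm, alpha = 5000, 200, 0.01

# Null distribution of the maximal suprathreshold cluster size:
# threshold random (null) p-maps at alpha and record the largest cluster
null_max = [max_run(rng.uniform(size=n_vox) < alpha) for _ in range(n_perm)]
cluster_thresh = np.percentile(null_max, 95)
print(cluster_thresh)  # only clusters larger than this would be kept
```

In real VLSM the same logic runs on 3-D lesion maps, and, as noted above, the surviving clusters tend to spill beyond the true region.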
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: "trebuchet ms" , sans-serif;">(2) False Discovery Rate (FDR). For mass univariate LSM, we found that FDR tended to be somewhat anti-conservative for smaller samples and real data (it performed reasonably well for larger samples and simulated data, where the lesion-symptom relationship is very strong). It's not clear to me whether this would also apply to SVR-LSM, but even if it does, it would still be better than no correction. At FDR < 0.05, we expect that up to 5% of above-threshold voxels may be false positives. Even if that is an anti-conservative estimate and up to 10% (or even 15%) of the above-threshold voxels might be false positives, that is still a relatively small subset of the voxels, which probably won't affect my inference.</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: "trebuchet ms" , sans-serif;">FDR is widely used for multiple comparisons correction and relatively easy to implement, so I think the best strategy at this point is to use FDR, but be aware that it might be somewhat anti-conservative, and to supplement it with some kind of minimum cluster threshold to avoid interpreting small clusters that could easily arise by chance.</span><br />
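For reference, the FDR procedure itself (in the Benjamini-Hochberg step-up form) is only a few lines. This is a generic sketch with made-up p-values, shown in Python purely for illustration:

```python
import numpy as np

def bh_fdr_threshold(pvals, q=0.05):
    """Benjamini-Hochberg step-up: largest p-value cutoff such that the
    expected proportion of false positives among rejections is <= q."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    below = p <= q * np.arange(1, m + 1) / m
    if not below.any():
        return 0.0  # nothing survives
    return p[np.nonzero(below)[0].max()]

# Toy map: 50 "signal" voxels with tiny p-values mixed into 950 null voxels
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(0, 1e-4, size=50),
                        rng.uniform(0, 1, size=950)])
thresh = bh_fdr_threshold(pvals, q=0.05)
print(thresh, (pvals <= thresh).sum())  # cutoff and surviving voxel count
```

Adding a minimum cluster size on top of this just means additionally discarding suprathreshold clusters smaller than some cutoff.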
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: trebuchet ms, sans-serif;"><b>References</b></span><br />
<br />
<ol style="text-align: left;">
<li><span style="font-family: Trebuchet MS, sans-serif;">Mirman, D., Landrigan, J.-F., Kokolis, S., Verillo, S., Ferrara, C., & Pustina, D. (2018). Corrections for multiple comparisons in voxel-based lesion-symptom mapping. <i>Neuropsychologia</i>. DOI: 10.1016/j.neuropsychologia.2017.08.025</span></li>
<li><span style="font-family: Trebuchet MS, sans-serif;">Zhang, Y., Kimberg, D. Y., Coslett, H. B., Schwartz, M. F., & Wang, Z. (2014). Multivariate lesion-symptom mapping using support vector regression. <i>Human Brain Mapping, 35</i>(12), 5861-5876. DOI: 10.1002/hbm.22590</span></li>
</ol>
</div>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-51600201091672891342018-03-23T13:09:00.000-04:002018-03-23T13:09:13.141-04:00Growth curve analysis workshop slides<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">Earlier this month I taught a two-day workshop on growth curve analysis at Georg-Elias-Müller Institute for Psychology in Göttingen, Germany. The purpose of the workshop was to provide a hands-on introduction to using GCA to analyze longitudinal or time course data, with a particular focus on eye-tracking data. All of the materials for the workshop are now available online (<a href="http://dmirman.github.io/GCA2018.html">http://dmirman.github.io/GCA2018.html</a>), including slides, examples, exercises, and exercise solutions.</span><span style="font-family: "Trebuchet MS", sans-serif;"> In addition to standard packages (ggplot2, lme4, etc.), we used my <a href="http://github.com/dmirman/psy811" target="_blank">psy811</a> package for example data sets and helper functions.</span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-36462808437371962482016-12-12T15:33:00.005-05:002016-12-12T15:33:49.843-05:00Flattened logistic regression vs. empirical logit<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: "trebuchet ms" , sans-serif;">I first learned about quasi-logistic regression and the "empirical logit" from Dale Barr's (2008) paper, which just happened to be published right next to the growth curve analysis paper that Jim Magnuson, J. Dixon, and I wrote. I came to understand and like this approach in 2010 when Dale and I co-taught a workshop on analyzing eye-tracking data at Northwestern. I give that background by way of establishing that I'm positively disposed to the empirical logit method. So I was interested to read a new paper by Seamus Donnelly and Jay Verkuilen (2017) in which they point out some weaknesses of this approach and offer an alternative solution.</span><br />
<br />
<a name='more'></a><span style="font-family: "trebuchet ms" , sans-serif;">In short, the problems raised by Donnelly and Verkuilen (D&V) are that the empirical logit transformation tends to bias proportion estimates toward 0.5 (i.e., logit=0) and that the likelihood function is different. These don't seem like particularly controversial claims. To me, biasing toward 0.5 and using a Gaussian likelihood function are rather the point of using the empirical logit -- the biasing helps counteract the effects of floor and ceiling values that can arise in psychology experiments (i.e., 100% or 0% accuracy can be observed with a limited number of trials even when the participant's true accuracy is not actually at ceiling or floor) and the Gaussian likelihood function helps with model convergence. However, D&V offer an alternative approach, <i>flattened logistic regression</i>, and show using simulations that it works better.</span><br />
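The biasing is easy to see numerically. With n = 6 trials per bin (the same bin size as in the example analysis below), the empirical logit maps even perfect accuracy to a finite value; a quick check (in Python, just for illustration):

```python
import math

def elogit(y, n):
    # Empirical logit: add 0.5 "pseudo-observations" to successes and failures
    return math.log((y + 0.5) / (n - y + 0.5))

n = 6  # trials per bin
for y in range(n + 1):
    print(y, round(elogit(y, n), 3))
# 6/6 correct maps to log(6.5/0.5) = 2.565 rather than +infinity:
# extreme proportions are pulled toward 0.5 (logit = 0)
```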
<div>
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms" , sans-serif;">I tried it out on some of my data and the results were not very clear (the example data and helper functions are from <a href="https://github.com/dmirman/psy811" target="_blank">psy811</a>, a little package I wrote for my multilevel regression course, the main model-fitting functions are from <a href="https://github.com/lme4/lme4/" target="_blank">lme4</a>).</span></div>
<div>
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span></div>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">library(psy811)<br /><span style="color: #38761d;">#compute values for logistic regression</span><br />WordLearnEx$NumCorr <- round(WordLearnEx$Accuracy*6)<br />WordLearnEx$NumErr <- 6 - WordLearnEx$NumCorr<br /><span style="color: #38761d;">#compute empirical logit values</span><br />WordLearnEx$elog <- with(WordLearnEx, log((NumCorr+0.5)/(NumErr+0.5)))<br />WordLearnEx$wts <- with(WordLearnEx, 1/(NumCorr+0.5) + 1/(NumErr+0.5))<br /><span style="color: #38761d;">#set up orthogonal polynomial</span><br />WordLearn <- code_poly(WordLearnEx, predictor = "Block", poly.order=2, draw.poly = FALSE)<br /><span style="color: #38761d;">#fit elog model</span><br />m.elogit <- lmer(elog ~ (poly1+poly2)*TP + (poly1+poly2 | Subject), data=WordLearn, weights=1/wts, REML=FALSE)<br />get_pvalues(m.elogit)<br /><span style="color: #38761d;">#fit logistic regression</span><br />m.log <- glmer(cbind(NumCorr, NumErr) ~ (poly1+poly2)*TP + (poly1+poly2 | Subject), data=WordLearn, family="binomial")<br />coef(summary(m.log))<br /><span style="color: #38761d;">#fit flattened logistic regression<br />#Flat = 0.1</span><br />m.flog1 <- glmer(cbind(NumCorr+0.1, NumErr+0.1) ~ (poly1+poly2)*TP + (poly1+poly2 | Subject), data=WordLearn, family="binomial")<br />coef(summary(m.flog1))</span></blockquote>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms" , sans-serif;">The empirical logit and logistic regression versions converged fine, but the flattened logistic regression gave a convergence warning ("Model failed to converge with max|grad| = 0.191469 (tol = 0.001, component 1)</span><span style="font-family: "trebuchet ms" , sans-serif;">"), in addition to the expected warning about non-integer values in a binomial glm. On the other hand, the pattern of results was very similar across these three models; even the parameter estimates for the two logistic models were very similar. I also </span><span style="font-family: "trebuchet ms" , sans-serif;">tried a few other flattening constants (as recommended by D&V), and the results were basically the same -- same convergence warning, essentially the same parameter estimates.</span></div>
</div>
<div>
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span></div>
<div>
<span style="font-family: "trebuchet ms" , sans-serif;">So I'm not quite sure what to think; I'll keep trying it on other data sets as opportunities come up. In general, my preference is to stick with standard logistic regression and steer away from alternatives like empirical logit analysis. When that fails, I'm not sure which is the next best option -- the flattened logistic approach looks promising from the D&V simulations, but I'd like to try it some more on my own data.</span></div>
<span style="float: left; padding: 5px;"><a href="http://www.researchblogging.org/"><img alt="ResearchBlogging.org" src="http://www.researchblogging.org/public/citation_icons/rb2_large_white.png" style="border: 0;" /></a></span>
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Journal+of+Memory+and+Language&rft_id=info%3Adoi%2F10.1016%2Fj.jml.2016.10.005&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Empirical+logit+analysis+is+not+logistic+regression&rft.issn=0749596X&rft.date=2017&rft.volume=94&rft.issue=&rft.spage=28&rft.epage=42&rft.artnum=http%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0749596X1630167X&rft.au=Donnelly%2C+S.&rft.au=Verkuilen%2C+J.&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology">Donnelly, S., & Verkuilen, J. (2017). Empirical logit analysis is not logistic regression <span style="font-style: italic;">Journal of Memory and Language, 94</span>, 28-42 DOI: <a href="http://dx.doi.org/10.1016/j.jml.2016.10.005" rev="review">10.1016/j.jml.2016.10.005</a></span><br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Journal+of+Memory+and+Language&rft_id=info%3Adoi%2F10.1016%2Fj.jml.2007.09.002&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Analyzing+%E2%80%98visual+world%E2%80%99+eyetracking+data+using+multilevel+logistic+regression&rft.issn=0749596X&rft.date=2008&rft.volume=59&rft.issue=4&rft.spage=457&rft.epage=474&rft.artnum=http%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0749596X07001015&rft.au=Barr%2C+D.&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology">Barr, D. (2008). Analyzing ‘visual world’ eyetracking data using multilevel logistic regression <span style="font-style: italic;">Journal of Memory and Language, 59</span> (4), 457-474 DOI: <a href="http://dx.doi.org/10.1016/j.jml.2007.09.002" rev="review">10.1016/j.jml.2007.09.002</a></span>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com3tag:blogger.com,1999:blog-8091991448412546705.post-58386963823365111972016-10-06T11:49:00.000-04:002016-10-06T11:54:47.001-04:00New media and priorities<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: "trebuchet ms" , sans-serif;">I was disappointed to read <a href="https://www.dropbox.com/s/9zubbn9fyi1xjcu/Fiske%20presidential%20guest%20column_APS%20Observer_copy-edited.pdf" target="_blank">(a draft of) a forthcoming APS Observer article</a> by Susan Fiske in which she complains about how new media have allowed "unmoderated attacks" on individuals and their research programs. Other bloggers have written at some length about this (<a href="http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/" target="_blank">Andrew Gelman</a>, <a href="http://neurochambers.blogspot.com/2016/09/methodological-terrorism-and-other-myths.html" target="_blank">Chris Chambers</a>, <a href="http://datacolada.org/52" target="_blank">Uri Simonsohn</a>); I particularly recommend the longer and very thoughtful post by <a href="http://www.talyarkoni.org/blog/2016/10/01/there-is-no-tone-problem-in-psychology/" target="_blank">Tal Yarkoni</a>. A few points have emerged as the most salient to me:</span><br />
<div style="text-align: left;">
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span></div>
<div style="text-align: left;">
<span style="font-family: "trebuchet ms" , sans-serif;">First, scientific criticism should be evaluated on its accuracy and constructiveness. Our goal should be accurate critiques that provide constructive ideas about how to do better. Efforts to improve the peer review process often focus on those factors, along with timeliness. As it happens, blogs are actually great for this: posts can be written quickly and immediately followed by comments that allow for back-and-forth so that any inaccuracies can be corrected and constructive ideas can emerge. Providing critiques in a polite way is a nice goal, but it is secondary. (Tal Yarkoni's <a href="http://www.talyarkoni.org/blog/2016/10/01/there-is-no-tone-problem-in-psychology/" target="_blank">post</a> discusses this issue very well).</span></div>
<div style="text-align: left;">
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span></div>
<div style="text-align: left;">
<span style="font-family: "trebuchet ms" , sans-serif;">Second, APS is the publisher of <i>Psychological Science</i>, a journal that was once prominent and prestigious, but has gradually become a pop psychology punchline. Perhaps I should not have been surprised that they're publishing an unmoderated attack on new media.</span></div>
<div style="text-align: left;">
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span></div>
<div style="text-align: left;">
<span style="font-family: "trebuchet ms" , sans-serif;">Third, things have changed very rapidly (this is the main point of Andrew Gelman's <a href="http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/" target="_blank">post</a>). When I was in graduate school (2000-2005), I don't remember hearing concerns about replication, and standard operating procedures included lots of stuff that I would now consider "garden of forking paths"/"p-hacking". 2011 was a major turning point: Daryl Bem reported his evidence of ESP (side note: he had been working on that since at least the mid-to-late 90s, when I was an undergrad at Cornell and heard him speak about it). At the time, the flaws in that paper were not at all clear. That was also the year a paper called “False-positive psychology” was published (in <i style="font-family: "trebuchet ms", sans-serif;">Psychological Science</i>), which showed that “researcher degrees of freedom” (or "p-hacking") make actual false positive rates much higher than the nominal <i style="font-family: "trebuchet ms", sans-serif;">p</i> < 0.05 values. The year after that, in 2012, Greg Francis's paper ("Too good to be true") came out, showing that multi-experiment papers reporting consistent replications of small effect sizes are themselves very unlikely and may reflect selection bias, p-hacking, or other problems. 2012 was also the year <a href="http://mindingthebrain.blogspot.com/2015/09/reproducibility-project-front-row-seat.html" target="_blank">I was contacted by the Open Science Collaboration</a> to contribute to their large-scale replication effort, which eventually led to a major report on the reproducibility of psychological research.</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: "trebuchet ms" , sans-serif;">My point is that these issues, which are a huge deal now, were not very widely known even 5-6 years ago and almost nobody was talking about them 10 years ago. To put it another way, just about all tenured Psychology professors were trained before the term "p-hacking" even existed. So, maybe we should admit that all this rapid change can be a bit alarming and disorienting. But we're scientists, we're in the business of drawing conclusions from data, and the data clearly show that our old way of doing business has some flaws, so we should try to fix those flaws. Lots of good ideas are being implemented and tested -- transparency (sharing data and analysis code), post-publication peer review, new impact metrics for hiring/tenure/promotion that reward transparency and reproducibility. And many of those ideas came from those unmoderated new media discussions.</span></div>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-84212630524853116292016-09-15T04:30:00.000-04:002016-09-15T05:10:32.469-04:00Post-doctoral research position available<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: "trebuchet ms" , sans-serif;"><a href="http://www.danmirman.org/" target="_blank">We</a> are hiring a post-doctoral
research fellow to start in 2017. Research in the lab focuses on spoken
language processing and semantic memory in typical and atypical speakers. Current
research projects investigate: (1) The processing and representation of
semantic knowledge, particularly knowledge of object features and categories,
and the events or situations in which they participate. (2) The organization of
the spoken language system by mapping the relationships between stroke lesion
location and behavioral deficits.</span><br />
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<br /></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<b><span style="font-family: "trebuchet ms" , sans-serif;">Research methods include:</span></b></div>
<ul style="text-align: left;">
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">behavioral and eye-tracking experiments</span></span></li>
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">lesion-symptom mapping</span></span></li>
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">computational modeling</span></span></li>
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">non-invasive brain stimulation (tDCS)</span></span></li>
</ul>
<div class="MsoListParagraphCxSpMiddle" style="margin-bottom: .0001pt; margin-bottom: 0in; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -.25in;">
<span style="font-family: "trebuchet ms" , sans-serif;"><o:p></o:p></span></div>
<div class="MsoListParagraphCxSpMiddle" style="margin-bottom: .0001pt; margin-bottom: 0in; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -.25in;">
<span style="font-family: "trebuchet ms" , sans-serif;"><o:p></o:p></span></div>
<div class="MsoListParagraphCxSpLast" style="margin-bottom: .0001pt; margin-bottom: 0in; mso-add-space: auto; mso-list: l2 level1 lfo1; text-indent: -.25in;">
<span style="font-family: "trebuchet ms" , sans-serif;"><o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<b><span style="font-family: "trebuchet ms" , sans-serif;">Qualifications:</span></b></div>
<ul style="text-align: left;">
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">Doctoral degree in Psychology, Cognitive & Brain Science, CSD/SHLS, or a related discipline, completed before starting the post-doctoral fellowship.</span></span></li>
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">Experience with one or more of the research methods and/or content domains.</span></span></li>
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">Programming experience in R, Matlab, Python, or a similar language is preferred.</span></span></li>
</ul>
<br />
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<span style="font-family: "trebuchet ms" , sans-serif;">The post-doctoral
fellow will be expected to contribute to ongoing projects and to develop an independent
line of research. Mentorship, training, and professional development
opportunities will be provided to facilitate the fellow’s future career in
academic, research, or industry settings.</span></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<span style="font-family: "trebuchet ms" , sans-serif;"><o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<br /></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<b><span style="font-family: "trebuchet ms" , sans-serif;">About the <a href="http://www.danmirman.org/" target="_blank">Language & Cognitive Dynamics Lab</a></span></b></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<span style="font-family: "trebuchet ms" , sans-serif;">LCDL has
recently relocated to the <a href="http://www.uab.edu/cas/psychology/" target="_blank">Department of Psychology</a> at the University of Alabama
at Birmingham.
UAB is a comprehensive, urban research university, ranked among the top 25 in
funding from the NIH. Postdoctoral training at UAB is enhanced by the <a href="http://www.uab.edu/postdocs" target="_blank">Office of Postdoctoral Education</a>.
The medical school is routinely ranked among the top in the US, and
interdisciplinary programs are a particular strength, including the Psychology
Department’s undergraduate and graduate neuroscience programs. Birmingham is a
growing, diverse, and progressive city located in the foothills of the
Appalachians. It was recently rated #1 Next Hot Food City by Zagat, it is home
to several world-class museums and performing arts venues, and the region
offers excellent sites for hiking, camping, boating, swimming, and fishing.</span></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<span style="font-family: "trebuchet ms" , sans-serif;"><o:p></o:p></span></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<b><span style="font-family: "trebuchet ms" , sans-serif;"><br /></span></b></div>
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<b><span style="font-family: "trebuchet ms" , sans-serif;">To apply, submit the following:</span></b></div>
<ul style="text-align: left;">
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">A letter of interest that describes your training,
research experience and interests, and career goals</span></span></li>
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">CV</span></span></li>
<li><span style="text-indent: -0.25in;"><span style="font-family: "trebuchet ms" , sans-serif;">2-3 letters of recommendation</span></span></li>
</ul>
<br />
<div class="MsoNormal" style="margin-bottom: .0001pt; margin-bottom: 0in;">
<span style="font-family: "trebuchet ms" , sans-serif;">Applications will be reviewed until the position is filled. For full consideration, please apply by November 1, 2016. Only complete applications will be considered. Questions and applications can be addressed to LCDL Director <a href="mailto:dan@danmirman.org" target="_blank">Dan Mirman</a>.</span></div>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-75463756319510387082016-03-01T14:47:00.000-05:002016-03-01T14:47:42.489-05:00MAPPD 2.0<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">About 5 or 6 years ago my colleagues at <a href="http://www.mrri.org/" target="_blank">Moss Rehabilitation Research Institute</a> and I made public a large set of behavioral data from language and cognitive tasks performed by people with aphasia. Our goal was to facilitate larger-scale research on spoken language processing and how it is impaired following left hemisphere stroke. </span><span style="font-family: 'Trebuchet MS', sans-serif;">We are pleased to announce that we have completed a thorough redesign of the </span><a href="http://www.mappd.org/" style="font-family: 'Trebuchet MS', sans-serif;" target="_blank">Moss Aphasia Psycholinguistics Project Database</a><span style="font-family: 'Trebuchet MS', sans-serif;"> (MAPPD) site. The MAPPD 2.0 interface is much simpler and easier to use, geared toward letting users download the data they want and analyze it themselves. </span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;">The core of this database is single-trial picture naming and word repetition data for over 300 participants (including 20 neurologically intact control participants) with detailed target word and response information. The database also contains basic demographic and clinical information for each participant with aphasia, as well as performance on a host of supplementary tests of speech perception, semantic cognition, short-term/working memory, and sentence comprehension. A more detailed description of the included tests, coding schemes, and usage suggestions is available in our original description of the database (<a href="http://www.danmirman.org/pdfs/Mirman_etal2010_MAPPD.pdf" target="_blank">Mirman et al., 2010</a>) and in the site's documentation.</span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-80924402204415920152016-02-19T22:13:00.001-05:002016-02-19T22:14:04.557-05:00Acceptance and rejection rates<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: "trebuchet ms" , sans-serif;">There was a recent <a href="http://blog.frontiersin.org/2015/12/21/4782/" target="_blank">blog post</a> at Frontiers pointing out that journals' publicly available rejection rates are not associated with their impact factors. Their post discusses several factors that contribute to this, but I've been thinking about how rejection rates are calculated, particularly publicly stated rejection rates. For example, the <a href="http://www.apa.org/pubs/journals/features/2013-statistics.pdf" target="_blank">2013 rejection rate</a> for both JEP:LMC and JEP:HPP is 78% and JEP:General is slightly higher at 83%. These are top-tier experimental psychology journals and those rejection rates seem intuitively appropriate for selective outlets, but I think they might be inflated because many papers are rejected with an invitation to revise and resubmit.</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"></span><br />
<a name='more'></a><span style="font-family: "trebuchet ms" , sans-serif;">Here's how the math works out: let's imagine a typical journal where a small percentage of the submissions are accepted immediately (let's say 5%), a substantial minority are rejected either with or without review (let's say 30%), and most papers are rejected with an invitation to revise and resubmit (in our example, 65%). Most authors would revise and resubmit in this situation (let's say 80%) and, if the editors are doing a good job, most of those resubmissions would get accepted (again, let's say 80%). </span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: "trebuchet ms" , sans-serif;">We can now calculate the key quantities based on:</span><br />
<br />
<ul style="text-align: left;">
<li><span style="font-family: "trebuchet ms" , sans-serif;">number of initial submissions (N)</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;">proportion accepted on first round (p_a1)</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;">proportion rejected outright on first round (p_rej1)</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;">proportion rejected with invitation to resubmit (p_rej_resub)</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;">proportion resubmitted (p_resub)</span></li>
<li><span style="font-family: "trebuchet ms" , sans-serif;">proportion of resubmissions that are accepted (p_a2)</span></li>
</ul>
<br />
<span style="font-family: "trebuchet ms" , sans-serif;">Number of submissions = N*(1 + p_rej_resub*p_resub)</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;">Number of rejections = N*(p_rej1 + p_rej_resub + p_rej_resub*p_resub*(1-p_a2))</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;">Number of acceptances = N*(p_a1 + p_rej_resub*p_resub*p_a2)</span><br />
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: "trebuchet ms" , sans-serif;">If our hypothetical journal got 150 initial submissions, the total number of submissions (including resubmissions) would be 228, the number of rejections would be 158 for a 69% rejection rate, and the total number of acceptances would be 70 for a 46.6% effective acceptance rate. That is, 46.6% of the initial submissions were eventually accepted. Note that the effective acceptance rate and the rejection rate add up to more than 100% because a bunch of papers count twice -- as a rejection (with invitation to resubmit) and as an acceptance (after revision).</span><br />
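The arithmetic above can be sketched in a few lines of Python (the function and variable names are mine, and the proportions are the illustrative values from the example, not real journal data):

```python
def journal_counts(n, p_a1, p_rej1, p_rej_resub, p_resub, p_a2):
    """Return (total submissions, rejections, acceptances) for a journal
    with one revise-and-resubmit cycle, counting each R&R as a rejection."""
    resubs = n * p_rej_resub * p_resub              # revised papers that come back
    submissions = n + resubs                        # N*(1 + p_rej_resub*p_resub)
    rejections = n * (p_rej1 + p_rej_resub) + resubs * (1 - p_a2)
    acceptances = n * p_a1 + resubs * p_a2
    return submissions, rejections, acceptances

subs, rej, acc = journal_counts(150, p_a1=0.05, p_rej1=0.30,
                                p_rej_resub=0.65, p_resub=0.80, p_a2=0.80)
print(round(subs), round(rej), round(acc))  # 228 158 70
print(f"{rej/subs:.0%} rejection rate, {acc/150:.1%} effective acceptance")
```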
<div>
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span></div>
<span style="font-family: "trebuchet ms" , sans-serif;">It's easy to see how this could be pushed even further by adding more revise-and-resubmit cycles: imagine that the same 5% of initial submissions are accepted, only 10% are rejected, and the rest are rejected with an invitation to resubmit. Most are resubmitted (90%) and of those 50% are accepted, 50% are rejected with an invitation to revise a second time, and those revisions are always submitted and accepted. Now those 150 initial submissions turn into 322 total submissions, 200 rejections (62% rejection rate), and 122 acceptances (81.5% effective acceptance rate). So you can nearly double the effective acceptance rate with a fairly small impact on the overall rejection rate. In other words, the publicly stated rejection rate may not tell you very much about the "selectivity" of a journal unless you also know their effective acceptance rate.</span><br />
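The same arithmetic for this two-cycle scenario, again as a Python sketch with the illustrative proportions from the text (variable names are mine):

```python
# Two revise-and-resubmit cycles: 5% accepted outright, 10% rejected outright,
# 85% invited to revise; 90% of those resubmit; half of the resubmissions are
# accepted and half are invited to revise again, and the second revision is
# always resubmitted and accepted.
n = 150.0
resub1 = n * 0.85 * 0.90            # first-round resubmissions
resub2 = resub1 * 0.50              # second-round resubmissions (all accepted)

submissions = n + resub1 + resub2
rejections = n * (0.10 + 0.85) + resub1 * 0.50   # every R&R counts as a rejection
acceptances = n * 0.05 + resub1 * 0.50 + resub2

print(round(submissions), round(rejections), round(acceptances))  # 322 200 122
```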
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
<span style="font-family: "trebuchet ms" , sans-serif;">An interesting reverse case is the Proceedings of the Cognitive Science Society Conference. The primary submission format for CogSci is a 6-page paper (standard two-column publication-ready layout) that goes through a single round of standard peer review (about 3 reviews per paper). There is only one round of review -- papers are either accepted or rejected (accepted papers can be revised before publication in the proceedings). Typically, about 30% of 6-page paper submissions are rejected, which may not sound very selective compared to the 80% rejection rate of the JEPs, but, as we've seen, the 70% acceptance rate might be quite typical once you consider the revise-and-resubmit cycle at those journals.</span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-84910735924470848902016-02-04T13:34:00.000-05:002016-02-08T10:25:12.568-05:0015th Neural Computation and Psychology Workshop<div dir="ltr" style="text-align: left;" trbidi="on">
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<b><span style="font-size: 14pt;">NCPW15 – August 8-9, 2016 – Philadelphia, PA, USA</span></b></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<a href="http://www.frontiersin.org/events/15th_Neural_Computation_and_Psychology_Workshop_%28NCPW15%29_Contemporary_Neural_Network_Models_Machine/3135" style="color: #954f72;"><b><span style="font-size: 14pt;">15<sup>th</sup> Neural Computation and Psychology Workshop</span></b></a></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<br /></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<b><span style="font-size: 16pt;">Contemporary Neural Network Models:<br />Machine Learning, Artificial Intelligence, and Cognition</span></b></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<span style="font-size: 12pt;"></span></div>
<a name='more'></a><br />
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<span style="font-size: 12pt;">Funded by the W. K. & K. W. Estes Fund, Google DeepMind<br />and the Rumelhart Emergent Cognitive Functions Fund</span></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<br /></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<b>Organizers</b></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<br />
<span style="font-size: 12pt;">Jay McClelland, Stefan Frank & Daniel Mirman</span></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<br /></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<b>Confirmed Plenary Speakers</b></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<br /></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<span style="font-size: 12pt;">Nikolaus Kriegeskorte, Timothy Lillicrap, Andrew Saxe,<br />Linda Smith, Greg Wayne, & Marco Zorzi</span></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<br /></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<i><span style="font-size: 12pt;">Abstracts and Applications to Attend Due: <b>April 1</b><br />Notification of Acceptance and Travel Awards: <b>May 1</b></span></i></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<i><span style="font-size: 12pt;"><b><br /></b></span></i></div>
<div align="center" class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt; text-align: center;">
<b>Overview</b></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
We are pleased to announce a workshop on <b><i>Contemporary Neural Network Models</i></b>, bringing the latest developments in Deep Neural Networks, Deep Reinforcement Learning Networks, and Recurrent Neural Networks with Long Short-Term Memory units into contact with contemporary cognitive science and cognitive neuroscience research. Plenary speakers include established and emerging experts in the development of contemporary neural network methods. The workshop will continue the <a href="http://www.cs.bham.ac.uk/~jxb/NCPW.html" moz-do-not-send="true" style="color: #954f72;">Neural Computation and Psychology Workshop</a> series, which originated in the UK in 1992. It will take place on Aug 8-9, 2016 in Philadelphia – in North America for the first time after 14 previous meetings in Europe.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b><i>The Workshop has both a research dissemination and tutorial purpose</i></b>. Research submissions are welcome for spoken and poster presentations in any area of computational research that applies neural network models or related approaches to understanding human cognition. Both junior and senior scientists interested in learning more about the latest developments are encouraged to attend (space is limited and application is required) with or without making a presentation. Thanks to generous support, costs will be low and travel awards will encourage participation by a diverse population of participants with relevant goals. A <a href="http://www.frontiersin.org/events/15th_Neural_Computation_and_Psychology_Workshop_%28NCPW15%29_Contemporary_Neural_Network_Models_Machine/3135" style="color: #954f72;">website hosted by Frontiers</a> will provide submission and venue details by Feb 10; abstract submissions and applications to attend are due April 1 and applicants will be notified of acceptance and travel awards by May 1. [<b>EDIT</b>: new <a href="https://sites.google.com/site/ncpw15/" target="_blank">conference website</a>]</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
NCPW15 will be complemented by a separate day-long tutorial on Wednesday, August 10, as part of the Cognitive Science Society meeting, also in Philadelphia (pending acceptance by the Program Committee). This day-long event will provide additional tutorial presentations, followed by in-depth how-to sessions on the implementation and effective practical mastery of deep learning networks for cognitive science research.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Workshop Program Overview</b></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
Each of the two days of the NCPW workshop will include three 75 minute sessions led by invited speakers (schedule attached). The first five of these sessions will each focus on a different aspect or topic in contemporary neural network research, and each will be led by a different expert. The final session will begin with a commentary led by a senior Cognitive Scientist (Linda Smith) followed by a panel discussion with the other five speakers. During lunch each day, the day’s speakers will each hold a smaller discussion session with a subset of the workshop participants, to allow in-depth discussion of their approach and perspective. Published papers or lecture notes will be circulated in advance to enhance participants’ background and engagement for these discussions. Two 1.5-hour sessions each day will be devoted to submitted presentations selected for their scientific value and the extent to which they advance the use of neural network architectures, tools, and concepts in both computational and cognitive (neuro)science domains. A poster session at the end of the first day will allow all of the participants an opportunity to present and obtain feedback from the invited speakers, and to learn from and network with each other. A conference dinner on the first evening and a reception on the second evening will allow for informal interactions. </div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Invited speakers</b></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Nikolaus Kriegeskorte</b>, MRC-CBU Cambridge, UK. Kriegeskorte has applied a deep convolutional neural network architecture to model human voxel-level activity patterns in different layers of visual cortex.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Marco Zorzi</b>, University of Padova. Zorzi has applied deep networks to modeling human numerosity judgment and reading and has developed tools for efficient implementation of these models.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Andrew Saxe</b>, Harvard University. Saxe has conducted mathematical analyses of deep neural network architectures leading to a conceptual understanding of the role of unsupervised pre-training and has applied these methods to the time course of cognitive and semantic development.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Greg Wayne</b>, Google DeepMind. Wayne is one of the creators of the Neural Turing Machine, a Deep Learning model that relies on the Long Short-Term Memory mechanism for the storage and retrieval of information in memory.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Timothy Lillicrap</b>, Google DeepMind. Lillicrap is a leader in the development of Deep Reinforcement Learning methods that allow simulated agents to learn sophisticated continuous motor control policies.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Linda Smith</b>, Indiana University. Smith is a thought-leader in the development of dynamical systems models and in the application of such models in cognitive development.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Participants, Travel Awards, and Costs</b></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
The target population is PhD students, post-doctoral fellows, and more advanced researchers at any level. Both contributing researchers and non-presenting attendees are welcome to apply. Contributing researchers will be selected based on a submitted research abstract, according to past policies of NCPW. Selection of non-presenting attendees will be based on the <i>relevance</i> of the workshop to the attendee’s goals as described in a short essay as well as a CV and, for junior scientists, a mentor’s letter of support. Both trainees and contributing researchers not selected for oral presentations have the option to present a poster in the poster session. A total of 25 travel support awards ($250 domestic/$750 international) are available both for trainees and for contributing researchers to partially defray costs of attendance; support will be awarded based on the criteria above as well as need, with attention to encouraging diversity. There is no registration fee for accepted participants, and lunch on both days of the workshop will be covered for trainees and contributing researchers. A low-price accommodation option ($50/night) will be available.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
Additional travel information is available on the <a href="http://cognitivesciencesociety.org/conference2016/travelinfo.html" target="_blank">CogSci2016 travel page</a>.</div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<br /></div>
<div class="MsoNoSpacing" style="font-family: Calibri, sans-serif; font-size: 11pt; margin: 0in 0in 0.0001pt;">
<b>Application Process: </b>More detailed information on the application process and the venue will be made available at the <a href="http://www.frontiersin.org/events/15th_Neural_Computation_and_Psychology_Workshop_%28NCPW15%29_Contemporary_Neural_Network_Models_Machine/3135" style="color: #954f72;">frontiers website</a> by Feb 10, 2016. The deadline for paper and poster submissions and for applications to attend will be April 1, 2016, and notification of acceptance and travel awards for trainees and participating researchers will be on May 1, 2016.</div>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-37808922241232032362015-09-03T20:28:00.001-04:002015-09-03T20:28:20.464-04:00Reproducibility project: A front row seat<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">A <a href="http://dx.doi.org/10.1126/science.aac4716" target="_blank">recent paper in <i>Science</i></a> reports the results of a large-scale effort to test reproducibility in psychological science. The results have caused much discussion (as well they should) in both general public and science forums. I thought I would offer my perspective as the lead author of one of the studies that was included in the reproducibility analysis. I had heard about the project even before being contacted to participate and one of the things that appealed to me about it was that they were trying to be unbiased in their selection of studies for replication: all papers published in three prominent journals in 2008. <a href="http://magnuson.psy.uconn.edu/" target="_blank">Jim Magnuson</a> and I had published a paper in one of those journals (<i>Journal of Experimental Psychology: Learning, Memory, & Cognition</i>) in 2008 (<a href="http://www.danmirman.org/pdfs/MirmanMagnuson2008.pdf" target="_blank">Mirman & Magnuson, 2008</a>), so I figured I would hear from them sooner or later. </span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<a name='more'></a><span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">In 2012 I was contacted by one of the members of the Open Science Collaboration requesting either our original experiment files or details of the procedure so they could replicate it as closely as possible. I provided the experiment files and we had a little email discussion in which I described our data analysis procedure (exclusion of error trials and reaction time outliers, etc.) and verified which of the effects from our original paper was the critical one for replication -- an inhibitory effect of near semantic neighbors on visual word recognition. They conducted a power analysis and ran their final data collection plans by me for my input. I flagged some minor issues, but didn't see anything that would be a significant problem. It was great to be informed every step of the way -- it felt like a true replication effort that was independent and transparent, yet still gave me the opportunity to raise any significant concerns.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br />
My key finding was statistically significant in their replication, though the effect size was smaller than in my original report. Thanks to the project's open sharing of the data and analysis code, I was able to make a version of their Figure 3 with my study identified (X and arrow):</span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-oSAABEmIn7fXdu6eZEopMTDcFD9_UIoUF5pxIJLNCb5IthBxNSfGxlliZ-xBxVmqypeDA-yVhyphenhyphenksXqm9umXX8GzzA68f5txcr0iuaXENJVSHucXS0rYBvq2kiawr5JJ5twTOPhTEjzo/s1600/RPP_arrow.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Trebuchet MS, sans-serif;"><img border="0" height="280" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-oSAABEmIn7fXdu6eZEopMTDcFD9_UIoUF5pxIJLNCb5IthBxNSfGxlliZ-xBxVmqypeDA-yVhyphenhyphenksXqm9umXX8GzzA68f5txcr0iuaXENJVSHucXS0rYBvq2kiawr5JJ5twTOPhTEjzo/s400/RPP_arrow.png" width="400" /></span></a></div>
<span style="font-family: 'Trebuchet MS', sans-serif;">My experience with the reproducibility project was that the team was extremely careful and professional. The studies were selected for replication systematically rather than out of skepticism about particular results, and I was consulted at every stage, which allowed me both to help make it a true replication and to raise any concerns about differences between my original study and the replication.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><b><br /></b>
<b>The hand wringing</b></span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br />
Most of the discussion about the <i>Science </i>paper has been about whether there is a crisis in psychology and whether psychology is a "real" science -- with physics, chemistry, and maybe biology held up as the "real" sciences. </span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br />
First, science is a method, not a content area. One can apply the scientific method to the behavior of atoms, molecules, organisms, or human behavior. Each of those domains has its own challenges, but the science is in the method, not in the content. </span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br />
Second, part of that method is replication. Not replicability, which is a property of a particular phenomenon; but replication, which is a methodological strategy. Observing a phenomenon once is intriguing, observing it repeatedly makes it something worth explaining. Each individual scientific report should be treated as provisional: Jim and I observed an effect that we reported in that 2008 paper, but it could have been a random coincidence or a bizarre property of the context of our experiment. This replication gives me more confidence in the result and I have separately found the effect in a different task and two different populations (<a href="http://www.danmirman.org/pdfs/Mirman2011.pdf" target="_blank">Mirman, 2011</a>; <a href="http://www.danmirman.org/pdfs/MirmanGraziano2013.pdf" target="_blank">Mirman & Graziano, 2013</a>), which makes me more confident in our underlying theory. To my mind, the bigger problem is that there is very little incentive for running replication studies. Journals and funding agencies want to see innovative science and replications are literally the opposite of innovative. People have proposed various clever ways of encouraging and sharing replication studies and some journals have started publishing replication reports. I hope this trend continues and the academic culture begins to accept and reward replications.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br />
Third, much discussion has focused on the fact that a high proportion of studies did not "replicate" -- an effect that was originally statistically significant was not statistically significant in the replication -- and that the replication effect sizes were generally smaller than the originally reported effect sizes. The latter was true of my study: the effect replicated, but the replication effect size was smaller than the effect size in our original report, which is reflected by our data point being below the diagonal in the figure. The replication issue is a straightforward consequence of the effect size issue: even assuming that an effect exists in the population (not just in the original sample), if the population effect size is smaller than the one in the original sample, then power analysis based on the original sample effect size will produce under-powered studies that will, sometimes, fail to detect the effect in the population. So the relevant issue is that reported sample effect sizes tend to be larger than population effect sizes, but this is a direct consequence of the "statistical significance filter", also known as "publication bias": statistically significant effects can be published but null results are very rarely published. For example, Jim and I may not have been the only people to test for a near semantic neighbor effect, but maybe the effects in the other studies were smaller and not statistically significant, so they were never published (probably not even submitted for publication). When you chop off the low end of the effect size distribution, the average of the trimmed distribution will necessarily be larger than the average of the full distribution.</span><br />
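<span style="font-family: Trebuchet MS, sans-serif;">The truncation effect is easy to see in a small simulation (a hypothetical sketch, not part of the original analysis): simulate many studies of the same true effect, keep only the statistically significant ones, and compare the average observed effect sizes.</span><br />

```r
# Hypothetical simulation of the "statistical significance filter":
# many studies estimate the same true effect, but only studies that
# reach p < .05 pass through the filter and get published.
set.seed(1)
n <- 20          # participants per study
d.true <- 0.3    # true population effect size (Cohen's d)
n.studies <- 10000

# simulate one-sample studies; record observed effect size and p-value
sim <- replicate(n.studies, {
  x <- rnorm(n, mean = d.true, sd = 1)
  c(d = mean(x) / sd(x), p = t.test(x)$p.value)
})
d.obs <- sim["d", ]
published <- sim["p", ] < .05

mean(d.obs)            # all studies: close to the true effect
mean(d.obs[published]) # "published" studies only: substantially inflated
```

<span style="font-family: Trebuchet MS, sans-serif;">Power analyses based on the inflated published effect sizes will therefore tend to produce under-powered replication studies.</span><br />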
<span style="font-family: Trebuchet MS, sans-serif;"><br />
<b>Where do we go from here? </b></span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">I think we need two major changes:</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">(1) We need to start encouraging and rewarding replication studies. Not just when we think someone is wrong, but as a matter of course, as part of going about the business of psychological science. I've heard many good ideas -- using replication studies as assignments in research methods courses, publishing them as online supplements to the original studies, or maintaining an online repository of replications -- these and other ideas need to become part of how we do psychological science. </span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">(2) We need to accept that we're dealing with variable effects and that each new result should be treated as provisional until it is thoroughly replicated. There are lots of aspects to this, but I think the most important one is not to take it personally or get defensive when someone raises doubts or fails to replicate our work. One can run a perfectly good experiment, do all of the analyses the best possible way, and come up with something that is true for the sample but not true for the population. I think it is important to be very aware of how big a leap we are making when we see a phenomenon in 20 college students and draw conclusions about fundamental aspects of human cognition.</span></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="float: left; padding: 5px;"><a href="http://www.researchblogging.org/"><span style="font-family: Trebuchet MS, sans-serif;"><img alt="ResearchBlogging.org" src="http://www.researchblogging.org/public/citation_icons/rb2_large_white.png" style="border: 0;" /></span></a></span><span class="Z3988" style="font-family: Trebuchet MS, sans-serif;" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Science&rft_id=info%3Adoi%2F10.1126%2Fscience.aac4716&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Estimating+the+reproducibility+of+psychological+science&rft.issn=0036-8075&rft.date=2015&rft.volume=349&rft.issue=6251&rft.spage=0&rft.epage=0&rft.artnum=http%3A%2F%2Fwww.sciencemag.org%2Fcgi%2Fdoi%2F10.1126%2Fscience.aac4716&rft.au=Open+Science+Collaboration&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology">Open Science Collaboration (2015). Estimating the reproducibility of psychological science. <span style="font-style: italic;">Science, 349</span> (6251) DOI: <a href="http://dx.doi.org/10.1126/science.aac4716" rev="review">10.1126/science.aac4716</a></span><br />
<span class="Z3988" style="font-family: Trebuchet MS, sans-serif;" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Journal+of+Experimental+Psychology%3A+Learning%2C+Memory%2C+and+Cognition&rft_id=info%3Adoi%2F10.1037%2F0278-7393.34.1.65&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Attractor+dynamics+and+semantic+neighborhood+density%3A+Processing+is+slowed+by+near+neighbors+and+speeded+by+distant+neighbors.&rft.issn=1939-1285&rft.date=2008&rft.volume=34&rft.issue=1&rft.spage=65&rft.epage=79&rft.artnum=http%3A%2F%2Fdoi.apa.org%2Fgetdoi.cfm%3Fdoi%3D10.1037%2F0278-7393.34.1.65&rft.au=Mirman%2C+D.&rft.au=Magnuson%2C+J.&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology">Mirman, D., & Magnuson, J. (2008). Attractor dynamics and semantic neighborhood density: Processing is slowed by near neighbors and speeded by distant neighbors. <span style="font-style: italic;">Journal of Experimental Psychology: Learning, Memory, and Cognition, 34</span> (1), 65-79 DOI: <a href="http://dx.doi.org/10.1037/0278-7393.34.1.65" rev="review">10.1037/0278-7393.34.1.65</a></span><br />
<span class="Z3988" style="font-family: Trebuchet MS, sans-serif;" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Cognitive%2C+Affective%2C+%26+Behavioral+Neuroscience&rft_id=info%3Adoi%2F10.3758%2Fs13415-010-0009-7&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Effects+of+near+and+distant+semantic+neighbors+on+word+production&rft.issn=1530-7026&rft.date=2010&rft.volume=11&rft.issue=1&rft.spage=32&rft.epage=43&rft.artnum=http%3A%2F%2Fwww.springerlink.com%2Findex%2F10.3758%2Fs13415-010-0009-7&rft.au=Mirman%2C+D.&rfe_dat=bpr3.included=1;bpr3.tags=Neuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman, D. (2011). Effects of near and distant semantic neighbors on word production <span style="font-style: italic;">Cognitive, Affective, & Behavioral Neuroscience, 11</span> (1), 32-43 DOI: <a href="http://dx.doi.org/10.3758/s13415-010-0009-7" rev="review">10.3758/s13415-010-0009-7</a></span><br />
<span class="Z3988" style="font-family: Trebuchet MS, sans-serif;" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Journal+of+Cognitive+Neuroscience&rft_id=info%3Adoi%2F10.1162%2Fjocn_a_00408&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=The+Neural+Basis+of+Inhibitory+Effects+of+Semantic+and+Phonological+Neighbors+in+Spoken+Word+Production&rft.issn=0898-929X&rft.date=2013&rft.volume=25&rft.issue=9&rft.spage=1504&rft.epage=1516&rft.artnum=http%3A%2F%2Fwww.mitpressjournals.org%2Fdoi%2Fabs%2F10.1162%2Fjocn_a_00408&rft.au=Mirman%2C+D.&rft.au=Graziano%2C+K.&rfe_dat=bpr3.included=1;bpr3.tags=Neuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman, D., & Graziano, K. (2013). The Neural Basis of Inhibitory Effects of Semantic and Phonological Neighbors in Spoken Word Production <span style="font-style: italic;">Journal of Cognitive Neuroscience, 25</span> (9), 1504-1516 DOI: <a href="http://dx.doi.org/10.1162/jocn_a_00408" rev="review">10.1162/jocn_a_00408</a></span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-61121031867888261402015-06-19T12:17:00.000-04:002015-06-19T12:17:15.173-04:00Zeno's paradox of teaching<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">I've wrapped up my Spring term teaching and received my teaching evals. Now that I've (finally) had a chance to teach the same class a few times, I am starting to believe in what I call <i>Zeno's Paradox of Teaching</i>: every time I teach a class, my improvement in teaching quality is half the distance between the quality of my previous attempt and my maximum ability to teach that material.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6FI4UKsXoUNzoADYEaHblMcTkhYXv_riR5seZ02Lj6fv3UuZ7nAWxDBjKiF7zeXBEEjiwJG3DSeqV8qe6NiKwmXNtvaQrGuxSoVre6hZed43XiIBqeqgH7oxLnyL0ww6QUKZZotS9hDI/s1600/Zeno.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6FI4UKsXoUNzoADYEaHblMcTkhYXv_riR5seZ02Lj6fv3UuZ7nAWxDBjKiF7zeXBEEjiwJG3DSeqV8qe6NiKwmXNtvaQrGuxSoVre6hZed43XiIBqeqgH7oxLnyL0ww6QUKZZotS9hDI/s320/Zeno.png" width="320" /></a></div>
<span style="font-family: Trebuchet MS, sans-serif;">If I'm right about this, then I think it means that it's important to think long-term when approaching teaching:</span><br />
<ol style="text-align: left;">
<li><span style="font-family: Trebuchet MS, sans-serif;">New faculty (like me) should start by teaching primarily core courses, ones that are offered every year, have good support materials, and provide a consistent opportunity for improvement. Specialized seminars can be fun to teach, but if they're not going to be offered every year, then improvement will be slow.</span></li>
<li><span style="font-family: Trebuchet MS, sans-serif;">Don't drive yourself (myself) crazy trying to teach the "perfect" class on your (my) first time teaching. Try to do a good job and next time try to improve on it as much as possible.</span></li>
<li><span style="font-family: Trebuchet MS, sans-serif;">Zeno's paradox means that I'll never teach quite as well as I think I could teach. The positive message there is that one should continue trying to come up with creative ways to improve a course. The warning there is that perfection is not an appropriate standard and not to be too hard on oneself for failing to reach it.</span></li>
</ol>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-2290055021117042422015-06-08T04:38:00.000-04:002015-06-08T04:38:40.750-04:00A little growth curve analysis Q&A<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">I had an email exchange with <a href="http://www.haskins.yale.edu/staff/malins.html" target="_blank">Jeff Malins</a>, who asked several questions about growth curve analysis. I often get questions of this sort and Jeff agreed to let me post excerpts from our (email) conversation. The following has been lightly edited for clarity and to be more concise.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"></span><br />
<a name='more'></a><br />
<span style="font-family: Trebuchet MS, sans-serif;">Jeff asked:</span><br />
<blockquote class="tr_bq">
<span style="font-family: Trebuchet MS, sans-serif;">I’ve fit some curves for accuracy data using both linear and logistic approaches and in both versions, one of the conditions acts strangely. As is especially evident in the linear plots, the green line is not a line! Is this an issue with the </span><span style="font-family: Courier New, Courier, monospace;">fitted()</span><span style="font-family: Trebuchet MS, sans-serif;"> function you’ve come across before? Or is this is a signal something is amiss with the model?</span></blockquote>
<span style="font-family: Trebuchet MS, sans-serif;">I answered:</span><br />
<blockquote class="tr_bq">
<span style="font-family: Trebuchet MS, sans-serif;">In the logistic model, some curvature is reasonable because the model is linear on the logit scale, but that is curved when projected back to the proportions scale. Since all of the model fits look curved for the logistic model, that seems like a reasonable explanation.</span><span style="font-family: Trebuchet MS, sans-serif;"><br /></span><span style="font-family: Trebuchet MS, sans-serif;">I am not sure what is going wrong in your linear model, but one possibility is that it is some odd consequence of unequal numbers of observations (if that is even relevant here).</span></blockquote>
<span style="font-family: Trebuchet MS, sans-serif;">Unequal number of trials turned out to be part of the problem, which Jeff fixed, then followed up with a few more questions:</span><br />
<blockquote class="tr_bq">
<span style="font-family: Trebuchet MS, sans-serif;">(1) If I create a first-order orthogonal time term and then use this in the model (</span><span style="font-family: Courier New, Courier, monospace;">ot1</span><span style="font-family: Trebuchet MS, sans-serif;"> in <a href="http://www.danmirman.org/gca#TOC-A-complete-example" target="_blank">your code</a>), my understanding is this is centered in the distribution as opposed to starting at the origin. So for linear models fit using </span><span style="font-family: Courier New, Courier, monospace;">ot1</span><span style="font-family: Trebuchet MS, sans-serif;">, an intercept term to me seems to be indexing global amplitude differences in the model fits (translation in the y-direction) rather than a y-intercept. Is this correct?</span></blockquote>
<blockquote class="tr_bq">
<span style="font-family: Trebuchet MS, sans-serif;">(2) My understanding is that one only needs to generate orthogonal time terms if fitting second order models or higher. However, I performed a logistic GCA which was first order and it failed to converge when I used my raw time variable and only converged when I transformed it to an orthogonal polynomial with the same number of steps.</span></blockquote>
<blockquote class="tr_bq">
<span style="font-family: Trebuchet MS, sans-serif;">(3) I am unclear as to when to include a “1” in the random effects structure for conditions nested within subjects. For example, what is the difference between </span><span style="font-family: Courier New, Courier, monospace;">(1+ot1 | Subject:Condition)</span><span style="font-family: Trebuchet MS, sans-serif;"> and </span><span style="font-family: Courier New, Courier, monospace;">(ot1 | Subject:Condition)</span><span style="font-family: Trebuchet MS, sans-serif;">? I have the former in the linear GCA and the latter in the logistic GCA.</span></blockquote>
<span style="font-family: Trebuchet MS, sans-serif;">My answers:</span><br />
<blockquote class="tr_bq">
<span style="font-family: Trebuchet MS, sans-serif;">(1) Yes, that is correct. I might quibble slightly with your terminology and say that you've moved the origin to the center of your time window rather than the baseline time point, but the concept is the same. Also, I find that having the intercept represent the overall average is often a helpful property because traditional "area under the curve" analyses are then represented by the intercept term.</span><span style="font-family: Trebuchet MS, sans-serif;"></span></blockquote>
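<span style="font-family: Trebuchet MS, sans-serif;">That property is easy to verify with toy data (a hypothetical example, not from the exchange): when the linear time term is centered, the fitted intercept equals the overall mean of the outcome.</span><br />

```r
# With a centered (orthogonal) linear time term, the intercept of a
# linear model is the overall mean of the outcome -- capturing the
# "area under the curve" interpretation.
y <- c(2, 4, 5, 7, 9)              # toy outcome at 5 time points
ot1 <- poly(1:5, degree = 1)[, 1]  # centered, orthogonal linear term

coef(lm(y ~ ot1))["(Intercept)"]   # 5.4
mean(y)                            # 5.4
```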
<blockquote class="tr_bq">
<span style="font-family: Trebuchet MS, sans-serif;">(2) Well, centering the linear term does affect interpretation of the intercept, which is sometimes worth doing. However, I suspect you were thinking about de-correlating the time terms, in which case you are correct, that only matters when there are multiple time terms (starting with second-order polynomials). Your point about convergence is a slightly trickier issue. Convergence can be touchy and generally works better when the predictors are on similar scales. Raw time variables typically go from 0 to 10 or 20, but other predictors are often 0/1, so there is an order of magnitude difference there. The orthogonal linear time term should be in the -1 to 1 range, which can help with convergence.</span></blockquote>
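<span style="font-family: Trebuchet MS, sans-serif;">The scale difference is easy to check (hypothetical values, assuming 50 ms time bins): a first-order orthogonal term built with <span style="font-family: Courier New, Courier, monospace;">poly()</span> is centered and falls well within the -1 to 1 range.</span><br />

```r
# Hypothetical raw time variable: 50 ms bins from 0 to 1000 ms
time.raw <- seq(0, 1000, by = 50)

# first-order orthogonal polynomial: centered and rescaled
ot1 <- poly(time.raw, degree = 1)[, 1]

range(time.raw)  # 0 to 1000: orders of magnitude larger than 0/1 predictors
range(ot1)       # within -1 to 1: comparable in scale to other predictors
```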
<blockquote class="tr_bq">
<span style="font-family: Trebuchet MS, sans-serif;">(3) There is no difference between those two random effect definitions: the "random intercepts" </span><span style="font-family: Courier New, Courier, monospace;">(1 | ...)</span><span style="font-family: Trebuchet MS, sans-serif;"> are included by default even if you omit the 1. Sometimes I include the 1 to make my code more transparent when I am teaching about GCA, but I almost never use it in my own code. Including the 1 can also make it easier to think about how to de-correlate random effects, but that's probably too tangential for now.</span></blockquote>
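<span style="font-family: Trebuchet MS, sans-serif;">The equivalence can be verified directly by fitting both forms; a sketch using <span style="font-family: Courier New, Courier, monospace;">lme4</span>'s built-in <span style="font-family: Courier New, Courier, monospace;">sleepstudy</span> data standing in for nested eye-tracking data:</span><br />

```r
library(lme4)

# (1 + Days | Subject) and (Days | Subject) specify the same model
# because the random intercept is included by default
m1 <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
m2 <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

all.equal(fixef(m1), fixef(m2))                            # TRUE
all.equal(as.numeric(logLik(m1)), as.numeric(logLik(m2)))  # TRUE
```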
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-25530358346585879462015-06-07T12:24:00.000-04:002015-06-07T12:25:23.017-04:00Job Opening: MRRI Institute Investigator (all levels) -- Language and Cognition in Neuropsychological Populations<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal">
<span style="font-size: 11.0pt;"><span style="font-family: Trebuchet MS, sans-serif;"><a href="http://www.mrri.org/" target="_blank">Moss Rehabilitation Research Institute (MRRI)</a> seeks an Institute Investigator to join our historic program in language and cognition and help build the next generation of translational neuroscience/neurorehab research.</span></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-family: Trebuchet MS, sans-serif;"><span style="font-size: 11.0pt;">The successful applicant is
expected to conduct an independent program of research and to participate in research
collaborations within and outside MRRI. The ideal candidate is a cognitive,
clinical, or neuroscientist or speech-language pathologist who studies language
or related cognitive disorders, and who may also conduct research in
translating basic science findings to improve clinical practice. Preference
will be given to candidates who complement the faculty’s interests in areas
like language processing, language learning, semantics, action planning,
cognitive control, neuromodulation, neuroplasticity, and/or lesion-symptom
mapping</span><span style="font-size: 11.0pt;">. <o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="font-size: 11.0pt;"><span style="font-family: Trebuchet MS, sans-serif;"></span></span></div>
<a name='more'></a><br />
<div class="MsoNormal">
<span style="font-size: 11.0pt;"><span style="font-family: Trebuchet MS, sans-serif;">Candidates must have a Ph.D.
in a relevant area. Evidence of research
productivity and prior grant funding are required, as salaries and labs at MRRI
are partially grant supported. <b>Qualified candidates at all levels are
welcome to apply</b>. We offer a competitive
start-up package, and ongoing salary support is available. <o:p></o:p></span></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-size: 11.0pt;"><span style="font-family: Trebuchet MS, sans-serif;">MRRI is known internationally
for its research in neuroscience and neurorehabilitation, including a long
tradition of ground-breaking research in aphasia. Our unique resources include a large research
registry of stroke and TBI research volunteers, and the long-running MossRehab
Aphasia Center, a venue for life participation activities, training, and
research. MRRI is renowned for its supportive, collegial environment, peer
mentoring, and collaborative ties with Philadelphia’s outstanding colleges and
universities. In particular, we have long-standing collaborations with the
neurology and neuroimaging faculty at the University of Pennsylvania, with
grant supported projects in structural and functional neuroimaging, TMS, and
tDCS.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-size: 11.0pt;"><span style="font-family: Trebuchet MS, sans-serif;">Einstein Healthcare Network
is proud to offer our employees outstanding career opportunities including
competitive compensation, attractive benefits plan including
medical/dental/vision coverage, generous vacation time, and tuition
reimbursement. <o:p></o:p></span></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-size: 11.0pt;"><span style="font-family: Trebuchet MS, sans-serif;">EOE<o:p></o:p></span></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="font-size: 11.0pt;"><span style="font-family: Trebuchet MS, sans-serif;">Interested candidates may
submit a cover letter describing current research programs and proposed future
directions in the MRRI environment, along with CV to: <o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="font-family: Trebuchet MS, sans-serif;"><span style="font-size: 11.0pt;">Kevin Whelihan, Research
Administrator; <br />
MRRI, MossRehab @ Elkins Park<br />
50 Township Line Road<br />
Elkins Park, PA 19027 <br />
or </span><a href="mailto:whelihak@einstein.edu"><span style="font-size: 11.0pt;">whelihak@einstein.edu</span></a><span style="font-size: 11.0pt;"> .<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="font-size: 11.0pt;"><span style="font-family: Trebuchet MS, sans-serif;">Applications will be accepted
until the position is filled. <o:p></o:p></span></span></div>
<span style="font-family: Trebuchet MS, sans-serif;"><span style="font-size: 11pt;">We also welcome informal approaches by email or
phone that begin a conversation that may eventually lead to an application;
such inquiries can be directed to Dr. Myrna Schwartz</span><span style="font-size: 11pt;">. </span><span style="font-size: 12pt;"></span></span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-45491455159025467412015-04-20T21:57:00.000-04:002015-04-20T21:57:30.298-04:00Plotting Factor Analysis Results<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">A recent factor analysis project (as discussed previously <a href="http://mindingthebrain.blogspot.com/2015/04/mapping-language-system-part-1.html">here</a>, <a href="http://mindingthebrain.blogspot.com/2015/04/mapping-language-system-part-2.html">here</a>, and <a href="http://mindingthebrain.blogspot.com/2015/04/aphasia-factors-vs-subtypes.html" target="_blank">here</a>) gave me an opportunity to experiment with some different ways of visualizing highly multidimensional data sets. Factor analysis results are often presented in tables of factor loadings, which are good when you want the numerical details, but bad when you want to convey larger-scale patterns – loadings of 0.91 and 0.19 look similar in a table but very different in a graph. The detailed code is <a href="http://rpubs.com/danmirman/plotting_factor_analysis" target="_blank">posted on RPubs</a> because embedding the code, output, and figures in a webpage is much, much easier using RStudio's markdown functions. That version shows how to get these example data and how to format them correctly for these plots. </span><span style="font-family: 'Trebuchet MS', sans-serif;">Here I will just post the key plot commands and figures those commands produce.</span><span style="font-family: 'Trebuchet MS', sans-serif;"> </span><br />
<span style="font-family: Trebuchet MS, sans-serif;"></span><br />
<a name='more'></a><br />
<span style="font-family: Trebuchet MS, sans-serif;">First, a bar graph showing each measure's factor loadings with each factor in a separate facet (subplot):</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<br />
<div>
<div style="overflow: auto;">
<div class="geshifilter">
<pre class="r geshifilter-R" style="font-family: monospace;"><span style="color: #666666; font-style: italic;">#For each test, plot the loading as length and fill color of a bar</span>
<span style="color: #666666; font-style: italic;"># note that the length will be the absolute value of the loading but the </span>
<span style="color: #666666; font-style: italic;"># fill color will be the signed value, more on this below</span>
<a href="http://inside-r.org/packages/cran/ggplot">ggplot</a><span style="color: #009900;">(</span>loadings.m<span style="color: #339933;">,</span> aes<span style="color: #009900;">(</span>Test<span style="color: #339933;">,</span> <a href="http://inside-r.org/r-doc/base/abs"><span style="color: #003399; font-weight: bold;">abs</span></a><span style="color: #009900;">(</span>Loading<span style="color: #009900;">)</span><span style="color: #339933;">,</span> fill=Loading<span style="color: #009900;">)</span><span style="color: #009900;">)</span> +
facet_wrap<span style="color: #009900;">(</span>~ Factor<span style="color: #339933;">,</span> <a href="http://inside-r.org/r-doc/base/nrow"><span style="color: #003399; font-weight: bold;">nrow</span></a>=<span style="color: #cc66cc;">1</span><span style="color: #009900;">)</span> + <span style="color: #666666; font-style: italic;">#place the factors in separate facets</span>
geom_bar<span style="color: #009900;">(</span>stat=<span style="color: blue;">"identity"</span><span style="color: #009900;">)</span> + <span style="color: #666666; font-style: italic;">#make the bars</span>
coord_flip<span style="color: #009900;">(</span><span style="color: #009900;">)</span> + <span style="color: #666666; font-style: italic;">#flip the axes so the test names can be horizontal </span>
<span style="color: #666666; font-style: italic;">#define the fill color gradient: blue=positive, red=negative</span>
scale_fill_gradient2<span style="color: #009900;">(</span>name = <span style="color: blue;">"Loading"</span><span style="color: #339933;">,</span>
high = <span style="color: blue;">"blue"</span><span style="color: #339933;">,</span> mid = <span style="color: blue;">"white"</span><span style="color: #339933;">,</span> low = <span style="color: blue;">"red"</span><span style="color: #339933;">,</span>
midpoint=<span style="color: #cc66cc;">0</span><span style="color: #339933;">,</span> guide=F<span style="color: #009900;">)</span> +
ylab<span style="color: #009900;">(</span><span style="color: blue;">"Loading Strength"</span><span style="color: #009900;">)</span> + <span style="color: #666666; font-style: italic;">#improve y-axis label</span>
theme_bw<span style="color: #009900;">(</span>base_size=<span style="color: #cc66cc;">10</span><span style="color: #009900;">)</span> <span style="color: #666666; font-style: italic;">#use a black-and-white theme with set font size</span></pre>
</div>
</div>
<a href="http://www.inside-r.org/pretty-r" title="Created by Pretty R at inside-R.org">Created by Pretty R at inside-R.org</a><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6TGVJApJsyOFDNk9p3Nm0jLAxTZxrU1sLoiYtGs-Opj4acil5KlHVwe0zCQKNixhNe2mtk6O6pY6LRQRJAUqSl1hjXjmgUotUrkZB2iMl22l4GBaT14Bz2XI-Fzcnkny-m7AecopXL4g/s1600/fig1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6TGVJApJsyOFDNk9p3Nm0jLAxTZxrU1sLoiYtGs-Opj4acil5KlHVwe0zCQKNixhNe2mtk6O6pY6LRQRJAUqSl1hjXjmgUotUrkZB2iMl22l4GBaT14Bz2XI-Fzcnkny-m7AecopXL4g/s1600/fig1.png" height="285" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Fig. 1 from Mirman et al., 2015, <i>Nature Communications</i></td></tr>
</tbody></table>
<span style="font-family: Trebuchet MS, sans-serif;">Second, the full pairwise correlation matrix with a stacked bar graph showing each measure's (absolute) loading on each factor:</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<br />
<div style="overflow: auto;">
<div class="geshifilter">
<pre class="r geshifilter-R" style="font-family: monospace;"><a href="http://inside-r.org/r-doc/base/library"><span style="color: #003399; font-weight: bold;">library</span></a><span style="color: #009900;">(</span><a href="http://inside-r.org/r-doc/graphics/grid"><span style="color: #003399; font-weight: bold;">grid</span></a><span style="color: #009900;">)</span> <span style="color: #666666; font-style: italic;">#for adjusting plot margins</span>
<span style="color: #666666; font-style: italic;">#place the tests on the x- and y-axes, </span>
<span style="color: #666666; font-style: italic;">#fill the elements with the strength of the correlation</span>
p1 <- <a href="http://inside-r.org/packages/cran/ggplot">ggplot</a><span style="color: #009900;">(</span>corrs.m<span style="color: #339933;">,</span> aes<span style="color: #009900;">(</span>Test2<span style="color: #339933;">,</span> Test<span style="color: #339933;">,</span> fill=<a href="http://inside-r.org/r-doc/base/abs"><span style="color: #003399; font-weight: bold;">abs</span></a><span style="color: #009900;">(</span>Correlation<span style="color: #009900;">)</span><span style="color: #009900;">)</span><span style="color: #009900;">)</span> +
geom_tile<span style="color: #009900;">(</span><span style="color: #009900;">)</span> + <span style="color: #666666; font-style: italic;">#rectangles for each correlation</span>
<span style="color: #666666; font-style: italic;">#add actual correlation value in the rectangle</span>
geom_text<span style="color: #009900;">(</span>aes<span style="color: #009900;">(</span>label = <a href="http://inside-r.org/r-doc/base/round"><span style="color: #003399; font-weight: bold;">round</span></a><span style="color: #009900;">(</span>Correlation<span style="color: #339933;">,</span> <span style="color: #cc66cc;">2</span><span style="color: #009900;">)</span><span style="color: #009900;">)</span><span style="color: #339933;">,</span> size=<span style="color: #cc66cc;">2.5</span><span style="color: #009900;">)</span> +
theme_bw<span style="color: #009900;">(</span>base_size=<span style="color: #cc66cc;">10</span><span style="color: #009900;">)</span> + <span style="color: #666666; font-style: italic;">#black and white theme with set font size</span>
<span style="color: #666666; font-style: italic;">#rotate x-axis labels so they don't overlap, </span>
<span style="color: #666666; font-style: italic;">#get rid of unnecessary axis titles</span>
<span style="color: #666666; font-style: italic;">#adjust plot margins</span>
theme<span style="color: #009900;">(</span>axis.text.x = element_text<span style="color: #009900;">(</span>angle = <span style="color: #cc66cc;">90</span><span style="color: #009900;">)</span><span style="color: #339933;">,</span>
axis.title.x=element_blank<span style="color: #009900;">(</span><span style="color: #009900;">)</span><span style="color: #339933;">,</span>
axis.title.y=element_blank<span style="color: #009900;">(</span><span style="color: #009900;">)</span><span style="color: #339933;">,</span>
plot.margin = <a href="http://inside-r.org/r-doc/grid/unit"><span style="color: #003399; font-weight: bold;">unit</span></a><span style="color: #009900;">(</span><a href="http://inside-r.org/r-doc/base/c"><span style="color: #003399; font-weight: bold;">c</span></a><span style="color: #009900;">(</span><span style="color: #cc66cc;">3</span><span style="color: #339933;">,</span> <span style="color: #cc66cc;">1</span><span style="color: #339933;">,</span> <span style="color: #cc66cc;">0</span><span style="color: #339933;">,</span> <span style="color: #cc66cc;">0</span><span style="color: #009900;">)</span><span style="color: #339933;">,</span> <span style="color: blue;">"mm"</span><span style="color: #009900;">)</span><span style="color: #009900;">)</span> +
<span style="color: #666666; font-style: italic;">#set correlation fill gradient</span>
scale_fill_gradient<span style="color: #009900;">(</span>low=<span style="color: blue;">"white"</span><span style="color: #339933;">,</span> high=<span style="color: blue;">"red"</span><span style="color: #009900;">)</span> +
guides<span style="color: #009900;">(</span>fill=F<span style="color: #009900;">)</span> <span style="color: #666666; font-style: italic;">#omit unnecessary gradient legend</span>
p2 <- <a href="http://inside-r.org/packages/cran/ggplot">ggplot</a><span style="color: #009900;">(</span>loadings.m<span style="color: #339933;">,</span> aes<span style="color: #009900;">(</span>Test<span style="color: #339933;">,</span> <a href="http://inside-r.org/r-doc/base/abs"><span style="color: #003399; font-weight: bold;">abs</span></a><span style="color: #009900;">(</span>Loading<span style="color: #009900;">)</span><span style="color: #339933;">,</span> fill=Factor<span style="color: #009900;">)</span><span style="color: #009900;">)</span> +
geom_bar<span style="color: #009900;">(</span>stat=<span style="color: blue;">"identity"</span><span style="color: #009900;">)</span> + coord_flip<span style="color: #009900;">(</span><span style="color: #009900;">)</span> +
ylab<span style="color: #009900;">(</span><span style="color: blue;">"Loading Strength"</span><span style="color: #009900;">)</span> + theme_bw<span style="color: #009900;">(</span>base_size=<span style="color: #cc66cc;">10</span><span style="color: #009900;">)</span> +
<span style="color: #666666; font-style: italic;">#remove labels and tweak margins for combining with the correlation matrix plot</span>
theme<span style="color: #009900;">(</span>axis.text.y = element_blank<span style="color: #009900;">(</span><span style="color: #009900;">)</span><span style="color: #339933;">,</span>
axis.title.y = element_blank<span style="color: #009900;">(</span><span style="color: #009900;">)</span><span style="color: #339933;">,</span>
plot.margin = <a href="http://inside-r.org/r-doc/grid/unit"><span style="color: #003399; font-weight: bold;">unit</span></a><span style="color: #009900;">(</span><a href="http://inside-r.org/r-doc/base/c"><span style="color: #003399; font-weight: bold;">c</span></a><span style="color: #009900;">(</span><span style="color: #cc66cc;">3</span><span style="color: #339933;">,</span><span style="color: #cc66cc;">1</span><span style="color: #339933;">,</span><span style="color: #cc66cc;">39</span><span style="color: #339933;">,</span>-<span style="color: #cc66cc;">3</span><span style="color: #009900;">)</span><span style="color: #339933;">,</span> <span style="color: blue;">"mm"</span><span style="color: #009900;">)</span><span style="color: #009900;">)</span>
<a href="http://inside-r.org/r-doc/base/library"><span style="color: #003399; font-weight: bold;">library</span></a><span style="color: #009900;">(</span>gridExtra<span style="color: #009900;">)</span> <span style="color: #666666; font-style: italic;">#for combining the two plots</span>
grid.arrange<span style="color: #009900;">(</span>p1<span style="color: #339933;">,</span> p2<span style="color: #339933;">,</span> <a href="http://inside-r.org/r-doc/base/ncol"><span style="color: #003399; font-weight: bold;">ncol</span></a>=<span style="color: #cc66cc;">2</span><span style="color: #339933;">,</span> widths=<a href="http://inside-r.org/r-doc/base/c"><span style="color: #003399; font-weight: bold;">c</span></a><span style="color: #009900;">(</span><span style="color: #cc66cc;">2</span><span style="color: #339933;">,</span> <span style="color: #cc66cc;">1</span><span style="color: #009900;">)</span><span style="color: #009900;">)</span> <span style="color: #666666; font-style: italic;">#side-by-side, matrix gets more space</span></pre>
</div>
</div>
<a href="http://www.inside-r.org/pretty-r" title="Created by Pretty R at inside-R.org">Created by Pretty R at inside-R.org</a><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGnMp8xak2M8gPK8WtGd-MH-WFvQPKZt5vkOYM4ZQmY1t3dBvZGVcPDWTFS922Dkw2HYkZEmYbuR8m02BggCAXMmnD9iUgd64nyiQObnlRX17ci9XYCXnk3eViBI_sJfxOejVi9rgkNDY/s1600/fig2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGnMp8xak2M8gPK8WtGd-MH-WFvQPKZt5vkOYM4ZQmY1t3dBvZGVcPDWTFS922Dkw2HYkZEmYbuR8m02BggCAXMmnD9iUgd64nyiQObnlRX17ci9XYCXnk3eViBI_sJfxOejVi9rgkNDY/s1600/fig2.png" height="308" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Fig. 2 from Mirman et al., in press, <i>Neuropsychologia</i></td></tr>
</tbody></table>
</div>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com1tag:blogger.com,1999:blog-8091991448412546705.post-67120446763528179662015-04-20T20:03:00.000-04:002015-04-20T20:03:03.840-04:00Aphasia factors vs. subtypes<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">One of the interesting things (to me anyway) that came out of our recent factor analysis project (Mirman et al., 2015, in press; see <a href="http://mindingthebrain.blogspot.com/2015/04/mapping-language-system-part-1.html" target="_blank">Part 1</a> and <a href="http://mindingthebrain.blogspot.com/2015/04/mapping-language-system-part-2.html" target="_blank">Part 2</a>) is a way of reconsidering aphasia types in terms of psycholinguistic factors rather than the traditional clinical aphasia subtypes.</span><br />
<a href="https://www.blogger.com/null" name="more"></a><span style="font-family: Trebuchet MS, sans-serif;"> </span><br />
<span style="font-family: Trebuchet MS, sans-serif;">The traditional aphasia subtyping approach is to use a diagnostic test like the Western Aphasia Battery or the Boston Diagnostic Aphasia Examination to assign an individual with aphasia to one of several subtype categories: Anomic, Broca's, Wernicke's, Conduction, Transcortical Sensory, Transcortical Motor, or Global aphasia. This approach has several well-known problems (see, e.g., Caplan, 2011, in K. M. Heilman & E. Valenstein (eds.), Clinical Neuropsychology, 5th Edition, Oxford Univ. Press, pp. 22-41), including heterogeneous symptomatology (e.g., Broca's aphasia is defined by the co-occurrence of symptoms that can have different manifestations and multiple, possibly unrelated causes) and the relatively high proportion of "unclassifiable" or "mixed" aphasia cases that do not fit into a single subtype category. And although aphasia subtypes are thought to have clear lesion correlates (Broca's aphasia = lesion in Broca's area; Wernicke's aphasia = lesion in Wernicke's area), this correlation is weak at best: 15-40% of patients have lesion locations that are not predictable from their aphasia subtype.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">Our factor analysis results provide a way to evaluate the classic aphasia syndromes with respect to data-driven performance clusters; that is, the factor scores. Our sample of 99 participants with aphasia had reasonable representation of four aphasia subtypes: Anomic (N=44), Broca's (N=27), Conduction (N=16), and Wernicke's (N=8); the 1 Global and 3 Transcortical Motor cases are not included here due to small sample sizes. The figure below shows, for each aphasia subtype group, the average (+/- SE) score on each of the four factors. Factor scores should be interpreted roughly like <i>z</i>-scores: positive means better-than-average performance, negative means poorer-than-average performance.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMd5zkDFsoe2Ltr7FnsNcIqN1hPTIid_sgIkYQh53uhFUpwqPhyphenhyphen1xp2ZUYWAIzra_q7CsWO6XBdcbCO3opUPwY23lyUM4VFeArGRmPj6dhxqQX4LMznX3PNKwISld7IuQ89WB4pcmt8Ac/s1600/FAScores_Dx-sub.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMd5zkDFsoe2Ltr7FnsNcIqN1hPTIid_sgIkYQh53uhFUpwqPhyphenhyphen1xp2ZUYWAIzra_q7CsWO6XBdcbCO3opUPwY23lyUM4VFeArGRmPj6dhxqQX4LMznX3PNKwISld7IuQ89WB4pcmt8Ac/s1600/FAScores_Dx-sub.png" height="200" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Credit: Mirman et al. (in press), <i>Neuropsychologia</i></td></tr>
</tbody></table>
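For readers interested in the plotting side, a figure like the one above is easy to sketch with ggplot2's <code>stat_summary</code>. The data frame below is made up purely for illustration (the <code>scores.m</code> name, its columns, and the scores themselves are not the actual study data), and the code assumes a recent ggplot2 (the <code>fun</code>/<code>fun.data</code> arguments; older versions used <code>fun.y</code>):

```r
# Sketch of a subtype-by-factor summary plot: bars for group means,
# error bars for +/- 1 SE. Illustrative data only.
library(ggplot2)
set.seed(1)
scores.m <- data.frame(
  Subtype = rep(c("Anomic", "Broca's", "Conduction", "Wernicke's"), each = 40),
  Factor  = rep(c("Semantic Recognition", "Speech Production",
                  "Speech Recognition", "Semantic Errors"), 40),
  Score   = rnorm(160)) # one row per participant x factor
p <- ggplot(scores.m, aes(Factor, Score, fill = Subtype)) +
  # bars show each group's mean factor score
  stat_summary(fun = mean, geom = "bar",
               position = position_dodge(width = 0.9)) +
  # error bars show the mean +/- 1 standard error
  stat_summary(fun.data = mean_se, geom = "errorbar", width = 0.2,
               position = position_dodge(width = 0.9)) +
  theme_bw(base_size = 10)
p
```

The <code>position_dodge</code> calls keep the bars and their error bars aligned within each factor.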
<span style="font-family: Trebuchet MS, sans-serif;">At first glance, the factor scores align with general descriptions of the aphasia subtypes: Anomic aphasia is relatively mild, so performance was generally better than average; participants with Broca's aphasia had production deficits (both phonological and semantic); participants with Conduction aphasia had phonological deficits (both speech recognition and speech production); and Wernicke's aphasia is more severe, so these participants showed relatively impaired performance on all factors, most pronounced for the semantic recognition factor. However, these central tendencies hide the tremendous amount of overlap among the four aphasia subtype groups on each factor. This can be seen in the density distributions of exactly the same data:</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVSQi0gpDoaaER1rmKrms8JKv5p-8XZRuemlzAn9ZgHhHT8ujEvIj2Zm8Cso8lhR-iGdgnQYwdXb1RenetaHEmlDjAT3tWoLI5UkRQ4vsPojFAV8Kd_FTbAEQzjFZr-_2zvAPPY3oViG8/s1600/FAScores_Dx_density.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVSQi0gpDoaaER1rmKrms8JKv5p-8XZRuemlzAn9ZgHhHT8ujEvIj2Zm8Cso8lhR-iGdgnQYwdXb1RenetaHEmlDjAT3tWoLI5UkRQ4vsPojFAV8Kd_FTbAEQzjFZr-_2zvAPPY3oViG8/s1600/FAScores_Dx_density.png" height="240" width="400" /></a></div>
<span style="font-family: Trebuchet MS, sans-serif;">As one example, consider the top left panel: the Wernicke's aphasia group clearly had the highest proportion of participants with poor semantic recognition, but some participants in that group were in the moderate range, overlapping with the other groups. Similarly, the other panels show that it would be relatively easy to find an individual in each subtype group who violates the expected pattern for that group (e.g., a participant with Conduction aphasia who has good speech recognition). This means that the group label only provides rough, probabilistic information about an individual's language abilities and is probably not very useful in a research context where we can typically characterize each participant's profile in terms of detailed performance data on a variety of tests. Plus, as our papers report, unlike the aphasia subtypes, the factors have fairly clear and distinct lesion correlates.</span><br />
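Density panels like these are also straightforward in ggplot2. Here is a minimal sketch with made-up data (again, <code>scores.m</code> and its columns are illustrative names, not the study data):

```r
# Sketch of per-factor score distributions by subtype: one facet per
# factor, one density curve per subtype group. Illustrative data only.
library(ggplot2)
set.seed(1)
scores.m <- data.frame(
  Subtype = rep(c("Anomic", "Broca's", "Conduction", "Wernicke's"), each = 40),
  Factor  = rep(c("Semantic Recognition", "Speech Production",
                  "Speech Recognition", "Semantic Errors"), 40),
  Score   = rnorm(160))
p <- ggplot(scores.m, aes(Score, color = Subtype)) +
  facet_wrap(~ Factor, nrow = 2) + # one panel per factor
  geom_density() +                 # smoothed score distributions
  theme_bw(base_size = 10)
p
```

With real factor scores, overlap between subtype groups shows up directly as overlapping density curves within a panel.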
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">In clinical contexts, one usually wants to maximize time spent on treatment, which often means trying to minimize time spent on assessment, so a compact summary of an individual's language profile can be very useful. Even so, I wonder whether continuous scores on cognitive-linguistic factors might provide more useful clinical guidance than an imperfect category label.</span></div>
<br />
<br />
<span style="float: left; padding: 5px;"><a href="http://www.researchblogging.org/"><img alt="ResearchBlogging.org" src="http://www.researchblogging.org/public/citation_icons/rb2_large_gray.png" style="border: 0;" /></a></span>
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Nature+Communications&rft_id=info%3A%2F&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Neural+Organization+of+Spoken+Language+Revealed+by+Lesion-Symptom+Mapping&rft.issn=&rft.date=2015&rft.volume=6&rft.issue=6762&rft.spage=1&rft.epage=9&rft.artnum=http%3A%2F%2Fwww.nature.com%2Fncomms%2F2015%2F150416%2Fncomms7762%2Ffull%2Fncomms7762.html&rft.au=Mirman%2C+D.&rft.au=Chen%2C+Q.&rft.au=Zhang%2C+Y.&rft.au=Wang%2C+Z.&rft.au=Faseyitan%2C+O.K.&rft.au=Coslett%2C+H.B.&rft.au=Schwartz%2C+M.F.&rfe_dat=bpr3.included=1;bpr3.tags=Medicine%2CPsychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman, D., Chen, Q., Zhang, Y., Wang, Z., Faseyitan, O.K., Coslett, H.B., & Schwartz, M.F. (2015). <a href="http://www.nature.com/ncomms/2015/150416/ncomms7762/full/ncomms7762.html">Neural Organization of Spoken Language Revealed by Lesion-Symptom Mapping.</a> <span style="font-style: italic;">Nature Communications, 6</span> (6762), 1-9. DOI: 10.1038/ncomms7762.<br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Neuropsychologia&rft_id=info%3Adoi%2F10.1016%2Fj.neuropsychologia.2015.02.014&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=The+ins+and+outs+of+meaning%3A+Behavioral+and+neuroanatomical+dissociation+of+semantically-driven+word+retrieval+and+multimodal+semantic+recognition+in+aphasia&rft.issn=00283932&rft.date=2015&rft.volume=&rft.issue=&rft.spage=&rft.epage=&rft.artnum=&rft.au=Mirman%2C+D.&rft.au=Zhang%2C+Y.&rft.au=Wang%2C+Z.&rft.au=Coslett%2C+H.B.&rft.au=Schwartz%2C+M.F.&rfe_dat=bpr3.included=1;bpr3.tags=Medicine%2CPsychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman, D., Zhang, Y., Wang, Z., Coslett, H.B., & Schwartz, M.F. (in press). <a href="http://www.sciencedirect.com/science/article/pii/S0028393215000755" target="_blank">The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia</a>. <span style="font-style: italic;">Neuropsychologia</span>. DOI: 10.1016/j.neuropsychologia.2015.02.014.</span></span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-33253120693156354282015-04-17T09:00:00.000-04:002015-04-17T10:03:14.914-04:00Mapping the language system: Part 2<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">This is the second of a multi-part post about a pair of papers that just came out (Mirman et al., <a href="http://www.nature.com/ncomms/2015/150416/ncomms7762/full/ncomms7762.html" target="_blank">2015</a>, <a href="http://www.sciencedirect.com/science/article/pii/S0028393215000755" target="_blank">in press</a>). <a href="http://mindingthebrain.blogspot.com/2015/04/mapping-language-system-part-1.html" target="_blank">Part 1</a> was about the behavioral data: we started with 17 behavioral measures from 99 participants with aphasia following left hemisphere stroke. Using factor analysis, we reduced those 17 measures to 4 underlying factors: Semantic Recognition, Speech Production, Speech Recognition, and Semantic Errors. For each of these factors, we then used voxel-based lesion-symptom mapping</span><span style="font-family: 'Trebuchet MS', sans-serif;"> </span><span style="font-family: 'Trebuchet MS', sans-serif;">(VLSM)</span><span style="font-family: 'Trebuchet MS', sans-serif;"> to identify the left hemisphere regions where stroke damage was associated with poorer performance. </span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"></span><br />
<a name='more'></a><span style="font-family: 'Trebuchet MS', sans-serif;">The speech factors mapped out parallel ventral and dorsal systems around the Sylvian fissure for speech recognition and speech production, respectively.</span><br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFIaTW5GUHpsaZBkN58PDmdtt5IHYZ7xDvQ4e1eeHWy006f41jt9_A1_YJmqww50p5_NlbTNjOreLyK16MHX0kZYwfGFIdWbzJgZSda1zLKtKz6c446LfehY4Kh-_AdHFg0DKFjCtYZRU/s1600/Speech-blog.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFIaTW5GUHpsaZBkN58PDmdtt5IHYZ7xDvQ4e1eeHWy006f41jt9_A1_YJmqww50p5_NlbTNjOreLyK16MHX0kZYwfGFIdWbzJgZSda1zLKtKz6c446LfehY4Kh-_AdHFg0DKFjCtYZRU/s1600/Speech-blog.png" height="139" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Credit: Mirman et al., <i>Nature Communications</i></td></tr>
</tbody></table>
<span style="font-family: 'Trebuchet MS', sans-serif;">Speech production deficits were associated with lesions in the "dorsal speech pathway" </span><span style="font-family: Trebuchet MS, sans-serif;">superior to the Sylvian fissure, primarily in the supramarginal </span><span style="font-family: Trebuchet MS, sans-serif;">gyrus and extending anteriorly into inferior postcentral, precentral, </span><span style="font-family: 'Trebuchet MS', sans-serif;">and premotor cortex (blue-green in the above figure). Speech recognition deficits were associated with lesions in the "ventral speech pathway" inferior to the Sylvian fissure, </span><span style="font-family: 'Trebuchet MS', sans-serif;">primarily in the superior temporal gyrus, including </span><span style="font-family: 'Trebuchet MS', sans-serif;">Wernicke’s area and extending deep into planum </span><span style="font-family: 'Trebuchet MS', sans-serif;">temporale (red-yellow on the above figure). This is somewhat different from the classic Broca-Wernicke-Lichtheim model of language: the speech production system is not localized just to inferior frontal regions ("Broca's area") but extends posteriorly through somatosensory and inferior parietal regions thought to be important for skilled action. In other words, speech production is a skilled action that involves an integrated neural system for motor planning, sensing positions of the articulators, and executing the movements. This is consistent with the frameworks developed by Greg Hickok (e.g., Hickok, 2012; Hickok & Poeppel, 2007) and (independently) Josef Rauschecker (e.g., Rauschecker & Scott, 2009). O</span><span style="font-family: 'Trebuchet MS', sans-serif;">ur ventral speech recognition stream was largely restricted to the superior temporal gyrus and planum temporale (as in Rauschecker's model; Hickok's model includes middle and inferior temporal regions in speech recognition). 
Also, w</span><span style="font-family: 'Trebuchet MS', sans-serif;">e did not find involvement of the auditory system in speech production -- such involvement is a key aspect of Hickok's model, though our orthogonal factors may have contributed to this lack of overlap</span><span style="font-family: 'Trebuchet MS', sans-serif;">.</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;">The semantic errors factor was associated with damage to the anterior temporal lobe, which should not be surprising because several previous studies (including previous VLSM with a subset of these participants) have found that left ATL damage is associated with production of semantic errors in picture naming. There is a lot of evidence that the ATLs are neural hubs for a distributed neural system supporting semantic memory. From that perspective, damage to the ATL should impair semantic memory, and semantic errors are a symptom of that impairment. Following that logic, t</span><span style="font-family: 'Trebuchet MS', sans-serif;">he semantic recognition deficits should also be associated with ATL damage, but that's not what we found. Instead, we found that semantic recognition deficits were associated with damage to white matter medial to the insula and lateral to the basal ganglia, where three major tracts converge: the inferior fronto-occipital fasciculus (green in the figure below), </span><span style="font-family: Trebuchet MS, sans-serif;">the uncinate fasciculus (light blue in the figure below), and the anterior thalamic radiations (dark blue in the figure below).</span><br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja_U4jNeub_JazAAExJlNtFIboQjTe3TA2ZJn7uqsEn4QOqKs-DzeAh7qGg7E9E-4EmDWD_Kw2prRL5OvVRDMLPqXOChEODFc154iKy6L9Mnyn487n62LPA_xJt3QK1M1_P_hyphenhyphend8XQAIA/s1600/Fig3_tracts.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja_U4jNeub_JazAAExJlNtFIboQjTe3TA2ZJn7uqsEn4QOqKs-DzeAh7qGg7E9E-4EmDWD_Kw2prRL5OvVRDMLPqXOChEODFc154iKy6L9Mnyn487n62LPA_xJt3QK1M1_P_hyphenhyphend8XQAIA/s1600/Fig3_tracts.png" height="274" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: 12.8000001907349px;">Credit: Mirman et al., </span><i style="font-size: 12.8000001907349px;">Nature Communications</i></td></tr>
</tbody></table>
<span style="font-family: 'Trebuchet MS', sans-serif;">In the second paper of the pair (in press at <i>Neuropsychologia</i>) we used a multivariate lesion-symptom mapping approach based on support-vector regression (Zhang et al., 2014) to re-analyze these data and ruled out some methodological VLSM issues as possible causes of this result. That paper discusses the implications of these results in more detail, but the upshot is that each of these tracts is important for semantic memory because they connect the frontal lobe with the distributed neural system involved in semantic memory. Our finding suggests that the convergence of these tracts creates a vulnerable "white matter bottleneck" where a small amount of damage can have a big effect on the connections between the frontal lobe and the rest of the brain.</span></div>
<br />
<span style="float: left; padding: 5px;"><a href="http://www.researchblogging.org/"><img alt="ResearchBlogging.org" src="http://www.researchblogging.org/public/citation_icons/rb2_large_gray.png" style="border: 0;" /></a></span>
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Nature+Reviews+Neuroscience&rft_id=info%3Apmid%2F22218206&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Computational+neuroanatomy+of+speech+production.&rft.issn=1471-003X&rft.date=2012&rft.volume=13&rft.issue=2&rft.spage=135&rft.epage=145&rft.artnum=&rft.au=Hickok+G&rfe_dat=bpr3.included=1;bpr3.tags=Neuroscience%2CLanguage%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Hickok G (2012). Computational neuroanatomy of speech production. <span style="font-style: italic;">Nature Reviews Neuroscience, 13</span> (2), 135-145. PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/22218206" rev="review">22218206</a></span><br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Nature+Reviews+Neuroscience&rft_id=info%3A%2F&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=The+cortical+organization+of+speech+processing&rft.issn=&rft.date=2007&rft.volume=8&rft.issue=May&rft.spage=393&rft.epage=402&rft.artnum=&rft.au=Hickok%2C+Gregory+S&rft.au=Poeppel%2C+David&rfe_dat=bpr3.included=1;bpr3.tags=Neuroscience%2CLanguage%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Hickok, G. S., & Poeppel, D. (2007). The cortical organization of speech processing <span style="font-style: italic;">Nature Reviews Neuroscience, 8</span> (May), 393-402.</span><br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Nature+Communications&rft_id=info%3A%2F&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Neural+Organization+of+Spoken+Language+Revealed+by+Lesion-Symptom+Mapping&rft.issn=&rft.date=2015&rft.volume=6&rft.issue=6762&rft.spage=1&rft.epage=9&rft.artnum=http%3A%2F%2Fwww.nature.com%2Fncomms%2F2015%2F150416%2Fncomms7762%2Ffull%2Fncomms7762.html&rft.au=Mirman%2C+D.&rft.au=Chen%2C+Q.&rft.au=Zhang%2C+Y.&rft.au=Wang%2C+Z.&rft.au=Faseyitan%2C+O.K.&rft.au=Coslett%2C+H.B.&rft.au=Schwartz%2C+M.F.&rfe_dat=bpr3.included=1;bpr3.tags=Medicine%2CPsychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman, D., Chen, Q., Zhang, Y., Wang, Z., Faseyitan, O.K., Coslett, H.B., & Schwartz, M.F. (2015a). <a href="http://www.nature.com/ncomms/2015/150416/ncomms7762/full/ncomms7762.html">Neural Organization of Spoken Language Revealed by Lesion-Symptom Mapping.</a> <span style="font-style: italic;">Nature Communications, 6</span> (6762), 1-9. DOI: 10.1038/ncomms7762.<br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Neuropsychologia&rft_id=info%3Adoi%2F10.1016%2Fj.neuropsychologia.2015.02.014&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=The+ins+and+outs+of+meaning%3A+Behavioral+and+neuroanatomical+dissociation+of+semantically-driven+word+retrieval+and+multimodal+semantic+recognition+in+aphasia&rft.issn=00283932&rft.date=2015&rft.volume=&rft.issue=&rft.spage=&rft.epage=&rft.artnum=&rft.au=Mirman%2C+D.&rft.au=Zhang%2C+Y.&rft.au=Wang%2C+Z.&rft.au=Coslett%2C+H.B.&rft.au=Schwartz%2C+M.F.&rfe_dat=bpr3.included=1;bpr3.tags=Medicine%2CPsychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman, D., Zhang, Y., Wang, Z., Coslett, H.B., & Schwartz, M.F. (in press). <a href="http://www.sciencedirect.com/science/article/pii/S0028393215000755" target="_blank">The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia</a>. <span style="font-style: italic;">Neuropsychologia</span>. DOI: 10.1016/j.neuropsychologia.2015.02.014.<br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Nature+Neuroscience&rft_id=info%3Apmid%2F19471271&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Maps+and+streams+in+the+auditory+cortex%3A+nonhuman+primates+illuminate+human+speech+processing.&rft.issn=1097-6256&rft.date=2009&rft.volume=12&rft.issue=6&rft.spage=718&rft.epage=724&rft.artnum=&rft.au=Rauschecker+J.P.&rft.au=Scott+S.K.&rfe_dat=bpr3.included=1;bpr3.tags=Neuroscience%2CLanguage%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Rauschecker J.P., & Scott S.K. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. <span style="font-style: italic;">Nature Neuroscience, 12</span> (6), 718-724 PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/19471271" rev="review">19471271</a></span><br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Human+Brain+Mapping&rft_id=info%3Apmid%2F25044213&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Multivariate+lesion-symptom+mapping+using+support+vector+regression.&rft.issn=1065-9471&rft.date=2014&rft.volume=35&rft.issue=12&rft.spage=5861&rft.epage=5876&rft.artnum=&rft.au=Zhang+Y.&rft.au=Kimberg+D.Y.&rft.au=Coslett+H.B.&rft.au=Schwartz+M.F.&rft.au=Wang+Z.&rfe_dat=bpr3.included=1;bpr3.tags=Neuroscience%2CCognitive+Neuroscience">Zhang Y., Kimberg D.Y., Coslett H.B., Schwartz M.F., & Wang Z. (2014). Multivariate lesion-symptom mapping using support vector regression. <span style="font-style: italic;">Human Brain Mapping, 35</span> (12), 5861-5876. PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/25044213" rev="review">25044213</a></span>
</span></span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-69011045693146904442015-04-16T09:48:00.001-04:002015-04-16T17:11:43.233-04:00Mapping the language system: Part 1<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">My colleagues and I have a pair of papers coming out in <i><a href="http://www.nature.com/ncomms/2015/150416/ncomms7762/full/ncomms7762.html" target="_blank">Nature Communications</a></i> and <i><a href="http://www.sciencedirect.com/science/article/pii/S0028393215000755" target="_blank">Neuropsychologia</a> </i>that I'm particularly excited about. The data came from <a href="http://mrri.org/people/institute-scientists/myrna-f-schwartz" target="_blank">Myrna Schwartz</a>'s long-running anatomical case series project in which behavioral and structural neuroimaging data were collected from a large sample of individuals with aphasia following left hemisphere stroke. </span><span style="font-family: 'Trebuchet MS', sans-serif;">We pulled together data from 17 measures of language-related performance for 99 participants, each of whom also provided high-quality structural neuroimaging data to localize their stroke lesion. The behavioral measures ranged from phonological processing (phoneme discrimination, production of phonological errors during picture naming, etc.) to verbal and nonverbal semantic processing (synonym judgments, Camel and Cactus Test, production of semantic errors during picture naming, etc.). I have a lot to say about our project, so there will be a few posts about it. This first post will focus on the behavioral data.</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"></span><br />
<a name='more'></a><br />
<span style="font-family: 'Trebuchet MS', sans-serif;">We used factor analysis to reduce the 17 measures to 4 underlying functional systems (also called principal components, or latent variables, or factors), which captured 76% of the variance in the original data:</span><br />
<ol style="text-align: left;">
<li><span style="font-family: Trebuchet MS, sans-serif;"><b>Semantic Recognition</b>: difficulty recognizing the meaning or relationships of concepts, as measured by tasks such as synonym judgments, semantic category discrimination, the Camel and Cactus Test, and the Peabody Picture Vocabulary Test.</span></li>
<li><span style="font-family: Trebuchet MS, sans-serif;"><b>Speech Recognition</b>: difficulty with fine-grained speech perception, as measured by tasks such as phoneme discrimination and rhyme judgment.</span></li>
<li><span style="font-family: Trebuchet MS, sans-serif;"><b>Speech Production</b>: difficulty planning and executing speech actions, as measured by tasks such as word and nonword repetition, and by the tendency to make phonological errors during picture naming (e.g., giraffe --> “girappe”).</span></li>
<li><span style="font-family: Trebuchet MS, sans-serif;"><b>Semantic Errors</b>: making semantic errors during picture naming (e.g., giraffe --> “zebra”), regardless of performance on other tasks that involved processing meaning.</span></li>
</ol>
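For readers who want to try this kind of dimensionality reduction themselves, here is a minimal R sketch using a varimax-rotated principal components analysis on simulated data. The 99 x 17 score matrix and the `measure1`...`measure17` names are hypothetical stand-ins for the real tasks, and this generic recipe is not necessarily the exact procedure used in the papers:

```r
# Simulated stand-in for the 99 x 17 matrix of behavioral scores
# (measure1 ... measure17 are hypothetical names, not the real tasks)
set.seed(1)
scores <- matrix(rnorm(99 * 17), nrow = 99,
                 dimnames = list(NULL, paste0("measure", 1:17)))

# Principal components on the standardized scores
pca <- prcomp(scores, scale. = TRUE)

# Proportion of variance captured by the first four components
var4 <- sum(pca$sdev[1:4]^2) / sum(pca$sdev^2)

# Varimax-rotate the first four loadings for interpretability
loadings4 <- varimax(pca$rotation[, 1:4])$loadings
```

With real data, one would inspect `loadings4` to see which tasks load on which factor.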
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhb972V2_Ga2gA4QInSZRe0JBlkx_HCAFosh8GYTM1nS4vh6fdg5mioeTA-f-9jzOh1YVyF65Z-zf8rZezRJ1w_3fwa8XRfxe0yjZR0MS1Um45NfpehkWyqsIu2m7l-3IASTHT8kssGGNw/s1600/FactorLoadingsStacked.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhb972V2_Ga2gA4QInSZRe0JBlkx_HCAFosh8GYTM1nS4vh6fdg5mioeTA-f-9jzOh1YVyF65Z-zf8rZezRJ1w_3fwa8XRfxe0yjZR0MS1Um45NfpehkWyqsIu2m7l-3IASTHT8kssGGNw/s1600/FactorLoadingsStacked.png" height="400" width="400" /></a></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">These four factors may seem very intuitive and perhaps inevitable, but there are several alternative outcomes that would have been equally intuitive, so I want to highlight two ways in which these results might be surprising. First, the behavioral tests included measures of verbal short-term memory (immediate serial recall [ISR], semantic and phonological span tasks, nonword repetition), so we could have observed an STM factor. Instead, semantic STM was part of the semantic recognition factor, and ISR and phonological STM were part of the speech recognition and production factors. This is not to say that STM is not important, but the domain-specific contribution (i.e., phonological or semantic processing) seems to be more important than a domain-general STM contribution. </span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Second, we might have found dissociations between performance on verbal (words) and nonverbal (pictures) semantic tasks. Consider the three semantic measures mentioned above: </span><span style="font-family: 'Trebuchet MS', sans-serif;">synonym judgments, Camel and Cactus Test, and making semantic errors during picture naming. A core semantic deficit (such as semantic dementia) should affect performance on all three, which would produce a single semantic factor. A verbal semantic deficit should primarily affect just synonym judgments (and other semantic tasks that are entirely in the verbal domain) and possibly production of semantic errors (since it includes a verbal component). Instead, we found very high correlations among tasks involving extracting semantic information from either words or pictures (or both, such as word-to-picture matching), and performance on these tasks was not very correlated with production of semantic errors. We take this to mean that there is an important distinction between the functional systems involved in extracting semantic information (Semantic Recognition) and using that information to drive (verbal) behavior.</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;">The next post (<a href="http://mindingthebrain.blogspot.com/2015/04/mapping-language-system-part-2.html" target="_blank">Part 2</a>) will focus on the lesion-symptom mapping results, which identify the left hemisphere regions critical for each of the four functional systems identified in the factor analysis.</span></div>
</div>
<br />
[EDIT: Added links for <i>Neuropsychologia </i>article.]<br />
<br />
<span style="float: left; padding: 5px;"><a href="http://www.researchblogging.org/"><img alt="ResearchBlogging.org" src="http://www.researchblogging.org/public/citation_icons/rb2_large_gray.png" style="border: 0;" /></a></span><span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Nature+Communications&rft_id=info%3A%2F&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Neural+Organization+of+Spoken+Language+Revealed+by+Lesion-Symptom+Mapping&rft.issn=&rft.date=2015&rft.volume=6&rft.issue=6762&rft.spage=1&rft.epage=9&rft.artnum=http%3A%2F%2Fwww.nature.com%2Fncomms%2F2015%2F150416%2Fncomms7762%2Ffull%2Fncomms7762.html&rft.au=Mirman%2C+D.&rft.au=Chen%2C+Q.&rft.au=Zhang%2C+Y.&rft.au=Wang%2C+Z.&rft.au=Faseyitan%2C+O.K.&rft.au=Coslett%2C+H.B.&rft.au=Schwartz%2C+M.F.&rfe_dat=bpr3.included=1;bpr3.tags=Medicine%2CPsychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman, D., Chen, Q., Zhang, Y., Wang, Z., Faseyitan, O.K., Coslett, H.B., & Schwartz, M.F. (2015). <a href="http://www.nature.com/ncomms/2015/150416/ncomms7762/full/ncomms7762.html">Neural Organization of Spoken Language Revealed by Lesion-Symptom Mapping.</a> <span style="font-style: italic;">Nature Communications, 6</span> (6762), 1-9. DOI: 10.1038/ncomms7762.<br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Neuropsychologia&rft_id=info%3Adoi%2F10.1016%2Fj.neuropsychologia.2015.02.014&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=The+ins+and+outs+of+meaning%3A+Behavioral+and+neuroanatomical+dissociation+of+semantically-driven+word+retrieval+and+multimodal+semantic+recognition+in+aphasia&rft.issn=00283932&rft.date=2015&rft.volume=&rft.issue=&rft.spage=&rft.epage=&rft.artnum=&rft.au=Mirman%2C+D.&rft.au=Zhang%2C+Y.&rft.au=Wang%2C+Z.&rft.au=Coslett%2C+H.B.&rft.au=Schwartz%2C+M.F.&rfe_dat=bpr3.included=1;bpr3.tags=Medicine%2CPsychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman, D., Zhang, Y., Wang, Z., Coslett, H.B., & Schwartz, M.F. (in press). <a href="http://www.sciencedirect.com/science/article/pii/S0028393215000755" target="_blank">The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia</a>. <span style="font-style: italic;">Neuropsychologia</span>. DOI: 10.1016/j.neuropsychologia.2015.02.014</span></span>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-535223443743116022015-03-03T10:44:00.002-05:002015-03-03T10:44:55.523-05:00When lexical competition becomes lexical cooperation<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">Lexical neighborhood effects are one of the most robust findings in spoken word recognition: words with many similar-sounding words ("neighbors") are recognized more slowly and less accurately than words with few neighbors. About 10 years ago, when I was just starting my post-doc training with <a href="http://magnuson.psy.uconn.edu/magnuson/" target="_blank">Jim Magnuson</a>, we wondered about <i>semantic</i> neighborhood effects. We found that things were less straightforward in semantics: <i>near</i> semantic neighbors slowed down visual word recognition, but <i>distant</i> semantic neighbors sped up visual word recognition (<a href="http://www.danmirman.org/pdfs/MirmanMagnuson2008.pdf" target="_blank">Mirman & Magnuson, 2008</a>). I later found the same pattern in spoken word production (<a href="http://www.danmirman.org/pdfs/Mirman2011.pdf" target="_blank">Mirman, 2011</a>). Working with <a href="http://psych.uconn.edu/labs/solab/" target="_blank">Whit Tabor</a>, we developed a preliminary computational account. Later, when <a href="http://psy.scnu.edu.cn/teachers/show/80.aspx" target="_blank">Qi Chen</a> joined my lab at MRRI, we expanded this computational model to capture orthographic, phonological, and semantic </span><span style="font-family: 'Trebuchet MS', sans-serif;">neighborhood density effects in </span><span style="font-family: 'Trebuchet MS', sans-serif;">visual and spoken word recognition and spoken word production (<a href="http://www.danmirman.org/pdfs/ChenMirman2012.pdf" target="_blank">Chen & Mirman, 2012</a>). 
The key insight from our model was that neighbors exert both inhibitory and facilitative effects on target word processing with the </span><span style="font-family: 'Trebuchet MS', sans-serif;">inhibitory effect dominating for </span><span style="font-family: 'Trebuchet MS', sans-serif;">strongly active neighbors and the </span><span style="font-family: 'Trebuchet MS', sans-serif;">facilitative effect dominating for</span><span style="font-family: 'Trebuchet MS', sans-serif;"> weakly active neighbors.</span><br />
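To make that insight concrete, here is a toy numerical sketch. This illustrates only the sign flip, not the actual Chen & Mirman attractor network; the linear and quadratic forms and their coefficients are invented for the example:

```r
# Hypothetical net neighbor effect: linear facilitation minus a
# faster-growing (here quadratic) inhibition. The coefficients are
# arbitrary; only the qualitative sign flip matters.
net_effect <- function(a, facil = 1, inhib = 2) {
  facil * a - inhib * a^2
}

a <- c(0.1, 0.25, 0.5, 0.75, 1.0)
data.frame(activation = a, net = net_effect(a))
```

A weakly active neighbor (small `a`) yields a positive net effect, while a strongly active one yields a negative net effect, matching the pattern described above.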
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;">In a new paper soon to be published in <i>Cognitive Science</i></span><span style="font-family: 'Trebuchet MS', sans-serif;"> (</span><a href="http://www.danmirman.org/pdfs/ChenMirman2014.pdf" style="font-family: 'Trebuchet MS', sans-serif;" target="_blank">Chen & Mirman, in press</a><span style="font-family: 'Trebuchet MS', sans-serif;">)</span><span style="font-family: 'Trebuchet MS', sans-serif;"> we test a unique prediction from our model. The idea is that phonological neighborhood effects in spoken word recognition are so robust because phonological neighbors are consistently strongly activated during spoken word recognition. If we can reduce their activation by creating a context in which they are not among the likely targets, then their inhibitory effect will not just get smaller, it will become smaller than the facilitative effect, so the net result will be a flip to a facilitative effect. We tested this by using spoken word-to-picture matching with eye-tracking, more commonly known as the "visual world paradigm". When four (phonologically unrelated) pictures appear on the screen, they provide some semantic information about the likely target word. The longer they are on-screen before the spoken word begins, the more this semantic context will influence which lexical candidates will be activated. At one extreme, without any semantic context, we should see the standard inhibitory effect of phonological neighbors; at the other extreme, if only the </span><span style="font-family: 'Trebuchet MS', sans-serif;">pictured items are viable candidates,</span><span style="font-family: 'Trebuchet MS', sans-serif;"> there should be no effect of phonological neighbors. Here is the cool part (if I may say so): at an intermediate point, the semantic context reduces phonological neighbor activation but doesn't eliminate it, so the neighbors will be weakly active and will produce a facilitative effect. </span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;">We report simulations of our model concretely demonstrating this prediction and an experiment in which we manipulate the preview duration (how long the pictures are displayed before the spoken word starts) as a way of manipulating the strength of semantic context. The results were (mostly) consistent with this prediction. </span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiga8Pf1IKQD5qR_DxCuManFuod-zuW2JvSqmprgC-9DmgiCvUOynT9EL068Q31MFvoR6bzhcCbyEMqTEerjMrjxWcu8lJ4YD-Iqzh1aFM9bpZsQX0f4uec_vyLt3tnMviFIwgsHC8-70/s1600/PD_color.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiga8Pf1IKQD5qR_DxCuManFuod-zuW2JvSqmprgC-9DmgiCvUOynT9EL068Q31MFvoR6bzhcCbyEMqTEerjMrjxWcu8lJ4YD-Iqzh1aFM9bpZsQX0f4uec_vyLt3tnMviFIwgsHC8-70/s1600/PD_color.png" height="200" width="400" /></a></div>
<span style="font-family: 'Trebuchet MS', sans-serif;">At 500ms preview (middle panel), there is a clear facilitative effect of neighborhood density: the target fixation proportions for high density targets (red line) rise faster than for the low density targets (blue line). This did not happen with either the shorter or longer preview duration and is not expected unless the preview provides semantic input that weakens activation of phonological neighbors, thus making their net effect facilitative rather than inhibitory.</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span><span style="font-family: 'Trebuchet MS', sans-serif;">I'm excited about this paper because "lexical competition" is such a core concept in spoken word recognition that it is hard to imagine neighborhood density having a facilitative effect, but that's what our model predicted and the eye-tracking results bore it out. This is one of those full-cycle cases where behavioral data led to a theory, which led to a computational model, which made new predictions, which were tested in a behavioral experiment. That's what I was trained to do and it feels good to have actually pulled it off.</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;">As a final meta comment: we owe a big "Thank You" to Keith Apfelbaum, Sheila Blumstein, and </span><span style="font-family: 'Trebuchet MS', sans-serif;">Bob McMurray,</span><span style="font-family: 'Trebuchet MS', sans-serif;"> whose </span><a href="http://link.springer.com/article/10.3758/s13423-010-0039-8" style="font-family: 'Trebuchet MS', sans-serif;" target="_blank">2011 paper</a><span style="font-family: 'Trebuchet MS', sans-serif;"> was part of the inspiration for this study. Even more importantly, Keith and Bob first shared their data for our follow-up analyses, and then their study materials to help us run our experiment. I think this kind of sharing is hugely important for having a science that truly builds and moves forward in a replicable way, but it is all too rare. Apfelbaum, Blumstein, and McMurray not only ran a good study, they also helped other people build on it, which multiplied their positive contribution to the field. I hope one day we can make this kind of sharing the standard in the field, but until then, I'll just appreciate the people who do it.</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;"><br /></span></div>
<span style="float: left; padding: 5px;"><a href="http://www.researchblogging.org/"><img alt="ResearchBlogging.org" src="http://www.researchblogging.org/public/citation_icons/rb2_large_gray.png" style="border: 0;" /></a></span>
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Psychonomic+Bulletin+%26+Review&rft_id=info%3Apmid%2F21327343&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Semantic+priming+is+affected+by+real-time+phonological+competition%3A+evidence+for+continuous+cascading+systems.&rft.issn=1069-9384&rft.date=2011&rft.volume=18&rft.issue=1&rft.spage=141&rft.epage=149&rft.artnum=&rft.au=Apfelbaum+K+S&rft.au=Blumstein+S+E&rft.au=McMurray+B&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CNeuroscience%2CCognitive+Psychology%2C+Language">Apfelbaum K S, Blumstein S E, & McMurray B (2011). Semantic priming is affected by real-time phonological competition: evidence for continuous cascading systems. <span style="font-style: italic;">Psychonomic Bulletin & Review, 18</span> (1), 141-149 PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/21327343" rev="review">21327343</a></span><br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Psychological+Review&rft_id=info%3Apmid%2F22352357&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Competition+and+cooperation+among+similar+representations%3A+toward+a+unified+account+of+facilitative+and+inhibitory+effects+of+lexical+neighbors.&rft.issn=0033-295X&rft.date=2012&rft.volume=119&rft.issue=2&rft.spage=417&rft.epage=430&rft.artnum=&rft.au=Chen+Q&rft.au=Mirman+D&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology">Chen Q, & Mirman D (2012). Competition and cooperation among similar representations: toward a unified account of facilitative and inhibitory effects of lexical neighbors. <span style="font-style: italic;">Psychological Review, 119</span> (2), 417-430 PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/22352357" rev="review">22352357</a></span><br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Cognitive+Science&rft_id=info%3Apmid%2F25155249&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Interaction+Between+Phonological+and+Semantic+Representations%3A+Time+Matters.&rft.issn=0364-0213&rft.date=2015&rft.volume=&rft.issue=in+press&rft.spage=&rft.epage=&rft.artnum=&rft.au=Chen+Q&rft.au=Mirman+D&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology">Chen Q, & Mirman D (2015). Interaction Between Phonological and Semantic Representations: Time Matters. <span style="font-style: italic;">Cognitive Science</span> (in press) PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/25155249" rev="review">25155249</a></span><br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Cognitive%2C+Affective+%26+Behavioral+Neuroscience&rft_id=info%3Apmid%2F21264640&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Effects+of+near+and+distant+semantic+neighbors+on+word+production.&rft.issn=1530-7026&rft.date=2011&rft.volume=11&rft.issue=1&rft.spage=32&rft.epage=43&rft.artnum=&rft.au=Mirman+D&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Cognitive+Neuroscience%2C+Neurolinguistics">Mirman D (2011). Effects of near and distant semantic neighbors on word production. <span style="font-style: italic;">Cognitive, Affective & Behavioral Neuroscience, 11</span> (1), 32-43 PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/21264640" rev="review">21264640</a></span><br />
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.jtitle=Journal+of+Experimental+Psychology%3A+Learning%2C+Memory%2C+and+Cognition&rft_id=info%3Apmid%2F18194055&rfr_id=info%3Asid%2Fresearchblogging.org&rft.atitle=Attractor+dynamics+and+semantic+neighborhood+density%3A+processing+is+slowed+by+near+neighbors+and+speeded+by+distant+neighbors.&rft.issn=0278-7393&rft.date=2008&rft.volume=34&rft.issue=1&rft.spage=65&rft.epage=79&rft.artnum=&rft.au=Mirman+D&rft.au=Magnuson+J+S&rfe_dat=bpr3.included=1;bpr3.tags=Psychology%2CNeuroscience%2CCognitive+Psychology%2C+Language%2C+Quantitative+Psychology">Mirman D, & Magnuson J S (2008). Attractor dynamics and semantic neighborhood density: processing is slowed by near neighbors and speeded by distant neighbors. <span style="font-style: italic;">Journal of Experimental Psychology: Learning, Memory, and Cognition, 34</span> (1), 65-79 PMID: <a href="http://www.ncbi.nlm.nih.gov/pubmed/18194055" rev="review">18194055</a></span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-83262574858984217302015-02-23T17:01:00.003-05:002015-02-23T17:01:37.743-05:00How to learn R: A flow chart<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">I often find myself giving people suggestions about how to learn R, so I decided to put together a flow chart. This is geared toward typical psychology or cognitive science researchers planning to do basic data analysis in R. This is how to get started -- it won't make you an expert, but it should get you past your SPSS/Excel addiction. One day I'll expand it to include advanced topics.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<img height="640" src="https://docs.google.com/drawings/d/1_FdihMul1yPaIzblu_Dp6iXlEde68sSrP0EWs-Nk8zQ/pub?w=505&h=869" width="371" /></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-49762353651796394772015-02-02T10:56:00.001-05:002015-02-02T10:56:57.350-05:00My "Top 5 R Functions"<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">In preparation for an R Workgroup meeting, I started thinking about what would be my "Top 5 R Functions". I ruled out the functions for basic mechanics - </span><span style="font-family: Courier New, Courier, monospace;">save</span><span style="font-family: Trebuchet MS, sans-serif;">, </span><span style="font-family: Courier New, Courier, monospace;">load</span><span style="font-family: Trebuchet MS, sans-serif;">, </span><span style="font-family: Courier New, Courier, monospace;">mean</span><span style="font-family: Trebuchet MS, sans-serif;">, etc. - they're obviously critical, but every programming language has them, so there's nothing especially "R" about them. I also ruled out the fancy statistical analysis functions like </span><span style="font-family: Courier New, Courier, monospace;">(g)lmer</span><span style="font-family: Trebuchet MS, sans-serif;"> -- most people (including me) start using R because they want to run those analyses so it seemed a little redundant. I started using R because I wanted to do growth curve analysis, so it seems like a weak endorsement to say that I like R because it can do growth curve analysis. No, I like R because it makes (many) somewhat complex data operations really, really easy. Understanding how to take advantage of these R functions is what transformed my view of R from purely functional (I need to do analysis X and R has functions for doing analysis X) to an all-purpose tool that allows me to do data processing, management, analysis, and visualization extremely quickly and easily. So, here are the 5 functions that did that for me:</span><br />
<br />
<ol style="text-align: left;">
<li><span style="font-family: Courier New, Courier, monospace;">subset()</span><span style="font-family: Trebuchet MS, sans-serif;"> for making subsets of data (natch)</span></li>
<li><span style="font-family: Courier New, Courier, monospace;">merge() </span><span style="font-family: Trebuchet MS, sans-serif;">for combining data sets in a smart and easy way</span></li>
<li><span style="font-family: Courier New, Courier, monospace;">melt() </span><span style="font-family: Trebuchet MS, sans-serif;">for converting from wide to long data formats</span></li>
<li><span style="font-family: Courier New, Courier, monospace;">dcast()</span><span style="font-family: Trebuchet MS, sans-serif;"> for converting from long to wide data formats, and for making summary tables</span></li>
<li><span style="font-family: Courier New, Courier, monospace;">ddply()</span><span style="font-family: Trebuchet MS, sans-serif;"> for doing split-apply-combine operations, which covers a huge swath of the most tricky data operations </span></li>
</ol>
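Here is a tiny self-contained illustration of this workflow with made-up data frames. Note that melt() and dcast() come from the reshape2 package and ddply() from plyr; to keep the example runnable with base R alone, the wide-to-long step below uses stats::reshape() as a stand-in for melt(), and aggregate() stands in for ddply():

```r
# Made-up example data: per-subject mean RTs in wide format, plus a
# separate table of subject demographics
wide <- data.frame(Subject = c("s1", "s2"),
                   Cohort = c(2100, 2350), Semantic = c(2200, 2500))
demo <- data.frame(Subject = c("s1", "s2"), Group = c("YC", "OC"))

# merge(): combine the two tables by their shared Subject column
combined <- merge(wide, demo, by = "Subject")

# subset(): keep only the rows satisfying a condition
yc <- subset(combined, Group == "YC")

# Wide -> long (what reshape2::melt() does), via base reshape():
long <- reshape(combined, direction = "long",
                varying = c("Cohort", "Semantic"), v.names = "RT",
                times = c("Cohort", "Semantic"), timevar = "Condition")

# Split-apply-combine (what plyr::ddply() does), via base aggregate():
# mean RT for each Condition
aggregate(RT ~ Condition, data = long, FUN = mean)
```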
<div>
<span style="font-family: Trebuchet MS, sans-serif;">For anyone interested, I posted <a href="http://rpubs.com/danmirman/Rgroup-part1" target="_blank">my R Workgroup notes on how to use these functions</a> on RPubs. Side note: after a little configuration, I found it super easy to write these using knitr, "knit" them into a webpage, and post that page on RPubs.</span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Conspicuously missing from the above list is </span><span style="font-family: Courier New, Courier, monospace;">ggplot</span><span style="font-family: Trebuchet MS, sans-serif;">, which I think deserves a special lifetime achievement award for how it has transformed how I think about data exploration and data visualization. I'm planning that for the next R Workgroup meeting.</span></div>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com2tag:blogger.com,1999:blog-8091991448412546705.post-81693029448505204812014-10-07T13:45:00.000-04:002014-10-07T13:45:22.084-04:00Why pursue a Ph.D.?<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">This video is directed at STEM fields, so I am not sure everything in it applies perfectly to cognitive neuroscience. But, if you're going to go to grad school, I think this is the right kind of perspective to bring:</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<iframe allowfullscreen="" frameborder="0" height="313" mozallowfullscreen="" src="//player.vimeo.com/video/80236275" webkitallowfullscreen="" width="500"></iframe> <br />
<a href="http://vimeo.com/80236275">Why Pursue A Ph.D.? Three Practical Reasons (12-minute video)</a> from <a href="http://vimeo.com/user3871605">Philip Guo</a> on <a href="https://vimeo.com/">Vimeo</a>.<br />
<br />
<span style="font-family: Trebuchet MS, sans-serif;">(via <a href="http://flowingdata.com/" target="_blank">FlowingData</a>)</span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-83694325321055541122014-08-13T09:48:00.000-04:002014-08-13T09:48:13.350-04:00Plotting mixed-effects model results with effects package<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">As separate by-subjects and by-items analyses have been replaced by mixed-effects models with crossed random effects of subjects and items, I've often found myself wondering about the best way to plot data. The simple-minded means and SE from trial-level data will be inaccurate because they won't take the nesting into account. If I compute subject means and plot those with by-subject SE, then I'm plotting something different from what I analyzed, which is not always terrible, but definitely not ideal. It seems intuitive that the condition means and SE's are computable from the model's parameter estimates, but that computation is not trivial, particularly when you're dealing with interactions. Or, rather, that computation was not trivial until I discovered the </span><span style="font-family: Courier New, Courier, monospace;"><b><a href="http://socserv.socsci.mcmaster.ca/jfox/Misc/effects/index.html" target="_blank">effects</a></b></span><span style="font-family: Trebuchet MS, sans-serif;"> package.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"></span><br />
<a name='more'></a><span style="font-family: Trebuchet MS, sans-serif;"><br /></span><br />
<span style="font-family: Trebuchet MS, sans-serif;">To show how this would work, I pillaged some data from a word-to-picture matching pilot study. Younger adults (college students) and older adults (mostly in their 50s and 60s) matched spoken words to pictures in the presence of either cohort competitors (camera - camel) or semantic competitors (lion - tiger).</span><br />
<br />
<span style="font-family: Courier New, Courier, monospace;">> summary(RT.demo)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> Subject Target Condition ACC RT Group </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> 2102 : 40 chicken: 36 Cohort :690 Min. :1 Min. :1503 YC:701 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> 2103 : 40 hat : 36 Semantic:675 1st Qu.:1 1st Qu.:2131 OC:664 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> 2104 : 40 penny : 36 Median :1 Median :2362 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> 2106 : 40 potato : 36 Mean :1 Mean :2442 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> 2109 : 40 radio : 36 3rd Qu.:1 3rd Qu.:2684 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> 2116 : 40 stool : 36 Max. :1 Max. :4847 </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> (Other):1125 (Other):1149 </span><br />
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">> ggplot(RT.demo, aes(Condition, RT, fill=Group, color=Group)) + geom_violin() + theme_bw(base_size=12)</span></div>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF4zifIbRt6pjov32cuj4uOmjQbtvBk8plwjl4Gb0wSyXMQizL8x9r64j0NYl04at4E5mUJTZ6zka2J-O7R-olExCL9pP69RFT8-d9K7u7vtmD1bYuPDB1S4I9pY-ZBUCMt1Iqx2QDx2U/s1600/Fig1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF4zifIbRt6pjov32cuj4uOmjQbtvBk8plwjl4Gb0wSyXMQizL8x9r64j0NYl04at4E5mUJTZ6zka2J-O7R-olExCL9pP69RFT8-d9K7u7vtmD1bYuPDB1S4I9pY-ZBUCMt1Iqx2QDx2U/s1600/Fig1.png" height="240" width="320" /></a></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Not surprisingly, the response times for older adults are slower than for younger adults, but it looks like this might be particularly true in the presence of semantic competitors. Let's test that with a mixed model with crossed random effects of subjects and items.</span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">> m <- lmer(RT ~ Condition*Group + (Condition | Subject) + (1 | Target), data=RT.demo)</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">> coef(summary(m))</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"> Estimate Std. Error t value</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">(Intercept) 2230.057 64.749 34.44</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">ConditionSemantic -7.881 68.565 -0.11</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">GroupOC 413.287 65.097 6.35</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">ConditionSemantic:GroupOC 104.096 33.110 3.14</span></div>
<div>
<br /></div>
</div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">So it looks like the older adults are about 400ms slower than the younger adults in the cohort condition and another 100ms slower in the semantic condition. Now we can use the </span><span style="font-family: Courier New, Courier, monospace;"><b>effects</b></span><span style="font-family: Trebuchet MS, sans-serif;"> package to convert these parameter estimates into condition mean and SE estimates. The key function is </span><span style="font-family: Courier New, Courier, monospace;">effect()</span><span style="font-family: Trebuchet MS, sans-serif;">, which takes a term from the model and the model object. We can use </span><span style="font-family: Courier New, Courier, monospace;">summary()</span><span style="font-family: Trebuchet MS, sans-serif;"> on the effect list object to get the information we need.</span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
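As a quick check of what the mean estimates involve: with R's default treatment coding, each cell mean is just a sum of the relevant coefficients from the model output above (the non-trivial part is the standard errors, which also require the parameter variance-covariance matrix). A minimal sketch using the printed estimates:

```r
# Coefficients from the model output above (treatment/dummy coding):
# (Intercept), ConditionSemantic, GroupOC, ConditionSemantic:GroupOC
b <- c(2230.057, -7.881, 413.287, 104.096)

b[1]          # YC, Cohort mean:   2230.057
b[1] + b[2]   # YC, Semantic mean: 2222.176
b[1] + b[3]   # OC, Cohort mean:   2643.344
sum(b)        # OC, Semantic mean: 2739.559
```

These sums reproduce the fitted cell means; the SE computation is where the effects package really earns its keep.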
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">> library(effects)</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">> ef <- effect("Condition:Group", m)</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;">> summary(ef)</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Condition*Group effect</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Group</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Condition YC OC</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Cohort 2230.057 2643.344</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Semantic 2222.176 2739.559</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Lower 95 Percent Confidence Limits</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Group</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Condition YC OC</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Cohort 2103.037 2516.026</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Semantic 2088.161 2605.384</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Upper 95 Percent Confidence Limits</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Group</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Condition YC OC</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Cohort 2357.076 2770.662</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> Semantic 2356.190 2873.734</span></div>
</div>
<div>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">For the purposes of plotting, we want to convert the effect list object into a data frame. Conveniently, there is an </span><span style="font-family: Courier New, Courier, monospace;">as.data.frame()</span><span style="font-family: Trebuchet MS, sans-serif;"> method for effect objects:</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<div style="font-family: 'Courier New', Courier, monospace;">
> x <- as.data.frame(ef)</div>
<div style="font-family: 'Courier New', Courier, monospace;">
> x</div>
<div style="font-family: 'Courier New', Courier, monospace;">
Condition Group fit se lower upper</div>
<div style="font-family: 'Courier New', Courier, monospace;">
1 Cohort YC 2230.057 64.74945 2103.037 2357.076</div>
<div style="font-family: 'Courier New', Courier, monospace;">
2 Semantic YC 2222.176 68.31514 2088.161 2356.190</div>
<div style="font-family: 'Courier New', Courier, monospace;">
3 Cohort OC 2643.344 64.90167 2516.026 2770.662</div>
<div style="font-family: 'Courier New', Courier, monospace;">
4 Semantic OC 2739.559 68.39711 2605.384 2873.734</div>
<div style="font-family: 'Courier New', Courier, monospace; font-size: small;">
<br /></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Now we can plot this:</span></div>
<div>
<div style="font-family: 'Courier New', Courier, monospace;">
> ggplot(x, aes(Condition, fit, color=Group)) + geom_point() + geom_errorbar(aes(ymin=fit-se, ymax=fit+se), width=0.4) + theme_bw(base_size=12)</div>
<div class="separator" style="clear: both; font-family: 'Courier New', Courier, monospace; font-size: small; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhS79CQagckAgT1M9qzvR1OCGMTcE5hGUBJWHAI8tHKT9qEgQkPa7GH3P5RkaQuCyRS13lJDXsEqDfQIZNMbSczq5gcDxZ95OE4YYvsetxByOH06o13URs0lL92HUKFdiHgxrjGnpv5L_4/s1600/Fig2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhS79CQagckAgT1M9qzvR1OCGMTcE5hGUBJWHAI8tHKT9qEgQkPa7GH3P5RkaQuCyRS13lJDXsEqDfQIZNMbSczq5gcDxZ95OE4YYvsetxByOH06o13URs0lL92HUKFdiHgxrjGnpv5L_4/s1600/Fig2.png" height="240" width="320" /></a></div>
<span style="font-family: Trebuchet MS, sans-serif;">Or for people who like dynamite plots:</span><br />
<span style="font-family: Courier New, Courier, monospace;">> ggplot(x, aes(Group, fit, color=Condition, fill=Condition)) + geom_bar(stat="identity", position="dodge") + geom_errorbar(aes(ymin=fit-se, ymax=fit+se), width=0.4, position=position_dodge(width=0.9)) + theme_bw(base_size=12)</span><br />
<div class="separator" style="clear: both; font-family: 'Courier New', Courier, monospace; font-size: small; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJxSQdOnGwk2dz7RASN2lOW7R_dCFujFQQFhjXVFNcGSiAd2cO5sbJw2grebHoibU5gAlY01IO4FDJ5aSrJAoeLtjtHugRtY6sxKw5mkkThpwpZoAYAKy_f6wdS_N5-GRG1vuN8Pi2ihA/s1600/Fig3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJxSQdOnGwk2dz7RASN2lOW7R_dCFujFQQFhjXVFNcGSiAd2cO5sbJw2grebHoibU5gAlY01IO4FDJ5aSrJAoeLtjtHugRtY6sxKw5mkkThpwpZoAYAKy_f6wdS_N5-GRG1vuN8Pi2ihA/s1600/Fig3.png" height="240" width="320" /></a></div>
<div style="font-family: 'Courier New', Courier, monospace; font-size: small;">
<br /></div>
</div>
</div>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com15tag:blogger.com,1999:blog-8091991448412546705.post-10889536417280500272014-08-05T12:48:00.000-04:002014-08-05T12:48:43.695-04:00Visualizing Components of Growth Curve Analysis<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;"><i>This is a guest post by <a href="http://www.mattwinn.com/" target="_blank">Matthew Winn</a>:</i></span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">One of the more useful skills I’ve learned in the past couple years is growth curve analysis (GCA), which helps me analyze eye-tracking data and other kinds of data that take a functional form. Like some other advanced statistical techniques, it is a procedure that can be done without complete understanding, and is likely to demand more than one explanation before you really “get it”. In this post, I will illustrate the way that I think about it, in hopes that it can “click” for some more people. The objective is to break down a complex curve into individual components.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"></span><br />
<a name='more'></a><span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">You probably already know how to break down components of a function. For example, suppose we have the following function: y = 2x + 3</span><br />
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBMRHS6LlH1AQkPGFbJYNEtfdulgEbd-MjGGFvsB0eN8NzxXdp09bx_XZDW-Kdpn-eXMx60bbnE07pAL5b8WVPl-b4koDMU3SdUnae7QHdHEjgXrKy63lPA5lO56AEDeXmA4MPKUKFB7A/s1600/02_simple_function.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Trebuchet MS, sans-serif;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBMRHS6LlH1AQkPGFbJYNEtfdulgEbd-MjGGFvsB0eN8NzxXdp09bx_XZDW-Kdpn-eXMx60bbnE07pAL5b8WVPl-b4koDMU3SdUnae7QHdHEjgXrKy63lPA5lO56AEDeXmA4MPKUKFB7A/s1600/02_simple_function.png" height="224" width="320" /></span></a></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">This can be broken down into the linear slope component (2 * x) and the intercept component (+3)</span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<span style="clear: left; float: left; font-family: Trebuchet MS, sans-serif; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzPBI-V2cN7bVPf8hWS_EsPc5-iZmK-IWzUS3fVmd1KvjhWx_Imy6-Km3s2DEyOsIAX48eO11HopYYs5Flc6WpOlgRvsKMiH-IsHhR4V0njzTml1kKyyzmuDXTq-eIwFlO8PFIdA-83Co/s1600/03_simple_components.png" height="200" width="640" /></span></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">When you add the left panel and the center panel, you get the right panel. You didn’t need to read this post to figure it out. But the point is that GCA can be broken down just as simply as this! </span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<h2 style="text-align: left;">
<span style="font-family: Trebuchet MS, sans-serif;">
A Clean Hypothetical Example</span></h2>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Here’s a complex curve of some hypothetical data of pupil dilation measured over time. Time moves along from -2 seconds to a maximum of 1 second. These time values were set relative to a specific landmark. We are interested in modeling the growth of pupil dilation over time, which looks something like this... </span></div>
</div>
</div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuJtZUW56ux-ZtrH786YYe8bskWalvlrZfQTtTGxD9zkCD2tYJJA6HavJfJQ6pfi31D36G37t_YOMBHLO8H1B_s1F_xffGkdbzbxVTgxgDAHqhHtTFwl9My-XMi9jN3mtLLqPIWPPGOpo/s1600/04_group_a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Trebuchet MS, sans-serif;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuJtZUW56ux-ZtrH786YYe8bskWalvlrZfQTtTGxD9zkCD2tYJJA6HavJfJQ6pfi31D36G37t_YOMBHLO8H1B_s1F_xffGkdbzbxVTgxgDAHqhHtTFwl9My-XMi9jN3mtLLqPIWPPGOpo/s1600/04_group_a.png" height="224" width="320" /></span></a></div>
<span style="font-family: Trebuchet MS, sans-serif;">using components like these...</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKI7SNdpbC50Vtgn75gg6eD6AwcdoWIlR0lyQDPgEhtPbxvDUDHcp6lFkD4UsKTWllekwcJl5DybQtWFDWTXjVBqByvdqYQhi3orO_C_3ABGRND-c-o_MIUnYLH-HT_TntdJy9MomVq8k/s1600/05_time_plot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Trebuchet MS, sans-serif;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKI7SNdpbC50Vtgn75gg6eD6AwcdoWIlR0lyQDPgEhtPbxvDUDHcp6lFkD4UsKTWllekwcJl5DybQtWFDWTXjVBqByvdqYQhi3orO_C_3ABGRND-c-o_MIUnYLH-HT_TntdJy9MomVq8k/s1600/05_time_plot.png" height="256" width="320" /></span></a></div>
<span style="font-family: Trebuchet MS, sans-serif;">The pupil diameter curve has a linear (slope) and a quadratic (curved) component, as well as an intercept (overall level). Our job is to measure the contributions of each. </span></div>
<div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Now we can see that each of those time polynomials shows up in the complex plot. Sure, the quadratic component looks upside-down, but that just means its coefficient is negative. </span></div>
</div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<span style="clear: left; float: left; font-family: Trebuchet MS, sans-serif; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9FIPH_fUEqBMJKho4aMd55wD6Vmk_BWZiV03l9l_7XBKGnkwLC1VcsXlyAHB2Jip-vFk2aPW1syYsR9sX-IXrYNzlxPmSTIMWsRVS4F6B8U7mwP6-aB1RCNy8mmnWiGX2hqZk-HXf98Q/s1600/06_group_A_components.png" height="248" width="640" /></span></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Recall the simple y = ax + b plot; if you add all of these components reading left to right, you will end up with the complex plot on the right. </span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Here are a few other clean hypothetical examples with different coefficients for the intercept, linear and quadratic components. Think of each as simple addition reading left to right. </span></div>
</div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<span style="clear: left; float: left; font-family: Trebuchet MS, sans-serif; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtg2qd6mMCjglSPaXnodabWEMjQDmMtQxUK7PcA3Ttrl4CsYeLtz1AvftErjmBXYOYicCq6SrfLCZuIHz5xk48oS0V_qxfYUVMjcW6vIjEBt1KrwH9tRdKy8-9OzDMuehrY06ho0Wf4Uc/s1600/07_demo_curve_components.png" height="480" width="640" /></span></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Now we can see that orthogonal polynomials are just different components being added together. </span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
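To make the connection to model code concrete, here is a minimal sketch of how orthogonal polynomial time terms are typically constructed and entered into a growth curve model in R. This is illustrative only: the data frame <code>dat</code> and its columns <code>Subject</code>, <code>Condition</code>, <code>Time</code>, and <code>Dilation</code> are hypothetical, not the actual data from the study described in this post.

```r
library(lme4)

# Orthogonal linear and quadratic time terms, evaluated at each unique time point
times <- sort(unique(dat$Time))
op <- poly(times, 2)
dat$ot1 <- op[match(dat$Time, times), 1]  # linear component
dat$ot2 <- op[match(dat$Time, times), 2]  # quadratic component

# The fixed effects estimate each component's contribution for each condition:
# intercept (overall level), ot1 (slope), ot2 (curvature)
m <- lmer(Dilation ~ (ot1 + ot2) * Condition + (ot1 + ot2 | Subject), data=dat)
```

Each fixed-effect coefficient from this model corresponds to one of the component panels shown above.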
<div>
<h2 style="text-align: left;">
<span style="font-family: Trebuchet MS, sans-serif;">
Putting It To Use: Comparing Across Conditions </span></h2>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Now that we see the components visualized separately, it could be useful to compare the components across your different experimental conditions. After all, we report statistical differences between the individual components.</span></div>
</div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Study background: Participants heard sentences that were processed to have varying degrees of signal quality that we call spectral resolution. Basically, the sound spectrum is broken down into a discrete number of frequency channels, where a greater number of channels gives clearer sound quality than fewer channels. Think of it as the number of pixels in an image; more is better. The sentences were 2 – 4 seconds long, and after each sentence, there was a 1.5-second delay, after which participants repeated back the sentence. The outcome measure (in addition to the accuracy level) was change in pupil diameter during the sentence. Greater pupil dilation is known to correspond to (among other things) greater cognitive load during various tasks. We hypothesized that sentences processed with fewer channels would elicit greater pupil dilation because they would require more effort to understand. </span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">The following plot suggests that this hypothesis was correct. The aggregated data are points with lines for standard error, and the barred white lines represent the statistical model to be described below. </span></div>
</div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiR22nfrMemIxCRl5ywTl1i1j-FiCLtshw-kpnd19mVjtBEmcJ66t025gS6er6k0K0UDtI0PxNv-FD9rYLTofO2-9X5au7M0PE8ZuTI3OchlWt7HXuIVsZN2SZ4RIJ3-0DS78tyO-vxspA/s1600/08_pupil_vocoder_CGA.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Trebuchet MS, sans-serif;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiR22nfrMemIxCRl5ywTl1i1j-FiCLtshw-kpnd19mVjtBEmcJ66t025gS6er6k0K0UDtI0PxNv-FD9rYLTofO2-9X5au7M0PE8ZuTI3OchlWt7HXuIVsZN2SZ4RIJ3-0DS78tyO-vxspA/s1600/08_pupil_vocoder_CGA.png" height="326" width="400" /></span></a></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">Growth curve analysis was done to describe the portion of this plot from -2000 to +500 ms relative to the sentence offset. This corresponded to the growth of pupil dilation from baseline to roughly the max level. </span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">As you might expect, we reported differences in the intercept, linear, and quadratic components across all these conditions. A large number of significant effects emerged, but that's not my point here (you can read the paper for that!). The point is that a helpful tool in understanding these differences is visualizing them individually rather than in aggregate. After all, a systematic increase in the intercept across the conditions is not easy to see when there are sloped and curved components that distract the eye. </span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">In the following grid of plots, we have flipped the scheme of the previous plot by 90 degrees. Now the different polynomial components (intercept, linear, quadratic) don’t move left to right, but instead move top to bottom. </span></div>
</div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6fmiiHEcDS8FXKyuP8aOFDM-zn7eANS5NaOUIdsXWFO_kK67l8mmCXnjVGO_g-HUe26a9oKOBZEJZDodgVZA60a3Es9jOytceApfXXZyfIexSaZeUbfY2T_EZyIX2OeSIn5unRVK363Y/s1600/09_pupil_data_comparisons.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: Trebuchet MS, sans-serif;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6fmiiHEcDS8FXKyuP8aOFDM-zn7eANS5NaOUIdsXWFO_kK67l8mmCXnjVGO_g-HUe26a9oKOBZEJZDodgVZA60a3Es9jOytceApfXXZyfIexSaZeUbfY2T_EZyIX2OeSIn5unRVK363Y/s1600/09_pupil_data_comparisons.png" height="498" width="640" /></span></a></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Trebuchet MS, sans-serif;">The objective of this plot is to let the reader directly compare the components of each condition’s curve side by side. The model coefficients for each component are indicated by the number in the box in each panel. One can imagine the addition of cubic and quartic components for other data.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;">With this plot, it is much easier to see that:</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;">1. The intercept level grows systematically across the conditions.</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;">2. Each condition’s linear component gets steeper, moving from left to right.</span><br />
<span style="font-family: 'Trebuchet MS', sans-serif;">3. Each condition gets progressively more curved, as the quadratic component becomes increasingly negative.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: 'Trebuchet MS', sans-serif;">I don’t expect this to be a standard way to visualize GCA, if only because journal space is in short supply. But as a blog post, or as supplemental material, perhaps this approach can help de-mystify GCA. </span></div>
</div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-9925413492736070812014-04-04T10:49:00.000-04:002014-04-04T10:49:29.406-04:00Flip the script, or, the joys of coord_flip()<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">Has this ever happened to you?</span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBvnsmkCacnBf0PAa8yD6ja9egAZ0tiZQynaRo_fjLmexB6UYOj0mQ7uzhQ2BTAVe9hngA7bQRmUpDAYXE5e2g2jWTQZiK20cDSOmqaiPy1KgvxhF_BcrXO7HnDP3NPAZk4SKnNoAHAl0/s1600/chickwts1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBvnsmkCacnBf0PAa8yD6ja9egAZ0tiZQynaRo_fjLmexB6UYOj0mQ7uzhQ2BTAVe9hngA7bQRmUpDAYXE5e2g2jWTQZiK20cDSOmqaiPy1KgvxhF_BcrXO7HnDP3NPAZk4SKnNoAHAl0/s1600/chickwts1.png" height="320" width="320" /></a></div>
<br />
<span style="font-family: Trebuchet MS, sans-serif;">I hate it when the labels on the x-axis overlap, but this can be hard to avoid. I can stretch the figure out, but then the data become farther apart and the space where I want to put the figure (either in a talk or a paper) may not accommodate that. I've never liked turning the labels diagonally, so recently I've started using </span><span style="font-family: Courier New, Courier, monospace;">coord_flip()</span><span style="font-family: Trebuchet MS, sans-serif;"> to switch the x- and y-axes:</span><br />
<span style="font-family: Courier New, Courier, monospace;">ggplot(chickwts, aes(feed, weight)) + stat_summary(fun.data=mean_se, geom="pointrange") + coord_flip()</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5msiyrhKTdChRCyOgGNFeWIUIld_HyOBuT5Hi_qmBpK4v0q-GJRJLrQEZS2pnnOjYvUhbYjnObeyAySd0W-jtkLFo4LJ6ngsDxhycAEDfvXzGUJk3z1HZMeedZCw7C2xKP0t1gCesFbg/s1600/chickwts2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5msiyrhKTdChRCyOgGNFeWIUIld_HyOBuT5Hi_qmBpK4v0q-GJRJLrQEZS2pnnOjYvUhbYjnObeyAySd0W-jtkLFo4LJ6ngsDxhycAEDfvXzGUJk3z1HZMeedZCw7C2xKP0t1gCesFbg/s1600/chickwts2.png" height="320" width="320" /></a></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">It took a little getting used to, but I think this works well. It's especially good for factor analyses (where you have many labeled items):</span><br />
<span style="font-family: Courier New, Courier, monospace;">library(psych)</span><br />
<span style="font-family: Courier New, Courier, monospace;">pc <- principal(Harman74.cor$cov, 4, rotate="varimax")</span><br />
<span style="font-family: Courier New, Courier, monospace;">loadings <- as.data.frame(pc$loadings[, 1:ncol(pc$loadings)])</span><br />
<span style="font-family: Courier New, Courier, monospace;">loadings$Test <- rownames(loadings)</span><br />
<br />
<span style="font-family: Courier New, Courier, monospace;">ggplot(loadings, aes(Test, RC1)) + geom_bar(stat="identity") + coord_flip() + theme_bw(base_size=10)</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmudLNk5VTE9Q2watN6Tqad2Oj29Bmh__bHkDGKQw7idHPDwPr4IifxLlryG1VIWrqMEgaD5TX7efWM_kyNGJsq7qyqbM-hv5zXeaE4y45qgdxfJCyZEL_8bp1jFZme8BzBeBxxejCxFA/s1600/fa1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmudLNk5VTE9Q2watN6Tqad2Oj29Bmh__bHkDGKQw7idHPDwPr4IifxLlryG1VIWrqMEgaD5TX7efWM_kyNGJsq7qyqbM-hv5zXeaE4y45qgdxfJCyZEL_8bp1jFZme8BzBeBxxejCxFA/s1600/fa1.png" height="400" width="320" /></a></div>
<span style="font-family: Trebuchet MS, sans-serif;">It also works well if you want to plot parameter estimates from a regression model (where the parameter names can get long):</span><br />
<span style="font-family: Courier New, Courier, monospace;">library(lme4)</span><br />
<span style="font-family: Courier New, Courier, monospace;">m <- lmer(weight ~ Time * Diet + (Time | Chick), data=ChickWeight, REML=F)</span><br />
<span style="font-family: Courier New, Courier, monospace;">coefs <- as.data.frame(coef(summary(m)))</span><br />
<span style="font-family: Courier New, Courier, monospace;">colnames(coefs) <- c("Estimate", "SE", "tval")</span><br />
<span style="font-family: Courier New, Courier, monospace;">coefs$Label <- rownames(coefs)</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<br />
<span style="font-family: Courier New, Courier, monospace;">ggplot(coefs, aes(Label, Estimate)) + geom_pointrange(aes(ymin = Estimate - SE, ymax = Estimate + SE)) + geom_hline(yintercept=0) + coord_flip() + theme_bw(base_size=10)</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3vxoWrbCEqVGZPyCV0OozCWpWea88L-xa2uNEjIrJDMdXLVuldDYuIpfPUCfqidpBl0WvE8z6M3jxSykIZHQu60fudejyvX5dPAYjfgK_nFd1mIexbiuWyA7KNkJTnTAEUN3yu0A3UkA/s1600/mlr1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3vxoWrbCEqVGZPyCV0OozCWpWea88L-xa2uNEjIrJDMdXLVuldDYuIpfPUCfqidpBl0WvE8z6M3jxSykIZHQu60fudejyvX5dPAYjfgK_nFd1mIexbiuWyA7KNkJTnTAEUN3yu0A3UkA/s1600/mlr1.png" height="400" width="400" /></a></div>
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span></div>
Dan Mirmanhttp://www.blogger.com/profile/09484166723075799719noreply@blogger.com0tag:blogger.com,1999:blog-8091991448412546705.post-61978063465800818052014-03-03T14:14:00.001-05:002014-03-03T14:14:10.228-05:00Guidebook for growth curve analysis<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: Trebuchet MS, sans-serif;">I don't usually like to use complex statistical methods, but every once in a while I encounter a method that is so useful that I can't avoid using it. Around the time I started doing eye-tracking research (as a post-doc with <a href="http://magnuson.psy.uconn.edu/" target="_blank">Jim Magnuson</a>), people were starting to recognize the value of using longitudinal data analysis techniques to analyze fixation time course data. Jim was ahead of most in this regard (<a href="http://magnuson.psy.uconn.edu/pdfs/magnuson_dixon_tanenhaus_aslin_CogSci2007.pdf" target="_blank">Magnuson et al., 2007</a>) and a special issue of the <i>Journal of Memory and Language</i> on data analysis methods gave us a great opportunity to describe how to apply "Growth Curve Analysis" (GCA) - a type of multilevel regression - to fixation time course data (<a href="http://www.danmirman.org/pdfs/MirmanDixonMagnuson2008.pdf" target="_blank">Mirman, Dixon, & Magnuson, 2008</a>). Unbeknownst to us, <a href="http://talklab.psy.gla.ac.uk/" target="_blank">Dale Barr</a> was working on very similar methods, though for somewhat different reasons, and our articles ended up neighbors in the special issue (<a href="http://www.sciencedirect.com/science/article/pii/S0749596X07001015" target="_blank">Barr, 2008</a>).</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://images.tandf.co.uk/common/jackets/crclarge/978146658/9781466584327.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="Growth Curve Analysis and Visualization Using R" border="0" src="http://images.tandf.co.uk/common/jackets/crclarge/978146658/9781466584327.jpg" /></a></div>
<span style="font-family: Trebuchet MS, sans-serif;">In the several years since those papers came out, it has become clear to me that other researchers would like to use GCA, but reading our paper and downloading our code examples was often not enough for them to apply GCA to their own data. There are excellent multilevel regression textbooks out there, but I think it is safe to say that it's a rare cognitive or behavioral scientist who has the time and inclination to work through a 600-page </span><span style="font-family: 'Trebuchet MS', sans-serif;">advanced regression </span><span style="font-family: 'Trebuchet MS', sans-serif;">textbook. It seemed like a more practical guidebook to implementing GCA was needed, so I </span><a href="http://www.crcpress.com/product/isbn/9781466584327" style="font-family: 'Trebuchet MS', sans-serif;" target="_blank">wrote one</a><span style="font-family: 'Trebuchet MS', sans-serif;">, and it has just been published by Chapman & Hall / CRC Press as part of their </span><a href="http://www.crcpress.com/browse/series/crctherser" style="font-family: 'Trebuchet MS', sans-serif;" target="_blank">R Series</a><span style="font-family: 'Trebuchet MS', sans-serif;">.</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">My idea was to write a relatively easy-to-understand book that dealt with the practical issues of implementing GCA using R. I assumed basic knowledge of behavioral statistics (standard coursework in graduate behavioral science programs) and minimal familiarity with R, but no expertise in computer programming or the specific R packages required for implementation (primarily </span><span style="font-family: Courier New, Courier, monospace;">lme4 </span><span style="font-family: Trebuchet MS, sans-serif;">and </span><span style="font-family: Courier New, Courier, monospace;">ggplot2</span><span style="font-family: Trebuchet MS, sans-serif;">). In addition to the core issues of fitting growth curve models and interpreting the results, the book covers plotting time course data and model fits and analyzing individual differences. Example data sets and solutions to the exercises in the book are available on my <a href="http://www.danmirman.org/gca" target="_blank">GCA website</a>.</span><br />
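To give a flavor of what this looks like in practice, here is a minimal sketch of a growth curve analysis using <code>lme4</code> and <code>ggplot2</code>. The data, variable names, and model terms below are all illustrative (simulated fixation proportions for two hypothetical conditions), not an example from the book: a second-order orthogonal polynomial captures the shape of the time course, condition effects enter as interactions with the time terms, and subjects get random slopes on the time terms.

```r
# Hypothetical GCA sketch: simulated fixation time course, two conditions.
library(lme4)
library(ggplot2)

# Simulated data (names and effect sizes are illustrative)
set.seed(1)
d <- expand.grid(Subject = factor(1:10), Time = 1:10,
                 Condition = factor(c("A", "B")))
d$Fixation <- 0.5 + 0.03 * d$Time - 0.002 * d$Time^2 +
  0.02 * (d$Condition == "B") * d$Time + rnorm(nrow(d), sd = 0.05)

# Orthogonal polynomial time terms (linear and quadratic)
t_poly <- poly(unique(d$Time), 2)
d[, c("ot1", "ot2")] <- t_poly[d$Time, 1:2]

# Growth curve model: time terms interact with Condition,
# by-subject random slopes on the time terms
m <- lmer(Fixation ~ (ot1 + ot2) * Condition +
            (ot1 + ot2 | Subject), data = d)
summary(m)

# Observed condition means (points) with model fits (lines)
ggplot(d, aes(Time, Fixation, color = Condition)) +
  stat_summary(fun = mean, geom = "point") +
  stat_summary(aes(y = fitted(m)), fun = mean, geom = "line")
```

Orthogonal polynomials keep the time terms uncorrelated, so the intercept and the shape terms can be interpreted independently; that choice, and the random-effects structure, are exactly the kinds of practical decisions the book walks through.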
<span style="font-family: Trebuchet MS, sans-serif;"><br /></span>
<span style="font-family: Trebuchet MS, sans-serif;">Obviously, the main point of this book is to help other cognitive and behavioral scientists use GCA, but I hope it will also encourage them to make better graphs and to</span><span style="font-family: 'Trebuchet MS', sans-serif;"> analyze individual differences. Individual differences are very important to cognitive science, yet most statistical methods treat them as mere noise; perhaps having better methods will lead to better science, though that is a subject for a different post. Comments and feedback about the book are, of course, most welcome. </span></div>