Saturday, August 11, 2018

Joining the editorial board of PLOS ONE

I have joined the Editorial Board of PLOS ONE. There are a few things about PLOS ONE that particularly appeal to me:
  • Broad scope is great for interdisciplinary research. My own research is primarily driven by experimental psychology, neuroscience, and computer science, as well as linguistics and neuropsychology/neurology. Before writing a manuscript, I often have to decide whether I will be submitting it to a cognitive psychology journal, a clinically-oriented (neuropsychology or neurology) journal, or a neuroscience journal. This decision is not always easy, and it has a major impact on how the manuscript needs to be written and who will review it. Since the scope of PLOS ONE covers the full range of natural and social sciences as well as medical research, you don't need to worry about that. Just clearly describe the motivation, methods, results, and conclusions of the study and trust that Editors like me will find appropriate reviewers.
  • Accepts various article types. In addition to standard research articles, PLOS ONE accepts systematic reviews, methods papers (including descriptions of software, databases, and other tools), qualitative research, and negative results. If your manuscript is reporting original research, then it is a viable submission.
  • Publication decisions based on scientific rigor, not perceived impact (see full Criteria for Publication). Trying to guess what kind of impact a paper will have on the field is difficult, and it's unnecessary because the field can figure that out on its own. As a reviewer, I focus on scientific rigor and whether the methods and results align with the motivation and conclusions. It's nice that PLOS ONE has the same focus. This emphasis on technical and ethical standards also means that PLOS ONE can publish good replication studies and negative results, which is critical for reducing publication bias and moving our field forward.
  • Fast decision times. Editors are expected to make decisions within a few days and reviewers are asked to complete their reviews in 10 days. Of course, this is no guarantee that a manuscript will have a fast decision -- it can take a long time to find reviewers and reviewers do not always meet their deadlines. But I think giving reviewers 10 days instead of 4-6 weeks (typical for psychology journals) and expecting editors to make fast decisions is a step in the right direction.
  • Open access at reasonable cost. This is not the place to discuss the relative merits of the standard reader-pay publication model and the open access author-pay model used by PLOS ONE. Suffice it to say that I like the open access model and I appreciate that PLOS ONE is doing it at a cost ($1595 USD) that is on the low end compared to other established open access journals.

Thursday, October 6, 2016

New media and priorities

I was disappointed to read (a draft of) a forthcoming APS Observer article by Susan Fiske in which she complains about how new media have allowed "unmoderated attacks" on individuals and their research programs. Other bloggers have written at some length about this (Andrew Gelman, Chris Chambers, Uri Simonsohn); I particularly recommend the longer and very thoughtful post by Tal Yarkoni. A few points have emerged as the most salient to me:

First, scientific criticism should be evaluated on its accuracy and constructiveness. Our goal should be accurate critiques that provide constructive ideas about how to do better. Efforts to improve the peer review process often focus on those factors, along with timeliness. As it happens, blogs are actually great for this: posts can be written quickly and immediately followed by comments that allow for back-and-forth so that any inaccuracies can be corrected and constructive ideas can emerge. Providing critiques in a polite way is a nice goal, but it is secondary. (Tal Yarkoni's post discusses this issue very well).

Second, APS is the publisher of Psychological Science, a journal that was once prominent and prestigious, but has gradually become a pop psychology punchline. Perhaps I should not have been surprised that they're publishing an unmoderated attack on new media.

Third, things have changed very rapidly (this is the main point of Andrew Gelman's post). When I was in graduate school (2000-2005), I don't remember hearing concerns about replication, and standard operating procedures included lots of stuff that I would now consider "garden of forking paths"/"p-hacking". 2011 was a major turning point: Daryl Bem reported his evidence of ESP (side note: he had been working on that since at least the mid-to-late 90s, when I was an undergrad at Cornell and heard him speak about it). At the time, the flaws in that paper were not at all clear. That was also the year a paper called “False-positive psychology” was published (in Psychological Science), which showed that “researcher degrees of freedom” (or "p-hacking") make actual false positive rates much higher than the nominal p < 0.05 values. The year after that, in 2012, Greg Francis's paper ("Too good to be true") came out showing that multi-experiment papers reporting consistent replications of small effect sizes are themselves very unlikely and may reflect selection bias, p-hacking, or other problems. 2012 was also the year I was contacted by the Open Science Collaboration to contribute to their large-scale replication effort, which eventually led to a major report on the reproducibility of psychological research.

My point is that these issues, which are a huge deal now, were not very widely known even 5-6 years ago and almost nobody was talking about them 10 years ago. To put it another way, just about all tenured Psychology professors were trained before the term "p-hacking" even existed. So, maybe we should admit that all this rapid change can be a bit alarming and disorienting. But we're scientists, we're in the business of drawing conclusions from data, and the data clearly show that our old way of doing business has some flaws, so we should try to fix those flaws. Lots of good ideas are being implemented and tested -- transparency (sharing data and analysis code), post-publication peer review, new impact metrics for hiring/tenure/promotion that reward transparency and reproducibility. And many of those ideas came from those unmoderated new media discussions.

Tuesday, March 1, 2016

MAPPD 2.0

About 5 or 6 years ago my colleagues at Moss Rehabilitation Research Institute and I made public a large set of behavioral data from language and cognitive tasks performed by people with aphasia. Our goal was to facilitate larger-scale research on spoken language processing and how it is impaired following left hemisphere stroke. We are pleased to announce that we have completed a thorough redesign of the Moss Aphasia Psycholinguistics Project Database site. The MAPPD 2.0 interface is much simpler and easier to use, geared toward letting users download the data they want and analyze it themselves.

The core of this database is single-trial picture naming and word repetition data for over 300 participants (including 20 neurologically intact control participants) with detailed target word and response information. The database also contains basic demographic and clinical information for each participant with aphasia, as well as performance on a host of supplementary tests of speech perception, semantic cognition, short-term/working memory, and sentence comprehension. A more detailed description of the included tests, coding schemes, and usage suggestions is available in our original description of the database (Mirman et al., 2010) and in the site's documentation.

Friday, February 19, 2016

Acceptance and rejection rates

There was a recent blog post at Frontiers pointing out that journals' publicly-available rejection rates are not associated with their impact factors. Their post discusses several factors that contribute to this, but I've been thinking about how rejection rates are calculated, particularly the publicly stated ones. For example, the 2013 rejection rate for both JEP:LMC and JEP:HPP is 78%, and JEP:General is slightly higher at 83%. These are top-tier experimental psychology journals and those rejection rates seem intuitively appropriate for selective outlets, but I think they might be inflated because many papers are rejected with an invitation to revise and resubmit.
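To make the arithmetic concrete, here is a toy sketch (with entirely hypothetical counts) of how folding revise-and-resubmit decisions into the rejection count inflates the stated rate relative to the rate of manuscripts that are truly turned away:

```python
# Toy illustration with hypothetical numbers: how counting "revise and resubmit"
# decisions as rejections inflates a journal's stated rejection rate.

submissions = 500          # hypothetical annual submissions
outright_rejections = 300  # desk rejections plus reject-after-review
revise_resubmit = 90       # invited to revise; many eventually appear in the journal
accepted_first_round = 110

# Rate reported if every non-acceptance (including R&R invitations) counts as a rejection
stated_rate = (outright_rejections + revise_resubmit) / submissions

# Rate if manuscripts that are invited back are not counted as rejected
effective_rate = outright_rejections / submissions

print(f"Stated rejection rate:    {stated_rate:.0%}")    # 78%
print(f"Effective rejection rate: {effective_rate:.0%}") # 60%
```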

Thursday, September 3, 2015

Reproducibility project: A front row seat

A recent paper in Science reports the results of a large-scale effort to test reproducibility in psychological science. The results have caused much discussion (as well they should) in both general-public and scientific forums. I thought I would offer my perspective as the lead author of one of the studies that was included in the reproducibility analysis. I had heard about the project even before being contacted to participate, and one of the things that appealed to me about it was that they were trying to be unbiased in their selection of studies for replication: all papers published in three prominent journals in 2008. Jim Magnuson and I had published a paper in one of those journals (Journal of Experimental Psychology: Learning, Memory, & Cognition) in 2008 (Mirman & Magnuson, 2008), so I figured I would hear from them sooner or later.

Friday, June 19, 2015

Zeno's paradox of teaching

I've wrapped up my Spring term teaching and received my teaching evals. Now that I've (finally) had a chance to teach the same class a few times, I am starting to believe in what I call Zeno's Paradox of Teaching: every time I teach a class, my improvement in teaching quality is half the distance between the quality of the last time I taught it and my maximum ability to teach that material.
If I'm right about this, then I think it means that it's important to think long-term when approaching teaching:
  1. New faculty (like me) should start by teaching primarily core courses, ones that are offered every year, have good support materials, and provide a consistent opportunity for improvement. Specialized seminars can be fun to teach, but if they're not going to be offered every year, then improvement will be slow.
  2. Don't drive yourself (myself) crazy trying to teach the "perfect" class on your (my) first time teaching. Try to do a good job and next time try to improve on it as much as possible.
  3. Zeno's paradox means that I'll never teach quite as well as I think I could teach. The positive message there is that one should continue trying to come up with creative ways to improve a course. The warning there is that perfection is not an appropriate standard, so one shouldn't be too hard on oneself for failing to reach it.

Tuesday, October 7, 2014

Why pursue a Ph.D.?

This video is directed at STEM fields, so I am not sure everything in it applies perfectly to cognitive neuroscience. But, if you're going to go to grad school, I think this is the right kind of perspective to bring:


Why Pursue A Ph.D.? Three Practical Reasons (12-minute video) from Philip Guo on Vimeo.

(via FlowingData)

Monday, January 27, 2014

Graduate school advice

Since it is the season for graduate school recruitment interviews, I thought I would share some of my thoughts. This is also partly prompted by two recent articles in the journal Neuron. If you're unfamiliar with it, Neuron is a very high-profile neuroscience journal, so the advice is aimed at graduate students in neuroscience, though I think the advice broadly applies to students in the cognitive sciences (and perhaps other sciences as well). The first of these articles deals with what makes a good graduate mentor and how to pick a graduate advisor; the second article has some good advice on how to be a good graduate advisee.

I broadly agree with the advice in those articles and here are a few things I would add:

Monday, November 25, 2013

Does Malcolm Gladwell write science or science fiction?

Malcolm Gladwell is great at writing anecdotes, but he dangerously masquerades these as science. Case studies can be incredibly informative -- they form the historical foundation of cognitive neuroscience and continue to be an important part of cutting-edge research. But there is an important distinction between science, which relies on structured data collection and analysis, and anecdotes, which rely on an entertaining narrative structure. His claim that dyslexia might be a "desirable difficulty" is maybe the most egregious example of this. Mark Seidenberg, who is a leading scientist studying dyslexia and an active advocate, has written an excellent commentary about Gladwell's misrepresentation of dyslexia. The short version is that dyslexia is a serious problem that, for the vast majority of people, leads to various negative outcomes. The existence of a few super-successful self-identified dyslexics may be encouraging, maybe even inspirational, but it absolutely cannot be taken to mean that dyslexia might be good for you.

In various responses to his critics, Gladwell has basically said that people who know enough about the topic to recognize that (some of) his conclusions are wrong, shouldn't be reading his books ("If my books appear to a reader to be oversimplified, then you shouldn't read them: you're not the audience!"). This is extremely dangerous: readers who don't know about dyslexia, about its prevalence or about its outcomes, would be led to the false conclusion that dyslexia is good for you. The problem is not that his books are oversimplified; the problem is that his conclusions are (sometimes) wrong because they are based on a few convenient anecdotes that do not represent the general pattern.

Another line of defense is that Gladwell's books are only meant to raise interesting ideas and stimulate new ways of thinking in a wide audience, not to be a scholarly summary of the research. Writing about science in a broadly accessible way is a perfectly good goal -- my own interest in cognitive neuroscience was partly inspired by the popular science writing of people like Oliver Sacks and V.S. Ramachandran. The problem is when the author rejects scientific accuracy in favor of just talking about "interesting ideas". Neal Stephenson once said that what makes a book "science fiction" is that it is fundamentally about ideas. It is great to propose new ideas and explore what they might mean. But if we follow that logic, then Malcolm Gladwell is not a science writer, he is a science fiction writer.

Tuesday, October 15, 2013

Graduate student positions available at Drexel University

The Applied Cognitive and Brain Sciences (ACBS) program at Drexel University invites applications for Ph.D. students to begin in the Fall of 2014. Faculty research interests in the ACBS program span the full range from basic to applied science in Cognitive Psychology, Cognitive Neuroscience, and Cognitive Engineering, with particular faculty expertise in computational modeling and electrophysiology. Accepted students will work closely with their mentor in a research-focused setting, housed in a newly-renovated, state-of-the-art facility featuring spacious graduate student offices and collaborative workspaces. Graduate students will also have the opportunity to collaborate with faculty in Clinical Psychology, the School of Biomedical Engineering and Health Sciences, the College of Computing and Informatics, the College of Engineering, the School of Medicine, and the University's new Expressive and Creative Interaction Technologies (ExCITe) Center.

Specific faculty members seeking graduate students, and possible research topics, are listed after the jump.

Wednesday, May 22, 2013

Choosing a journal

For almost every manuscript I've been involved with, my co-authors and I have had to discuss where (to which journal) we should submit it. Typically, this is a somewhat fuzzy discussion about the fit between the topic of our manuscript and various journals, the impact factor of those journals (even though I'm not fond of impact factors), their editorial boards (I find that the review process is much more constructive and effective when the editor is knowledgeable about the topic and sends the manuscript to knowledgeable reviewers), manuscript guidelines such as word limits, and turn-around times (which can vary from a few weeks to several months).

I've just learned about a very cool online tool, called JANE (Journal/Author Name Estimator), that provides recommendations. The recommendations are based on similarity between your title or abstract and articles published in various journals (their algorithm is described in a paper). This similarity score serves as the confidence of the recommendation, and the journal recommendations come with an Article Influence score, which is a measure of how often articles in the journal get cited (from eigenfactor.org). I tried it out using the titles and abstracts of some recent (but not yet published) manuscripts and I thought it provided very appropriate recommendations. Not surprisingly, the recommendations were a little better when I provided the abstract instead of the title, but I was impressed with how well it did based on the title alone (maybe this means that I write informative titles?). JANE can also be used to find authors who have published on your topic, which could be useful for suggesting reviewers and generally knowing who is working in your area, but I found this search type to be noisier, probably simply due to sample size -- a typical author has many fewer publications than a typical journal. JANE won't answer all your journal and manuscript questions, but I am looking forward to using it next time I find myself debating where to submit a manuscript.
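For readers curious about the general idea, here is a toy sketch of similarity-based journal recommendation. This is not JANE's actual algorithm (that is described in their paper); the mini-corpus, query, and TF-IDF/cosine-similarity approach are just illustrative assumptions:

```python
# Toy sketch: rank journals by the similarity between a query abstract and
# abstracts previously published in each journal. Corpus and query are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: (journal name, abstract of a published article)
corpus = [
    ("Journal of Memory and Language", "Eye-tracking evidence for incremental spoken word recognition"),
    ("Cognition", "Semantic interference effects in picture naming"),
    ("Neuropsychologia", "Lesion-symptom mapping of semantic deficits after left hemisphere stroke"),
]

query = "Spoken word recognition and semantic access in aphasia after left hemisphere stroke"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform([abstract for _, abstract in corpus])
query_vec = vectorizer.transform([query])

# Cosine similarity between the query and each journal's abstract(s)
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for journal, score in sorted(zip((j for j, _ in corpus), scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {journal}")
```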

Monday, December 17, 2012

Gender equality in science

An interesting post over at BishopBlog takes on the lack of men in Psychology. One of the reasons BishopBlog is a favorite of mine is that you get real data along with interpretations and opinions. Two points in the post strongly resonated with my experience. 

One is the decline of women in science by career stage. A few years ago, the NSF did a major study of women and minorities in science and identified attrition as a key reason for the under-representation of women in science. Following Dr. Bishop's example, a little complementary data (from 2010; these and lots more data are available here): in Neuroscience, the graduate student population was slightly biased toward women (52.7% were female), but the postdoctoral fellow population was biased toward men (only 45.7% were female). Given the fairly large sample sizes (2798 graduate students and 818 postdocs), this difference was highly reliable (chi-square test of independence, p < 0.001).
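For anyone who wants to check the arithmetic, here is a short sketch of that chi-square test. The cell counts are reconstructed from the reported percentages and sample sizes, so they are approximate and the exact statistic may differ slightly from the original calculation:

```python
# Approximate reconstruction of the chi-square test of independence reported above.
from scipy.stats import chi2_contingency

# Cell counts approximated from the reported percentages
grad_women = round(2798 * 0.527)   # ~1475
grad_men   = 2798 - grad_women
post_women = round(818 * 0.457)    # ~374
post_men   = 818 - post_women

# rows: graduate students, postdocs; columns: women, men
table = [[grad_women, grad_men],
         [post_women, post_men]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.4f}")  # p < 0.001, consistent with the post
```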

The second is the effect of sub-field. I am particularly sensitive to this because I seem to work in two of the most gender-biased sub-fields: computational modeling seems strongly male-dominated, but cognitive neuropsychology seems strongly female-dominated. I couldn't find data for those fields exactly, but the APA membership data that Dr. Bishop mentioned show a huge disparity: women make up only about 25% of the members in Experimental Psychology and Behavioral Neuroscience, close to half in Clinical Neuropsychology, and about 70% in Developmental Psychology.

This issue is certainly complex and there is no simple solution. That said, there are some strategies that we know would help and can be implemented relatively easily. For example, we know that there is bias in the review process (e.g., Peters & Ceci, 1982), so why not make it double-blind? This is already the standard in some fields, but remains generally optional or unavailable in cognitive science and cognitive neuroscience. It is true that reviewers may be able to guess the identity of the author(s) some of the time, but isn't guessing correctly some of the time better than knowing all of the time? This would (partially) level the playing field between genders as well as between junior and senior scientists and should lead to a more fair system.


References
Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5, 187-255.

Saturday, September 22, 2012

The power to see the future is exciting and terrifying

In a recent comment in Nature, Daniel Acuna, Stefano Allesina, and Konrad Kording describe a statistical model for predicting h-index. In case you are not familiar with it, h-index is a citation-based measure of scientific impact: an h-index of n means that you have n publications with at least n citations each. I only learned about h-index relatively recently and I think it is quite an elegant measure -- simple to compute, not too biased by a single highly-cited paper or by many low-impact (uncited) papers. Acuna, Allesina, and Kording took publicly available data and developed a model for predicting future h-index based on number of articles, current h-index, years since first publication, number of distinct journals published in, and number of articles in the very top journals in the field (Nature, Science, PNAS, and Neuron). Their model accounted for about 66% of the variance in future h-index among neuroscientists, which I think is pretty impressive. Perhaps the coolest thing about this project is the accompanying website that allows users to predict their own h-index.
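Since the definition is so simple, a few lines of code make it concrete. This is just an illustrative sketch of the definition above, not the Acuna et al. prediction model:

```python
# Minimal sketch of the h-index definition: the largest n such that
# n of your papers have at least n citations each.
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4
print(h_index([10, 8, 5, 4, 3]))  # 4
```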

Since hiring and tenure decisions are intended to reflect both past accomplishments and expectations of future success, this prediction model is potentially quite useful. Acuna et al. are appropriately circumspect about relying on a single measure for making such important decisions, and they are aware that over-reliance on a single metric can produce "gaming" behavior. So the following is not meant as a criticism of their work, but two examples jumped to my mind: (1) Because number of distinct journals is positively associated with future h-index (presumably it is an indicator of breadth of impact), researchers may choose to send their manuscripts to less appropriate journals in order to increase the number of journals in which their work has appeared. Those journals, in turn, would be less able to provide appropriate peer review and the articles would be less visible to the relevant audience, so their impact would actually be lower. (2) The prestige of those top journals already leads them to be targets for falsified data -- Nature, Science, and PNAS are among the leading publishers of retractions (e.g., Liu, 2006). Formalizing and quantifying that prestige factor can only serve to increase the motivation for unethical scientific behavior.

That said, I enjoyed playing around with the simple prediction calculator on their website. I'd be wary if my employer wanted to use this model to evaluate me, but I think it's kind of a fun way to set goals for myself: the website gave me a statistical prediction for how my h-index will increase over the next 10 years, now I'm going to try to beat that prediction. Since h-index is (I think) relatively hard to "game", this seems like a reasonably challenging goal.

Acuna, D. E., Allesina, S., & Kording, K. P. (2012). Predicting scientific success. Nature, 489 (7415), 201-202. DOI: 10.1038/489201a
Liu, S. V. (2006). Top Journals’ Top Retraction Rates. Scientific Ethics, 1 (2), 91-93.

Monday, August 20, 2012

The translational pipeline

Over the weekend I read yet another excellent article by Atul Gawande in the most recent issue of the New Yorker. There are many interesting things in this article and I highly recommend it, but there was one minor comment that really resonated with my own experience. Dr. Gawande mentioned that it's hard to get health care providers (doctors, nurses, clinicians of all types) to accept changes. This resistance to change is one of the obstacles in the translational research pipeline identified by my colleague John Whyte (e.g., Whyte & Barrett, 2012). The other major obstacle is using a theoretical understanding of some process, mechanism, or impairment to develop a potential treatment. 

I deeply value basic science (after all, it is most of what I do) and I recognize the importance of specialization -- the skills required for good basic science are not the same as the skills required for developing and testing treatments. Nevertheless, sometimes I worry that we basic scientists don't even speak the same language as the researchers trying to develop and test interventions. The clinical fields that border cognitive science (education, rehabilitation medicine, etc.) certainly stand to benefit from rigorous development of cognitive and neuroscience theory and this is the standard motivation given by basic scientists when applying for funding to the National Institutes of Health. 

Over the last few years I've come to realize that the benefits also run in the other direction: interventions can provide unique tests of theories. Making new, testable predictions is one of the hallmarks of a good theory, but if the new predictions are limited to much-used, highly constrained laboratory paradigms, then it can feel like we're just spinning our wheels. Making predictions for interventions, or even just for individual differences, is one way to test a theory and to simultaneously expand its scope. As NIH puts more emphasis on its health mission, I hope cognitive and neural scientists will see this as an opportunity to expand the scope of our theories rather than as an inconvenient constraint.

Whyte, J., & Barrett, A. M. (2012). Advancing the evidence base of rehabilitation treatments: A developmental approach. Archives of Physical Medicine and Rehabilitation, 93 (8 Suppl 2). PMID: 22683206

Thursday, August 16, 2012

Brain > Mind?

My degrees are in psychology, but I consider myself a (cognitive) neuroscientist. That's because I am interested in how the mind works and I think studying the brain can give us important and useful insights into mental functioning. But it is important not to take this too far. In particular, I think it is unproductive to take the extreme reductionist position that "the mind is merely the brain". I've spelled out my position (which I think is shared by many cognitive neuroscientists) in a recent discussion on the Cognitive Science Q&A site cogsci.stackexchange.com. The short version is that I think it is trivially true that the mind is just the brain, but the brain is just molecules, which are just atoms, which are just particles, etc., etc. If you're interested in understanding human behavior, particle physics is of little use. In other words, when I talk about the mind, I'm talking about a set of physical/biological processes that are best described at the level of organism behavior.

The issue of separability of the mind and brain is also important when considering personal responsibility, as John Monterosso and Barry Schwartz pointed out in a recent piece in the New York Times and in their study (Monterosso, Royzman, & Schwartz, 2005). (Full disclosure: Barry's wife, Myrna Schwartz, is a close colleague at MRRI). Their key finding was that perpetrators of crimes were judged to be less culpable given a physiological explanation (such as a neurotransmitter imbalance) than an experiential explanation (such as having been abused as a child), even though the link between the explanation and the behavior was matched. That is, when participants were told that (for example) 20% of people with this neurotransmitter imbalance commit such crimes or 20% of people who had been abused as children commit such crimes, the ones with the neurotransmitter imbalance were judged to be less culpable.

Human behavior is complex and explanations can be framed at different levels of analysis. Neuroscience can provide important insights and constraints for these explanations, but precisely because psychological processes are based in neural processes, neural processes cannot be any more "automatic" than psychological processes, nor can neural evidence be any more "real" than behavioral evidence.

Monterosso, J., Royzman, E. B., & Schwartz, B. (2005). Explaining Away Responsibility: Effects of Scientific Explanation on Perceived Culpability. Ethics & Behavior, 15 (2), 139-158. DOI: 10.1207/s15327019eb1502_4