[222003710010] |No tenure, no way!
[222003710020] |The New York Times is carrying an interesting but misguided discussion of tenure today.
[222003710030] |As usual, the first commentator warns that without tenure, academic freedom will die:
[222003710040] |As at-will employees, adjunct faculty members can face dismissal or nonrenewal when students, parents, community members, administrators, or politicians are offended at what they say.
[222003710050] |If you can be fired tomorrow, you do not really have academic freedom.
[222003710060] |Self-censorship often results.
[222003710070] |Mark Taylor of Columbia replies, essentially, "oh yah?"
[222003710080] |To those who say the abolition of tenure will make faculty reluctant to be demanding with students or express controversial views, I respond that in almost 40 years of teaching, I have not known a single person who has been more willing to speak out after tenure than before.
[222003710090] |Instead, tenure induces stasis, a point with which Richard Vedder, an economist at Ohio University, agrees:
[222003710100] |The fact is that tenured faculty members often use their power to stifle innovation and change.
[222003710110] |Money
[222003710120] |You might, reading through these discussions, almost think that universities have been slowly weakening the tenure system because they want to increase diversity, promote a flexible workforce, and reduce the power of crabby old professors.
[222003710130] |Maybe some administrators do feel that way.
[222003710140] |But lurking behind all of this discussion is money.
[222003710150] |Here's Taylor:
[222003710160] |If you take the current average salary of an associate professor and assume this tenured faculty member remains an associate professor for five years and then becomes a full professor for 30 years, the total cost of salary and benefits alone is $12,198,578 at a private institution and $9,992,888 at a public institution.
[222003710170] |I'm not sure where he's getting these numbers.
[222003710180] |The number at Harvard for the same period is $6,320,500 for salary alone.
[222003710190] |Assuming benefits cost as much as the salary alone gets us up to our $12,000,000, but that's for Harvard, not the average university.
[222003710200] |Perhaps Taylor is assuming the professor starts today and includes inflation in future salaries, but 35 years of inflation is a lot.
[222003710210] |I'm using present-day numbers and assuming real salaries remain constant.
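To make the back-of-the-envelope arithmetic explicit, here is a minimal sketch in Python. The per-year salary figures are placeholders I picked so that the totals land near the figures quoted above; they are not numbers from Taylor's piece or from any salary survey.

```python
# Back-of-the-envelope career-cost arithmetic: 5 years as an associate professor
# plus 30 years as a full professor. The salary figures below are illustrative
# placeholders, not actual Harvard or survey numbers.
associate_salary = 121_000   # assumed annual salary as an associate professor
full_salary = 190_500        # assumed annual salary as a full professor

salary_total = 5 * associate_salary + 30 * full_salary
print(f"35-year salary total: ${salary_total:,}")                        # $6,320,000

# Doubling to approximate salary plus benefits, as in the post:
print(f"Salary plus benefits (at 100% of salary): ${2 * salary_total:,}")  # $12,640,000
```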
[222003710220] |In any case, money seems to be the real factor, mentioned by more or less all the contributors.
[222003710230] |Here's Vedder:
[222003710240] |My academic department recently granted tenure to a young assistant professor.
[222003710250] |In so doing, it created a financial liability of over two million dollars, because it committed the institution to providing the individual lifetime employment.
[222003710260] |With nearly double digit unemployment and universities furloughing and laying off personnel, is tenure a luxury we can still afford?
[222003710270] |Adrianna Kezar of USC notes that non-tenured faculty are often not given offices or supplies, which presumably also saves the university money.
[222003710280] |Professors make choices, too.
[222003710290] |So universities save a lot of money by eliminating tenure.
[222003710300] |And certainly universities need to find savings where they can.
[222003710310] |What none of the contributors to the discussion acknowledge, beyond an oblique aside by Vedder, is that tenure has a financial value to professors as well as universities.
[222003710320] |Removing tenure is, in a sense, a pay cut, and both current and prospective academics will respond to that pay cut.
[222003710330] |Becoming a professor is not a wise financial decision.
[222003710340] |The starting salary of a lawyer leaving a top law school is greater than what most PhDs from the same schools will make at the height of their careers should they stay in academia.
[222003710350] |And lawyers' salaries, as I'm often reminded, are themselves dwarfed by those of people with no graduate education who go straight into finance.
[222003710360] |Most of us who nonetheless go into academia do so because we love it.
[222003710370] |The point is that we have options.
[222003710380] |Making the university system less attractive will mean fewer people will want to go into it.
[222003710390] |It's really that simple.
[222003730010] |Language Games
[222003730020] |Translation Party
[222003730030] |Idea: type in a sentence in English.
[222003730040] |The site then queries Google Translator, translating into Japanese and then back again until it reaches "equilibrium," where the sentence you get out is the sentence you put in.
[222003730050] |Some sentences just never converge.
[222003730060] |Ten points to whoever finds the most interesting non-convergence.
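In case the "equilibrium" idea is unclear, it amounts to iterated back-translation until you hit a fixed point (or give up). A minimal sketch, with a hypothetical translate() placeholder standing in for whatever translation service gets queried:

```python
# Sketch of the Translation Party idea: translate English -> Japanese -> English
# repeatedly until the sentence stops changing (a fixed point) or we give up.
# `translate` is a hypothetical placeholder, not a real API call.

def translate(text: str, src: str, dest: str) -> str:
    raise NotImplementedError("plug in your favorite translation service here")

def find_equilibrium(sentence: str, max_rounds: int = 20) -> str:
    seen = {sentence}
    current = sentence
    for _ in range(max_rounds):
        japanese = translate(current, src="en", dest="ja")
        back = translate(japanese, src="ja", dest="en")
        if back == current:          # reached equilibrium
            return back
        if back in seen:             # cycling without ever converging
            break
        seen.add(back)
        current = back
    return current                   # best effort; may never have converged
```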
[222003740010] |What are the best cognitive science blogs?
[222003740020] |If you look to your right, you'll see I've been doing some long-needed maintenance to my blog roll.
[222003740030] |As before, I'm limiting it to blogs that I actually read (though not all the blogs I read), and I have it organized by subject matter.
[222003740040] |As I did this, I noticed that the selection of cognitive science and language blogs is rather paltry.
[222003740050] |Most of the science blogs I read -- including many not on the blog roll -- are written by physical scientists.
[222003740060] |Sure, there are more of them than us, but even so it seems there should be more good cognitive science and language blogs.
[222003740070] |So I'm going to crowd-source this and ask you, dear readers, who should I be reading that I'm not?
[222003750010] |I liked "Salt," but...
[222003750020] |What's with movies in which fMRI can be done remotely?
[222003750030] |In an early scene, the CIA does a remote brain scan of someone sitting in a room.
[222003750040] |And it's fully analyzed, too, with ROIs shown.
[222003750050] |I want that technology -- it would make my work so much easier!
[222003750060] |UPDATE I'm not the only one with this complaint.
[222003750070] |Though Popular Mechanics goes a bit easy on the movie by saying fMRI is "not quite at the level Salt portrays."
[222003750080] |That's a bit like saying space travel is not quite at the level Star Trek portrays.
[222003750090] |There may someday be a remote brain scanner, but it won't be based on anything remotely like existing fMRI technology, which requires incredibly powerful, supercooled and loud magnets.
[222003750100] |Even if you solved the noise problems, there's nothing to be done about the fact that the knife embedded in the Russian spy's shoe (yes -- it is that kind of movie) would have gone flying to the center of the magnetic field, along with many of the other metal objects in the room.
[222003770010] |Honestly, Research Blogging, Get over yourself
[222003770020] |A few years ago, science blog posts started decorating themselves with a simple green logo.
[222003770030] |This logo was meant to credential the blog post as being one about peer-reviewed research, and is supplied by Research Blogging.
[222003770040] |As ResearchBlogging.org explains:
[222003770050] |ResearchBlogging.org is a system for identifying the best, most thoughtful blog posts about peer-reviewed research.
[222003770060] |Since many blogs combine serious posts with more personal or frivolous posts, our site offers a way to find only the most carefully-crafted posts about cutting-edge research, often written by experts in their respective fields.
[222003770070] |That's a good goal and one I support.
[222003770080] |If you read further down, you see that this primarily amounts to the following: if the post is about a peer-reviewed paper, it's admitted to the network.
[222003770090] |If it's not, it isn't.
[222003770100] |I guess the assumption is that the latter is not carefully-crafted or about cutting-edge research.
[222003770110] |And that's where I get off the bus.
[222003770120] |Peer Review is Not Magic
[222003770130] |One result of the culture wars is that scientists have needed a way of distinguishing real data from fantasy.
[222003770140] |If you look around the Internet, no doubt half or even more than half of what is written suggests there's no global warming, that vaccines cause autism, etc.
[222003770150] |Luckily, fanatics rarely publish in peer-reviewed journals, so once we restrict the debate to what is in peer-reviewed journals, pretty much all the evidence suggests global warming, no autism-vaccine link, etc.
[222003770160] |So pointing to peer-review is a useful rhetorical strategy.
[222003770170] |That, at least, is what I assume has motivated all the stink about peer-review in recent years, and ResearchBlogging.org's methods.
[222003770180] |But it's out of place in the realm of science blogs.
[222003770190] |It's useful to think about what peer review is.
[222003770200] |A reviewer for a paper reads the paper.
[222003770210] |The reviewer does not (usually) attempt to replicate the experiment.
[222003770220] |The reviewer does not have access to the data and can't check that the analyses were done correctly.
[222003770230] |At best, the reviewer evaluates the conclusions the authors draw, and maybe even criticizes the experimental protocol or the statistical analyses used (assuming the reviewers understand statistics, which in my field is certainly not always the case).
[222003770240] |But the reviewer can't check that the data weren't made up, that the experimental protocol was actually followed, that there were no errors in data analysis, etc.
[222003770250] |In other words, the reviewer can do only and exactly what a good science blogger does.
[222003770260] |So good science blogging is, at its essence, a kind of peer review.
[222003770270] |Drawbacks
[222003770280] |Now, you might worry about the fact that the blogger could be anyone.
[222003770290] |There's something to that.
[222003770300] |Of course, ResearchBlogging.org has the same problem.
[222003770310] |Just because someone is blogging about a peer-reviewed paper doesn't mean they understand it (or that they aren't lying about it, which happens surprisingly often with the fluoride fanatics).
[222003770320] |So while peer review might be a useful way of vetting the paper, it won't help us vet the blog.
[222003770330] |We still have to do that ourselves (and science bloggers seem to do a good job of vetting).
[222003770340] |A weakness
[222003770350] |Ultimately, I think it's risky to put all our cards on peer review.
[222003770360] |It's a good system, but it's possible to circumvent.
[222003770370] |All peer review really tells us is that some set of scientists read the paper and thought it was worth publishing (with the caveats mentioned above).
[222003770380] |Of course, those scientists could be anybody, too -- it's up to the editor.
[222003770390] |So there's nothing really stopping autism-vaccine fanatics from establishing their own peer-reviewed journal, with reviewers who are all themselves autism-vaccine fanatics.
[222003770400] |To an extent, that already happens.
[222003770410] |As long as there's a critical mass of scientists who think a particular way, they can establish their own journal, submit largely to that journal and review each other's submissions.
[222003770420] |Thus, papers that couldn't have gotten published at a more mainstream journal can get a home.
[222003770430] |I think anyone who has done a literature search recently knows there are a lot of bad papers out there (in my field, anyway, though I imagine the same is true in others).
[222003770440] |Peer review is a helpful vetting process, and it does make papers better.
[222003770450] |But it doesn't determine fact.
[222003770460] |That is something we still have to find for ourselves.
[222003770470] |**** Observant readers will have noticed that I use ResearchBlogging.org myself for its citation system.
[222003770480] |What can I say?
[222003770490] |It's useful.
[222003780010] |1/3 of Americans can't speak?
[222003780020] |A number of people have been blogging about a recent, still unpublished study suggesting that "a significant proportion of native English speakers are unable to understand some basic sentences."
[222003780030] |Language Log has a detailed explanation of the methods, but in essence participants were asked to match sentences to pictures.
[222003780040] |A good fraction made large numbers of mistakes, particularly those who had been high-school drop-outs.
[222003780050] |What's going on here?
[222003780060] |To an extent, this shouldn't be that surprising.
[222003780070] |We all know there are people who regularly mangle language.
[222003780080] |But, as Mark Liberman at Language Log points out, at least some of these data are no doubt ascribable to the "paper airplane effect":
[222003780090] |At one point we thought we had discovered that a certain fraction of the population is surprisingly deaf to certain fairly easy speech-perception distinctions; the effect, noted in a population of high-school-student subjects, was replicable; but observing one group of subjects more closely, we observed that a similar fraction spent the experiment surreptitiously launching paper airplanes and spitballs at one another.
[222003780100] |It's worth remembering that, while many participants in an experiment take it seriously and are happy to help out the researcher, some are just there for the money they get paid.
[222003780110] |Since we're required to pay people whether they pay attention to the experiment or not, they really don't have any incentive to try hard.
[222003780120] |Does it surprise anyone that high-school drop-outs are particularly likely to be bad at/uninterested in taking tests?
[222003780130] |It's probably relevant that the researchers involved in this study are linguists.
[222003780140] |There are some linguists who run fabulous experiments, but as a general rule, linguists don't have much training in doing experiments or much familiarity with what data looks like.
[222003780150] |So it's not surprising that the researchers in question -- and the people to whom they presented the data -- weren't aware of the paper airplane effect.
[222003780160] |(I should say that psychology is by no means immune to this problem.
[222003780170] |Whenever a new method is adopted, it takes a while before there's a critical mass of people who really understand it, and in the meantime a lot of papers with spurious conclusions get written.
[222003780180] |I'm thinking of fMRI here.)
[222003790010] |Joining Twitter. Sigh.
[222003790020] |The last few weeks I've been making some changes at this blog.
[222003790030] |One is to write fewer but higher-quality posts.
[222003790040] |Hopefully you noticed the latter and not just the former.
[222003790050] |At the same time, I have been finding more and more articles and posts that demand sharing, but about which I have little or nothing to say, except that you should read them.
[222003790060] |This has led me to add a twitter feed above the posts.
[222003790070] |You can read it there or follow me directly.
[222003790080] |We'll see how it goes.
[222003790090] |Feedback is welcome.
[222003790100] |After all, I do this for the audience.
[222003790110] |UPDATED
[222003790120] |Another change: This blog is *relatively* new to FieldOfScience, but posts go back to 2007.
[222003790130] |Some of those older posts are worth revisiting, and I'll be reposting (occasionally with updates) a few of the better ones from time to time under the label "golden oldies".
[222003790140] |Again, if people have feelings about this, let me know.
[222003800010] |Anonymity
[222003800020] |It seems that most science bloggers use pseudonyms.
[222003800030] |To an extent, I do this, though it's trivial for anyone who is checking to figure out who I am (I know, since I get emails sent to my work account from people who read the blog).
[222003800040] |This was a conscious choice, and I have my reasons.
[222003800050] |1. I suppose the main reason to choose anonymity is in case your blogging pisses off people who are in a position to hurt you.
[222003800060] |That would be mostly people in your own field.
[222003800070] |Honestly, I doubt it would take anyone in my field long to figure out what university I was at.
[222003800080] |Like anyone, I write most about the topics my friends and colleagues are discussing, and that's a function of who my friends and colleagues are.
[222003800090] |(In fact, a few years ago, someone I knew was able to guess what class I was taking, based on my blog topics.)
[222003800100] |2. I write a lot about the field, graduate school, and the job market.
[222003800110] |But within academia, every field is different.
[222003800120] |For that matter, even if you just wanted to discuss graduate student admission policy within psychology, the fact is that there is a huge amount of variation from department to department.
[222003800130] |So I can really only write about my experiences.
[222003800140] |For you to be able to use that information, you have to have a sense of what kind of school I'm at (a large, private research university) and in what field (psychology).
[222003800150] |I read a number of bloggers who write about research as an institution, about the job market, etc., but who refuse to say what field they're in.
[222003800160] |This makes it extremely difficult to know what to make of what they say.
[222003800170] |For instance, take my recent disagreement with Prodigal Academic.
[222003800180] |Prodigal and some other bloggers were discussing the fact that few people considering graduate school in science know how low the odds of getting a tenure-track job are.
[222003800190] |I suggested that actually they aren't misinformed about academia per se, but about the difference between a top-tier school and even a mid-tier school.
[222003800200] |I pointed out that at a top-tier psychology program, just about everybody who graduates goes on to get a tenure-track job.
[222003800210] |Prodigal replied that in her field, at least, that's not true (and she suspects it's not true in my field, either).
[222003800220] |The difference is that you can actually go to the websites of top psychology programs and check that I'm right.
[222003800230] |We can't do the same for Prodigal, because we have no idea what field she's in.
[222003800240] |We just have to take her word for it.
[222003800250] |3. I suspect many people choose pseudonyms because they don't want to censor what they say.
[222003800260] |They don't want to piss anybody off.
[222003800270] |I think that to maintain my anonymity, I would have to censor a great deal of what I say.
[222003800280] |For one thing, I couldn't blog about the one thing I know best: my own work.
[222003800290] |There is the risk of pissing people off.
[222003800300] |And trust me, I worry about it.
[222003800310] |But being careful about not pissing people off is probably a good thing, whether you're anonymous or not.
[222003800320] |Angry people rarely change their minds, and presumably we anger people precisely when we disagree with them.
[222003800330] |--------
[222003800340] |So why don't I actually blog under my name?
[222003800350] |I want people who Google me by name to find my academic website and my professional work first, not the blog.
[222003820010] |Tenure, a dull roar
[222003820020] |Slate ran an unfortunate, bizarre piece on tenure last week.
[222003820030] |FemaleScienceProfessor has a good take-down.
[222003820040] |Among other problems, it repeats the claim that the average tenured professor costs the average university around $11,000,000 across his/her career -- a number that is either misleading, miscalculated, or (most likely) an outright lie.
[222003820050] |But, as FemaleScienceProfessor points out, tenure itself costs next to nothing, so anyone who says eliminating tenure will save money really means cutting professor salaries will save money but doesn't want to be on the record saying so.
[222003820060] |If this seems like deja vu, it is.
[222003820070] |I just wrote a post about a similarly confused feature in the New York Times.
[222003820080] |That post is still worth reading (imho).
[222003820090] |Which raises the question of why tenure is under attack.
[222003820100] |I have two guesses: 1) it's a way of ignoring the progressive defunding of public universities, or 2) part of the broader war on science.
[222003820110] |There are possibly a few people who genuinely think tenure is a bad idea, but not because eliminating it will save money (it won't), because it'll soften the publish-or-perish ethos (yes, the claim has been made), or because it'll refocus universities on teaching (absurd, irrelevant, and beside the point).
[222003820120] |Which leaves concerns about an inflexible workforce and the occasional dead-weight professor, but that's not on my list of top ten problems in education, and I don't think it should be on anyone else's -- there are bigger fish to fry.
[222003830010] |Why is learning a language so darn hard (golden oldie)
[222003830020] |I work in a toddler language lab, where we study small children who are breezing through the process of language acquisition.
[222003830030] |They don't go to class, use note cards or anything, yet they pick up English seemingly in their sleep (see my previous post on this).
[222003830040] |Just a few years ago, I taught high school and college students (read some of my stories about it here) and the scene was completely different.
[222003830050] |They struggled to learn English.
[222003830060] |Anyone who has tried to learn a foreign language knows what I mean.
[222003830070] |Although this is well-known, it's a bit of a mystery why.
[222003830080] |It's not the case that my Chinese students didn't have the right mouth shape for English (I've heard people -- not scientists -- seriously propose this explanation before).
[222003830090] |It's also not just that you can learn only one language.
[222003830100] |There are plenty of bilinguals out there.
[222003830110] |Jesse Snedeker (my PhD adviser as of Monday) and her students recently completed a study of cross-linguistic late-adoptees -- that is, children who were adopted between the ages of 2 and 7 into a family that spoke a different language from that of the child's original home or orphanage.
[222003830120] |In this case, all the children were from China.
[222003830130] |They followed the same pattern of linguistic development -- both in terms of vocabulary and grammar -- as native English speakers and in fact learned English faster than is typical (they steadily caught up with same-age English-speaking peers).
[222003830140] |So why do we lose that ability?
[222003830150] |One model, posited by Michael Ullman at Georgetown University (full disclosure: I was once Dr. Ullman's research assistant), has to do with the underlying neural architecture of language.
[222003830160] |Dr. Ullman argues that basic language processes are divided into vocabulary and grammar (no big shock there) and that vocabulary and grammar are handled by different parts of the brain.
[222003830170] |Simplifying somewhat, vocabulary is tied to temporal lobe structures involved in declarative memory (memory for facts), while grammar is tied to procedural memory (memory for how to do things like ride a bicycle) structures including the prefrontal cortex, the basal ganglia and other areas.
[222003830180] |As you get older, as we all know, it becomes harder to learn new skills (you can't teach an old dog new tricks).
[222003830190] |That is, procedural memory slowly loses the ability to learn new things.
[222003830200] |Declarative memory stays with us well into old age, declining much more slowly (unless you get Alzheimer's or other types of dementia).
[222003830210] |Based on Dr. Ullman's model, then, you retain the ability to learn new words but have more difficulty learning new grammar.
[222003830220] |And grammar does appear to be the typical stumbling block in learning new languages.
[222003830230] |Of course, I haven't really answered my question.
[222003830240] |I just shifted it from mind to brain.
[222003830250] |The question is now: why do the procedural memory structures lose their plasticity?
[222003830260] |There are people studying the biological mechanisms of this loss, but that still doesn't answer the question we'd really like to ask, which is "why are our brains constructed this way?"
[222003830270] |After all, wouldn't it be ideal to be able to learn languages indefinitely?
[222003830280] |I once put this question to Helen Neville, a professor at the University of Oregon and expert in the neuroscience of language.
[222003830290] |I'm working off of a 4-year-old memory (and memory isn't always reliable), but her answer was something like this:
[222003830300] |Plasticity means that you can easily learn new things.
[222003830310] |The price is that you forget easily as well.
[222003830320] |For facts and words, this is a worthwhile trade-off.
[222003830330] |You need to be able to learn new facts for as long as you live.
[222003830340] |For skills, it's maybe not a worthwhile trade-off.
[222003830350] |Most of the things you need to be able to do you learn to do when you are relatively young.
[222003830360] |You don't want to forget how to ride a bicycle, how to walk, or how to put a verb into the past tense.
[222003830370] |That's the best answer I've heard.
[222003830380] |But I'd still like to be able to learn languages without having to study them.
[222003830390] |originally posted 9/12/07
[222003850010] |When is the logically impossible possible?
[222003850020] |Child's Play has posted the latest in a series of thought-provoking posts on language learning.
[222003850030] |There's much to recommend the post, and it's one of the better defenses of statistical approaches to language learning around on the Net.
[222003850040] |It would benefit from some corrections, though, and into the gap I humbly step...
[222003850050] |The post sets up a classic dichotomy:
[222003850060] |Does language “emerge” full-blown in children, guided by a hierarchy of inbuilt grammatical rules for sentence formation and comprehension?
[222003850070] |Or is language better described as a learned system of conventions—one that is grounded in statistical regularities that give the appearance of a rule-like architecture, but which belie a far more nuanced and intricate structure?
[222003850080] |It's probably obvious from the wording which one they favor.
[222003850090] |It's also, less obviously, a false dichotomy.
[222003850100] |There probably was a very strong version of Nativism that at one point looked like their description of Option #1, but very little Nativist theory I've read from the last few decades looks anything like that.
[222003850110] |Syntactic Bootstrapping and Semantic Bootstrapping are both much more nuanced (and interesting) theories.
[222003850120] |Some Cheek!
[222003850130] |Here's where the post gets cheeky:
[222003850140] |For over half a century now, many scientists have believed that the second of these possibilities is a non starter.
[222003850150] |Why?
[222003850160] |No one’s quite sure—but it might be because Chomsky told them it was impossible.
[222003850170] |Wow?
[222003850180] |You mean nobody really thought it through?
[222003850190] |That seems to be what Child's Play thinks, but it's a misrepresentation of history.
[222003850200] |There are a lot of very good reasons to favor Nativist positions (that is, ones with a great deal of built-in structure).
[222003850210] |As Child's Play discuss -- to their credit -- any language admits an infinite number of grammatical sentences, so any finite grammar will fail (they treat this as a straw-man argument, but I think historically that was once a serious theory).
[222003850220] |There are a number of other deep learning problems that face Empiricist theories (Pinker has an excellent paper on the subject from around 1980).
[222003850230] |There are deep regularities across languages -- such as linking rules -- that either are crazy coincidences or reflect innate structure.
[222003850240] |The big one, from my standpoint, is that any reasonable theory of language is going to have to have, in the adult state, a great deal of structure.
[222003850250] |That is, one wants to know why "John threw the ball AT Sally" means something different from "John threw the ball TO Sally."
[222003850260] |Or why "John gave Mary the book" and "John gave the book to Mary" mean subtly different things (if you don't see that, try substituting "the border" for "Mary").
[222003850270] |A great deal of meaning is tied up in structure, and representing structure as statistical co-occurrences doesn't obviously do the job.
[222003850280] |Unlike Child's Play, I'm not going to discount the possibility that the opposing theories can get the job done (though I'm pretty sure they can't).
[222003850290] |I'm simply pointing out that Nativism didn't emerge from a sustained period of collective mental alienation.
[222003850300] |Logically Inconsistent
[222003850310] |Here we get to the real impetus for this response, which is this extremely odd section towards the end:
[222003850320] |We only get to this absurdist conclusion because Miller & Chomsky’s argument mistakes philosophical logic for science (which is, of course, exactly what intelligent design does).
[222003850330] |So what’s the difference between philosophical logic and science?
[222003850340] |Here’s the answer, in Einstein’s words, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”
[222003850350] |In context, this means something like "Just because our theories have been shown to be logically impossible doesn't mean they are impossible."
[222003850360] |I've seen similar arguments before, and all I can say each time is:
[222003850370] |Huh?
[222003850380] |That is, they clearly understand logic quite differently from me.
[222003850390] |If something is logically impossible, it is impossible.
[222003850400] |2 + 2 = 100 is logically impossible, and no amount of experimenting is going to prove otherwise.
[222003850410] |The only way a logical proof can be wrong is if (a) your assumptions were wrong, or (b) your reasoning was faulty.
[222003850420] |For instance, the above math problem is actually correct if the answer is written in base 2.
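To spell that out (a trivial check, and my own aside rather than anything from the original post): the string "100" read as a base-2 numeral denotes four.

```python
# "100" interpreted as a base-2 numeral is the number four, so the equation holds
# once the base-10 assumption is dropped.
assert int("100", 2) == 2 + 2
```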
[222003850430] |In general, one usually runs across this type of argument when there is a logical argument against a researcher's pet theory, and said researcher can't find a flaw with the argument.
[222003850440] |They simply say, "I'm taking a logic holiday."
[222003850450] |I'd understand saying, "I'm not sure what the flaw in this argument is, though I'm pretty sure there is one."
[222003850460] |It wouldn't be convincing (or worth publishing), but I can see that.
[222003850470] |Simply saying, "I've decided not to believe in logic because I don't like what it's telling me" is quite another thing.
[222003870010] |Science, Grime and Republicans
[222003870020] |Every time I go to Russia, the first thing I notice is the air.
[222003870030] |I would say it's like sucking on a car's exhaust pipe, but -- and this is key to my story -- the air in American exhaust pipes is actually relatively fresh.
[222003870040] |You have to imagine black soot spewing forth from a grimy, corroded pipe.
[222003870050] |Pucker up.
[222003870060] |[That's the first thing I notice, unless I'm in St Petersburg -- In many parts of Petersburg the smell of urine overwhelms the industrial pollution.
[222003870070] |And I say this as someone who loves Petersburg.]
[222003870080] |So whenever I read that regulations are strangling business, I think of Russia.
[222003870090] |The trash everywhere.
[222003870100] |My friends, living in a second-floor apartment, complaining about how the grime that comes in through the window (they can't afford air conditioning) turns everything in the apartment grey.
[222003870110] |Gulping down breaths of sandpaper.
[222003870120] |The hell-hole that oil extraction has made of Sakhalin.
[222003870130] |Seriously, I don't know why more post-apocalyptic movies aren't shot in Sakhalin.
[222003870140] |Neither words nor pictures can describe the remnants of clear-cut, burnt-over forest -- looking at it, not knowing how long it's been like that, since such forests (I'm told) will almost certainly never grow back.
[222003870150] |It's something everybody should see once.
[222003870160] |At least Russia has a great economy, thanks to deregulation.
[222003870170] |Or not.
[222003870180] |New Russians, of course, live quite well, but most people I know (college-educated middle class) are, by American standards, dirt poor.
[222003870190] |And even New Russians have to breathe that shitty, shitty air.
[222003870200] |Reality
[222003870210] |Listening to people complain that environmental regulation is too costly and largely without value, you'd be forgiven for thinking such places didn't exist.
[222003870220] |You might believe that places without environmental regulations are healthy, wealthy and wise, rather than, for the most part, impoverished and with lousy air and water.
[222003870230] |This is the problem with the modern conservative movement in the US, and why I'm writing this post in a science blog.
[222003870240] |Some time ago, conservatives had a number of ideas that seemed plausible.
[222003870250] |It turns out, many of them were completely wrong.
[222003870260] |The brightest of the bunch abandoned these thoroughly-discredited ideas and moved on to new ones.
[222003870270] |Others, forced to choose between reality and their priors, chose the priors.
[222003870280] |The most famous articulation of this position comes from an anonymous Bush aide, quoted by Ron Suskind:
[222003870290] |The aide said that guys like me were "in what we call the reality-based community," which he defined as people who "believe that solutions emerge from your judicious study of discernible reality." ...
[222003870300] |"That's not the way the world really works anymore," he continued.
[222003870310] |"We're an empire now, and when we act, we create our own reality.
[222003870320] |And while you're studying that reality—judiciously, as you will—we'll act again, creating other new realities, which you can study too, and that's how things will sort out.
[222003870330] |We're history's actors…and you, all of you, will be left to just study what we do."
[222003870340] |Even More Reality
[222003870350] |It doesn't stop there.
[222003870360] |Discretionary government spending, one hears, is the cause of our deficits, despite the fact that the deficit is larger than all discretionary government spending.
[222003870370] |Tax breaks for the rich stimulate the economy, whereas infrastructure improvements are useless.
[222003870380] |Paul Krugman's blog is one long chronicle of absurd economic fantasy coming from the Right.
[222003870390] |Gay marriage harms traditional marriage -- despite the fact that places where gay marriage and civil unions exist (e.g., New England) tend to have lower divorce rates and lower out-of-wedlock birth rates.
[222003870400] |European-style medicine is to be avoided at all costs, despite the fact that the European medical system costs less and delivers better results than the American system.
[222003870410] |Global warming.
[222003870420] |Evolution.
[222003870430] |And so on.
[222003870440] |A Strong Opposition
[222003870450] |I actually strongly believe in the value of a vibrant, healthy opposition.
[222003870460] |In my work, I prefer collaborators with whom I don't agree, on the belief that this tension ultimately leads to better work.
[222003870470] |Group-think is a real concern.
[222003870480] |There may be actual reasons to avoid a particular environmental regulation, European-style health care, a larger stimulus bill, etc. -- but to the extent that those reasons are based on empirical claims, the claims should actually be right.
[222003870490] |You don't get to just invent facts.
[222003870500] |So in theory, I could vote for a good Republican.
[222003870510] |But even if there were one running for office now -- and I don't think there are any -- they'd still caucus with the self-destructive nutters that make up most of the modern party.
[222003870520] |This is not to say Democrats have no empirical blind spots (they seem to be just as likely to believe that nonsense about vaccines and Autism, for instance), but on the whole, Democrats believe in reality.
[222003870530] |More to the point, most (top) scientists and researchers are Democrats, which has to influence the party (no data here, but I have yet to meet a Republican scientist, so they can't be that common).
[222003870540] |So if you believe in reality, if you believe in doing what works rather than what doesn't, if you care at all about the future of our country, and if you are eligible to vote in the US elections this Fall, vote for the Democrat (or Left-leaning independent, etc., if there's one with a viable chance of winning).
[222003880010] |Help Games with Words get a job!
[222003880020] |As job application season comes around, I'm trying to move some work over from the "in prep" and "under revision" columns to the "submitted" column (which is why I'm working on a Sunday).
[222003880030] |There is one old project that's just waiting for more data before resubmission.
[222003880040] |I've already put up calls here for readers to participate, so you've probably participated.
[222003880050] |But if anyone is willing to pass on this call for participation to their friends, it would be much appreciated.
[222003880060] |I personally think this is the most entertaining study I've run online, but for whatever reason it's never attracted the same amount of traffic as the others, so progress has been slow.
[222003880070] |You can find the experiment (The Video Test) here.
[222003890010] |Shilling
[222003890020] |I recently received the following invitation:
[222003890030] |Hi my name is [redacted] and I’m a blog spotter.
[222003890040] |I basically scour popular blogs in an effort to find great writers.
[222003890050] |I loved your post on Science, Grime and Republicans, nice job!
[222003890060] |I’d like to get straight to the point.
[222003890070] |Our client wants people like you to sponsor their products and will pay you to do so.
[222003890080] |They’ve launched an educational product on September 7th that teaches others how to make money on the internet by using Facebook and Social Media.
[222003890090] |We want to pay you for recommending that product to your loyal blog readers and we will pay you up to $200 for each person that you refer.
[222003890100] |If you make just one sale a day you’re looking at making around $6000 per month.
[222003890110] |All you need to do is create a few blog posts that recommend this product.
[222003890120] |You may also use one of our nice banners and place it on your blog.
[222003890130] |Rumor would suggest that a fair number of bloggers do strike such bargains (no idea about the proportion).
[222003890140] |So just in case any of my loyal readers are wondering, I will never take money to recommend a product.
[222003890150] |If I recommend a product, it's because I like it.
[222003900010] |Sorry, Sharing My Data is Illegal
[222003900020] |I recently got back from collecting data in Russia.
[222003900030] |This particular study brought into focus for me the issues involved in making experimental data public.
[222003900040] |In this study, I videotape people as they listen to stories, look at pictures, and answer questions about the stories.
[222003900050] |The videotape is key, since what I'm actually analyzing is the participants' eye-gaze direction during different parts of the stories (this can be used to partially determine what the participants were thinking at different points in time).
[222003900060] |Sharing raw data would mean sharing the videos...which I can't do.
[222003900070] |These videos are confidential, and there's no easy way of making them anonymous, since they are close-up videos of people's faces.
[222003900080] |I could ask participants to sign a waiver allowing me to put up their videos on the Internet, but I suspect most of my participants would just refuse to participate.
[222003900090] |Many were concerned enough about the video as was.
[222003900100] |Now, I could share the semi-processed data -- that is, not the videos themselves but the information gleaned from them.
[222003900110] |I already discussed some of the problems with that, namely that getting the data into a format that's easy for someone else to analyze is extremely time-consuming.
[222003900120] |This isn't an issue with just one study -- more than half the studies I run are eye-tracking studies.
[222003900130] |Many of the rest are EEG studies, which can produce several gigabytes of data each, making it simply impractical to share the data (plus, when dealing with brain data, anonymity is even more of a concern).
[222003900140] |I do some kid studies where I simply write down participants' responses, but if your goal were to check that I'm recording my data correctly, that wouldn't help -- what you'd want are tapes of the experiments, but good luck convincing the ethics board to allow me to post videos of young children participating in experiments on the Internet.
[222003900150] |[Those are my laboratory studies.
[222003900160] |Data from my Web-based studies is actually relatively easy to share -- though you'd have to be proficient in ActionScript to understand it.]
[222003900170] |Certainly, there are many behavioral researchers that wouldn't have this problem.
[222003900180] |But there are many who would.
[222003900190] |Mandating that everyone make their data publicly available would mean that many kinds of experiments simply couldn't be done anymore.
[222003910010] |Thank you, Oprah
[222003910020] |Oprah's magazine linked to my collaborator's web-based lab.
[222003910030] |I'm a little miffed at the lack of link love, but I still got something out of it -- we now have over 20,000 participants in the experiment we've been running on her site.
[222003910040] |So thank you, Oprah.
[222003910050] |Busy analyzing...
[222003920010] |Games and Words
[222003920020] |I diligently tag posts on this blog, not because I actually think anybody clicks in the cloud to find specific types of posts, but because it's interesting to see, over time, what I usually write about.
[222003920030] |There's another way of doing this.
[222003920040] |Wordle.net will allow you to input the feed for a blog, and it will extract the most common words from the most recent posts.
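For the curious, what a word cloud like this boils down to is a word-frequency count over recent posts. A rough sketch, with the feed-fetching and parsing omitted; the stopword list and the example output are made up for illustration.

```python
# Rough sketch of what Wordle-style word clouds compute: word frequencies.
# Fetching and parsing the blog's ATOM feed is omitted; `posts` is just a list
# of post texts, however you obtained them.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "that", "it"}

def top_words(posts, n=20):
    words = []
    for text in posts:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

# top_words(recent_posts)  ->  [("data", 37), ("studies", 21), ...]  (illustrative only)
```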
[222003920050] |I'm gratified to see that the most common word in this blog is "data," followed by "studies," "participants" and "blog".
[222003920060] |The high frequency of "blog" and the URL for this site are a byproduct of my ATOM feed, which lists the URL of the blog after every post.
[222003920070] |Unfortunately, the restriction of Wordle.net to the most recent posts means that some words are over-weighted.
[222003920080] |For instance, my recent post about shilling for products mentioned the word "product" enough times to make that word prominent in this word cloud.
[222003930010] |Wait -- Jonah Lehrer Wants Reading to be Harder?
[222003930020] |Recently Jonah Lehrer, now at Wired, wrote an ode to books, titled The Future of Reading.
[222003930030] |Many people are sad to see the slow replacement of physical books by e-readers -- though probably not those who have lugged 50 pounds of books in a backpack across Siberia, but that's a different story.
[222003930040] |The take-home message appears 2/3 of the way down:
[222003930050] |So here’s my wish for e-readers.
[222003930060] |I’d love them to include a feature that allows us to undo their ease, to make the act of reading just a little bit more difficult.
[222003930070] |Perhaps we need to alter the fonts, or reduce the contrast, or invert the monochrome color scheme.
[222003930080] |Our eyes will need to struggle, and we’ll certainly read slower, but that’s the point: Only then will we process the text a little less unconsciously, with less reliance on the ventral pathway.
[222003930090] |We won’t just scan the words – we will contemplate their meaning.
[222003930100] |As someone whose to-read list grows several times faster than I can get through it, I've never wished to read more slowly.
[222003930110] |But Lehrer is a science writer, and (he thinks) there's more to this argument than just aesthetics.
[222003930120] |As far as I can tell, though, it's based on a profound misunderstanding of the science.
[222003930130] |Since he manages to get through the entire post without ever citing a specific experiment, it's hard to tell for sure, but here's what I've managed to piece together.
[222003930140] |Reading Research
Here's Lehrer:
[222003930150] |Let me explain.
[222003930160] |Stanislas Dehaene, a neuroscientist at the College de France in Paris, has helped illuminate the neural anatomy of reading.
[222003930170] |It turns out that the literate brain contains two distinct pathways for making sense of words, which are activated in different contexts.
[222003930180] |One pathway is known as the ventral route, and it’s direct and efficient, accounting for the vast majority of our reading.
[222003930190] |The process goes like this: We see a group of letters, convert those letters into a word, and then directly grasp the word’s semantic meaning.
[222003930200] |According to Dehaene, this ventral pathway is turned on by “routinized, familiar passages” of prose, and relies on a bit of cortex known as visual word form area (VWFA).
[222003930210] |So far, so good.
[222003930220] |Dehaene is a brilliant researcher who has had an enormous effect on several areas of cognition (I'm more familiar with his work on number).
[222003930230] |I'm a bit out-of-date on reading research (and remember Lehrer doesn't actually cite anything to back up his argument), but this looks like an updated version of the old distinction between whole-word reading and real-time composition.
[222003930240] |That is, it goes without saying that you must "sound out" novel words that you've never encountered before, such as gafrumpenznout.
[222003930250] |However, it seems that as you become more familiar with a particular word (maybe Gafrumpenznout is your last name), you can recognize the word quickly without sounding it out.
[222003930260] |Here's the abstract from a relevant 2008 Dehaene group paper:
[222003930270] |Fast, parallel word recognition, in expert readers, relies on sectors of the left ventral occipito-temporal pathway collectively known as the visual word form area.
[222003930280] |This expertise is thought to arise from perceptual learning mechanisms that extract informative features from the input strings.
[222003930290] |The perceptual expertise hypothesis leads to two predictions: (1) parallel word recognition, based on the ventral visual system, should be limited to words displayed in a familiar format (foveal horizontal words with normally spaced letters); (2) words displayed in formats outside this field of expertise should be read serially, under supervision of dorsal parietal attention systems.
[222003930300] |We presented adult readers with words that were progressively degraded in three different ways (word rotation, letter spacing, and displacement to the visual periphery).
[222003930310] |When the words were degraded in these various ways, participants had a harder time reading and recruited different parts of the brain.
[222003930320] |A (slightly) more general public-friendly version of this story appears in this earlier paper.
[222003930330] |This appears to be the paper that Lehrer is referring to, since he says that Dehaene, in experiments, activates the dorsal pathways "in a variety of ways, such as rotating the letters or filling the prose with errant punctuation."
[222003930340] |And the Vision Science Behind It
[222003930350] |This work makes a lot of sense, given what we know about vision.
[222003930360] |Visual objects -- such as letters -- "crowd" each other.
[222003930370] |In other words, when there are several that are close together, it's hard to see any of them.
[222003930380] |This effect is worse in peripheral vision.
[222003930390] |Therefore, to see all the letters in a long-ish word, you may need to fixate on multiple parts of the word.
[222003930400] |However, orthography is heavily redundant.
[222003930410] |One good demonstration of this is rmvng ll th vwls frm sntnc.
[222003930420] |You can still read with some of the letters missing (and of course some languages, like Hebrew, never print vowels).
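Here is a toy version of that vowel-stripping demonstration, just to make the redundancy point concrete:

```python
# Toy illustration of orthographic redundancy: strip the vowels and the
# sentence is usually still readable.
def remove_vowels(sentence: str) -> str:
    return "".join(ch for ch in sentence if ch.lower() not in "aeiou")

print(remove_vowels("removing all the vowels from a sentence"))
# -> "rmvng ll th vwls frm  sntnc"   (note the leftover double space where "a" was)
```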
[222003930430] |Moreover, sentence context can help you guess what a particular word is.
[222003930440] |So if you're reading a familiar word in a familiar context, you may not need to see all the letters well in order to identify it.
[222003930450] |The less certain you are of what the word is, the more carefully you'll have to look at it.
[222003930460] |The Error
[222003930470] |So far, this research appears to be about visual identification of familiar objects.
[222003930480] |Lehrer makes a big leap, though:
[222003930490] |When you are reading a straightforward sentence, or a paragraph full of tropes and cliches, you’re almost certainly relying on this ventral neural highway.
[222003930500] |As a result, the act of reading seems effortless and easy.
[222003930510] |We don’t have to think about the words on the page ...
[222003930520] |Dehaene’s research demonstrates that even fluent adults are still forced to occasionally make sense of texts.
[222003930530] |We’re suddenly conscious of the words on the page; the automatic act has lost its automaticity.
[222003930540] |This suggests that the act of reading observes a gradient of awareness.
[222003930550] |Familiar sentences printed in Helvetica and rendered on lucid e-ink screens are read quickly and effortlessly.
[222003930560] |Meanwhile, unusual sentences with complex clauses and smudged ink tend to require more conscious effort, which leads to more activation in the dorsal pathway.
[222003930570] |All the extra work – the slight cognitive frisson of having to decipher the words – wakes us up.
[222003930580] |It's based on this that he argues that e-readers should make it harder to read, because then we'd pay more attention to what we're reading.
[222003930590] |The problem is that he seems to have confused the effort expended in recognizing the visual form of a word -- the focus of Dehaene's work -- with effort expended in interpreting the meaning of the sentence.
[222003930600] |Moreover, he seems to think that the harder it is to understand something, the more we'll understand it -- which seems backwards to me.
[222003930610] |Now it is true that the more deeply we process something the better we remember it, but it's not clear that making something hard to see necessarily means we process it more deeply.
[222003930620] |In any case, we'd want some evidence that this is so, which Lehrer doesn't cite.
[222003930630] |Which brings me back to citation.
[222003930640] |Dehaene did just publish a book on reading, which I haven't read because it's (a) long, and (b) not available on the Internet.
[222003930650] |Maybe Dehaene makes the claim that Lehrer is attributing to him in that book.
[222003930660] |Maybe there's even evidence to back that claim up.
[222003930670] |As far as I can tell, that work wasn't done by Dehaene (as Lehrer implies) since I can't find it on Dehaene's website.
[222003930680] |Though maybe it's there under a non-obvious title (Dehaene publishes a lot!).
[222003930690] |This would be solved if Lehrer would cite his sources.
[222003930700] |Caveat
[222003930710] |I like Lehrer's writing, and I've enjoyed the few interactions I've had with him.
[222003930720] |I think occasional (frequent?) confusion is a necessary hazard of being a science writer.
[222003930730] |I have only a very small number of topics I feel I understand well enough to write about competently.
[222003930740] |Lehrer, by profession, must write about a very wide range of topics, and it's not humanly possible to understand many of them very well.
[222003930750] |________________ Dehaene, S., Cohen, L., Sigman, M., & Vinckier, F. (2005).
[222003930760] |The neural code for written words: a proposal. Trends in Cognitive Sciences, 9(7), 335-341. DOI: 10.1016/j.tics.2005.05.004
[222003930770] |Cohen, L., Dehaene, S., Vinckier, F., Jobert, A., & Montavont, A. (2008).
[222003930780] |Reading normal and degraded words: contribution of the dorsal and ventral visual pathways.
[222003930790] |NeuroImage, 40(1), 353-366. PMID: 18182174
[222003940010] |Intelligent Nihilism
[222003940020] |The latest issue of Cognitive Science, which is rapidly becoming one of my favorite journals, carries an interesting and informative debate on the nature of language, thought, cognition and learning, between John Hummel at University of Illinois-Urbana-Champaign, and Michael Ramscar, at Stanford University.
[222003940030] |This exchange of papers highlights what I think is the current empirical standstill between two very different world-views.
[222003940040] |Hummel takes up the cause of "traditional" models, on which thought and language are deeply symbolic and involve algebraic rules.
[222003940050] |Ramscar defends more "recent" alternative models that are built on associative learning -- essentially, an update on the story that was traditional before the symbolic models.
[222003940060] |Limitations of Relational Systems
[222003940070] |The key to Hummel's argument, I think, is his focus on explicitly relational systems:
[222003940080] |John can love Mary, or be taller than Mary, or be the father of Mary, or all of the above.
[222003940090] |The vocabulary of relations in a symbol system is open-ended ... and relations can take other relations as arguments (e.g., Mary knows John loves her).
[222003940100] |More importantly, not only can John love Mary, but Sally can love Mary, too, and in both cases it is the very same "love" relation ...
[222003940110] |The Mary that John loves can be the very same Mary that is loved by Sally.
[222003940120] |This capacity for dynamic recombination is at the heart of a symbolic representation and is not enjoyed by nonsymbolic representations.
[222003940130] |That is, language has many predicates (e.g., verbs) that seem to allow arbitrary arguments.
[222003940140] |So talking about the meaning of love is really talking about the meaning of X loves Y: X has a particular type of emotional attachment to Y. You're allowed to fill in "X" and "Y" more or less how you want, which is what makes them symbols.
[222003940150] |Hummel argues that language is even more symbolic than that: not only do we need symbols to refer to arguments (John, Mary, Sally), but we also need symbols to refer to predicates as well.
[222003940160] |We can talk about love, which is itself a relation between two arguments.
[222003940170] |Similarly, we can talk about friendship, which is an abstract relation.
[222003940180] |This is a little slippery if you're new to the study of logic, but doing this requires a second-order logic, which has a number of formal properties.
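To make the first-order/second-order contrast concrete, here is a toy sketch of my own (not Hummel's formalism): relations modeled as ordinary functions, plus a second-order predicate that takes relations themselves as arguments.

```python
# Minimal illustration of "relations as arguments," not anyone's actual
# cognitive model. Individuals and relations are deliberately toy-sized.

def loves(x, y):
    return (x, y) in {("John", "Mary"), ("Sally", "Mary")}

def taller_than(x, y):
    heights = {"John": 180, "Mary": 165}
    return heights[x] > heights[y]

# First-order: fill the argument slots of a fixed relation.
print(loves("John", "Mary"))    # True
print(loves("Sally", "Mary"))   # True -- the very same 'loves' relation, new argument

# Second-order: the relation itself becomes an argument, so we can state
# properties *of* relations (here, symmetry over a set of individuals).
def symmetric(relation, individuals):
    return all(relation(a, b) == relation(b, a)
               for a in individuals for b in individuals)

print(symmetric(loves, ["John", "Mary", "Sally"]))   # False: Mary doesn't love back
print(symmetric(taller_than, ["John", "Mary"]))      # False: taller-than is asymmetric
```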
[222003940190] |Where Hummel wants to go with this is that associationist theories, like Ramscar's, can't represent second-order logical systems (and probably aren't even up to the task of the types of first-order systems we might want).
[222003940200] |Intuitively, this is because associationist theories represent similarities between objects (or at least how often two things occur together), and it's not clear how they would represent dissimilarities, much less represent the concept of dissimilarity:
[222003940210] |John can be taller than Mary, a beer bottle taller than a beer can, and an apartment building is taller than a house.
[222003940220] |But in what sense, other than being taller than something, is John like a beer bottle or an apartment building?
[222003940230] |Making matters worse, Mary is taller than the beer bottle and the house is taller than John.
[222003940240] |Precisely because of their promiscuity, relational concepts defy learning in terms of simple associative co-occurrences.
[222003940250] |It's not clear in these quotes, but there's a lot of math to back this stuff up: second-order logic systems are extremely powerful and can do lots of useful stuff.
[222003940260] |Less powerful computational systems simply can't do as much.
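To make the contrast concrete, here's a minimal sketch (my illustration, not anything from Hummel's paper) of what dynamic recombination buys you: the same loves relation can be bound to arbitrary fillers, a relation can itself serve as an argument, and a bare co-occurrence count throws that structure away.

```python
from collections import Counter
from typing import NamedTuple

# A symbolic proposition: one reusable relation symbol plus bound arguments.
class Prop(NamedTuple):
    relation: str
    args: tuple

john_loves_mary = Prop("loves", ("John", "Mary"))
sally_loves_mary = Prop("loves", ("Sally", "Mary"))       # same "loves", same "Mary"
mary_knows_it = Prop("knows", ("Mary", john_loves_mary))  # a relation as an argument

assert john_loves_mary.relation == sally_loves_mary.relation
assert john_loves_mary.args[1] == sally_loves_mary.args[1]

# An associative caricature: only how often two items occur together, no roles.
cooccurrence = Counter()
for prop in (john_loves_mary, sally_loves_mary):
    a, b = prop.args
    cooccurrence[frozenset((a, b))] += 1

# The counts can't distinguish "John loves Mary" from "Mary loves John",
# and there is no object here that "knows" could take as an argument.
print(cooccurrence)
```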
[222003940270] |The Response
[222003940280] |Ramscar's response is not to deny the mathematical truths Hummel is invoking.
[222003940290] |Yes, associationist models can't capture all that symbolic systems can do, he concedes, but language is not a symbolic system:
[222003940300] |We think that mapping natural language expressions onto the promiscuous relations Hummel describes is harder than he does.
[222003940310] |Far harder: We think you cannot do it.
[222003940320] |Ramscar identifies a couple of old problems: one is polysemy, the fact that words have multiple meanings (John can both love Mary and love a good argument, but probably not in the same way).
[222003940330] |Fair enough -- nobody has a fully working explanation of polysemy.
[222003940340] |The other problem is the way in which the symbols themselves are defined.
[222003940350] |You might define DOG in terms of ANIMAL, PET, FOUR-LEGGED, etc.
[222003940360] |Then those symbols also have to be defined in terms of other symbols (e.g., FOUR-LEGGED has to be defined in terms of FOUR and LEG).
[222003940370] |Ramscar calls this the turtles-all-the-way-down argument.
[222003940380] |This is fair in the sense that nobody has fully worked out a symbolic system that explains all of language and thought.
[222003940390] |It's unfair in that Ramscar doesn't have all the details of his own theory worked out, either, and his model is every bit as turtles-all-the-way-down.
[222003940400] |Specifically, concepts are defined in terms of cooccurrences of features (a dog is a unique pattern of co-occurring tails, canine teeth, etc.).
[222003940410] |Either those features are themselves symbols, or they are always patterns of co-occurring features (tail = co-occurrence of fur, flexibility, cylindrical shape, etc.), which are themselves patterns of other feature co-occurrences, etc.
[222003940420] |(It's also unfair in that he's criticizing a very old symbolic theory; there are newer, possibly better ones around, too.)
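For what it's worth, here's a toy illustration (mine, not from either paper) of the turtles-all-the-way-down worry: whether the definitions are symbolic features or co-occurring features, unpacking a concept either bottoms out in undefined primitives or never bottoms out at all.

```python
# A toy lexicon in which each concept is defined in terms of other concepts.
# Read it either as symbolic definitions (DOG = ANIMAL + PET + FOUR-LEGGED)
# or as associative ones (dog = co-occurring tail, canine teeth, fur, ...).
definitions = {
    "DOG": ["ANIMAL", "PET", "FOUR-LEGGED"],
    "FOUR-LEGGED": ["FOUR", "LEG"],
    "PET": ["ANIMAL", "DOMESTICATED"],
    # Anything missing from this dict is an undefined primitive -- or a turtle.
}

def unpack(concept, depth=0, max_depth=4):
    """Recursively expand a concept until we hit something with no definition."""
    print("  " * depth + concept)
    if depth < max_depth:
        for part in definitions.get(concept, []):
            unpack(part, depth + 1, max_depth)

unpack("DOG")
```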
[222003940430] |Implicit in his argument is the following: anything that symbolic systems can do and associationist systems can't is something that humans can't do either.
[222003940440] |He doesn't address this directly, but presumably this means that we don't represent abstract concepts such as taller than or friendship, or, if we do, it's via a method very different from formal logic (what that would be is left unspecified).
[222003940450] |It's A Matter of Style
[222003940460] |Here's what I think is going on: symbolic computational systems are extremely powerful and can do lots of fancy things (like second-order logics).
[222003940470] |If human brains instantiate symbolic systems, that would explain very nicely lots of the fancy things we can do.
[222003940480] |However, we don't really have any sense of how neurons could instantiate symbols, or even if it's possible.
[222003940490] |So if you believe in symbolic computation, you're basically betting that neurons can do more than it seems.
[222003940500] |Associationist systems face the opposite problem: we know a lot about associative learning in neurons, so this seems like an architecture that could be instantiated in the brain.
[222003940510] |The problem is that associative learning is an extremely underpowered learning system.
[222003940520] |So if you like associationist systems, you're betting that humans can't actually do many of the things that some of us think humans can do.
[222003940530] |Over at Child's Play, Dye claimed that the argument in favor of Universal Grammar was a form of Intelligent Design: we don't know how that could be learned or evolve, so it must be innate/created.
[222003940540] |I'll return the favor by labeling Ramscar's argument Intelligent Nihilism: we don't know how the brain could give rise to a particular type of behavior, so humans must not be capable of it.
[222003940550] |The point I want to make is that we don't have the data to choose between these options.
[222003940560] |You do have to work within a framework if you want to do research, though, and so you pick the framework that strikes you as most plausible.
[222003940570] |Personally, I like symbolic systems.
[222003940580] |---------- John E. Hummel (2010).
[222003940590] |Symbolic versus associative learning. Cognitive Science, 34, 958-965.
[222003940600] |Michael Ramscar (2010).
[222003940610] |Computing machinery and understanding. Cognitive Science, 34, 966-971.
[222003940620] |photos: Anirudh Koul (jumping), wwarby (turtles), kaptain kobold (Darwin)
[222003950010] |Cloudier
[222003950020] |I wasn't able to get Edward's suggestion to work, but I did sit down and paste all my posts back to 4/27/2010 into Wordle.
[222003950030] |In this more representative sample, it seems "people" actually beats out "word" and "language," though the combination of "word" and "words" would probably win if Wordle could handle inflectional morphology.
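For the curious, the morphology point is easy to check for yourself. Below is a rough sketch (not what Wordle actually does internally) that counts word tokens in a pile of post text and, optionally, folds a simple plural like "words" onto "word" with a crude stemming rule.

```python
import re
from collections import Counter

def word_counts(text, fold_plurals=False):
    """Count word tokens; optionally fold simple -s plurals onto their stems."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not fold_plurals:
        return Counter(tokens)
    vocab = set(tokens)
    counts = Counter()
    for w in tokens:
        # Crude rule: treat "words" as "word" only if the bare stem also occurs.
        stem = w[:-1] if w.endswith("s") and w[:-1] in vocab else w
        counts[stem] += 1
    return counts

# Toy stand-in for the concatenated blog posts.
posts = "people word words people language word people words language"
print(word_counts(posts).most_common(3))
print(word_counts(posts, fold_plurals=True).most_common(3))
```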
[222003960010] |Negative Evidence: Still Missing after all these Years
[222003960020] |My pen-pal Melodye has posted a thought-provoking piece at Child's Play on negative evidence.
[222003960030] |As she rightly points out, issues of negative evidence have played a crucial role in the development of theories of language acquisition.
[222003960040] |But she doesn't think that's a good thing.
[222003960050] |Rather, it's "ridiculous, [sic] and belies a complete lack of understanding of basic human learning mechanisms."
[222003960060] |The argument over negative evidence, as presented by Melodye, is ridiculous, but that seems to stem from (a) conflating two different types of negative evidence, and (b) misunderstanding what the argument was about.
[222003960070] |Fig. 1. Melodye notes that rats can learn from negative evidence, so why can't humans?
[222003960080] |We'll see why.
[222003960090] |Here's Melodye's characterization of the negative evidence argument:
[222003960100] |[T]he argument is that because children early on make grammatical ‘mistakes’ in their speech (e.g., saying ‘mouses’ instead of ‘mice’ or ‘go-ed’ instead of ‘went’), and because they do not receive much in the way of corrective feedback from their parents (apparently no parent ever says “No, Johnny, for the last time it’s MICE”), it must therefore be impossible to explain how children ever learn to correct these errors.
[222003960110] |How -- ask the psychologists -- could little Johnny ever possibly ‘unlearn’ these mistakes?
[222003960120] |This supposed puzzle is taken by many in developmental psychology to be one of a suite of arguments that have effectively disproved the idea that language can be learned without an innate grammar.
[222003960130] |What's the alternative?
[222003960140] |Children are predicting what word is going to come up in a sentence.
[222003960150] |[I]f the child is expecting ‘mouses’ or ‘gooses,’ her expectations will be violated every time she hears ‘mice’ and ‘geese’ instead.
[222003960160] |And clearly that will happen a lot.
[222003960170] |Over time, this will so weaken her expectation of ‘mouses’ and ‘gooses,’ that she will stop producing these kinds of words in context.
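Here's a bare-bones sketch of the kind of error-driven updating that story describes (my own illustration, not Melodye's model): a simple delta-rule learner whose expectation of 'mouses' is pushed down every time the form it actually hears is 'mice'.

```python
# A delta-rule learner: each time a plural of "mouse" is heard, the error
# signal pushes the heard form up and the expected-but-absent form down.
RATE = 0.1
weights = {"mice": 0.0, "mouses": 0.3}   # start with some belief in "mouses"

def hear(form):
    for candidate in weights:
        target = 1.0 if candidate == form else 0.0
        weights[candidate] += RATE * (target - weights[candidate])

for _ in range(30):          # thirty encounters with the correct form
    hear("mice")

print({form: round(w, 2) for form, w in weights.items()})
# "mice" climbs toward 1.0 while "mouses" decays toward 0.0 -- no explicit
# correction from a parent required.
```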
[222003960180] |I can't speak for every Nativist, or for everyone who has studied over-regularization, but since Melodye cites Pinker extensively and specifically, and since I've worked on over-regularization within the Pinkerian tradition, I think I can reasonably speak for at least a variant of Pinkerianism.
[222003960190] |And I think Melodye actually agrees with us almost 100%.
[222003960200] |My understanding of Pinker's Words and Rules account -- and recall that I published work on this theory with one of Pinker's students, so I think my understanding is well-founded -- is that children originally over-regularize the plural of mouse as mouses, but eventually learn that mice is the plural of mouse by hearing mice a lot.
[222003960210] |That is, our account is almost identical to Melodye's except it doesn't include predictive processing.
[222003960220] |I actually agree that if children are predicting mouses and hear mice, that should make it easier to correct their mistaken over-regularization.
[222003960230] |But the essential story is the same.
[222003960240] |Where I've usually seen Nativists bring up this particular negative evidence argument (and remember there's another) is in the context of Behaviorism, on which rats (and humans) learned through being explicitly rewarded for doing the right thing and explicitly punished for doing the wrong thing.
[222003960250] |The fact that children learning language are almost never corrected (as Melodye notes) is evidence against that very particular type of Empiricist theory.
[222003960260] |That is, we don't (and to my knowledge, never have) argued that children can only learn the word mice through Universal Grammar.
[222003960270] |Again, it's possible (likely?) that someone has made that argument.
[222003960280] |But not us.[1]
[222003960290] |Negative Evidence #2
[222003960300] |There is a deeper problem with negative evidence that does implicate, if not Universal Grammar, at least generative grammars.
[222003960310] |That is, as Pinker notes in the article cited by Melodye, children generalize some things and not others.
[222003960320] |Compare:
[222003960330] |(1) John sent the package to Mary.
[222003960340] |(2) John sent Mary the package.
[222003960350] |(3) John sent the package to the border.
[222003960360] |(4) *John sent the border the package.
[222003960370] |That * means that (4) is ungrammatical, or at least most people find it ungrammatical.
[222003960380] |Now, on a word-prediction theory that tracks only surface statistics (the forms of words, not their meaning or syntactic structure), you'd probably have to argue that whenever children have heard discussions of packages being sent to Mary, they've heard either (1) or (2), but in discussions of sending packages to borders, they've only ever heard (3) and never (4).
[222003960390] |That absence is surprising (the child keeps expecting (4) but never hears it), and thus they've learned that (4) is no good.
[222003960400] |The simplest version of this theory won't work, though.
[222003960410] |Consider the sentences below (where Gazeidenfrump and Bleizendorf are people's names, the dax is an object, and a dacha is a kind of house used in Russia), none of which children (or you) have presumably ever heard:
[222003960420] |(5) Gazeidenfrump sent the dax to Bleizendorf.
[222003960430] |(6) Gazeidenfrump sent Bleizendorf the dax.
[222003960440] |(7) Gazeidenfrump sent the dax to the dacha.
[222003960450] |(8) *Gazeidenfrump sent the dacha the dax.
[222003960460] |Since we've heard (and expected) sentence (8) just as many times as we've heard/expected (5-7), failures of prediction can't explain why we know (8) is bad but (5-7) aren't. (BTW If you don't like my examples, there are many, many more in the literature; these are the best I can think of off the top of my head.)
[222003960470] |So we can't be tracking just the words themselves, but something more abstract.
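To put the same point as bluntly as possible, here's a toy sketch (mine) of what a purely surface-level learner has to go on for sentences (5)-(8): counts of how often each string has been heard. All four counts are zero, so raw counts alone give no reason to single out (8).

```python
# A toy "corpus" of sentences the child has actually heard.
corpus = [
    "John sent the package to Mary",
    "John sent Mary the package",
    "John sent the package to the border",
]

novel = [
    "Gazeidenfrump sent the dax to Bleizendorf",
    "Gazeidenfrump sent Bleizendorf the dax",
    "Gazeidenfrump sent the dax to the dacha",
    "Gazeidenfrump sent the dacha the dax",   # the one adults nonetheless reject
]

for sentence in novel:
    print(f"heard {corpus.count(sentence)} times: {sentence}")
# Every novel sentence has been heard (and presumably expected) zero times,
# so string statistics by themselves can't mark only the last one as bad.
```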
[222003960480] |Pinker has an extended discussion of this problem in his 1989 book, in which he argues that the constraint is semantic: we know that you can use the double-object construction (e.g., 2, 4, 6 or 8) only if the recipient of the object can actually possess the object (that is, the dax becomes Bleizendorf's, but it doesn't become the dacha's, since dachas -- and borders -- can't own things).
[222003960490] |I'm working off of memory now, but I think -- but won't swear -- that Pinker's solution also involves some aspects of the syntactic/semantic structures above being innate.
[222003960500] |Pinker's account is not perfect and may end up being wrong in some places, but it remains the case that negative evidence (implicit or not) can't alone explain where children (and adults) do or do not generalize.
[222003960510] |----- Notes:
[222003960520] |[1] Melodye quotes Pinker saying "The implications of the lack of negative evidence for children's overgeneralization are central to any discussion of learning, nativist or empiricist."
[222003960530] |That is the quote that she says is "quite frankly, ridiculous."
[222003960540] |Here is the full quote.
[222003960550] |I'll let you decide whether it's ridiculous:
[222003960560] |This nature–nurture dichotomy is also behind MacWhinney’s mistaken claim that the absence of negative evidence in language acquisition can be tied to Chomsky, nativism, or poverty-of-the-stimulus arguments.
[222003960570] |Chomsky (1965, p. 32) assumed that the child’s input ‘consist[s] of signals classified as sentences and nonsentences’ -- in other words, negative evidence.
[222003960580] |He also invokes indirect negative evidence (Chomsky, 1981).
[222003960590] |And he has never appealed to Gold’s theorems to support his claims about the innateness of language.
[222003960600] |In fact it was a staunch ANTI-nativist, Martin Braine (1971), who first noticed the lack of negative evidence in language acquisition, and another empiricist, Melissa Bowerman (1983, 1988), who repeatedly emphasized it.
[222003960610] |The implications of the lack of negative evidence for children’s overgeneralization are central to any discussion of learning, nativist or empiricist.
[222003960620] |----- Quotes: Pinker, S. (2004).
[222003960630] |Clarifying the logical problem of language acquisition. Journal of Child Language, 31(4), 949-953. DOI: 10.1017/S0305000904006439
[222003960640] |photo: Big Fat Rat
[222003970010] |Spam Filter
[222003970020] |Blogger has helpfully added some advanced spam detection for comments.
[222003970030] |One interesting feature is that I still get an email saying a comment has been left even if the comment is flagged as spam and isn't posted.
[222003970040] |This makes it a little harder for me to moderate than you might wish.
[222003970050] |So if your post has been flagged as spam, either be patient and wait until I discover it, or send me an email directly and I'll un-flag it.
[222003980010] |Thank you, Amazon!
[222003980020] |As regular readers know, I've been brushing up my CV in anticipation of some application deadlines.
[222003980030] |This mostly means trying to move papers from the in prep column to the in submission column.
[222003980040] |(I'd love to get some new stuff into the in press column, but with the glacial pace of review, that's unlikely to happen in the time frame I'm working with).
[222003980050] |This means, unfortunately, I'm working during a beautiful Saturday morning.
[222003980060] |This would be more depressing if it weren't for the wonder that is Amazon Mechanical Turk.
[222003980070] |I ran two experiments yesterday (48 subjects each), have another one running right now, and will shortly put up a fourth.
[222003980080] |The pleasure of getting new data -- of finding things out -- is why I'm in this field.
[222003980090] |It's almost as fun as walking along the river on a beautiful Saturday morning.
[222003980100] |Almost.
[222003980110] |How I feel about Amazon Mechanical Turk -- the psycholinguist's new best friend.
[222003980120] |photo: Daniel*1977
[222003990010] |Masked review?
[222003990020] |I just submitted a paper to the Journal of Experimental Psychology: General.
[222003990030] |Like many journals, this journal allows masked review -- that is, at least in theory, the reviewers won't know who you are.
[222003990040] |On the whole, I'm not sure how useful blind review is.
[222003990050] |If you're pushing an unpopular theoretical position, I expect reviewers will jump on you no matter what.
[222003990060] |If you're simply personally so unpopular that no one will sign off on your work, you have problems that masked review won't solve.
[222003990070] |But the real reason I chose not to use blind review was laziness -- it's a pain to go through the manuscript and remove anything that might give away who you are (assuming this is even possible -- for instance, if you cite your own unpublished work a lot, that's going to give away the game and there's not much you can do about that, except not cite that work).
[222003990080] |But I'm curious how other people feel about this.
[222003990090] |Do you usually request masked review?
[222003990100] |Those who have reviewed papers, do you treat masked papers differently from signed ones?
[222003990110] |photo: Ben Fredericson (xjrlokix)
[222004000010] |Overheard: Scientific Prejudice
[222004000020] |A senior colleague recently attended an Autism conference.
[222004000030] |Language is frequently impaired in Autism, and so many of the neuroscientists there were trying to look at the effects of their animal models of Autism on language.
[222004000040] |Yes, you read that correctly: animal models of language.
[222004000050] |In many cases, rats.
[222004000060] |This colleague and I both believe in some amount of phylogenetic continuity: some aspects of language are no doubt built on mechanisms that existed in our distant ancestors (and therefore may exist in other modern-day animals).
[222004000070] |But given that we have, at best, a rudimentary understanding of the mechanisms underlying language in humans -- and certainly little or no agreement in the field at present -- arguing that a particular behavior in a rat is related to some aspect of language is at best wild-eyed conjecture (and I say this with a great deal of respect for the people who have been taking this problem seriously).
[222004000080] |Unfortunately, this colleague didn't get very far in discussing these issues with these researchers.
[222004000090] |One, for instance, said, "I know rat squeaks are related to language because they're auditory!"
[222004000100] |Sure, so's sneezing:
[222004000110] |The problem with such conversations, as this colleague pointed out, is that neuroscientists often don't take us cognitive types seriously.
[222004000120] |After all, they work on a "harder" science.
[222004000130] |(For those who haven't seen it yet, read this by DrugMonkey -- tangential but fun.)
[222004000140] |A friend of mine, who is a physicist, once told me that he wasn't sure why psychology was even called a "science" since psychologists don't do experiments -- never mind that I was IMing him from my lab at the time (which he knew).
[222004000150] |When I applied to graduate school, I applied to one interdisciplinary program that included cognitive people and also physiology folk.
[222004000160] |During my interview with one professor who worked on monkey physiology, he interrupted me as I was describing the work I had done as an undergraduate.
[222004000170] |"Nothing of value about language," he told me, "can be learned by studying humans."
[222004000180] |When I suggested that perhaps there weren't any good animal models of language to work with, he said, "No, that's just a prejudice on the part of you cognitive people."
[222004000190] |Keep in mind that there were several faculty in his department who studied language in humans.
[222004000200] |This is why such mixed departments aren't always particularly collegial places to work.
[222004000210] |I bring this up not to rag on neuroscientists or physicists, but to remind the psychologists in the audience that we have this exact same problem.
[222004000220] |I don't know how many people have told me that linguistics is mostly bullshit (when I was an undergraduate, one professor of psychology told me: "Don't study linguistics, Josh.
[222004000230] |It will ruin you as a scientist.") and that philosophy has nothing to offer.
[222004000240] |When you talk to them in detail, though, you quickly realize that most of them have no idea what linguists or philosophers do, what their methods are, or why those fields have settled on those methods.
[222004000250] |And that's the height of arrogance: linguists and philosophers include, in their numbers, some of the smartest people on the planet.
[222004000260] |It only stands to reason that they know something of value.
[222004000270] |I'm not defending all the methods used by linguists.
[222004000280] |They could be improved.
[222004000290] |(So could methods used by physicists, too.)
[222004000300] |But linguists do do experiments, and they do work with empirical data.
[222004000310] |And they're damn smart.
[222004000320] |Just sayin'.
[222004000330] |Photos: mcfarlando, Jessica Florence.
[222004010010] |The Best Graduate Programs in Psychology
[222004010020] |UPDATE * The report discussed below is even more problematic than I thought.
[222004010030] |The National Academies just published an assessment of U.S. graduate research programs.
[222004010040] |Rather than compiling a single ranking, they rank programs in a number of different ways -- and also published data on the variables used to calculate those different rankings -- so you can sort the data however you like.
[222004010050] |Another aspect to like is that the methodology recognizes uncertainty and measurement error, so they actually estimate an upper-bound and lower-bound on all of the rankings (what they call the 5th and 95th percentile ranking, respectively).
[222004010060] |Ranked, Based on Research
[222004010070] |So how do the data come out?
[222004010080] |Here are the top programs in terms of "research activity" (using the 5th percentile rankings):
[222004010090] |1. University of Wisconsin-Madison, Psychology
[222004010100] |2. Harvard University, Psychology
[222004010110] |3. Princeton University, Psychology
[222004010120] |4. San Diego State University-UC, Clinical Psychology
[222004010130] |5. University of Rochester, Social-Personality Psychology
[222004010140] |6. Stanford University, Psychology
[222004010150] |7. University of Rochester, Brain & Cognitive Sciences
[222004010160] |8. University of Pittsburgh-Pittsburgh Campus, Psychology
[222004010170] |9. University of Colorado at Boulder, Psychology
[222004010180] |10. Brown University, Cognitive and Linguistic Sciences: Cognitive Sciences
[222004010190] |Yes, it's annoying that some schools have multiple psychology departments and thus each is ranked separately, leading to some apples v. oranges comparisons (e.g., vision researchers publish much faster than developmental researchers, partly because their data is orders of magnitude faster/easier to collect; a department with disproportionate numbers of vision researchers is going to have an advantage).
[222004010200] |What is nice is that these numbers can be broken down in terms of the component variables.
[222004010210] |Here are rankings in terms of publications per faculty per year and citations per publication:
[222004010220] |Publications per faculty per year
[222004010230] |1. State University of New York at Albany, Biopsychology
[222004010240] |2. University of Wisconsin-Madison, Psychology
[222004010250] |3. Syracuse University Main Campus, Clinical Psychology
[222004010260] |4. San Diego State University-UC, Clinical Psychology
[222004010270] |5. Harvard University, Psychology
[222004010280] |6. University of Pittsburgh-Pittsburgh Campus, Psychology
[222004010290] |7. University of Rochester, Social-Personality Psychology
[222004010300] |8. Florida State University, Psychology
[222004010310] |9. University of Colorado-Boulder, Psychology
[222004010320] |10. State University of New York-Albany, Clinical Psychology
[222004010330] |Average Citations per Publication
[222004010340] |1. University of Wisconsin-Madison, Psychology
[222004010350] |2. Harvard University, Psychology
[222004010360] |3. San Diego State University-UC, Clinical Psychology
[222004010370] |4. Princeton University, Psychology
[222004010380] |5. University of Rochester, Social-Personality Psychology
[222004010390] |6. Johns Hopkins University, Psychological and Brain Sciences
[222004010400] |7. University of Pittsburgh-Pittsburgh Campus, Psychology
[222004010410] |8. University of Colorado-Boulder, Psychology
[222004010420] |9. Yale University, Psychology
[222004010430] |10. Duke University, Psychology
[222004010440] |So what seems to be going on is that there are a lot of schools on the first list which publish large numbers of papers that nobody cites.
[222004010450] |If you combine the two lists to get the average number of citations per faculty per year, here are the rankings.
[222004010460] |I'm including the numbers this time so you can see the distance between the top few and the others.
[222004010470] |The #1 program has roughly double the citation rate of the #10 program.
[222004010480] |Average Citations per Faculty per Year
[222004010490] |1. University of Wisconsin-Madison, Psychology - 13.4
[222004010500] |2. Harvard University, Psychology - 12.7
[222004010510] |3. San Diego State University-UC, Clinical Psychology - 11.0
[222004010520] |4. Princeton University, Psychology - 10.6
[222004010530] |5. University of Rochester, Social-Personality Psychology - 10.6
[222004010540] |6. Johns Hopkins University, Psychological and Brain Sciences - 8.8
[222004010550] |7. University of Pittsburgh-Pittsburgh Campus, Psychology - 8.3
[222004010560] |8. University of Colorado-Boulder, Psychology - 8.0
[222004010570] |9. Yale University, Psychology - 7.5
[222004010580] |10. Duke University, Psychology - 6.9
[222004010590] |The biggest surprise for me on these lists is that the University of Pittsburgh is on them (it's not a program I hear about often) and that Stanford is not.
[222004010600] |Student Support
[222004010610] |Never mind about the research, how do the students do?
[222004010620] |It's hard to say, partly because the variables measured aren't necessarily the ones I would measure, and partly because I don't believe their data.
[222004010630] |The student support & outcomes composite is built out of:
[222004010640] |Percent of first-year students with full financial support; average completion rate within 6 years; median time to degree; percent of students with academic plans; and whether the program collects data about post-graduation employment
[222004010650] |That final variable is something that would only be included by the data-will-save-us-all crowd; it doesn't seem to have any direct relationship to student support or outcome.
[222004010660] |The fact that they focus on first year funding only is odd.
[222004010670] |I think it's huge that my program guarantees 5 years -- and for 3 of those we don't have to teach.
[222004010680] |Similarly, one might care whether funding is tied to faculty or given to the students directly.
[222004010690] |Or whether there are grants to attend conferences, mini-grants to do research not supported by your advisor, etc.
[222004010700] |But leaving aside whether they measured the right things, did they even measure what they measured correctly?
[222004010710] |The number that concerns me is "percent of students with academic plans," which is defined in terms of the percentage that have lined up either a faculty position or a post-doctoral fellowship by graduation, and which is probably the most important variable of those they list in terms of measuring the success of a research program.
[222004010720] |They find that no school has a rate of over 55% (the top figure, which belongs to Princeton).
[222004010730] |Harvard is at 26%.
[222004010740] |To put not too fine a point on it, that's absurd.
[222004010750] |Occasionally our department sends out a list of who's graduating and what they are doing next.
[222004010760] |Unfortunately, I haven't saved any of them, but typically all but 1 or 2 people are continuing on to academic positions (there's often someone who is doing consulting instead, and occasionally someone who just doesn't have a job lined up yet).
[222004010770] |So the number should be closer to 90-95% -- not just at Harvard, but presumably at peer institutions.
[222004010780] |This makes me worried about their other numbers.
[222004010790] |In any case, since the "student support" ranking is so heavily dependent on this particular variable, and that variable is clearly measured incorrectly, I don't think there's much point in looking at the "student support" ranking closely.
[222004020010] |New Grad School Rankings Don't Pass the Smell Test
[222004020020] |The more I look at the new graduate school rankings, the more deeply confused I am.
[222004020030] |Just after publishing my last post, it suddenly dawned on me that something was seriously wrong with the number of publications per faculty data.
[222004020040] |Looking again at the Harvard data, the spreadsheet claims 2.5 publications per faculty for the time period 2000-2006.
[222004020050] |I think this is supposed to be per faculty per year, though it's not entirely clear.
[222004020060] |As will be shown below, there's no way that number can be correct.
[222004020070] |First, though, here's what the report says about how the number was calculated:
[222004020080] |Data from the Thompson Reuters (formerly Institute for Scientific Information) list of publications were used to construct this variable.
[222004020090] |It is the average over seven years, 2000-2006, of the number of articles for each allocated faculty member divided by the total number of faculty allocated to the program.
[222004020100] |Data were obtained by matching faculty lists supplied by the programs to Thompson Reuters and cover publications extending back to 1981.
[222004020110] |For multi-authored articles, a publication is awarded for each author on the paper who is also on a faculty list.
[222004020120] |For computer science, refereed papers from conferences were used as well as articles.
[222004020130] |Data from résumés submitted by the humanities faculty were also used to construct this variable.
[222004020140] |They are made up of two measures: the number of published books and the number of articles published during the period 1986 to 2006 that were listed on the résumé.
[222004020150] |The calculated measure was the sum of five times the number of books plus the number of articles for each allocated faculty member divided by the faculty allocated to the program.
[222004020160] |In computing the allocated faculty to the program, only the allocations of the faculty who submitted résumés were added to get the allocation.
[222004020170] |The actual data
[222004020180] |I took a quick look through the CVs of a reasonable subset of faculty who were at Harvard during that time period.
[222004020190] |Here are their approximate publications per year (modulo any counting errors on my part -- I was scanning quickly).
[222004020200] |I should note that some faculty list book chapters separately on their CVs, but some do not.
[222004020210] |If we want to exclude book chapters, some of these numbers would go down, but only slightly.
[222004020220] |Caramazza 10.8; *Hauser 13.6; Carey 4.7; Nakayama 5.9; Schacter 14.6; Kosslyn 10.3; Spelke 7.7; Snedeker 1.1; Wegner 2.3; Gilbert 4.0
[222004020230] |One thing that pops out is that people doing work involving adult vision (Caramazza, Nakayama, Kosslyn) publish a lot more than developmental folk (Carey, Spelke, Snedeker).
[222004020240] |The other thing is that publication rates are very high (except for my fabulous advisor, who was not a fast publisher in her early days, but has been picking up speed since 2006, and Wegner, who for some reason in 2000-2002 didn't publish any papers).
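As a back-of-the-envelope check, the per-year figures listed above already average far more than 2.5 papers per faculty; the sketch below also asks how many additional zero-publication "faculty" you would need to drag that sample mean down to the report's figure.

```python
# Approximate publications per year for the sampled faculty (figures listed above).
pubs_per_year = {
    "Caramazza": 10.8, "Hauser": 13.6, "Carey": 4.7, "Nakayama": 5.9,
    "Schacter": 14.6, "Kosslyn": 10.3, "Spelke": 7.7, "Snedeker": 1.1,
    "Wegner": 2.3, "Gilbert": 4.0,
}

total = sum(pubs_per_year.values())
n = len(pubs_per_year)
print(f"sample mean: {total / n:.1f} papers/faculty/year")          # about 7.5

# How many extra faculty publishing nothing would it take to reach the
# report's 2.5 papers/faculty/year?
reported = 2.5
extra_zero_publishers = total / reported - n
print(f"zero-publication faculty needed: {extra_zero_publishers:.0f}")  # about 20
```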
[222004020250] |What on Earth is going on?
[222004020260] |I have a couple hypotheses.
[222004020270] |First, I know the report used weights when calculating composite scores for the rankings, so perhaps 2.5 reflects a weighted number, not an actual number of publications.
[222004020280] |That would make sense except that nothing I've found in the spreadsheet itself, the description of variables, or the methodology PDF supports that view.
[222004020290] |Another possibility is that above I accounted for only about 1/4-1/3 of the faculty.
[222004020300] |Perhaps I'm over-counting power publishers.
[222004020310] |Perhaps.
[222004020320] |But unless the people I left off this list weren't publishing at all, it would be very hard to get an average of 2.5 publications per faculty per year.
[222004020330] |And I know I excluded some other power publishers (Cavanagh was around then, for instance).
[222004020340] |A possible explanation?
[222004020350] |The best explanation I can think of is that the report actually is including a bunch of faculty who didn't publish at all.
[222004020360] |This is further supported by the fact that the report claims that only 78% of Harvard faculty had outside grants, whereas I'm pretty sure all professors in our department -- except perhaps brand new ones who are still on start-up funds -- have (multiple) outside grants.
[222004020370] |But there are other faculty in our department who are not professors and do not (typically) do (much) research -- and thus do not publish or have outside grants.
[222004020380] |Right now our department lists 2 "lecturers" and 4 "college fellows."
[222004020390] |They're typically on short appointments (I think about 2 years).
[222004020400] |They're not tenure track, they don't have labs, they don't advise graduate students, and I'm not even sure they have offices.
[222004020410] |So in terms of ranking a graduate program, they're largely irrelevant.
[222004020420] |(Which isn't a slight against them -- I know two of the current fellows, and they're awesome folk.)
[222004020430] |So of 33 listed faculty this year, 6 are not professors with labs and thus are publishing at much lower rates (if at all) and don't have outside grants.
[222004020440] |That puts us (27/33, or about 82%) in the ballpark of the grant figure in the report (78%).
[222004020450] |I'm not sure if it's enough to explain the discrepancy in publication rates, but it certainly would get us closer.
[222004020460] |Again, it is true that the lecturers and fellows are listed as faculty, and the report would be within its rights to count them ... but not if the report wants to measure the right thing.
[222004020470] |The report is purporting to measure the quality and quantity of the research put out by the department, so counting non-research faculty is misleading at best.
[222004020480] |Conclusion
[222004020490] |Between this post and the last, I've found some serious problems in the National Academies' graduate school rankings report.
[222004020500] |Several of the easiest-to-quantify measures they include simply don't pass the smell test.
[222004020510] |They are either measuring the wrong thing, or they're complete bullshit.
[222004020520] |Either way, it's a problem.
[222004020530] |(Or, as I said before, the numbers given have been transformed in some peculiar, undocumented way.
[222004020540] |Which I suppose would mean at least they were measuring the right thing, though reporting misleading numbers is still a problem.) ------ *Before anyone makes any Hauser jokes, he was on the faculty, so he would have been included in the National Academies' report, which is what we're discussing here.
[222004020550] |In any case, removing him would not drastically change the publications-per-faculty rate.
[222004030010] |Findings: The Causality Implicit in Language
[222004030020] |Finding Causes
[222004030030] |Consider the following:
[222004030040] |(1) Sally hates Mary.
[222004030050] |a. How likely is this because Sally is the kind of person who hates people?
[222004030060] |b. How likely is this because Mary is the kind of person whom people hate?
[222004030070] |Sally hates Mary doesn't obviously supply the relevant information, but starting with work by Roger Brown and Deborah Fish in 1983, numerous studies have found that people nonetheless rate (b) as more likely than (a).
[222004030080] |In contrast, people find Sally frightens Mary more indicative of Sally than of Mary (the equivalent of rating (a) higher than (b)).
[222004030090] |Sentences like Sally likes Mary are called “object-biased,” and sentences like Sally frightens Mary are called “subject-biased.”
[222004030100] |There are many sentences of both types.
[222004030110] |Brown and Fish, along with many of the researchers who followed them, explain this in terms of an inference from knowledge about how the world works:
[222004030120] |Consider the two verbs flatter and slander… Just about everyone (most or all persons) can be flattered or slandered.
[222004030130] |There is no special prerequisite.
[222004030140] |It is always possible to be the object of slander or flattery …By sharp contrast, however, not everyone, by any means, not even most or, perhaps, many are disposed to flatter or to slander… [Thus] to know that one party to an interaction is disposed to flatter is to have some basis for predicting flattery whereas to know only that one party can be flattered is to know little more than that that party is human.
[222004030150] |(Brown and Fish 1983, p. 265)
[222004030160] |Similar results are found by using other ways of asking about who is at fault:
[222004030170] |(2) Sally hates Mary.
[222004030180] |a. Who is most likely responsible? Sally or Mary?
[222004030190] |(The photo on the right came up on Flickr when I searched for pictures about "causes".
[222004030200] |It turns out Flickr is not a good place to look for pictures about "hating," "frightening," or "causes".
[222004030210] |But I liked this picture.)
[222004030220] |Understanding Pronouns
[222004030230] |Now consider:
[222004030240] |(3) Sally hates Mary because she...
[222004030250] |(4) Sally frightens Mary because she...
[222004030260] |Most people think that "she" refers to Mary in (3) but Sally in (4).
[222004030270] |This is a bias -- not absolute -- but it is robust and easy to replicate.
[222004030280] |Again, there are many verbs which are "object-biased" like hates and many which are "subject-biased" like frightens.
[222004030290] |Just as in the causal attribution effect above, this pronoun effect seems to be a systematic effect of (at least) the verb used.
[222004030300] |This fact was first discovered by Catherine Garvey and Alfonso Caramazza in the mid-70s and has been studied extensively ever since.
[222004030310] |The typical explanation of the pronoun effect is that the word "because" implies that you are about to get an explanation of what just happened.
[222004030320] |Explanations usually refer to causes.
[222004030330] |So you expect the clause starting with she to refer to the cause of the first part of the sentence.
[222004030340] |Therefore, people must think that Mary caused Sally hates Mary but Sally caused Sally frightens Mary.
[222004030350] |Causes and Pronouns
[222004030360] |Both effects are called "implicit causality," and researchers have generally assumed that the causal attribution effect and the pronoun effect are basically one and the same.
[222004030370] |An even stronger version of this claim would be that the pronoun effect relies on the causal attribution effect.
[222004030380] |People resolve the meaning of the pronouns in (3) and (4) based on who they think the cause of the first part of the sentence is.
[222004030390] |The causal attribution task in (1) and (2) is supposed to measure exactly that: who people think the cause is.
[222004030400] |Although people have been doing this research for around three decades, nobody seems to have actually checked whether this is true -- that is, are verbs that are subject-biased in terms of causal attribution also subject-biased in terms of pronoun interpretation?
[222004030410] |I recently ran a series of three studies on Amazon Mechanical Turk to answer this question.
[222004030420] |The answer is "no."
[222004030430] |This figure shows the relationship between causal attribution biases (positive numbers mean the verb is subject-biased, negative means it's object-biased) and pronoun biases (100 = completely subject-biased, 0 = completely object-biased).
[222004030440] |Though there is a trend line in the right direction, it's essentially artifactual.
[222004030450] |I tested four different types of verbs (the details of the verb classes take longer to explain than they are interesting), and it happens that none of them were subject-biased in terms of pronoun interpretation but object-biased in terms of causal attribution (good thing, since otherwise I would have had nowhere to put the legend).
[222004030460] |There probably are some such verbs; I just only tested a few types.
[222004030470] |I ran three different experiments using somewhat different methods, and all gave similar results (that's Experiment 2 above).
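For concreteness, here's roughly how the two bias scores plotted above can be computed for a single verb. The response counts here are made up, and the exact scoring used in the actual experiments may differ; the point is just how a verb can land in different regions on the two scales.

```python
# Hypothetical response counts for one verb (not real data).
attribution = {"subject": 30, "object": 10}  # who participants named as the cause
pronoun = {"subject": 12, "object": 28}      # who they took the pronoun to refer to

def attribution_bias(resp):
    """Positive = subject-biased, negative = object-biased (like the x-axis above)."""
    total = resp["subject"] + resp["object"]
    return 100 * (resp["subject"] - resp["object"]) / total

def pronoun_bias(resp):
    """100 = always the subject, 0 = always the object (like the y-axis above)."""
    total = resp["subject"] + resp["object"]
    return 100 * resp["subject"] / total

print(attribution_bias(attribution))  # 50.0 -> subject-biased in causal attribution
print(pronoun_bias(pronoun))          # 30.0 -> yet object-biased for pronouns
```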
[222004030480] |More evidence
[222004030490] |A number of previous studies showed that causal attribution is affected by who the subject and object are.
[222004030500] |For instance, people are more object-biased in interpreting The employee hated the boss than for The boss hated the employee.
[222004030510] |That is, they seem to think that the boss is more likely to be the cause, whether the boss is the one hating or the one being hated.
[222004030520] |This makes some sense: bosses are in a better position to affect employees than vice versa.
[222004030530] |I was able to find this effect in my causal attribution experiments, but there was no effect on pronoun resolution.
[222004030540] |That is, people thought "he" referred to the employee in (5) and the boss in (6) at pretty much the same rate.
[222004030550] |(5) The boss hated the employee because he...
[222004030560] |(6) The employee hated the boss because he...
[222004030570] |Conclusion
[222004030580] |This strongly suggests that these two effects are two different effects, due to different underlying mechanisms.
[222004030590] |I think this will come as a surprise to most people who have studied these effects in the past.
[222004030600] |It also is a surprise in terms of what we know about language processing.
[222004030610] |There is lots of evidence that people use any and all relevant information when they are interpreting language.
[222004030620] |Why aren't people using the conceptualization of the world as revealed by the causal attribution task when interpreting pronouns?
[222004030630] |And what are people doing when they interpret pronouns in these contexts?
[222004030640] |I do have the beginnings of an answer to the latter question, but since the data in this experiment doesn't speak to it, that will have to wait for a future post.
[222004030650] |--------- Brown, R., & Fish, D. (1983).
[222004030660] |The psychological causality implicit in language. Cognition, 14(3), 237-273. DOI: 10.1016/0010-0277(83)90006-9
[222004030670] |Garvey, C., & Caramazza, A. (1974).
[222004030680] |Implicit causality in verbs. Linguistic Inquiry, 5, 459-464.
[222004030690] |Picture: Cobalt123.
[222004040010] |Tables, Charts & Figures
[222004040020] |APA format (required for most journals I read/publish in) stipulates that figures and tables should not be included in the parts of the manuscript in which you actually talk about them, but rather they should all come at the end of the manuscript.
[222004040030] |I understand how this might be of use to the type-setter, but I find it a pain when actually trying to read a manuscript.
[222004040040] |I know I'm not the only one, because in some of the manuscripts I've submitted for review I actually violated APA format and put the figures in-line, and the reviewers thanked me in their reviews and suggested that this should become journal policy.
[222004040050] |(The idea is that after acceptance, you resubmit with the figures and tables in APA format, but that during the review process, you put them in-line.)
[222004040060] |With that in mind, I left my figures in situ in my last journal submission.
[222004040070] |The staff at the journal promptly returned the manuscript without review, saying that they couldn't/wouldn't review a paper that didn't follow APA guidelines on tables and figures.
[222004040080] |Obviously I reformatted and resubmitted (the customer/journal is always right), but I put this out to the blogosphere: does anyone actually like having the figures at the end of the manuscript?
[222004050010] |Lab Notebook: Verb Resources
[222004050020] |It's good to be studying language now, and not a few decades ago.
[222004050030] |There are a number of invaluable resources freely available on the Web.
[222004050040] |The resource I use the most -- and without which much of my research would have been impossible -- is Martha Palmer & co.'s VerbNet, which is a meticulous semantic analysis of several thousand English verbs.
[222004050050] |This is invaluable when choosing verbs for stimuli, as you can choose verbs that are similar to or differ from one another along particular dimensions.
[222004050060] |It's also useful for finding polysemous and nonpolysemous verbs where polysemy is defined in a very rigorous way.
[222004050070] |Meichun Liu and her students at NCTU in Taiwan have been working on a similar project in Mandarin, Mandarin VerbNet.
[222004050080] |This resource has proved extremely valuable as I've been writing up some work I've been doing in Mandarin, and I only wish I had known about it when I constructed my stimuli.
[222004050090] |I bring this up in case these resources are of use to anyone else.
[222004050100] |Mandarin VerbNet is particularly hard to find.
[222004050110] |I personally spent several months looking for it.
[222004060010] |Findings: Linguistic Universals in Pronoun Resolution
[222004060020] |Unlike a proper name (Jane Austen), a pronoun (she) can refer to a different person just about every time it is uttered.
[222004060030] |While we occasionally get bogged down in conversation trying to interpret a pronoun (Wait! Who are you talking about?), for the most part we sail through sentences with pronouns, not even noticing the ambiguity.
[222004060040] |I have been running a number of studies on pronoun understanding.
[222004060050] |One line of work looks at a peculiar contextual effect, originally discovered by Garvey and Caramazza in the mid-70s:
[222004060060] |(1) Sally frightens Mary because she...
[222004060070] |(2) Sally loves Mary because she...
[222004060080] |Although the pronoun is ambiguous, most people guess that she refers to Sally in (1) but Mary in (2).
[222004060090] |That is, the verb used (frightens, loves) seems to affect pronoun resolution.
[222004060100] |Over the last 36 years, many thousands of undergraduates (and many more thousands of participants at gameswithwords.org) have been put through pronoun-interpretation experiments in an attempt to figure out what is going on.
[222004060110] |While this is a relatively small problem in the Big World of Pronouns -- it applies only to a small number of sentences in which pronouns appear -- it is also a thorn in the side of many broader theories of pronoun processing.
[222004060120] |And so the interest.
[222004060130] |One open question has been whether the same verbs show the same pronoun biases across different languages.
[222004060140] |That is, frighten is subject-biased and fear is object-biased (the presence of frightens in sentences like 1 and 2 causes people to resolve the pronoun to the subject, Sally, whereas the presence of loves pushes them towards the object, Mary).
[222004060150] |If the same verbs turned out to have the same biases across languages, it would suggest that something about the literal meaning of the verb is what gives rise to the pronoun bias.
[222004060160] |(What else could be causing the pronoun bias, you ask?
[222004060170] |There are lots of other possibilities.
[222004060180] |For instance, it might be that verbs have some lexical feature tagging them as subject- or object-biased -- not an obvious solution to me but no less unlikely than other proposals out there for other phenomena.
[222004060190] |Or people might have learned that certain verbs probabilistically predict that subsequent pronouns will be interpreted as referring to the previous subject or object -- that is, there is no real reason that frighten is subject-biased; it's a statistical fluke of our language and we all learn to talk/listen that way because everyone else talks/listens that way.)
[222004060200] |Random cheetah picture (couldn't find a picture about cross-linguistic studies of pronouns). Over the last couple of years, I ran a series of pronoun interpretation experiments in English, Russian, and Mandarin.
[222004060210] |There is also a Japanese experiment, but the data for that one have been slow coming in. The English and Russian experiments were run through my website, and I ran the Mandarin one in Taiwan last Spring.
[222004060220] |I also analyzed Spanish data reported by Goikoetxea et al. (2008).
[222004060230] |Basically, in all the experiments participants were given sentences like (1) and (2) -- but in the relevant language -- and asked to identify who the pronoun referred to.
[222004060240] |The results show a great deal of cross-linguistic regularity.
[222004060250] |Verbs that are subject-biased in one language are almost always subject-biased in the others, and the same is true for object-biased verbs.
[222004060260] |I am in the process of writing up the results (just finished Draft 3) and I will discuss these data in more detail in the future, answering questions like how I identify the same verb in different languages.
[222004060270] |For now, though, here is a little data.
[222004060280] |Below is a table with four different verbs and the percentage of people who interpreted the pronoun as referring to the subject of the previous verb.
[222004060290] |It wasn't the case that the same verbs appeared in all four experiments, so where the experiment didn't include the relevant verb, I've put in an ellipsis.
[222004060300] |Subject-Biases for Four Groups of Related Verbs in Four Languages
Language   Group 1          Group 2         Group 3         Group 4
English    convinces 57%    forgives 45%    remembers 24%   understands 60%
Spanish    …                …               recordar 22%    comprender 63%
Russian    ubezhdala 74%    izvinjala 33%   pomnila 47%     ponimala 60%
Mandarin   shuofu 73%       baorong 37%     …               …
[222004060310] |For some of these verbs, the numbers are closer than for others, but for all verbs, if the verb was subject-biased in one language (more than 50% of participants interpreted the pronoun as referring to the subject), it was subject-biased in all languages.
[222004060320] |If it was object-biased in one language, it was object-biased in the others.
[222004060330] |For the most part, this is not how I analyze the data in the actual paper.
[222004060340] |In general, it is hard to identify translation-equivalent verbs (for instance, does the Russian nenavidet' mean hate, despise or detest?), so I employ some tricks to get around that.
[222004060350] |So this particular table actually just got jettisoned from Draft 3 of the paper, but I like it and feel it should get published somewhere.
[222004060360] |Now it is published on the blog.
[222004060370] |BTW If anyone knows how to make researchblogging.org bibliographies in Chrome without getting funky ampersands (see below), please let me know. --------- Catherine Garvey, &Alfonso Caramazza (1974).
[222004060380] |Implicit causality in verbs Linguistic Inquiry, 5, 459-464
[222004060390] |Goikoetxea, E., Pascual, G., &Acha, J. (2008).
[222004060400] |Normative study of the implicit causality of 100 interpersonal verbs in Spanish Behavior Research Methods, 40 (3), 760-772 DOI: 10.3758/BRM.40.3.760
[222004060410] |photo: Kevin Law
[222004070010] |Universal Grammar is dead. Long live Universal Grammar.
[222004070020] |Last year, in a commentary on Evans and Levinson's "The myth of language universals: Language diversity and its importance for cognitive science" in Behavioral and Brain Sciences (a journal which publishes one target paper and dozens of commentaries in each issue), Michael Tomasello wrote:
[222004070030] |I am told that a number of supporters of universal grammar will be writing commentaries on this article.
[222004070040] |Though I have not seen them, here is what is certain.
[222004070050] |You will not be seeing arguments of the following type: I have systematically looked at a well-chosen sample of the world's languages, and I have discerned the following universals ...
[222004070060] |And you will not even be seeing specific hypotheses about what we might find in universal grammar if we followed such a procedure.
[222004070070] |Hmmm.
[222004070080] |There are no specific proposals about what might be in UG...
[222004070090] |Clearly Tomasello doesn't read this blog much.
[222004070100] |Granted, for that he should probably be forgiven.
[222004070110] |But he also clearly hasn't read Chomsky lately.
[222004070120] |Here's the abstract of the well-known Hauser, Chomsky & Fitch (2002):
[222004070130] |We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN).
[222004070140] |FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements.
[222004070150] |We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language.
[222004070160] |Later on, HCF make it clear that FLN is another way of thinking about what elsewhere is called "universal grammar" -- that is, constraints on learning that allow the learning of language.
[222004070170] |Tomasello's claim about the other commentaries (that they won't make specific claims about what is in UG) is also quickly falsified, and by the usual suspects.
[222004070180] |For instance, Steve Pinker and Ray Jackendoff devote much of their commentary to describing grammatical principles that could be -- but aren't -- instantiated in any language.
[222004070190] |Tomasello's thinking is perhaps made clearer by a comment later in his commentary:
[222004070200] |For sure, all of the world's languages have things in common, and [Evans and Levinson] document a number of them.
[222004070210] |But these commonalities come not from any universal grammar, but rather from universal aspects of human cognition, social interaction, and information processing...
[222004070220] |Thus, it seems he agrees that there are constraints on language learning that shape what languages exist.
[222004070230] |This, for instance, is the usual counter-argument to Pinker and Jackendoff's nonexistent languages: those languages don't exist because they're really stupid languages to have.
[222004070240] |I doubt Pinker or Jackendoff are particularly fazed by those critiques, since they are interested in constraints on language learning, and this proposed Stupidity Constraint is still a constraint.
[222004070250] |Even Hauser, Chomsky and Fitch (2002) allow for constraints on language that are not specific to language (that's their FLB).
[222004070260] |So perhaps Tomasello fundamentally agrees with people who argue for Universal Grammar, and this is just a terminology war.
[222004070270] |They call fundamental cognitive constraints on language learning "Universal Grammar" and he uses the term to refer to something else: for instance, proposals about specific grammatical rules that we are born knowing.
[222004070280] |Then, his claim is that nobody has any proposals about such rules.
[222004070290] |If that is what he is claiming, that is also quickly falsified (if it hasn't already been falsified by HCF's claims about recursion).
[222004070300] |Mark C. Baker, by the third paragraph of his commentary, is already quoting one of his well-known suggested language universals:
[222004070310] |(1) The Verb-Object Constraint (VOC): A nominal that expresses the theme/patient of an event combines with the event-denoting verb before a nominal that expresses the agent/cause does.
[222004070320] |And I could keep on picking examples.
[222004070330] |For those outside of the field, it's important to point out that there wasn't anything surprising in the Baker commentary or the Pinker and Jackendoff commentary.
[222004070340] |They were simply repeating well-known arguments they (and others) have made many times before.
[222004070350] |And these are not obscure arguments.
[222004070360] |Writing an article about Universal Grammar that fails to mention Chomsky, Pinker, Jackendoff or Baker would be like writing an article about major American cities without mentioning New York, Boston, San Francisco or Los Angeles.
[222004070370] |Don't get me wrong.
[222004070380] |Tomasello has produced absurd numbers of high-quality studies and I am a big admirer of his work.
[222004070390] |But if he is going to make blanket statements about an entire literature, he might want to read one or two of the papers in that literature first. ------- Tomasello, M. (2009).
[222004070400] |Universal grammar is dead. Behavioral and Brain Sciences, 32(05). DOI: 10.1017/S0140525X09990744
[222004070410] |Evans, N., & Levinson, S. (2009).
[222004070420] |The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(05). DOI: 10.1017/S0140525X0999094X
[222004070430] |Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002).
[222004070440] |The faculty of language: what is it, who has it, and how did it evolve?
[222004070450] |Science (New York, N.Y.), 298(5598), 1569-79. PMID: 12446899
[222004070460] |Baker, M. (2009).
[222004070470] |Language universals: Abstract but not mythological. Behavioral and Brain Sciences, 32(05). DOI: 10.1017/S0140525X09990604
[222004070480] |Pinker, S., & Jackendoff, R. (2009).
[222004070490] |The reality of a universal language faculty. Behavioral and Brain Sciences, 32(05). DOI: 10.1017/S0140525X09990720
[222004080010] |Darn You, Amazon
[222004080020] |For a while now, my department has had problems with packages going missing.
[222004080030] |A suspiciously large number of them were sent by Amazon.
[222004080040] |A couple weeks ago, our building manager started to get suspicious.
[222004080050] |He emailed the department:
[222004080060] |Today I received what is now the third complaint about problems with shipping of products at Amazon.
[222004080070] |I don't know which courier they were using, but the packages were left on the [unmanned] security desk in the 1st floor lobby ...
[222004080080] |In another recent case, the packages were dumped in front of the Center Office door while I was out.
[222004080090] |Interestingly, tracking showed that they were signed for by me at a time when I was attending a meeting ... it's happened a few times.
[222004080100] |Usually the packages have simply been mis-delivered ... and turn up about a week later.
[222004080110] |Figure 1. A prototypical, over-packaged Amazon box.
[222004080130] |Some days later, he followed up with more information.
[222004080140] |Another department denizen noted that Amazon has started using a variety of couriers.
[222004080150] |She wrote "The other day I ordered 2 books and one came via FedEx and one came via UPS."
[222004080160] |The building manager noted that FedEx has started outsourcing delivery to UPS.
[222004080170] |He continued:
[222004080180] |What's odd is that we get shipments via UPS and FedEx all the time.
[222004080190] |Usually, it's the same drivers ...
[222004080200] |We know some of them by name.
[222004080210] |He concluded that perhaps Amazon (and UPS and FedEx) were starting to use a variety of subcontractors who don't understand how to deliver packages at large buildings (e.g., you can't just leave them in a random corner of the lobby).
[222004080220] |Yesterday, we got a follow-up on the story.
[222004080230] |The building manager ordered a package from Amazon to see what would happen.
[222004080240] |The building manager was on his way to lunch when he spotted a van marked "package delivery" and an un-uniformed courier.
[222004080250] |The courier was leaving the building sans package, so the building manager knew the package had been incorrectly delivered (he obviously hadn't signed for it!).
[222004080260] |He tried to explain the building's package policies to the courier, but:
[222004080270] |He was very polite, but did not speak much English, so I'm not sure just how much he took away from our little chat.
[222004080280] |The building manager -- tired of dealing with lost and mis-delivered packages -- is on a mission to get someone from Amazon to care:
[222004080290] |Calling them on the phone was unsatisfactory.
[222004080300] |Everyone in any position of authority is thoroughly insulated from public accountability.
[222004080310] |Perhaps.
[222004080320] |But that's why blogs exist.
[222004080330] |Seriously, Amazon, do something about this.
[222004080340] |photo: acordova