Friday, December 17, 2010



Throughout this arsenic-life NASA saga, I’ve been trying to pinpoint the fundamental reasons this story got out of hand.  Why did NASA feel the need to uber-hype this research?  Why the rush to publish research that may not have been ready?


I’ve drawn the conclusion that the primary cause is the need to be PURPOSEFUL while performing scientific research.  As an example, I’ll take the research I currently work on.  I study the aging process in yeast cells, focusing on how the cells’ epigenome changes as a cell gets “older.”  We do this research under a federally funded grant, whose stated purpose is to study the aging process in order to better understand cancer and other age-related diseases.


But, to be honest, I don’t really care about cancer.  I mean, I am someone who is perhaps a bit too comfortable with my mortality, but even beyond that: I actually just think the idea of different proteins and other factors manipulating which sections of DNA are transcribed and expressed is fascinating.  I want to understand this process better – which proteins do what?  how is this different in different cell types? how did this system evolve? – and this “aging grant” is really just an excuse for me to do so.


I doubt I’m alone here.  I think a lot of scientists are more interested in uncovering the various processes, not for the good of mankind, but simply because we want to understand.  (Correct me if I’m wrong, scientists.)  I’d be happy to cure cancer along the way if I can, but in terms of my own goals and what is possible during my brief stint in this field, I just want to understand this system a little bit better than when I started.


Science wasn’t always done with a purpose.  Think about Charles Darwin.  Sure, he was interested in natural history, but he was on the Beagle to provide companionship to the captain.  Along the way, he collected a bunch of specimens of mockingbirds and finches and other organisms, and it wasn’t until decades later that he put the pieces together and formulated his theory of natural selection.  He didn’t collect specimens on his travels for any real purpose, but used the data he collected to draw conclusions later.


Of course, back then science was primarily done by rich men with too much time on their hands.  Now science is at the forefront of innovation and progress; we need more people than bored rich men studying it and, hell, anyone should get a chance to do so!  But with greater knowledge and technology comes a need for more money.  And since I’m not a bored rich man, I don’t have any money.


That’s where the government comes in: grants to fund research.  But since it is taxpayers that are funding this research, it should have goals that will benefit those taxpayers.  Thus I study aging and cancer.  And these grants do keep us on task.  If I find a cool mutation that alters the epigenome of my yeastie beasties and it’s not related to the aging process, I will not be following up on that project.


I go back and forth on whether this is a good thing.  On the one hand, it keeps us accountable to the government and taxpayers, who give us our funding.  But on the other hand, does research for a purpose help us really advance in biology, help us better understand how life works?


One of my bosses, a great scientist, doctor and philosopher king, recently emailed this quote to our lab from Carol Greider, a recent Nobel Prize winner for her work on the discovery of the aging-related enzyme telomerase:


“The quiet beginnings of telomerase research emphasize the importance of basic, curiosity-driven research. At the time that it is conducted, such research has no apparent practical applications. Our understanding of the way the world works is fragmentary and incomplete, which means that progress does not occur in a simple, direct and linear manner. It is important to connect the unconnected, to make leaps and to take risks, and to have fun talking and playing with ideas that might at first seem outlandish.”


This idea burns me to my very core.  Purpose-based science assumes a certain knowledge of the systems we’re studying.  But, let’s face it: we still have so much to learn.  We’re all still flailing toddlers, trying to find a surface to hoist ourselves upon so that we can actually get somewhere.  While scientists are often perceived as smart people who have all the answers, we actually don’t have many.  The more you know, the more you know that you don’t know anything at all.


But instead of being allowed to play, to follow up on work because it’s exciting, to take risks, we have to make sure we stay within the limits of our funding and, thus, our purpose.  Because “playing” or studying something because we think it’s AWESOME doesn’t provide evidence of “progress.”



I could be entirely wrong: maybe the old adage that progress is made in leaps and bounds (as opposed to baby steps, I suppose) is farcical.  Maybe I only believe this because my human soul that thrives on chaos is drawn to it.


Either way: the purpose of research is overemphasized.  When I read papers, I am interested in knowing how their discovery fits into “practical knowledge” (“There is hardly anything known about X disease, BUT WE FOUND SOMETHING!”), but more than that, I’m interested in how it fits in with the current model of whatever system they are studying.  But that rarely gets as much attention in papers.


And this idea of “purpose” is why science in the media is so often overhyped.  News articles often take a definitive stance on how the new study has contributed to the public good.  Maybe it’s “eating blueberries will preserve your memory” or “sleeping 8 hours will make you attractive.”  This makes the science easy to digest, sure, but it also paints an incomplete picture.  These studies are just tiny pieces in a puzzle that scientists will continue to work on for decades.  It’s pure hubris to believe that non-scientists cannot understand the scientific process – that they cannot understand that it takes incremental steps.  But, nonetheless, if your research cannot be easily hyped, no one will hear about it, so you have to serve a purpose.


So it goes with NASA’s arsenic-based life.  The current model, in both funding and the media, of requiring a purpose to justify research pushed NASA to claim a greater purpose for its discovery: “an astrobiology finding that will impact the search for evidence of extraterrestrial life.”


To give both NASA and the researchers the benefit of the doubt, let’s just say they found this cool bug and wanted to share the news to get help in studying it, as author Oremland suggested.  They submitted the paper to officially get the word out.  But then they needed to find a “good reason” to have been studying arsenic microbes, and NASA decided this was a good opportunity to reinvigorate its reputation for performing “useful science,” so it called a press conference.  You know where it goes from here.


All of that is pure speculation – but it probably isn’t too far from the truth.  Maybe I’m being too kind, but I really doubt that the researchers or NASA had any ill intentions.  They simply lost control, and the ensuing shitstorm took off.


We can scoff at them all we like: “an astrobiology finding that will impact the search for evidence of extraterrestrial life, my ass!”  But it’s really not so different from my lab publishing a paper with the headline, “KEY FACTOR IN CELL AGING UNCOVERED” when, really, we just discovered a factor, and we don’t even know if it’s key.


The idea of “useful science” also dampens my most basic feeling about science: SCIENCE IS COOL!  Longing to pry up the corners of current knowledge isn’t enough: we can’t just look, we have to demonstrate a direct outcome.  But if we don’t allow ourselves even to look because of various purpose-based limitations, we could be missing out on something FUCKING AWESOME!


I’m just rambling now – and am very interested in hearing your thoughts on this.



  • Does purpose-driven science lead to better science or more innovation?

  • Are there ways of judging research as worthy (e.g. for funding purposes) without having to provide a direct purpose?

  • How should the media change its model for covering stories?  Should every study that comes out get attention, or should we wait for more details and provide more review-like coverage?

  • Would larger, field-based studies dampen competition?  Would this help or hurt scientific progress?


Etc. etc.  If you made it this far, thank you, xox, Hannah.





A Forrester Research report released Monday received a ton of attention for its suggestion that TV and internet usage in the U.S. had reached parity. Now that data is drawing some high-profile skeptics.



The problem is that Forrester’s findings don’t remotely square with existing measurements of TV and internet usage. While the study found that in January and February of 2010 consumers reported spending 13 hours per week each on TV and the internet, data from Nielsen and comScore (NSDQ: SCOR), arguably the most reliable sources for measurement of TV and internet usage, offer a markedly different picture.




ESPN plans to meet Wednesday with Forrester, which counts the sports juggernaut as a client, to share its concerns. “Our fundamental concern is that, in a very confusing media landscape, we’re trying to answer very important questions about the behaviors of consumers,” said Dave Coletti, vice president of digital media research and analytics at ESPN (NYSE: DIS). “It’s imperative that we answer questions with the right methods.”



In the first quarter of 2010, Nielsen clocked weekly TV usage at 38 hours and 44 minutes, nearly three times what Forrester found. Over the same time frame, comScore’s measure of weekly internet usage was 7 hours and 24 minutes, about half of what Forrester found.
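To make the gap concrete, here is a quick back-of-the-envelope check (a minimal Python sketch; the arithmetic is mine, not the article’s). It verifies the “nearly three times” and “about half” ratios, and shows how the same 13-hours-per-week figure turns into the “only 52” hours per month that comes up in the Twitter exchange below.

# Back-of-the-envelope check of the figures quoted above; illustrative
# arithmetic only, not part of the original article.
forrester_weekly = 13.0             # hrs/week, self-reported (TV and internet each)
nielsen_tv_weekly = 38 + 44 / 60    # 38h44m of metered TV per week (Nielsen)
comscore_web_weekly = 7 + 24 / 60   # 7h24m of metered internet per week (comScore)

print(round(nielsen_tv_weekly / forrester_weekly, 1))    # ~3.0, i.e. "nearly three times"
print(round(comscore_web_weekly / forrester_weekly, 2))  # ~0.57, i.e. "about half"

# A rough four-week month connects these to the monthly figures in the Twitter exchange:
print(forrester_weekly * 4)          # 52.0 hrs/month -> Fulgoni's "only 52"
print(round(nielsen_tv_weekly * 4))  # ~155 hrs/month, consistent with "140+ hrs/mo"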



Why these numbers are so divergent cuts to the heart of the difficulties ESPN has with this Forrester study. The Forrester numbers are based entirely on self-reporting, or what the 30,000 respondents to the survey say their consumption habits are. But that’s a subjective metric, different in kind from the metered measurement Nielsen and comScore perform. The metered services may have their own well-documented faults, but at least they’re objective.



But what’s more troubling to Glenn Enoch, vice president of integrated research at ESPN, is that in media-research circles, self-reporting is known to be notoriously slippery. “It’s something we’re generally careful about,” he said.



To wit: a Video Consumer Mapping study conducted last year by Ball State University’s Center for Media Design, widely regarded as a landmark piece of research, noted among its key findings, “Serious caution needs to be applied in interpreting self-report data for media use. TV was substantially under-reported while online video and mobile video usage were over-reported.”



On Twitter, a few prominent critics openly questioned Forrester’s findings. Gian Fulgoni, chairman of comScore, had a rather heated exchange with Forrester’s lead researcher on the report, Jacqueline Anderson, in which he questioned the validity not of his own stock in trade, internet measurement, but of the TV numbers.



“Nielsen says TV 140+hrs /mo. You say it’s only 52. Something very wrong,” he tweeted. After a few back-and-forth tweets, Anderson defended the work as “clear” when you examine the year-over-year numbers. “Clear?” Fulgoni shot back. “There’s huge error level.”



Reached for comment, Anderson doesn’t take issue with the veracity of Nielsen’s or comScore’s numbers. But she feels they are pieces of a puzzle that isn’t complete without the consumers’ perspective. There’s the reality of what metered measurement yields, but to Anderson there is also value in distilling consumers’ perception of their own consumption habits. “In their minds’ eye now, the time consumers spend between mediums is equal,” said Anderson.



In her defense, Anderson stated clearly in a blog post about the research on Forrester’s site that “the data we present in this most recent Technographics® report is self-reported, so the metrics aren’t the same as those you’d see from a Nielsen or comScore.”



However, the very first line of the executive summary on the page where Forrester makes the full research available to its clients seems to pass off the research as what viewers actually consume instead of what they think they consume: “For the first time ever, the average US online consumer spends as much time online as he or she does watching TV offline.”



Anderson believes that the TV vs. internet comparison is not as significant in this research as the growth, or lack thereof, that each medium has experienced since 2009. TV consumption didn’t decrease; it just held steady while internet usage made the huge leaps it took to catch up. “The data in the year-over-year picture is the more important piece of the puzzle,” said Anderson, who also noted that she is cognizant that respondents tend to under-report, but if they do so consistently year over year, it’s an apples-to-apples comparison.



But that nuance was apparently lost on the dozens of press outlets that wrote about the research, trumpeting it as some kind of milestone in the growth of U.S. internet usage yet failing to convey that self-reporting isn’t the best basis for declaring a tie between the mediums’ exposure levels. Few referenced the wealth of statistics demonstrating just how large TV consumption still looms over internet usage, a dynamic one industry researcher recently put in perspective by characterizing the Facebook audience as being on par with that of PBS. Maybe it’s hard to resist the sexy narrative of the underdog coming from behind to race neck-and-neck with the longtime leader.



Then there’s the very either-or premise of the research to consider. The distinction between TV and online usage isn’t even entirely clear anymore in a universe in which there are a bevy of boxes that deliver programming directly to the TV set via broadband connection. And concurrent usage of TV and online is already a well-noted phenomenon, making any presentation of data that paints TV vs. online as a zero-sum game off the mark.



Perhaps it’s predictable that ESPN would be the one to want to counter the study. Just last week, the network released its own study that sought to minimize the so-called cord-cutting phenomenon, which drew observations that ESPN was only trying to protect its gravy train. And of course, the Forrester research is now being touted as supporting evidence of cord-cutting.



“When we do raise an eyebrow at things like this, it’s often interpreted as us downplaying the potential of digital media,” said Coletti. “Nothing could be further from the truth. I wave the flag for digital media. It’s more about putting on my professional researcher hat and making sure the data that gets into the marketplace is as accurate and reliable as it can be.”







