Iconoclasts Anonymous

Inane ravings of an irreverent slacker

Blogging… Redux

Posted by Jeff on January 18, 2010

So, as anyone who might still follow this blog has possibly noticed, I haven’t been very diligent with updating recently. In fact, I think it’s been 4 months since my last entry! I’m going to try to change this and post here at least a couple times a week from now on. I can’t guarantee that they will all be terribly compelling posts or extremely content-heavy, but I’ll try!

Starting with today. I wanted to mention something we were discussing in my lab class this quarter about publishing scientific data and papers. One of the most important parts of writing a paper for a peer-reviewed journal is performing the data analysis properly. To be even more specific, it's performing the error analysis on your data properly.

Nearly all scientific data is subject to two kinds of error. First, there is statistical error: random inaccuracy or indeterminacy arising from the nature of the data itself or the system being studied. An example of this would be the random scatter introduced by the limited precision of measuring equipment, which is always a concern when taking measurements of quantum systems. The key property of statistical error is that it averages down as you take more measurements.

Second is systematic error, which reflects flaws in the experimental procedure or improperly controlled conditions, such as a miscalibrated instrument that shifts every reading by the same amount. Unlike statistical error, it doesn't average away with repeated measurements. Basically, systematic error is something the experimenter can control and should minimize as much as possible.
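To make the distinction concrete, here's a quick simulation (a hypothetical sketch with made-up numbers, not from any real experiment) showing that statistical noise shrinks as you collect more measurements while a systematic offset stays put:

```python
import math
import random
import statistics

def measure(true_value, noise_sd, bias, n, rng):
    """Simulate n measurements with random (statistical) noise
    and a fixed (systematic) bias."""
    return [true_value + bias + rng.gauss(0, noise_sd) for _ in range(n)]

rng = random.Random(42)
true_value, noise_sd, bias = 10.0, 1.0, 0.5  # hypothetical values

for n in (10, 1000, 100000):
    data = measure(true_value, noise_sd, bias, n, rng)
    mean = statistics.fmean(data)
    # Standard error of the mean: the statistical error shrinks like 1/sqrt(n)...
    sem = statistics.stdev(data) / math.sqrt(n)
    # ...but the systematic bias never averages away, no matter how much data we take.
    print(f"n={n:6d}  mean={mean:.3f}  statistical error={sem:.3f}  "
          f"offset from truth={mean - true_value:+.3f}")
```

Running this, the reported statistical error drops by roughly a factor of 10 each time n grows by 100, while the offset from the true value stays stuck near the bias of 0.5.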

I wanted to talk about this because a lot of people seem to have the impression that the credibility or reputation of a scientist is based on the scope of his/her work (i.e. good scientists ask tough questions and do ground-breaking experiments while poor scientists are lazy or falsify their data). While this is true to some extent, a very large part of a scientist's credibility comes down to how carefully they analyze their data!

For example, suppose a scientist publishes a study which, while not paradigm-changing, deals with some pretty important topics. In the analysis, this scientist makes an arithmetic slip in calculating the uncertainty in the data and reports an order of magnitude better accuracy than the experiment actually provided. When the mistake is discovered, the scientist's reputation in the scientific community will suffer a blow that they will likely spend much of their career trying to recover from. Not because they are seen as unethical or lazy, but because they weren't rigorous enough to ensure that the published numbers were accurate.
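To see how easy a slip like that is, consider the standard error of the mean, sigma/sqrt(N). With hypothetical numbers, accidentally dividing by N instead of sqrt(N) understates the uncertainty by exactly an order of magnitude when N = 100:

```python
import math

sigma = 2.0  # hypothetical standard deviation of individual measurements
n = 100      # hypothetical number of measurements

correct = sigma / math.sqrt(n)  # proper standard error of the mean
sloppy = sigma / n              # accidental division by n instead of sqrt(n)

# The slip claims accuracy a factor of sqrt(n) = 10 better than the data supports.
print(f"correct: {correct}, sloppy: {sloppy}, ratio: {correct / sloppy:.0f}")
```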

Just wanted to share this little insight with people. Getting and maintaining a positive reputation amongst scientists is no easy task, and losing that reputation is far, far easier than building it. There are many reasons to be impressed by someone who can legitimately be described as 'highly regarded in their field', and this is part of why so many bullshit-peddlers try to claim that title for themselves.

