[Insight-developers] REPRODUCIBILITY: IEEE CVPR 2010, Moved to the Bright Side !!!
Torsten Rohlfing
torsten at synapse.sri.com
Sun Nov 22 15:21:22 EST 2009
Luis:
Slight disagreement here again ;)
Reproducibility alone does not cure cancer. If we all simply keep
repeating and re-confirming each other's results, there is no innovation.
As far as papers are concerned, I would say we need to distinguish
between "reproducible" (which is good) and "reproduction" (which, unless
the paper is concerned with something very hard to reproduce, is not).
I agree, though, that "originality" alone is not a sufficient measure
of a paper's worth. Claiming that I can cure cancer by sticking a
carrot up the patient's nose and blowing in his ear is mighty original,
but hardly effective. Probably not reproducible, either.
In the end, I'd say it all comes down to "significance." Something long
known, although easily reproducible, is simply not significant.
Likewise, something very new and surprising is not significant if it
cannot be reproduced. Both novelty and reproducibility are required to
make a work of science significant.
Have a nice weekend!
TR
On 11/21/2009 04:08 PM, Luis Ibanez wrote:
> Hi Torsten,
>
> You make a good point.
>
> It is certain that IEEE CVPR still lags many years behind the
> Insight Journal's practices, given that they do not yet require source
> code and data to be made available at the time the paper is
> submitted.
>
>
> But...
>
> given that we are talking about IEEE, the professional society that acted
> more like a commercial publisher and less like a scholarly society by
> lobbying the US Congress in order to obstruct the NIH Public Access Policy:
> http://www.ieee.org/portal/cms_docs_iportals/iportals/publications/PSPB/NIH_Public_Access_RFI_29_Apr_08_Vers3.doc
> you have to give them some credit now for mending their evil ways
> and starting to evolve in a positive direction.
>
>
> We can hope that these new review guidelines will evolve into an
> actual requirement to include the source code, data, and scripts
> needed to run it all.
>
> The fear of reviewers "stealing" authors' code can be resolved quite
> easily by making the source code Open Source in the first place.
>
> In that case, the license allows anybody (not just the reviewers) to
> use, redistribute and improve the source code. The authors in this way
> clearly establish the authorship of the code on the Free and Open
> Internet, and it would be very difficult for someone else to claim
> authorship of such work.
>
> Source code that is not available is simply irrelevant and useless,
> and if authors want to promote it but not share it, they should pay
> the Journals for publishing their papers as advertisements.
>
>
>
> I'm glad that you mention that the typical review process is also
> concerned with evaluating the level of "originality" of the paper.
>
> This is one of the most shameful symptoms illustrating how
> the current publishing system is uneducated about the principles of
> scientific research.
>
>
> "Originality" is irrelevant in science.
> It serves no purpose.
>
> The illusion of originality is simply a measure of ignorance of what
> has been done before. Science is not made of "original" material;
> it is made of "reproducible" material.
>
> "Originality" is important if you are running a publishing business and
> need to find the next "Harry Potter", but it serves no purpose when
> you are trying to understand the underlying behavior of tumor cells.
>
> The foolish obsession with Originality is actually one of the reasons
> why Journals and Conferences make the mistake of rejecting
> reproducibility reports as standard papers. Real scientific Journals
> will publish repetitions of work, whether they report positive or
> negative results. The practice of reproducibility is, by definition,
> non-original work.
>
> Cancer patients couldn't care less about how "original" a medication
> is. They care a lot more about how "effective" and how
> "reproducible" its effect is.
>
>
> We must not confuse a "researcher" with an "inventor";
> those are two very different professions.
>
>
> --
>
> IEEE CVPR still lags in many aspects.
>
> * Their papers are not yet Open Access.
> * Their reviews are still anonymous.
> * Their readers can't yet rate papers,
>   much less rate reviewers.
> * The papers can't be corrected by posting
>   new revisions.
> * The papers are not hyperlinked
>   (not even their own references...).
> * Readers can't post blog discussions.
> * Papers can't be annotated.
>
>
> They are still living in the pre-Internet era.
> Not even the Web 1.0.
>
>
> It is still the case that teenagers have better
> information systems at their disposal than scientists,
> simply by using Facebook and YouTube.
>
>
> ...but...
> CVPR has done something that many other Journals
> and (so-called) Scholarly societies have not dared
> to do:
>
> to embrace the principles of the scientific method.
>
>
> If you look at the review criteria for MICCAI,
> or SPIE Medical Imaging,
> http://spie.org/x14099.xml
>
> or IEEE TMI
> http://www.ieee-tmi.org/Reviewer-Info.html
>
> you will still find "Novelty" and "Originality"
> listed as requirements,
>
> and no mention at all of "Reproducibility".
>
>
> They want "new" things,
> but don't quite care if they work or not...
>
>
>
> Luis
>
>
> ---------------------------------------------------------------------------
> On Sat, Nov 21, 2009 at 3:30 PM, Torsten Rohlfing
> <torsten at synapse.sri.com> wrote:
>
>> Hi Luis --
>>
>> I need to disagree on some of your points, unfortunately.
>>
>> However, I do not disagree at all that this is a move in the right
>> direction, and I am thrilled to see it from a conference as strong as CVPR.
>>
>> Now for my disagreements: the review will not actually become trivial just
>> because data and source code are provided. Neither answers the question of
>> how significant and original the research is, and these are quite important
>> review criteria. All that data and code help us with is a) to ensure that
>> the current presumption of reproducibility is actually justified, and b) to
>> build on others' research without having to re-implement their code first.
>>
>> Now there is one final problem here as far as the impact of data and code
>> availability on the CVPR reviews is concerned: they are not actually
>> available during the review phase. The author only states in the paper if
>> and how code and data will be released AFTER the paper has been accepted. So
>> we still have to assume as reviewers that the code does indeed function as
>> described in the paper, and we furthermore have to trust that the authors
>> will indeed make good on their promise after paper acceptance.
>>
>> I can't say that's unreasonable, though, because just like we shouldn't
>> necessarily blindly trust authors, we should certainly also not blindly
>> trust reviewers (after all, they are basically the same people). So if the
>> code were available during review, reviewers might be tempted to take it for
>> their own work, yet reject the paper, maybe even intentionally to gain an
>> advantage.
>>
>> Anyway, bottom line is, the CVPR review isn't really affected much by the
>> new criterion, but it would be nice indeed if the conference implemented a
>> reward for releasing code and data, maybe by adding a certain bonus to the
>> reviewer scores.
>>
>> Best,
>> Torsten
>>
>>
>>> Hi David,
>>>
>>> I agree, it will be fascinating to see how this reshapes the field.
>>>
>>> Regarding your concern, I would argue that if the paper is "really"
>>> reproducible, then the review should become trivial.
>>>
>>> It should come down to a one-hour exercise of:
>>>
>>> 1) Download the data.
>>> 2) Download the software (and potentially build it).
>>> 3) Download the scripts with the parameters that run the software.
>>> 4) Go for lunch.
>>> 5) Come back and compare the results with the paper.
>>>
>>> There shouldn't be ANYTHING left for the reviewer (or the reader)
>>> to guess or to figure out. The instructions should be quite explicit.
>>>
>>> All figures in the paper must be regenerable by running "make"
>>> on the materials downloaded in steps (1)-(3), as in the sketch below.
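>>>
>>> Just to illustrate, here is a minimal sketch of such a Makefile
>>> (every file name, tool and parameter below is hypothetical):
>>>
>>>   # Makefile shipped with the paper materials
>>>   # (recipe lines must start with a TAB).
>>>   FIGURES = figure1.png figure2.png
>>>
>>>   all: $(FIGURES)
>>>
>>>   # Build the tool from the downloaded sources (step 2).
>>>   segtool: src/main.cxx
>>>           $(CXX) -O2 -o segtool src/main.cxx
>>>
>>>   # Re-run the experiments with the published parameters (step 3).
>>>   results.csv: segtool data/input.mha scripts/params.txt
>>>           ./segtool data/input.mha $$(cat scripts/params.txt) > results.csv
>>>
>>>   # Regenerate each figure of the paper from the raw results.
>>>   figure%.png: results.csv scripts/plot_figure%.gnuplot
>>>           gnuplot scripts/plot_figure$*.gnuplot
>>>
>>> A reviewer (or reader) then types "make" once, and compares the
>>> regenerated figures against the ones printed in the paper.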
>>>
>>> On the other hand, preparing the paper will become more involved,
>>> but, again, to the benefit of practices in the field.
>>>
>>> Even for the authors themselves, it will be great to have the
>>> entire structure of the paper under CVS, so that they can check it
>>> out and rerun it in a matter of minutes to hours.
>>>
>>> What we will all learn is that reproducibility leads to a full
>>> set of good practices.
>>>
>>>
>>> Luis
>>>
>>>
>>>
>> --
>> Torsten Rohlfing, PhD        SRI International, Neuroscience Program
>> Senior Research Scientist    333 Ravenswood Ave, Menlo Park, CA 94025
>> Phone: ++1 (650) 859-3379    Fax: ++1 (650) 859-2743
>> torsten at synapse.sri.com   http://www.stanford.edu/~rohlfing/
>>
>> "Though this be madness, yet there is a method in't"
>>
--
Torsten Rohlfing, PhD        SRI International, Neuroscience Program
Senior Research Scientist    333 Ravenswood Ave, Menlo Park, CA 94025
Phone: ++1 (650) 859-3379    Fax: ++1 (650) 859-2743
torsten at synapse.sri.com   http://www.stanford.edu/~rohlfing/
"Though this be madness, yet there is a method in't"