Hacker News

Honestly, if your accuracy/performance metrics are too good, that's almost a sure sign that something has gone wrong.

Source: bitter, bitter experience. I once predicted the placebo effect perfectly using a random forest (just got lucky with the train/test split). Although I'd left academia at that point, I often wonder if I'd have dug in deeper if I'd needed a high impact paper to keep my job.
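A toy sketch of that failure mode (nothing to do with the commenter's actual data; plain numpy, with a trivial 1-nearest-neighbour classifier standing in for the random forest): on pure-noise data, most random train/test splits score around chance, but the luckiest of a few hundred splits can look genuinely predictive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise "dataset": features carry no information about the labels.
X = rng.normal(size=(60, 5))
y = rng.integers(0, 2, size=60)

def split_accuracy(seed):
    # Evaluate a 1-nearest-neighbour classifier on one random 50/50 split.
    r = np.random.default_rng(seed)
    idx = r.permutation(len(X))
    train, test = idx[:30], idx[30:]
    preds = []
    for i in test:
        dists = np.linalg.norm(X[train] - X[i], axis=1)
        preds.append(y[train][np.argmin(dists)])
    return float(np.mean(np.array(preds) == y[test]))

# Accuracy over many splits: the mean hovers near 0.5 (chance),
# but the single best split looks far better than it should.
accs = [split_accuracy(s) for s in range(200)]
print(f"mean accuracy {np.mean(accs):.2f}, luckiest split {max(accs):.2f}")
```

The fix is the obvious one: repeated splits or cross-validation, so that one fortunate partition can't carry the result.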



I believe it's very common. At some point I thought about publishing a paper analyzing some studies with good results (published in journals) and showing where the problem with each lies, but eventually I just gave up. I figured I would only make the original authors unhappy, and everybody else wouldn't care.


> I believe it's very common.

Yeah, me too. There was a paper doing the rounds a few years back (claiming computer programming is more related to language skill than maths), so I downloaded the data and looked at their approach, and it was garbage. Like, polynomial regression on 30 datapoints kind of bad.
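Why that's "garbage": with 30 points, a high-degree polynomial has nearly enough parameters to memorize the sample, so in-sample fit looks great while out-of-sample error explodes. A minimal sketch with made-up data (not the paper's; plain numpy, leave-one-out as the held-out check):

```python
import numpy as np

rng = np.random.default_rng(1)

# 30 noisy points from a simple linear trend, matching the sample size above.
x = np.linspace(0, 1, 30)
y = 2 * x + rng.normal(scale=0.3, size=30)

def train_r2(degree):
    # In-sample R^2: rises with degree even when extra terms fit only noise.
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

def loo_mse(degree):
    # Leave-one-out error: the honest estimate of generalization.
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coeffs = np.polyfit(x[mask], y[mask], degree)
        errs.append((np.polyval(coeffs, x[i]) - y[i]) ** 2)
    return float(np.mean(errs))

print(f"in-sample R^2: degree 1 = {train_r2(1):.2f}, degree 15 = {train_r2(15):.2f}")
print(f"leave-one-out MSE: degree 1 = {loo_mse(1):.2f}, degree 15 = {loo_mse(15):.2f}")
```

The degree-15 fit reports the better in-sample R^2 and the far worse held-out error, which is exactly the pattern a flattering-but-wrong analysis produces.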

And based on my experience during my PhD, this is very common. It's not surprising though, given the incentive structure in science.


Peer review is a thankless job,

but that’s how science advances.

There should be an arXiv for rebuttals, maybe.



