Then I'm glad I didn't actually do anything of the sort.
But work whose experimental data and analysis are never validated by independent parties can hardly be called "science". Therein lies the real problem. Science has likely been hijacked by unscrupulous people to further their own agendas.
[...]
Again, I didn't claim it was "all nonsense".
But I think you can agree that the lax standards applied in some fields by some not-so-reputable publishers are also a major part of this problem.
As you pointed out, you see it a lot in medical publications because there are millions, perhaps even billions, riding on the outcome of some of the studies.
A full validation of the experiment and the analysis is almost never done, even in the hard sciences. The rare exceptions usually involve accusations of fraud. In all other cases, nobody has the time to validate everything. Peer review will (hopefully) catch glaring errors, but it is no guarantee that a result is correct.
It is not a problem of not-so-reputable journals. All of the studies that failed to replicate in that meta-analysis were peer-reviewed and published in reputable journals and were up to the standards in the field.
Hijacking by unscrupulous people can be a problem, but there are also intrinsic problems in regular science: nobody wants to do bad science, yet people end up doing it anyway, because doing it right is not encouraged.
Also, the fossil fuel industry has clearly injected substantial sums into supposed AGW research.
Here is a supposedly "peer-reviewed study" by two members of university business schools, whose topic is "Science or Science Fiction? Professionals' Discursive Construction of Climate Change".
Would you call supposed "science" conducted by two business school members "bad science" or "pseudo-science", when the study attempted to do nothing but classify opinions regarding AGW into a few predefined buckets? And why are "professional engineers" even asked for their opinions on this matter, since it is clearly outside their respective fields?
Certainly pseudo-science. But pseudo-science deserves its own category, apart from bad science.
Nope. Repeated testing and statistical analysis of the results are always used to rule out bogus results. If the results from one repeat are found to differ from 99 others, it won't make the slightest dent in the overall peer-reviewed conclusion.
That's also a very strong reason why 'one reference is never adequate' in undergraduate work. The main science journals don't just publish random whacko lunatic results and conclusions that are far from the norm, unless a statistical majority of repeats produce the same results, in which case they are no longer random whacko lunatic results.
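The aggregation argument above can be illustrated with a quick sketch (hypothetical numbers, not taken from any real study): a single outlier among 100 replicate measurements barely moves the mean and leaves the median untouched.

```python
from statistics import mean, median

# Hypothetical replicate measurements: 99 repeats agree, one is wildly off.
agreeing = [10.0] * 99
replicates = agreeing + [50.0]  # the single "whacko" repeat

print(mean(agreeing))      # 10.0
print(mean(replicates))    # 10.4 -- one outlier shifts the mean only slightly
print(median(replicates))  # 10.0 -- the median ignores it entirely
```

This is only a toy demonstration of why one discordant repeat does not overturn a well-replicated result; real meta-analyses weight studies by precision rather than averaging raw values.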
'One reference is never adequate' might work for undergraduate work. But at the edge of human knowledge, where science is supposed to be, there might be only one reference, perhaps because currently no one else is able to do anything similar. So what do you do? You could devote your efforts to creating the second reference, but no one will thank you for it, and you will get neither publications nor funding. All you can do is something related but different, in the hope that the reference is correct and that someone will find out in the future if it is not.
The problem is not so much the whacko lunatic results. In those cases the rule 'extraordinary claims require extraordinary evidence' tends to apply, and your data needs to be very convincing to pass peer review. The problem is the results that seem plausible, where there is no intrinsic reason to doubt them, but no resources are available to validate them beyond any reasonable doubt.
If it is anything important, an erroneous result will be corrected over time, because other experiments may require a comparison against the old result, and if the baseline is not what it should be, that has to be investigated. But that takes time, and in the meantime the wrong result stands.