Faking that NASA faked the moon landing
Data integrity is a central issue in all research, and internet-based data collection poses a unique set of challenges. Much attention has been devoted to this issue, and procedures have been developed to safeguard against abuse. There have been numerous demonstrations that internet platforms offer a reliable and replicable means of data collection, and the practice is now widely accepted.
Nonetheless, each data set must be examined for outliers and “unusual” responses, and our recent paper on conspiracist ideation and the motivated rejection of science is no exception.
Perhaps unsurprisingly, after various unfounded accusations against us have collapsed into smithereens,
critics of our work have now set their sights on the data. It has been
alleged that the responses to our survey were somehow “scammed,” thereby
compromising our conclusions.
Unlike the earlier baseless accusations, there is some merit in casting a critical eye on our data. Science is skepticism and our data must not be exempt from scrutiny.
As it turns out, our results withstand skeptical scrutiny. We will explain why in a series of posts that take up, in turn, the substantive issues raised in the blogosphere.
This first post deals with the identification of outliers; that is,
observations that are unusual and deserve to be considered carefully.
Outlier detection and identification
Let’s begin by examining the variable of greatest interest in our
paper, namely the indicators for “conspiracist ideation,” which is the
propensity to endorse various theories about the world that are, to
varying extents, demonstrably unfounded and absurd (there are some
reasonably good criteria for what exactly constitutes a conspiracy
theory but that’s not at issue here).
The full distribution of our conspiracy score (summed across the various items, each on a 4-point scale) is shown in the figure below. Ignore the vertical red line for now.
For simplicity we are ignoring the space aliens for now (which formed
a different indicator variable on their own), so the observations below
represent the sum across 10 conspiracies (remember that the
"convenience" theories involving AIDS and climate science are omitted
from this indicator variable for the reasons noted in the paper).
Thus, a person who strongly disagreed with all conspiracies would
score a 10, and someone who strongly agreed with them all would score a
40.
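To make the scoring concrete, here is a small Python sketch with hypothetical item names and responses (the actual items are listed in the paper):

```python
# Hypothetical responses of one person to the 10 conspiracy items
# (illustrative item names only; the actual items are listed in the paper),
# coded 1 = strongly disagree ... 4 = strongly agree.
responses = {
    "CYMoon": 1, "CYJFK": 2, "CYDiana": 1, "CY911": 1, "CYOklahoma": 1,
    "CYMLK": 2, "CYPearlHarbor": 1, "CYCoke": 1, "CYIraq": 2, "CYSARS": 1,
}

# Summing the 10 items gives a score between 10 (reject all) and 40 (endorse all).
conspiracy_score = sum(responses.values())
print(conspiracy_score)  # 13 for this hypothetical respondent
```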
The figure invites several observations. First, the distribution is
asymmetrical, with a longish upper tail. That is, most people tend to
more or less reject conspiracies; their score falls towards the lower
end of the scale.
Second, there are several observations at the very top that may—repeat may—represent aberrant observations. It is those extreme scores that critics of our data have focused on, for example the very thorough analysis by Tom Curtis. The top two extreme data points are indeed unusual. But
then again, one might (just) expect a few such extreme scores in a
sample of more than 1,000 people given the shape of the distribution.
So how does one deal with this situation?
The first, and most important step is the recognition that once the
data have been obtained, any identification of an observation as an
"outlier", and any decision to remove a subset of observations from
analysis, almost inevitably involve a subjective decision. Thus, a
valuable default stance is that all data should be retained for
analysis. (There may be some clear-cut exceptions but the data in the
above figure do not fall within that category).
There are two ways in which data analysts can deal with outliers. One is to remove them from consideration based on some criterion; there are many candidate criteria in the literature, which we do not review here because most retain an element of subjectivity. The other is to retain them but demonstrate that their presence or absence does not change the results. For our analysis we took the second route: rather than removing outliers by fiat, we ensured that the inclusion or exclusion of any potential outliers has no substantive effect on the results.
That is, we examined the extent to which the removal of outliers made
a difference to the principal result. In the case of our study, one
principal result of interest involved the negative correlation between
conspiracist ideation and acceptance of science. That is, our data
showed that greater endorsement of conspiracy theories is associated
with a greater tendency to deny the link between HIV and AIDS, lung
cancer and tobacco, or CO2 emissions and global warming.
How resilient is this result to the removal of possible outliers?
The red line in the above figure answers that question. If all
observations above that line (i.e., scores 25 or greater; there were 31
of those) are removed from the analysis, the link between the latent
constructs for conspiracist ideation and rejection of climate science
remains highly significant (specifically, the p-value is < .001),
which means that the association is highly unlikely (less than 1 in
1000) to have arisen by chance alone.
In other words, if we discard the top 3% of the data, that is, the part of the data which for conceptual reasons should arouse the greatest suspicion, our conclusions remain qualitatively unchanged.
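As an illustration of this kind of sensitivity check, here is a minimal Python sketch using simple pairwise correlation on hypothetical column names (conspiracy_score, cc_acceptance); it is not the latent-variable analysis reported in the paper:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical data frame with one row per respondent:
# 'conspiracy_score' is the sum over the 10 conspiracy items (range 10-40),
# 'cc_acceptance' is the mean over the climate-science items.
df = pd.read_csv("survey_responses.csv")

# Correlation using all respondents.
r_all, p_all = pearsonr(df["conspiracy_score"], df["cc_acceptance"])

# Correlation after excluding potential outliers (conspiracy scores of 25 or above).
kept = df[df["conspiracy_score"] < 25]
r_trim, p_trim = pearsonr(kept["conspiracy_score"], kept["cc_acceptance"])

print(f"all respondents:   r = {r_all:.3f}, p = {p_all:.2g}, n = {len(df)}")
print(f"outliers excluded: r = {r_trim:.3f}, p = {p_trim:.2g}, n = {len(kept)}")
```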
Why discard anything above 25? Why not 29 or 30 or 18?
Because a score of 25 represents a person who disagrees with half of
the theories and agrees with the other half (or some equivalent mix of
strongly-agreeing and strongly-disagreeing responses). Lowering the
cutoff further, thereby eliminating more observations, would eventually
eliminate anyone who endorsed any of the theories—and guess what, that
would defeat the whole point of the study. Conversely, there is no point
in raising the cutoff because we already know what happens when all
data are included.
We conclude with considerable confidence that when a highly conservative criterion (scores 25 or above) for outliers is used, the principal result remains qualitatively unchanged. Conspiracist ideation predicts rejection of science—not just climate science, but also, and even more strongly, rejection of the link between HIV and AIDS and of the link between tobacco and lung cancer. [13/9: rephrased to clarify; the 'more strongly' refers to the magnitude of the regression weights.]
How does the elimination of outliers relate to the notion of “scamming”, which has stimulated so much interest in our data?
The answer is both obvious and quite subtle: The obvious part is
that the two folks at the very top of the above distribution strongly
endorsed virtually all conspiracy theories. If they then also strongly
rejected climate science, that would arguably constitute a profile of
scamming—that is, those folks may have generated responses to create the
appearance that “deniers” are “conspiracy nuts” (note the quotation
marks: this discussion is almost impossible to write succinctly without
labels, even if they are caricatures).
Now, we have dealt with the obvious bit about the "scamming" problem
by throwing out not just those two people of greatest concern, but the
top 3% of the distribution—that is, anyone who might remotely look and
act like a “scammer” based on their responses to 10 conspiracy theories.
Remember—none of the significant correlations in our data disappear when those people are removed from consideration.
But now to the more subtle part: How would we know that
someone who endorses all conspiracy theories but none of the science is
actually a scammer? We have tacitly assumed that this somehow is
evidence of scamming. But on what basis? Is there more to this than
intuition?
This brings us to the fascinating issue of mental models of people's behavior, which we will address in a future post.
107 Comments
-
(-snip-)
(-snip-)
(-snip-)
(-snip-)
(-snip-)
You be the judge.
Civility and collegiality, respecting opinions of others even if you
disagree, presenting your research and encouraging challenge and review
... that used to be the hallmarks of science.
Moderator Response: Repetitive sloganeering snipped.
-
Tom Curtis, I can answer a couple of your questions/concerns.
First, the reason you came up with 12 conspiracy theories rather than 10 is that Lewandowsky omitted the two theories relating to aliens. If you omit them too, I believe the numbers will line up for you.
Second, the issue you raised about the drop in correlation holds for the authors' results as well, since their latent variables are close to what you computed. The difference is that before the values from each column get added together, they're scaled by some constant. That, plus the fact that the scaling can be changed, means the drop in their correlation will likely be a bit smaller, but it will certainly be present.
-
@Tom Curtis, re the 3%, if you still haven't picked up on it, the
article talks about 10 CY items and lists those not counted for the
purpose of this particular exercise (ie HIV, the two space alien items,
climate change). (Don't know if that makes a difference to how you
frame your question or not.)
-
A Scott @ 53.
Um, the argument you make at point 2 is self-defeating.
Hanich's explanation to Pielke is clearly about 'drawing a link' between attitudes to climate change and science generally. But what that link was, and hence the title of the paper, did not emerge until the survey had made apparent what those linkages were.
I suspect the reason you are arguing that the whole question of
identifying any linkage is itself inherently tendentious is that you also
automatically assume that there will be a strong positive link between
AGW 'skepticism' and rejection of other aspects of science (an
assumption that will likely be shared - even if only semi-consciously
acknowledged - by a majority of those on both sides who've been involved in the debate for some time, I suggest.)
This somehow renders the whole process of identifying any such link inherently unfair to 'skeptics'! I suggest it was also likely a strong
motivator in the persistent rejection of participation in the survey by
'skeptic' blogs.
It was, after all, entirely possible - but perhaps never likely! - that a survey of the attitudes of both sides
of the debate would not have revealed such linkages - between
'skepticism', 'Libertarian' ideology, conspiracist ideation, and more
general science rejection - at all. But because the whole concept of
asking the question is unfair, the result must be, well, cheating.
As has been pointed out extensively, whatever one might think of the
original results, that claimed positive linkage could scarcely have been
more directly confirmed than by the subsequent 'skeptic' reaction to
this paper.
-
An important point - not repetitive, but separate and distinct from the others above:
There is one place where we don't see the level of what I believe to be excessive rhetoric. That is in the name included at the top of the survey. There, the simple title "Attitudes Towards Science" is displayed. Just a plain statement describing the survey.
I believe almost all would agree that using phraseology such as that in the paper's name would have been inappropriate - it would have potentially influenced the survey.
If you wouldn't use it in the survey, should it be used in the paper, let alone as the title?
You be the judge.
(-Snip-)
Moderator Response: Sloganeering snipped. You have had your say; repetitive and myriad iterations of the same constitute sloganeering. Let it go.
-
Brandon Shollenberger and Sou, the relevant part of the paper reads:
"eparate exploratory factor analyses were conducted for
the free market,climate-change, and conspiracist ideation items. For
free-market items, a single factor comprising 5 items (all but
FMNotEnvQual) accounted for 56.5% of the variance; the remaining item
loaded on a second factor (17.7% variance) by itself and was therefore
eliminated. The 5 climate change items (including CauseCO2 ) loaded on a
common factor that explained 86% of the variance; all were retained.
For conspiracist ideation, two factors were identied that accounted for
42.0 and 9.6% of the variance, respectively, with the items involving
space aliens (CYArea51 and CYRoswell) loading on the second factor and
the remaining 10 on the first one (CYAIDS and CYClimChange were not
considered for the reasons stated in Table 2). Items loading on each
factor were summed to form two composite manifest variables. The two
composites thus estimate a conspiracist construct without any conceptual
relation to the scientic issues under investigation."
As I read that, the two factors were combined to form a single conspiracist latent construct. That means all twelve items contribute to that construct, albeit with different weights. Lewandowsky also calculates the pairwise correlation with just one conspiracist construct, not two, as would be required if the two factors continued to be treated separately.
I would certainly appreciate Lewandowsky's correction on this point if I am wrong.
-
@57
"Civility and collegiality, respecting opinions of others even if you
disagree, presenting your research and encouraging challenge and review
... that used to be the hallmarks of science."
That was the goal, if not always the reality. However, scientific debate
has become confused with public debate, and there is very little in the
level and tone of public debate about climate science that is civil or
collegiate - quite the opposite in fact. The response to this paper is
suitably representative of that. Times have changed. Einstein had very
public detractors and critics - expert and armchair - but he wasn't
subjected to nuisance FoI requests, torrents of emails, networks of
bloggers pouncing "aha!" on his every step and misstep, shock-jocks
whipping up the masses, round-robins of Tweets, YouTube concoctions or
Facebook campaigns.
We have seen the enemy and he is us.
-
A Scott, while I agree with you that the paper's title was uncalled
for, the data show a -0.848 correlation between assent to the science,
and acceptance of the claim that:
"The claim that the climate is changing due to emissions
from fossil fuels is a hoax perpetrated by corrupt scientists who wish
to spend more taxpayer money on
climate research.
The widespread acceptance of this absurd conspiracy theory within the "skeptic" community, and the active promotion of it, or of even worse conspiracy theories, by major players within that community are hardly consistent with "collegiality". Rebukes about the lack of "collegiality" in Lewandowsky's paper from people who do not actively discourage that conspiracy theory, and do not disassociate themselves from people who push it, leave a bad taste in the mouth.
-
No Bill. Hanich's statement says the intent was to draw linkages
between attitudes toward climate science ... and scepticism.
The paper states the intent of the authors: "... to investigate predictors of the rejection of climate science" including the theory that "rejection of science is conspiratorial thinking, or conspiracist ideation"
Neither I nor anyone I know has any issue with this basic premise. Facts are facts - if that is what the data truly show, it is hard to argue.
The problem is the data. In several aspects - both the method/source - and the analysis.
The authors make claims about skeptics and their rejection of science, along with claims associating skeptical views with conspiracy theories and the like, from a review of the data.
Again, I have no problem with that - if you can show a legitimate link it is hard to argue.
The problem is that the authors made these claims by using data
collected not from the skeptics they are making these conclusions about,
but from data collected only thru sites that are some of the strongest
supporters of AGW theory around.
If I want to collect data on a particular group I do not go to their opponents to obtain that data.
It is as simple as that.
-
Highly logical and reasonable from Tom - if my demands aren't met in the time frames I impose, (-snip-).
'Lukewarmers' showing us how to do reasonable.
Moderator Response: Copy of snipped accusation of dishonesty snipped.
-
Tom Curtis, I was explaining why this post discusses scores based
on 10 conspiracy theories, not 12. That the alien conspiracies get
combined with the other 10 for the analysis doesn't really have anything
to do with that. That said, you are right about what was done in the
analysis.
As for the correlation you pointed out in your last comment, that's not
all that meaningful. There are about a thousand responses that "assent
to the science," and we would expect all of those to reject that
conspiracy theory. Of course there will be a strong, negative
correlation!
-
Tom @67 just got snipped (and rightly so), so I guess @68 is for it too!
-
Sorry, A Scott, I'm not letting you out of your illogicality so easily.
Firstly - if there was nothing there to find, it would not have been found. Oh, that's right, it's all a trick.
If I want to collect data on a particular group I do not go to their opponents to obtain that data.
Sorry, how would we know if conspiratorial ideation was not a feature of all
participants in the AGW debate if both sides weren't contacted to be
surveyed? Further, I notice that you're here at this blog - do
participants in this debate only read the blogs of their own side?
And did you miss the bit where it was pointed out that the part of the
supposed conspiracy where 'skeptic' blogs had not been contacted turned
out to be false?
I suspect we all know why they'd be reluctant to participate.
Need one point out that prominent champions of the 'skeptic' community
include Lord Monckton, a man who claims the most extraordinary things -
in fact, the only thing more remarkable than these claims is his sublime
confidence when making them, and the widespread support he receives
from those who lay claim to the title of 'skeptic'!? A man who has
recently been playing along with Joe Arpaio's 'Birther' 'investigations'
in the US?
Spend some time at Jo Nova's popular 'skeptic' blog to encounter the
most remarkable tales about the doings of The Regulatory Class. Indeed,
the popular core of 'skepticism' is predicated on the bizarre notion
that NASA, the CSIRO, the BoM, NIWA, and the world's Academies of
Science are somehow involved in the deliberate manipulation of data to
serve political ends. This notion is, simply, ridiculous. The likelihood
of anyone embracing it not also embracing other ridiculous beliefs is vanishingly small...
-
Tom Curtis ...
Let's start with the question. It is confusingly worded, self-serving, and exhibits significant bias. It is a caricature of how they believe skeptics think.
Worse, it's one of the most important questions in the survey. It is the big tell in defining/labeling skeptical respondents.
It is purposely designed to elicit a forced response that does not allow
accurate reflection of the views of the respondent. It is unfairly
stacked, and if answered becomes a self fulfilling construct.
And here's why.
I am a skeptic - didn't used to be, but eventually my review of the science convinced me otherwise.
This is not a single question - as worded it has multiple parts and assertions:
(a)... the claim the climate is changing due to emissions of fossil fuels
(b)... is a hoax perpetrated by scientists
(c)... who are corrupt
(d)... who want to spend more taxpayer money on their climate research
There is, by the authors' intent and design, no neutral answer choice. You must agree or disagree - with all of the points/claims together, whether you agree with each of them or not.
Here is a direct example - how I answer the question.
(a) a statement; whether it is true or false is not really part of the answer here, but many struggled with it
(b) there is inaccuracy, maybe even some untruthfulness, but an intentional hoax? No. That is the caricature of what a warmist thinks a skeptic thinks. My answer - "False = 2"
(c) same thing, some inaccuracy, maybe even some untruthfulness, but
"corrupt"? No I do not believe as a group the warmist science side is
remotely corrupt. Bad intentioned at worst. My answer "False = 2"
(d) here I'm less charitable. Institutions run on money, mostly govt $ =
no $ no jobs. A good share of those $ are unavailable for research that
does not support "warming" topics. Yes, all scientists will spend every
taxpayer $ they can find. My answer "Absolutely True = 4"
My composite averaged answer = 2.67 ... neutral. Yet I am forced to
select from available choices. Without the benefit of a "neutral" choice
I must select "True."
The paper then reports I think the entire claim:
"the climate is changing due to emissions from fossil fuels is a hoax
perpetrated by corrupt scientists who wish to spend more taxpayer money
on climate research"
... is True
When clearly that does not accurately reflect my views on the issue. I do not think there is a hoax by scientists, I do not think scientists are corrupt, I do think scientists will spend every dollar they can get from the taxpayers.
The data the authors provide for this question show a definitive signal -
all but a handful of respondents believe with apparent certainty the
claim is Absolutely False.
Here are the responses:
4=Absolutely True 63
3=True 69
2=False 75
1=Absolutely False 926
To me, that tells us a couple things:
(1) There were at best a small fraction of the total respondents to
their survey that were skeptics - that the vast majority were strongly
pro-AGW ... 132 likely skeptics to 1001 likely pro-AGW.
(2) Most respondents had little problem deciding on an answer - almost
all answered unequivocally "Absolutely False." Which is another strong
tell to the overwhelming pro-AGW slant of the responses.
It also clearly shows the small group of skeptics likely had exactly the
same difficulty I described above in answering the question - and that
they do not strongly believe it to be true.
The authors' work and data here show that pro-AGW respondents know exactly how they feel about the issue. They also show that skeptics are highly divided on this issue, not the strong believers we keep hearing they are.
-
Brandon Shollenberger, thank you. You may well be correct.
Your supposition that the approximately 1,000 respondents "affirming the science" are sufficient to generate the negative correlation is, however, incorrect. 44.6% of respondents who accepted AGW (mean score
on four CC questions plus CauseCO2 greater than 3.33) accepted (scored 3
or 4) at least one conspiracy theory, and the average number accepted
by those who accepted at least one conspiracy theory was 2.06. This
compares with 49.8% affirming at least one conspiracy theory among the undecided (CC plus CauseCO2 mean score between 1.67 and 3.33), with a mean of 2.59 conspiracy theories accepted if at least one was; and 61.5%
of AGW rejectors (mean CC plus CauseCO2 score less than 1.67) accepting
at least one conspiracy theory, with an average of 2.64 conspiracy
theories accepted if any were. This data excludes the two most suspect responses.
-
#72 A Scott
Excellent analysis of the problems of giving a true answer. Respondents
who refuse to compromise by giving an answer which approximates to their
true belief are automatically eliminated from the survey. This is a
mechanism that could have been designed to eliminate “normal” sceptics,
leaving only those with dogmatic contrarian beliefs.
-
I'll see your "Sorry" Bill - and raise you ;-)
Sorry, how would we know if conspiratorial ideation was not a
feature of all participants in the AGW debate if both sides weren't
contacted to be surveyed?
Further, I notice that you're here at this blog - do participants in this debate only read the blogs of their own side?
And did you miss the bit where it was pointed out that the part of the
supposed conspiracy where 'skeptic' blogs had not been contacted turned
out to be false?
I suspect we all know why they'd be reluctant to participate.
Your comment has some merit in theory. Sorry :-) though, it is not
reflective of what was achieved by the authors here, in fact.
It is entirely reasonable to expect, that even if a concerted effort had
been made to include a comparable number of skeptic sites in the
survey, they might have been wary of the offer, considering the authors
history.
I suspect we can agree that perhaps that is why an associate made the 5
contacts, by all appearances without mention of the author.
It should have been apparent to the authors that a diligent effort would
be needed to secure the support of the skeptic blogs, but no such
outreach appears to have been attempted.
Yet despite the known lack of skeptic participation they went ahead.
They collected data intended to be reviewed to find connections between
skepticism and conspiratorial ideation - to explore motivations of
skeptics toward rejection of science - yet knew all the data was being
obtained thru largely anti-skeptic sites.
You can see by the answers to the hoax question alone how many (or in
reality few) skeptics responded. And that was a single, albeit fairly
seminal, question. Further analysis reduces the skeptic count much
further.
Making broad-based conclusions about skeptics from a small subset of the
total responses, when we know with certainty (from posts on pro-AGW
sites - including the sites the survey was offered thru) that there were
fraudulent responses made, is not good science in my opinion.
As to the willingness of skeptic sites, and their readers, to
participate ... I can answer that question with absolute certainty. They
energetically and quickly did exactly that. A single initial request,
by a largely unknown "guest" at a single skeptic leaning site, saw
people participating in droves, with high quality responses from around
the globe.
Sorry again, with all due respect - as you cannot know what I do, the
claim that skeptics would/will not participate is a red herring - and
demonstrably false.
-
A Scott @72:
First, you are correct that the question is a compound question. That means it is only true if all three parts are true, and only probably true if the probability of each (assuming independence) is sufficiently high that their product is greater than 0.5. That means that simple epistemic considerations bias in favour of rejecting the statement.
If you think even one of the subclauses is false, or probably false, or fifty/fifty, then the only consistent response is to indicate that the statement is at least probably false. Ergo, deciding that it is at least probably true represents a substantive epistemic commitment, and is certainly not the consequence of "skeptics" being manipulated into answering that way by the structure of the question.
Second, excluding the two most suspect responses, 70 "skeptics"
considered the statement "probably true", while 62 considered it
"Absolutely true". A near fifty/fifty balance may indicate "skeptics"
had difficulty deciding whether it was merely "probably true", or
"absolutely true". Indeed,43% of respondents who scored less than 3.33
as a mean response to climate change questions scored 3 or 4 on that
question.
Third, it was not the most important question in the survey. It was not
used at all in analysis, and therefore was tied for the least important
question on the survey.
-
And a clarification - by "the authors history" I mean their history
of pro-AGW support and activism - advocacy for their cause/beliefs ...
that due to that strong advocay skeptic site might be wary and need some
extra effort to be convinced to support ...it was not intended to
reflect anything regarding professional credentials.
-
In addition to my 76, assume a "skeptic" assigned a probability of
.33 to the claim of a hoax, probablity of 0.33 to the claim of
corruption, and probability of 0.9 to the claim that scientists only do
what they do to secure money. Then the conjoint probability of the
statement is 0.98. Using my scale that Absolutely true means probablity
of .9 or greater, probably true means a probability of about 0.66, and
so on; that means A Scott should unhesitatingly have answered
"absolutely false". (Again, I assume independence, but as parsed by
SCott, the clauses are independent.)
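A minimal sketch of that arithmetic, assuming independence and the illustrative probabilities above:

```python
# Subjective probabilities assigned to each sub-claim (illustrative values only).
p_hoax = 0.33       # "is a hoax perpetrated by scientists"
p_corrupt = 0.33    # "who are corrupt"
p_money = 0.90      # "who want to spend more taxpayer money"

# Under independence, the compound claim is only as probable as the product
# of its parts - it can never be more probable than its weakest part.
p_conjunction = p_hoax * p_corrupt * p_money
print(f"probability of the compound statement: {p_conjunction:.3f}")  # ~0.098
```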
I find it utterly bizarre the way Scott ignores the meaning, and hence
logic of "and" so as to talk himself into affirming that climate
scientists are all corrupt and fraudulent without actually saying that
all climate scientists are corrupt and fraudulent.
-
Bill remarked on the same thing that struck me in A. Scott's remark:
"The problem is that the authors made these claims by using data
collected not from the skeptics they are making these conclusions about,
but from data collected only thru sites that are some of the strongest
supporters of AGW theory around."
As Bill points out, skeptics hardly maintain a hermetic diaspora from
the world of mainstream beliefs. From what I've seen, skeptics are
enthusiastic participant at many "pro-science" websites. As well it's
arguably the case that disproportionately -robust- skeptics are to be
found participating in discussions outside of "friendly" venues.
What will be really fascinating are results from surveys done with
respondents sourced from locations that do not have climate science as
their central concern at all. There's enough good eating on this topic
that we can probably count on seeing followup research. That's probably
the best way a few of the questions being asked here can be settled and
of course is in keeping with the normal means of scientific progress.
Somebody gets the ball rolling, others kick it, everybody gets to see
where it goes.
-
Tom ...
Whether it was included in the analysis or not I think you have to agree
it is one of the - if not the - best separators/indicators of skeptic
and non-skeptic respondents
And while you are correct, that is how people technically should have responded to the compound questions, it is not how respondents felt about, or tried to deal with their answers.
The poor compound construct, and unavailability of a neutral choice can
be, in my opinion, shown to have forced a vote they felt was not
accurate, and were not comfortable with.
I showed the direct problem thru my deconstructed answer. Two of my responses were false and one was true. By your standard I should have answered False. However, my averaged answer was 2.67 - almost spot on "neutral" - but because that score is closer to True than False, it is forced to True.
That is the real world for many with those questions.
-
Excuse me Bill?
I find it utterly bizarre the way Scott ignores the meaning,
and hence logic of "and" so as to talk himself into affirming that
climate scientists are all corrupt and fraudulent without actually
saying that all climate scientists are corrupt and fraudulent.
Where the heck did you get that from - that I say or even imply all climate scientists are corrupt?
How about using what I really said - my own words:
(b) there is inaccuracy, maybe even some untruthfulness, but
an intentional hoax? No. That is the caricature of what a warmist
thinks a skeptic thinks. My answer - "False = 2"
(c) same thing, some inaccuracy, maybe even some untruthfulness, but
"corrupt"? No I do not believe as a group the warmist science side is
remotely corrupt. Bad intentioned at worst. My answer "False = 2"
I quite clearly stated I do not think there is a hoax on the part of climate scientists and do not think those climate scientists are corrupt.
-
Sorry Bill, I mean "Tom" ...
-
A Scott:
1) How do you know how respondents tried to answer? Did you email them? Don't try to cover your conjecture with respectability by making assertions about things you are plainly in no position to know.
2) I believe A Scott's assertion that "skeptics" are so lacking in
logic that they believe they should treat conjunctions as disjunctions
must surely qualify as an outrageous ad hominem. I request that moderators let it stand in any event as a true measure of the merits of his position.
3) Given that he seems to be having difficulty following the logic of conjunction himself, let me explain it to him simply. The probability of a conjunction can never be greater than the probability of its least probable part.
Ever. It is mathematically impossible. As you assert that some part
of the conjunct is probably false, that means the probability by your
estimate of that part is less than 0.5; from which it follows on pain of
contradiction that the probability of the conjunction is less than 0.5.
-
Doug:
As Bill points out, skeptics hardly maintain a hermetic
diaspora from the world of mainstream beliefs. From what I've seen,
skeptics are enthusiastic participant at many "pro-science" websites. As
well it's arguably the case that disproportionately -robust- skeptics
are to be found participating in discussions outside of "friendly"
venues.
Do you really believe there are any significant numbers of skeptics at the 8 pro-AGW sites that participated?
We all pretty much know that a small fraction of participants at pro-AGW
sites are skeptics - a fact well supported by the data responses shown
above.
Skeptic's 132 vs pro-AGW 1001
-
"Skeptic's 132 vs pro-AGW 1001"
Interesting; skeptics are pretty much in the ballpark of what Maibach
has found in the fringes of his surveys, those two segments called
"doubtful" and "dismissive." Superficially suggests that sampling was
not so awful but it's probably a coincidental resemblance.
-
Tom Curtis, your response makes no sense. You claim I am wrong to
say ~1,000 responses would be enough to generate the negative
correlation you saw, yet you then go onto discuss things that have no
bearing on the correlation you saw. How many conspiracies different
people may have accepted has no bearing on whether we'd expect to see
the strong, negative correlation you highlighted.
Almost 90% of the respondents "assent to the science." If they accept
the science behind global warming, they shouldn't believe global warming
is a hoax. This means you'd expect a strong, negative correlation
between their belief in global warming and the idea of global warming
being a hoax. And since they make up nearly 90% of the respondents,
that correlation would necessarily be found across the population as a
whole.
If you really think my point is wrong, do a simple test. Generate a series of 1,000 1s and another series of 1,000 4s. Then append 150 random values ranging from 1-4 to both series. When you check the
correlation of these series, you'll see a strong, negative correlation.
That demonstrates exactly what I described.
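A minimal Python sketch of that test (an illustration of the described procedure, not the spreadsheet actually used; the 150 extra values are drawn uniformly from 1-4):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 respondents who "assent to the science": conspiracy item = 1, science item = 4.
conspiracy = np.full(1000, 1.0)
science = np.full(1000, 4.0)

# Append 150 respondents answering both items at random (values 1-4).
conspiracy = np.concatenate([conspiracy, rng.integers(1, 5, 150)])
science = np.concatenate([science, rng.integers(1, 5, 150)])

# The bulk of identical (1, 4) pairs forces a strong negative correlation overall.
r = np.corrcoef(conspiracy, science)[0, 1]
print(f"correlation: {r:.2f}")  # roughly -0.6
```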
-
Also, regarding the quantity of skeptics to be found at various
pro-science climate websites there seems plenty of data available on
that. See the amusingly titled "it's waste heat" topics at Skeptical
Science, for instance. There we see ~500 back-and-forth interactions
that would not have happened without enthusiastic participation from
skeptics.
Now, if we want to argue that there are some -qualitative- differences
in skeptics found at pro-science websites, where would we begin? We
might say they're more than averagely pugnacious, but what else?
Also, why would involving more skeptics in the survey affect the
qualitative properties of those respondents? Or is the argument about
the relatively few skeptic respondents to do with noise?
-
Tom, you can talk "technical" explanations and assertions all day
long, but just as Scotty told Kirk 'ya caaannn't change the laws of
physics Captain' neither can you change human nature and response. And
humans don't always understand they should follow rigid rules of math.
As far as my being in a position to know some of these things, not
wanting to disrespect the hosts here, and with all due respect (which
hopefully you'll see fit to return) I'll just encourage you to check
around a little.
Last, I'm no expert on "skeptics" logic about "conjunctions" and
"disjunctions" but I can say I have a decent idea of what they think,
the struggles they have, both with the question constructs, and coming
up with an answer to them as written.
There are many comments out there if you look, but here's just a couple:
"Many of these questions are poorly phrased. It's very
difficult to assign "absolutely true" or "absolutely false" to
conjectures such as the ones about the effects of fossil fuels on
climate change."
It seems as though any honest person would actually answer 3 (or I don't know) for many of these questions.
[The Co2/hoax question] for example, should be rephrased to talk about
whether climate change dangers are being exaggerated for funding
purposes, rather than an absolute statement about a hoax.
There were no good answers for some questions. "I don't know" or "No Opinion" should have been available.
The term "I struggled with" seems to characterize a lot of responses.
-
I don't think there's much point in focusing on a person evaluating
a statement with a conjunction in a way that isn't correct. The
conjunction fallacy is a common problem, sometimes being found in over
80% of responses. Even if you think people shouldn't do it, you cannot
deny it is a natural phenomenon most people are susceptible to.
As such, anyone making a survey should be aware of it. When writing
questions, it should be kept in mind. If it isn't carefully controlled
for, it introduces a very real risk of biasing results.
If you want to know how people feel about a subject, you have to accept
the fact mathematical truths don't always determine people's
interpretations of things.
-
...you have to accept the fact mathematical truths don't always determine people's interpretations of things.
As evidenced by this survey and elsewhere, it's the same with physics, chemistry and biology.
-
A. Scott @ 75
You can see by the answers to the hoax question alone how
many (or in reality few) skeptics responded. And that was a single,
albeit fairly seminal, question. Further analysis reduces the skeptic
count much further.
Making broad-based conclusions about skeptics from a small subset of the
total responses, when we know with certainty (from posts on pro-AGW
sites - including the sites the survey was offered thru) that there were
fraudulent responses made, is not good science in my opinion.
Conjecture.
Paper, p13:
Another objection might raise the possibility that our respondents willfully accentuated their replies in order to subvert our presumed intentions. As in most behavioral research, this possibility cannot be ruled out. However, unless a substantial subset of the more than 1,000 respondents conspired to coordinate their responses, any individual accentuation or provocation would only have injected more noise into our data. This seems unlikely because subsets of our items have been used in previous laboratory research, and for those subsets, our data did not differ in a meaningful way from published precedent. For example, the online supplemental material shows that responses to the Satisfaction With Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985) replicated previous research involving the population at large, and the model in Figure 1 exactly replicated the factor structure reported by Lewandowsky et al. (2012) using a sample of pedestrians in a large city.
Plus, as I've said, the results fit the experience of most participants in the debate and, particularly, are only confirmed again
by the 'skeptic' reaction, with swift accusations of malfeasance and
the whole being a contrived plot against 'skeptics' based on the
flimsiest evidence. (The 'Cook-Lewandowsky Social-Internet Link' [Watts
2012]? Give me a break! And, oh, the irony...)
-
Brandon Shollenberger @89, however common the conjunction fallacy may be, it is not a reason for "skeptics" to find it difficult to interpret the CYClimateChange question. Furthermore, invoking it as a
reason for the high affirmation rate for that fallacy by "skeptics" is
only plausible if you also posit a high rate of logical fallacies among
"skeptics". I have no objection to your adopting that premise, but I
doubt many "skeptics" would consider it flattering.
With regard to the large number of accepters of the science who do not
accept the Climate Change conspiracy, performing the experiment you
described I find consistent negative correlations in the 0.1 to 0.3
range, not enough to explain the -0.85 correlation that actually
obtains.
Please note that I rolled a random number for each of the three components of the compound question. For you to think your experiment
would give a high negative correlation, you must think it reasonable
that people have a fifty-fifty chance of accepting arbitrary conspiracy
theories. Even my method probably over-estimates the acceptance to be
expected from pure chance.
The strong negative correlation is, therefore, primarily the consequence of the fact that 85.6% of respondents scoring a mean less than 1.67 on the CC plus CauseCO2 questions affirmed the conspiracy (excluding the two suspect responses). Alternatively, you can attribute it to the 73% of respondents scoring a mean CC response less than 2.5 who affirmed the conspiracy theory.
-
I would still like to see the statistics showing where the respondents come from.
How many from Deltoid?
How many from Tamino?
How many from the university's own labs?
Etc.
I find it highly unlikely that the number of "skeptics" as defined in this poll exceeds 5% on these blogs.
An analysis of the posts and opinions professed therein confirms this estimation.
The biggest mystery for me is how a large number of skeptics could have
appeared on blogs where skeptics never go because they disapprove
strongly of their editorial policy and of their moderation.
-
Kevin C @ 50
Thanks so much for your personal take on this.
I don't view your path to doing science as a counterexample to my views.
Did you? That suggests I have to express my views better—because I didn't mean to exclude the path you took.
I view your path as an example.
Like anyone else, you couldn't do science until you knew how; you didn't
know how to do science until you learned it; learning took time and
work.
But that's all I mean by "the scientific method": I mean "how to do science." No more and no less. (In this context.)
You can know it explicitly or implicitly—like grammar, as you said.
You learned it on the job, with feedback and advice from supervisors who
knew what they were doing. Eventually you knew what you were doing.
This was implicit knowledge; you probably didn't have a name for
everything you did. Even the name "the scientific method" may never have
come up. But in learning how to do science, that's what you were
learning.
I learned the scientific method from Popper et al.; you knew it even before you'd opened Popper et al.!
(I'm very curious to know if you felt these writers actually improved or changed how you did science.)
But we both had to learn it somehow. Neither of us emerged into the
world already knowing it. I studied it at university, then I used it.
You learned it by using it, then you read about it.
Only 2% of people have done either of those things.
Now... onto your second paragraph :-).
Hehe... no I'll let you go for now. I do want to talk about those things though.
-
Is it possible that you have identified three groups in your
survey: AGW Supporters, AGW Skeptics, and New World Order conspiracy
theorists?
I ask this because if you remove all people who believe in the NWO
conspiracy in particular and then divide the remaining respondents into
skeptics and supporters, you will find that the percentage of conspiracy
theory believers is nearly identical between the skeptics and
supporters. I have done this.
Potentially, because the NWO conspiracy is so inextricably linked to the
idea of an AGW conspiracy as a means of boosting the NWO, including the
NWO conspiracy in your questions was not appropriate. A supporter of
the NWO conspiracy will necessarily be an unreasonably harsh skeptic of
AGW theory.
Are you merely identifying the fact that NWO conspiracy theorists are
also believers in an AGW conspiracy - a fact that is obvious from the
claims of the NWO conspiracy theory itself? Is it reasonable for your
paper to claim that "skeptics" are conspiracy theorists when, in
reality, only the radical NWO supporters show any significant difference
in the data?
-
A question for Tom Curtis: I haven't read the paper in question and
am no expert in this field. I have read most of the various discussions
about it. I was interested in your analysis, but it seemed to me you
were studying a *different* question from that of the study.
That is, in Lewandowsky's paper as I understand it the analysis looked
at the relationship of *two* factors - free-market ideology first, and
conspiracy "ideation" second - to climate change denial. However you
analyzed only the second factor. I don't know how it was done here, but
depending on the details of the sample you can get very different
relationships when looking at just one explanatory variable vs looking
at two.
For example, if a "scam" record was 100% pro-free-market as well as
being pro-conspiracy and anti-climate change, isn't the
anti-climate-change answer explained more by the pro-free-market answers
than by the pro-conspiracy answers, if you were studying both factors
(and there was a much higher correlation on the first)? So the more
interesting answers to quantify the second relationship would be those
with pro-conspiracy but not-pro-free-market responses. That would
explain why Lewandowsky found the "scam" records had no impact on the
quantitative results.
-
According to the paper:
We additionally show that endorsement of a cluster of conspiracy
theories (e.g., that the CIA killed Martin-Luther King or that NASA
faked the moon landing) predicts rejection of climate science as well as
the rejection of other scientific findings, above and beyond
endorsement of laissez-faire free markets.
According to Steve McIntyre:
The proportional adherence to the Moon conspiracy and the MLK conspiracy
in the revised dataset is higher among "warmists" than among
"skeptics". Endorsement of these conspiracies, if anything, "predicts"
that the respondent is a warmist.
Has Mr. McIntyre miscalculated the impact of the revised dataset? Or is this paragraph of the paper going to be withdrawn?
-
Tom Curtis, I wish you would have addressed the fact you responded
to me in a way that made no sense (or explained how your response had
made sense). It's difficult to have a conversation when people don't
address their mistakes. It's especially difficult if they keep making
similar mistakes.
In this case, you suggest the conjunction fallacy is only an explanation
if I "posit a high rate of logical fallacies" amongst skeptics. You
say this despite the fact I pointed out the conjunction fallacy is
present in high rates amongst all people. Not only do you seem
to have ignored what I said, but you did so in a way that allowed you to
make a critical remark of a group you "dislike."
Following that, you say you performed the experiment I described and got
"negative correlations in the 0.1 to 0.3 range, not enough to explain
the -0.85 correlation that actually obtains." I'm not sure what you
actually did. Your description seems to indicate you set out to do an
experiment other than the one I described as you say you "rolled a
random number for each of the three components of the compound number."
There is absolutely no way my description should lead someone to think a
step like that is part of it.
Anyone could generate the two series I described in a matter of minutes,
even with Excel. If they did, they'd find correlation values of ~-0.6.
You say for me "to think your experiment would give a high negative
correlation, you must think it reasonable that people have a fifty-fifty
chance of accepting arbitrary conspiracy theories," but not only is
that untrue, it is nonsensical. What I described is the emergent
behavior of the data I discussed.
I provided a simple and easily verifiable experiment. You claimed to
perform it while actually performing some other experiment. You then
used that to claim I was wrong. It is difficult to know how to respond.
-
To demonstrate my description is in fact accurate, and to allow
anyone to see how it is done, I repeated the experiment in Excel then
copied it to a Google Spreadsheet. It took me less than five minutes.
It should be available here:
https://docs.google.com/spreadsheet/ccc?key=0An7wzninew2edEpGODluR3hYWmUzZThWRzFvUDRwcFE
-
Brandon, you're 100% right. (-snip-).
Moderator Response: Off-topic snipped.
-
Brandon Shollenberger, it is not the case that, for an arbitrary conspiracy theory, on average 50% of the population will assent to it. Therefore, it is not a valid test of your claim about the high negative correlation between the mean CC plus CauseCO2 score and the CYClimateChange score to use a random factor which assigns equal probability to each response. In fact, excluding CYClimateChange, the frequency of people absolutely disagreeing with a random conspiracy theory in Lewandowsky's results is 0.545; of their considering it probably untrue, the frequency is 0.373; of their considering it probably true, it is 0.075; and of their considering it absolutely true, the frequency is 0.008 (figures are rounded so will not sum to 1).
Unless people who endorse AGW are far less likely to endorse a randomly chosen conspiracy theory than the general population, these figures are representative. Indeed, if anything they over-represent the probability of endorsement in that they include the two most suspect responses. Consequently, unless you agree that the people who reject AGW are far more likely to endorse a randomly chosen conspiracy theory than those who accept AGW, your model is completely distorted.
Alternatively put, if you think your model is a reasonable model of
responses from the general community, you must think it also a
reasonable model for responses to other conspiracy theories. That being
the case, you expect a negative correlation between acceptance of
consensus climate change science and acceptance of conspiracy theories
of about -0.6. In other words, you would have been surprised by Lewandowsky's result because it so massively understated the correlation.
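One way to see how the assumed response distribution enters this kind of test: the sketch below reruns the earlier recipe twice, once with uniform answers for the 150 extra respondents and once with answers drawn from the empirical frequencies quoted above. It is illustrative only, not either commenter's actual calculation.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_correlation(draw_conspiracy, n_fixed=1000, n_extra=150):
    # n_fixed respondents fixed at (conspiracy = 1, science = 4), plus n_extra
    # respondents whose conspiracy answer comes from draw_conspiracy and whose
    # science answer is uniform on 1-4 (the structure of the earlier test).
    conspiracy = np.concatenate([np.full(n_fixed, 1.0), draw_conspiracy(n_extra)])
    science = np.concatenate([np.full(n_fixed, 4.0), rng.integers(1, 5, n_extra)])
    return np.corrcoef(conspiracy, science)[0, 1]

# Assumption 1: the extra respondents answer the conspiracy item uniformly (1-4).
uniform = lambda n: rng.integers(1, 5, n)

# Assumption 2: their answers follow the empirical frequencies quoted above
# (1 = absolutely false ... 4 = absolutely true), renormalised to sum to 1.
freqs = np.array([0.545, 0.373, 0.075, 0.008])
empirical = lambda n: rng.choice([1, 2, 3, 4], size=n, p=freqs / freqs.sum())

print("uniform assumption:  ", round(simulated_correlation(uniform), 2))
print("empirical assumption:", round(simulated_correlation(empirical), 2))
```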
Frankly, I don't think you believe anything like that. Therefore I expect you to respond stating that your model is fatally flawed, and your analysis incorrect. At least, I should be able to expect that. Long experience has taught me that "being a climate change 'skeptic' means never having to admit you're wrong".
Out of interest, even my model, which assumes a 0.5^3, or 0.125, chance of endorsing a randomly chosen conspiracy theory, overstates the probability of acceptance by 50%. Also out of interest, you overstate the negative correlation achieved by your model. With your model you obtain correlations between -0.4 and -0.6. The mean of my five samples was just below 0.5. I have difficulty believing that an intelligent man would attempt to foist off on us as a serious modelling effort a single run of a stochastic model as being in any way meaningful.
Finally, Steve McIntyre would do better to actually determine the
correlation after excluding the "warmists" (by his definition), or all
those who scored a mean CC plus CauseCO2 score greater than 2.5 instead
of making vacuous "rah! rah!" statements. I would also appreciate it
if he, four days after my initial request, and during which time he has
found time to post five distinct posts, finally corrected his claims
about my opinion in accordance with my wishes. I do not see why deleting ten words that turn a falsehood into a true sentence is so hard for him.
-
APSmith @96,
Lewandowsky's model includes five latent constructs - Free Market;
Climate Science; Conspiracist Ideation; Other Sciences; and Problems
Solved. Problems solved is essentially a measure of the willingness to
consider past ecological problems (acid rain, ozone layer) solved, and
is negatively correlated with acceptance of climate science (-0.586).
Other Sciences is a measure of willingness to attribute HIV as the cause
of AIDS, and smoking as a cause of lung cancer, and of the ability to
accurately specify the consensus in support of various scientific
propositions. This was positively correlated with acceptance of climate science (0.563). Like the negative correlation between acceptance of free market ideology and acceptance of climate science (-0.866), these correlations are so strong that any issues about scammed responses, question structure and weighting, or other methodological issues have no chance of overturning the result. That is, any research in this area properly conducted will find correlations between these items with the same sign and similar (+/-0.2) magnitude. In contrast, the negative correlation between acceptance of climate science and acceptance of conspiracy theories is sufficiently small that repeat research could easily fail to find any significant correlation. It is more likely, IMO, that it would find one, given my experience with AGW "skeptics", but that experience, like the sample in the survey, is based on a biased sample.
Given that it is the relationship between accepting climate change, and
rejecting conspiracy theories which is the most suspect, it is the
effect of scammed responses on that relationship which is interesting.
With regard to your final comment, that is an interesting approach, but I
will have to take the time to set up my spreadsheet to see the effects.
I do not, however, think it is the approach taken by Lewandowsky
either in his paper or in this blog post. If I am wrong, perhaps he
could clarify.
-
Tom Curtis (@102) - thanks for the reply. However, I wasn't
intending to ask two separate questions. The question was whether you
were sure you had replicated the paper's analysis. I don't believe what
researchers in this sort of field do is simple pair-wise correlation
analysis when more than one explanatory factor is involved. The presence
of one correlation has an impact on the others. If you read about
factor analysis - http://en.wikipedia.org/wiki/Factor_analysis - it
seems significantly more complex than what you have described doing.
It's essentially a regression problem with multiple explanatory
variables, and the relative weights in that regression will generally
not be what simple pair-wise correlation gives. Can you tell from your
reading of the paper whether what you did actually matches their work?
-
apsmith @103, I have not replicated their work and do not claim to
have done so. I have simply taken the pairwise correlation using a
spreadsheet. That differs from their procedure not only in not being a
factor analysis, but also in not adopting the weightings of the
different elements used.
Despite the fact that I have not replicated their results, I believe the
issues I have raised are genuine issues. Put simply, if, over a range
of data, the simple pairwise correlation does not vary with and have
similar magnitudes to that obtained by factor analysis, reporting the
correlation obtained in the press as that (ie, a correlation) is deeply
misleading. Indeed, I would go further and say that if the correlations obtained by factor analysis do not correlate with those obtained by
pairwise comparison, then your "correlation" obtained by factor analysis
is not a correlation between the two factors at all, but some other,
independent property.
Therefore, while I do not expect changes in correlation with changes in the data set to have the same magnitude, nor even always the same sign, as the equivalent change obtained by factor analysis, if a change in the data set causes a change in the pairwise correlation, the presumption
is that it will cause a similar change in factor analysis. Given that,
if a change in the data set can reduce the correlation by 20% or more,
it needs to be at least checked to ensure a similar change does not
occur in the factor analysis.
Disappointingly, Lewandowsky has reported no such check.
It cannot have escaped your notice that in the article above, Lewandowsky reports on the effect of deleting outliers on the p-value rather than on the correlation. If the p-value remains constant but the correlation is halved, that is a significant effect which should be mentioned in the paper. It appears that Lewandowsky, by using a 10
question latent construct rather than the 12 question construct used in
his paper has in fact made direct comparisons of correlations impossible
in his new analysis. It is probable therefore that the pairwise
correlation between the CClatent construct and the CY latent construct
in his new analysis differs substantially from that reported in his
paper, but reporting that correlation will tell us little about whether
that is due to the exclusion of CYRoswell and CYArea51, or due to the
exclusion of the 3% most extreme responses.
As such, I consider the post above an evasion of my questions rather than an answer to them.
-
Tom (@104) - I don't think this post was an evasion - you
specifically indicated that the change in correlation by removing "scam"
entries was significant enough to make it no longer statistically
significant. Lewandowsky addressed that concern directly in the above
post. Yes, the post does not describe the quantitative change associated
with cutting off outliers - surely there is some. But it describes the
result as "qualitatively" the same. To me that indicates there's no
substantive change (and of course it is still significant as the post
clearly states). I'm not sure why you think a 20% change in the weight is so important. If it were 50%, maybe, but to my mind that would be because such a significant difference is a "qualitative" change in itself. Lewandowsky could of course clarify by providing some more
numbers, but this starts to look like a never-ending process here...
As to the difference between "correlation" and regression weights from a
factor analysis - again, I claim no expertise. But look at the analysis
done by Tamino on temperature trends here for example:
http://tamino.wordpress.com/2011/12/06/the-real-global-warming-signal/
which was later published as Foster and Rahmstorf 2011:
http://tamino.wordpress.com/2011/12/15/data-and-code-for-foster-rahmstorf-2011/
If you just look at temperature trends there's a lot of uncertainty. If
you look only at the correlation between temperature and solar
irradiance you see no signal at all. But do a multiple-regression
analysis and all the explanatory factors appear with sensible weights.
I'm assuming what Lewandowsky and co-authors did here with the cognitive
explanatory factors was something similar - since free-market views
explain more of climate denial, if you don't account for that factor
then you get the wrong numbers or even no discernible signal looking
only at the other factors.
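A toy illustration of that point with synthetic data: two correlated predictors, where the simple pairwise correlation overstates the weaker factor's independent contribution, while a multiple regression recovers weights close to the ones used to generate the data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Two correlated explanatory factors (think "free-market ideology" and a second, weaker factor).
factor1 = rng.normal(size=n)
factor2 = 0.7 * factor1 + rng.normal(scale=0.7, size=n)

# Outcome driven mostly by factor1, slightly by factor2, plus noise.
outcome = -0.8 * factor1 - 0.2 * factor2 + rng.normal(scale=0.5, size=n)

# Simple pairwise correlations versus multiple-regression weights.
r1 = np.corrcoef(factor1, outcome)[0, 1]
r2 = np.corrcoef(factor2, outcome)[0, 1]
X = np.column_stack([factor1, factor2, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"pairwise r:         factor1 {r1:.2f}, factor2 {r2:.2f}")
print(f"regression weights: factor1 {beta[0]:.2f}, factor2 {beta[1]:.2f}")
```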
-
Tom Curtis, your latest response to me is silly. First off, you
failed to address any of the issues I raised, including where you
falsely claimed to perform the experiment I described. If you will not
admit to making things up when it's pointed out, it's hard to have any
sort of discussion with you.
Second, you discuss at length what belief I'd have to hold for my
"model" to be true. Everything you say is nonsensical. I showed what
would happen if you added pure noise to a certain data set. That
doesn't suggest anything about my beliefs about anything. It suggests I
think a viable test of the influence of data is to add pure noise!
Could I have added noise with a different structure? Of course. Might
such a test have been more "lifelike"? Perhaps. Would it have mattered
if I did? No! The simple reality is when 90% of your data has a
strong, negative correlation, the full dataset will show a strong,
negative correlation.
By the way, if you're going to make things up, it'd be wise to do so in a less blatant manner. You say:
Also out of interest, you overstate the negative correlation achieved by your model. With your model you obtain correlations between -0.4 and -0.6. The mean of my five samples was just below 0.5. I have difficulty believing that an intelligent man would attempt to foist off on us as a serious modelling effort a single run of a stochastic model as being in any way meaningful.
I didn't overstate anything. To prove this, I wrote a quick script to rerun my "model" 10,000 times. Results:
Mean: -0.6102619
SD: 0.03970063
Min: -0.7386666
Max: -0.4493552
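A sketch of that kind of repeated rerun (wrapping the single-run recipe described earlier in a loop and summarising the resulting correlations; this is not the actual script used):

```python
import numpy as np

rng = np.random.default_rng(3)

def one_run():
    # 1,000 fixed (1, 4) pairs plus 150 pairs of uniform 1-4 answers.
    conspiracy = np.concatenate([np.full(1000, 1.0), rng.integers(1, 5, 150)])
    science = np.concatenate([np.full(1000, 4.0), rng.integers(1, 5, 150)])
    return np.corrcoef(conspiracy, science)[0, 1]

runs = np.array([one_run() for _ in range(10_000)])
print(f"mean {runs.mean():.3f}, sd {runs.std():.3f}, min {runs.min():.3f}, max {runs.max():.3f}")
```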
To be blunt, I'm tired of you making things up. I'm even more tired of you making things up about me.
I'm especially tired of having to go out and do extra work just to prove you're making things up about me. And mostly, I'm tired of you never
addressing the fact you make things up. It's insulting and makes
discussions with you mostly pointless.
If someone wants me to share the code I used, I'll be happy to. If
someone would like to see the influence of a specific choice for type of
noise to add, I'll be happy to try to accommodate them.
But I'm done responding to people who flat-out make things up about me.