Dr. Bernardo Attias, Professor in the Department of Communication Studies at CSUN.
Question 1. Does the authority of an author guarantee the accuracy of scientific information?
The Alan Sokal Affair was certainly one of the first things that came to mind as an example of somebody who kind of intentionally disrupts the peer review process and the process of trust that’s been addressed here. There is an expectation within academia that we do rely on trust to a certain extent, and the peer review process is not about fact-checking per se. It’s not like in journalism, where you have people who are trained to look up facts, to check that they’re accurate, who are trained to check on sources and all those kinds of things. When people peer review papers in academia we’re often mainly asking four questions. First, is the research cited here appropriately summarized? Are specialized terms utilized appropriately, are research conclusions accurately restated, is the author accurately reporting the expert consensus in the field, etc.? Second, is the author missing any important areas of already published research? Third, is the research itself valid? Has the research been conducted appropriately, do the conclusions follow logically, is the argument coherent, etc.? And fourth, is the author making a contribution to the field? These are the kinds of questions that we would ask as reviewers.
If I’m reviewing a paper in my field of communication studies and somebody is writing about a particular area that I don’t have a lot of background in — like communication in criminology, for example — I am not a criminologist, I don’t have a background in criminology, and I don’t really have the expertise to determine whether specific cited facts are correct. Did certain historical events actually occur on the date cited in the research, for example? Are proper names spelled correctly; is the author making anything up; is the author telling only one side of a story, etc.? So I’m trusting the author that those facts are accurate, and I’m evaluating the paper based on those other criteria. And even with the kinds of questions I would ask about my own field, if the author is engaging in interdisciplinary research, I may not know, for example, whether they have adequately summarized the consensus of scholarly opinion on a particular conclusion about criminology.
So the peer review process does not necessarily weed out false information. The Sokal Affair is a great example: a scholar, in order to catch a journal’s editors with their pants down, so to speak, intentionally used false information and worded the article in such a way that it sounded like a legitimate argument. Personally I’ve actually read that paper, and I do find a lot of faults with the peer review process in that particular journal. I think they made a big mistake accepting that paper even without the scientific knowledge to understand some of its claims. Hindsight, as we say, is always 20/20, but I think a journal engaged in the kind of truly interdisciplinary research that is common in cultural studies should make more of an attempt to engage scholars in the review process who have expertise in all of the relevant fields. The review process is blind, of course, so there is no way to know for certain, but I would guess that the journal did not solicit reviews from any professional physicists in the case of that article. Had they done so, the reviewer would more likely have caught the inaccuracies (and, indeed, bad faith) of that article. But that’s a different question. The problem here really arises because somebody disrupts that dynamic of trust and, as Dr. Krase pointed out, this is really a social rather than a scientific question. The question of scholarly authority is itself a question of sociality: who is giving people authority, and on what grounds are they giving it to them? The expectation of journals is that scholars will submit articles in good faith; that they will engage in such research honestly, with transparent scholarly intention, rather than practicing the kind of political game of “gotcha” that Professor Sokal was playing.
My background is communication studies and specifically, my expertise is in rhetoric. And in classical rhetoric, we actually look at questions of authority. If we go back to Aristotle, for example: he taught that in public speaking there are three forms of proof, what he called ethos, pathos and logos. Logos, of course, was rationality or reason; weighing the facts and arguments; this kind of thing. Pathos was the appeal to emotions. And ethos is interpreted as credibility or authority — though really, for the Greeks, ethos means character. The character of a speaker was part of what people used to evaluate whether what that speaker was saying was true or not, and whether they were convinced by the speaker. And that character may be based on reputation. It may also be based on just their inherent qualities as a public speaker. So one speaker may sound more credible than another, and so forth.
And Aristotle warned, even way back then, that ethos was actually the most powerful form of proof — even though Aristotle, as most people know, was a strong believer in reason, in the power of logos. Nevertheless, he felt that ethos was the proof that actually convinced people, and that there was an inherent danger in this: people could use their authority in a manner that was unethical (literally, without ethos). Some philosophers of the time (especially Plato) feared that rhetorical expertise would embolden unethical rhetors, giving them the tools to pollute the public discourse by making false things seem true and true things seem false.
The Sokal Affair is kind of an extreme example of this, where a scholar intentionally disrupts the process of trust by acting in bad faith. So what I would like to do with today’s conference question is actually turn it around. Instead of asking “does the authority of an author guarantee the accuracy of scientific information?”, I would like to ask whether an author’s reputation for accuracy in scientific information will enhance their authority. One common theme in what people have said so far in this panel is that recent years have changed things in some ways — whether it’s because of the increased importance of publication and the resulting glut of information we find ourselves in; the increasing difficulty of looking things up to check on them; or the political climate, which does lead to things like the Sokal Affair, which was itself politically motivated. Sokal felt that the area of scholarship he was writing in was really letting politics get in the way of its scholarship; I think it’s fair to say that even though it was submitted in bad faith, he likely believed that his “gotcha” submission was a kind of corrective intervention that would expose practices at the journal that he believed were themselves unethical.
While one can debate the validity of someone like Sokal’s intentions, and indeed there is something to be said for the role of the “prankster” in academic discourse, it is nevertheless indisputable that this intervention disrupted the scholarly process and it was rooted in bad faith, leading to a certain amount of distrust. There is little question that we are seeing an increasing politicization of scientific and social scientific research across the board. This is particularly problematic when you look at areas such as climate science, or sex education, as well as other topics that have become increasingly part of the public policy sphere.
To sum up, there are a lot of issues here, but one of the things that has shifted is that in the political sphere we see more and more people rely on something like ethos – understood commonly as reputation, or perhaps even the “feel good qualities” of a public speaker or writer. For example, “I trust this person because of the way they make me feel about myself.” We hear time and time again that that’s how people vote for candidates, etc. I think that puts the cart before the horse in a lot of ways. It may be that I feel nostalgic for something that never existed, but there might have been a time when that authority was established through a reputation for accuracy, or for well-established scientific or social scientific research.
Let me speak briefly to the tension around the field’s tendency to require metrics, or some way of answering the question “why should your reputation increase?” We need some collectively agreed upon way to evaluate that reputation (and that ethos), whether it’s a prize or award or something else. We need some kind of metrics that people agree on, because if we’re going to say, “Okay, this person is now an authority in the field,” on what basis do we make that decision? It is in many ways just an agreement among the members of the group. But presumably that agreement is based on something. So we do seek those metrics, but the tension is that those metrics then become a kind of gatekeeping, and that gatekeeping process takes on a life of its own. In some of these examples, that’s what’s happening; there is this kind of residue of established power which Professor Finkenauer called the “good old boy network.” That kind of network reinforces itself and keeps other people out. And the corollary to that, I think, is what Dr. Krase brought up in terms of language: part of the problem is simply a specialized language that develops over time, which people outside the in-group might not be privy to, or might not understand aspects of.
Again with the Sokal Affair, I think it’s a great example. He was a physicist and the journal’s expertise was not in physics at all. I remember at the time joking with friends — “what would happen if we submitted a phony article to a physics journal?”. It really is a matter of that specialized language. I imagine the people in the review committee for the Sokal article said: “well, I don’t understand the physics here, but it seems well written. It takes the form of an academic article that would be accepted. So we should say ‘yes’.” That obviously was a problem, it was embarrassing for the journal at the time.
But the other aspect of that problem is this kind of disinformation, this kind of intentional disruption. And it’s happening more and more. It’s a kind of gotcha style. I think of Saul Alinsky and the kind of gotcha journalism that is now infecting a lot of American political discourse, where you get somebody to say something out of context, or do something out of context, in order to embarrass them. And that seems to be happening more and more.
I recall a more recent academic scandal that was influenced by the Sokal Affair, in which three scholars wrote at least 20 phony papers and managed to get a handful of them accepted into scholarly journals. This is similar to the scandal mentioned in Ukraine, where scholars sent out hundreds of articles to journals that were all fake. The authors I mentioned just kind of made up the data, made up everything. Like Sokal, they saw their activities as a performative critique of what they called “grievance studies,” and their approach was obviously modeled after (indeed, entirely derivative of) Sokal’s intervention.
They were, interestingly enough, trying to build their academic reputation on doing this. They thought, “okay, we’ll do this and expose these journals for being fools or whatever, and then that will help us get jobs in academia.” And it kind of blew my mind, because I thought, well, if anything, that should be evidence of academic dishonesty at a really high level; I can’t imagine wanting to hire somebody who’s willing to try to publish something in such a dishonest manner. If they will make up data to play “gotcha,” how can we be assured they will not make up data in more serious scholarship? It’s like, “wow, that’s nuts.” It really points to these increasing tensions that we’re going to have to confront in a lot of different fields. And I think it is a testament to the increasing politicization and polarization of academic work that instead of being roundly criticized as a brazen attack on the good faith assumption that underlies scholarly activity, some scholars actually praised these interventions.
Question 2. Priority of sources and self-alignment among them.
Role of experiments. What if the facts contradict science? Do such contradictions indicate that the preceding inferences were unscientific?
One thing that I think is a common thread to these examples is the act of telling stories. From the perspective of my field of communication studies, I believe that this is a big problem for science in the current age. It is not the science itself, but the need to communicate to the public what science means — for example, what it means that an experiment led to this result or that result. This kind of communication has become more difficult than ever before, and part of the reason is that you have entrenched voices from different spheres that really want to change the conversation. Dr. Finkenauer’s example of the Scared Straight program is a great example. Having grown up with that program, I remember people joking about it in high school. It was almost like the “say no to drugs” campaign, in that everybody just thought it was funny that anyone believed this was actually going to work. It was common knowledge among the people who were supposed to be targeted by the program that it wasn’t working — and that was probably before science showed that to be the case. But of course you have the entrenched voices of those who built an institution around the program. They don’t want to lose the program, the funding, etc. They’re fighting against the scientific conclusions that have resulted. And they’re exploiting any kind of disagreement, or even just something that they don’t understand, or that they know the public doesn’t understand about the science. They’ll take that thing they don’t understand and say, “oh, well, that proves the scientists are full of it.”
A great example of this is the way that Dr. Anthony Fauci in the United States has been vilified by certain political camps around the coronavirus, because maybe one time he said, “oh, you don’t need to wear a mask,” and then later he said, “you need to wear a mask.” Lots of things may have changed in the interim. It may be that the data has changed. It may be that the virus has gotten more serious and spread more. Part of the issue, when we’re talking about coronavirus, isn’t just that people don’t understand the science. They don’t even seem to understand the math. It’s not just a matter of science; we’re talking about an exponential increase in numbers, which is a basic mathematical issue.
But again people are saying “well, he said don’t wear masks before and now he says to wear masks. He must not know what he’s talking about or maybe he’s being paid by the Democrats,” or whatever other conspiracy theory they associate with it. What they’re doing is kind of exploiting that scientific miscommunication, or the failure on the part of scientists to effectively communicate the meaning of their results to the public. And these people are exploiting that because they have another agenda.
Another example comes from 2009 and the climate change conversation. There was what at the time became colloquially known as “Climategate.” What happened was that some Russian computer hackers found a trove of emails between climate scientists and made them public. In the emails the scientists were talking smack about other scientists — saying the kinds of things people might say in private conversations that they believe are going to remain private, about what’s going on in the politics of the field, etc. And so the hackers exposed these emails, and people used them as evidence for the claim that there was no climate change. Essentially they were saying, “climate change is a hoax and this is proof, because these scientists are arguing with each other and sort of gossiping and talking smack.” And that became the dominant narrative of the day in the press about climate change — “climategate” — that all these climate scientists are actually lying, that there must be some conspiracy going on. It took the climate scientists themselves weeks to respond to any of this. So in the meantime a narrative built up around climate change in the public eye that started to see all these climate scientists as liars, as hoaxers, etc. And by the time the scientists had an official response, it was almost too late — the narrative had already been written and structured around that, and so that’s how the public was responding to it. Eventually six different evaluative institutions looked at these emails and came to the conclusion that there was no manipulation of the science going on. This was just ordinary people talking smack; it wasn’t something that had an impact on the science, and there was no distortion going on. But by the time all that information came out, it was too late, at least for that group of people who wanted to disbelieve the science.
We are very definitely seeing the same thing with coronavirus right now, the way that they’re taking something that happens in the scientific community, or some disagreement. And by the way, we saw this earlier with the theory of the evolution of species. The science of evolution is pretty well established by this point; the evolution of species is a fact. In fact, the whole of modern biology is built on the reality of evolution. If evolution is wrong, a lot of other things that we believe in biology would be wrong as well. But of course we have a group of people called creationists who want to challenge evolution. They don’t think evolution is consistent with the Bible. What they do, for example, is find a recent scientific study showing that what we once believed about the evolutionary relationship between two particular species is no longer true — that an experiment shows that particular belief was false. And the creationists pick up on that small scientific disagreement about a particularity of evolution and say, well, this shows scientists don’t agree about evolution.
So, they assert, perhaps intelligent design is right, perhaps creationism is right. And again, it’s a kind of propaganda campaign. They tell a different story because they have an ulterior purpose; their purpose is not to find the truth. Their purpose is to promote creationism or to promote anti-climate-change science or whatever. And ultimately one of the problems is that in scientific research, there is an assumption that we are having a conversation to find the truth and to build our knowledge. And unfortunately that conversation is being disrupted by people who have a different agenda. Their goal is not necessarily to find the truth; instead their goal is to prove a different point. They’ve already reached a conclusion and instead of being willing to be wrong, if they see evidence that contradicts that conclusion, instead they will twist whatever is out there to promote the conclusion that they wanted to promote anyway.
It is well known that Aristophanes criticized the Sophists in ancient Greece for teaching students how to make the weaker argument stronger and the stronger argument weaker. While this comment has been interpreted to mean that the Sophists taught students the deceptive craft of making false arguments seem true and vice versa, some contemporary scholars have argued that rather than truth or falsity, it is actually a matter of progress over time: trained rhetors are able to make arguments that were previously seen as weak come to be seen as stronger. Social change, according to this interpretation, requires such rhetorical intervention. See John Poulakos, “Toward a Sophistic Definition of Rhetoric,” Philosophy and Rhetoric 16:1 (1983). A slightly different though related interpretation holds that rhetorical education offers the “weaker” social classes training that might make their arguments more powerful in the political system in order to help them gain power. See Aakash Singh Rathore, Plato’s Labyrinth: Sophistries, Lies, and Conspiracies in Socratic Dialogues (London and New York: Routledge, 2018): 112.
 Saul Alinsky, of course, suggested these tactics as interventions in the public sphere from the political left, but they have recently been adopted most visibly (and most destructively in U.S. politics) by members of the political right who seem to thrive on media attention, including James O’Keefe and Jacob Wohl. See Saul Alinsky, Rules for Radicals (New York: Vintage, 1971); Jim Rutenberg and Campbell Robertson, “High Jinks to Handcuffs for Landrieu Provocateur,” New York Times (30 January 2010): https://www.nytimes.com/2010/01/31/us/politics/31landrieu.html; Paul Farhi and Elahe Izadi, “A Fake FBI Raid Orchestrated by Right-Wing Activists Dupes The Washington Post,” Washington Post (14 September 2020): https://www.washingtonpost.com/lifestyle/media/a-fake-fbi-raid-orchestrated-by-right-wing-activists-dupes-the-washington-post/2020/09/14/c07ccc7e-f6c1-11ea-be57-d00bb9bc632d_story.html.
 See Jennifer Schuessler, “Hoaxers Slip Breastaurants and Dog-Park Sex Into Journals,” New York Times (4 October 2018): https://www.nytimes.com/2018/10/04/arts/academic-journals-hoax.html.
 See Brett Bricker, “Climategate: A Case Study in the Intersection of Facticity and Conspiracy Theory,” Communication Studies 64:2 (2013).