Issues of methods and sources in social science
James O. Finckenauer, PhD
Distinguished Professor Emeritus
The basis for all knowledge is information. As our knowledge of the world grows almost exponentially, it is critically important to understand what the information base for that knowledge is. In reflecting on the challenges facing scholars as well as the lay public in assessing and evaluating information, I am led to think about my own discipline, which fits into a broad category called social science. Labeling this category as “science” is itself at issue, because there is some question as to whether social science is really science. When one thinks of the natural sciences, e.g., biology, chemistry, and physics, or the mathematical sciences, e.g., algebra, calculus, and geometry, one thinks of laboratory-controlled settings for experiments and of accuracy and precision in measurement. Clearly, most social science is not like that. There have, however, been increasing efforts over the years to make social science research more like natural science and mathematical research in terms of reliability and validity. In my own subfield of social science, criminology, for example, there has been a considerable move to make research more quantitative as opposed to qualitative. This has resulted in a downplaying of methods such as ethnographies in favor of sophisticated statistical analyses of large data sets. One of the downsides of this has been pressure on scholars, particularly young scholars, to generate quantitative studies that will get published in high-impact journals in order to survive in the publish-or-perish world that now characterizes major research institutions. This can then, unfortunately, become an ends-and-means equation that can influence the veracity of the published research in criminology. With this as context, let me backtrack to my own experience with these developments.
During some 50 years of working as a criminologist, I have edited two respected criminological journals, served on the editorial boards of a dozen others, and been a peer reviewer for scores of manuscripts submitted to various social science journals. In addition, I have served on literally hundreds of doctoral dissertation committees supervising budding scholars. In all of these instances, my role has been to assess and evaluate the quality and truthfulness of the information and findings being reported; and a major part of that assessment involves looking at the methods used to produce the reported findings. Based upon that assessment, the reviewer must judge whether the findings can be trusted.
There are basically two kinds of data used in social science research: primary data and secondary data. Primary data are those collected originally by the researcher via methods I will describe shortly. Secondary data are already existing data collected by someone else for some purpose other than the particular study being reported upon. An example of the latter would be the Uniform Crime Report data collected annually by the FBI, which report on crime in the United States.
Social scientists (criminologists) usually use one or more of about a half dozen research methods to collect primary data. Only one of these (randomized controlled trials) is really similar to the laboratory-based methods used in the natural sciences. Each of these methods has advantages and disadvantages, and each has room for error that can affect the results.
Some criminologists use observation to collect their data. This is exactly what it sounds like, but it is carried out in accordance with a structured set of rules about who and what is being observed, and when. The observations made are usually coded so as to be reduced to quantitative form. Obviously, considerable subjective judgement is exercised in this process, and that subjectivity permits error to creep in. The same is true of content analysis, another social science research method. Here, researchers review documents of some kind, such as letters, diaries, or reports, and code the data in accordance with a research plan. Again, this is a rather subjective process. One of the correctives for this subjectivity is to have more than one researcher independently code the data in accordance with a systematic protocol.
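To give a concrete sense of that corrective, agreement between independent coders is often quantified with a chance-corrected statistic such as Cohen’s kappa. The short Python sketch below uses invented labels for ten hypothetical documents; it is only an illustration of the idea, not data from any study discussed here.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Raw (observed) agreement: share of items coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, given each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders independently label ten diary entries.
coder_a = ["violence", "none", "property", "none", "violence",
           "property", "none", "none", "violence", "property"]
coder_b = ["violence", "none", "property", "property", "violence",
           "property", "none", "none", "none", "property"]

print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")
```

A kappa near 1 indicates agreement well beyond chance; a value near 0 suggests the coding scheme is too subjective to be trusted as it stands.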
The principal way to gain information from human subjects is simply to ask them about their opinions or experiences. This is done through interviews, conducted either face-to-face or via the telephone, or it may be done through mail surveys. There are numerous ways that errors can enter into each of these processes. One example is the selection of the sample of persons to be interviewed or surveyed. If the sample is too small, or especially if it is not representative of the population to which the researcher wishes to generalize, the results can and probably will be misleading. An example of this problem is seen in the presidential election polling conducted in the U.S. in both 2016 and 2020. In both cycles, the actual election results differed considerably from the polling predictions. Pollsters are still trying to sort out the explanations for this, but at least some contend that Trump supporters were underrepresented in the samples surveyed.
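To make the sampling point concrete, here is a small, purely hypothetical Python simulation (the figures are invented, not actual polling data). If one group of voters is systematically harder to reach, the survey estimate drifts away from the true population figure even with a respectable sample size.

```python
import random

random.seed(1)

# Hypothetical electorate: 48% support the candidate among group X voters
# and 58% among group Y voters; group X makes up 55% of the population.
population = (
    [("X", random.random() < 0.48) for _ in range(55_000)] +
    [("Y", random.random() < 0.58) for _ in range(45_000)]
)
true_support = sum(s for _, s in population) / len(population)

# A survey that reaches group X voters only half as often as it should
# (e.g., because they decline to respond) will overstate support.
reachable = [p for p in population if p[0] == "Y" or random.random() < 0.5]
sample = random.sample(reachable, 1_000)
estimate = sum(s for _, s in sample) / len(sample)

print(f"true support:    {true_support:.3f}")
print(f"survey estimate: {estimate:.3f}")
```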
Error is also introduced into interview and survey methods through poorly worded questions that are misunderstood by the subjects; subjects may lie in their answers; or, as I have found in my own experience, interviewers may simply fill out the answers themselves rather than actually conducting the interviews. The low response rates typical of mail and telephone surveys can likewise result in unrepresentative samples. Any of these will produce misleading and erroneous information.
Most experts agree that the gold standard for conducting research is the use of randomized controlled trials (RCTs). This is true in both the natural sciences and the social sciences. For instance, the search for a Covid-19 vaccine, which is proceeding as I write this, is using randomized controlled trials to test the effectiveness of candidate vaccines against a placebo, using experimental and control groups of subjects. The assumption underlying randomization is that it controls for any extraneous variables that may influence the outcome, e.g., age, gender, ethnicity, and health conditions. Those effects are controlled for so that the effect of the experimental variable can be separated out.
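A minimal sketch of what random assignment looks like in practice, using an invented pool of subjects and a single illustrative extraneous variable (age): shuffle the eligible pool, split it in half, and check that the two groups come out comparable on variables the researcher did not manipulate.

```python
import random
import statistics

random.seed(42)

# Hypothetical pool of eligible subjects, each with an extraneous
# variable (age) that might influence the outcome on its own.
pool = [{"id": i, "age": random.randint(14, 17)} for i in range(200)]

# Random assignment: shuffle the pool, then split it in half.
random.shuffle(pool)
experimental, control = pool[:100], pool[100:]

# With enough subjects, randomization tends to balance extraneous
# variables across groups, so differences in outcome can be attributed
# to the experimental treatment rather than to, say, age.
print("mean age, experimental:", statistics.mean(s["age"] for s in experimental))
print("mean age, control:     ", statistics.mean(s["age"] for s in control))
```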
Even here, however, error can creep in, because “the best-laid plans of mice and men sometimes (often) go astray”! For example, a number of years ago I carried out a study of the effectiveness of a prison visitation program for juveniles called Scared Straight! I carefully constructed experimental and control groups through randomization from a pool of eligible candidates. The idea was that the experimental group of juveniles would participate in the prison visitation program and the control group would not. I would then follow up with both groups to assess the outcome. Among the breakdowns in this design, which unfortunately are fairly typical, was that some sponsors took what were to be control subjects (not intended to visit the prison) to the prison, and thus they became, by definition, experimental subjects. This obviously created problems in maintaining the size and composition of the control group under comparable conditions. Further, in some instances subjects could not be found for the follow-up, or, particularly among the controls, subjects refused to participate further because they saw no reason to do so. I made a number of adjustments which accounted, I think appropriately, for these issues, but in other instances sample size may be so reduced as to throw the validity of the research into question, or bias may be introduced into the reconstruction of the sample so as to produce unreliable results. In sum, even under the most rigorously constructed study conditions, errors can occur, and these errors can produce less than truthful and misleading information.
The main corrective for these various threats is the peer review process to which I alluded earlier. A research report, including everything from the statement of the research problem to the hypotheses, the definition of the variables, the data collection methods, the analyses, the findings, and, most importantly, the limitations of the research, is presented for independent review by a group of peers, i.e., persons who are acknowledged experts in the particular field of study. It is then left to these reviewers to determine the reliability and validity of the research and the resulting information. Only if they say okay will (or should) this information be added to our knowledge base.
Despite all this, there is still room for inaccurate information to pass muster. That inaccuracy can be the result of honest error, or unfortunately, of misconduct on the part of the researcher(s). In either of these instances, because of various pitfalls in the peer review process, it can be very difficult for the peer reviewers to discern any inaccuracies. Among these pitfalls are the following:
- The reviewers almost always see only the data and the methods by which the data were collected as these are presented by the researcher, meaning the reviewer must trust the honesty of the researcher in reporting these;
- Researchers may report only partial findings as if they are complete;
- Researchers may not report negative results, i.e., those that do not support the original hypotheses, in part because most scientific journals have little or no interest in publishing negative results;
- Researchers may reformulate their hypotheses to make them fit the actual findings, and thus avoid the negative results problem; and ultimately, when all else fails —
- Reviewers depend upon the reputation of the researcher(s) to be open and honest in what they are reporting.
In what may well be a case representing the risks of this dependence upon the researcher’s reputation, in 2019 at least six research articles that had been published by some of the most prestigious journals in criminology had to be retracted, because the editors of those journals no longer trusted the truthfulness of the results that had been reported, and that they had published. The articles were by a very reputable scholar in the field. To the point of peer review, the problems with that published research were detected not by the reviewers but subsequently by a fellow researcher who had worked on some of the same projects being reported. This co-researcher pointed to the falsification of data, and to the senior author’s refusal to release the raw data, as among the reasons for concern. The journal editors obviously agreed with these charges, hence the retractions.
The challenges for consumers of research information, be they scholars or lay persons, in deciding whether or not to believe what they see or hear, are seemingly quite daunting. Admittedly, there are no simple answers to this dilemma. In this brief space, I would mention just two areas for attention. First, so-called honest errors in reporting can be detected and corrected by making more complete data available to peer reviewers in the review process. Then, critically important to the corrective process is the replication of studies to ensure the reliability of findings. No single study should be relied upon as the final word! Only when multiple studies across different scenarios produce comparable results should one have confidence in the findings.
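To illustrate why replication matters, consider a toy simulation (all figures are invented): with modest samples, any single study’s estimate of a program’s effect can land far from the truth, while the pattern across many replications converges on it.

```python
import random
import statistics

random.seed(7)

# Hypothetical program with a true effect of reducing reoffending by
# 5 percentage points, studied repeatedly with modest samples.
TRUE_EFFECT = -0.05

def one_study(n=150):
    """Simulate one study's estimated effect, including sampling noise."""
    control = sum(random.random() < 0.40 for _ in range(n)) / n
    treated = sum(random.random() < 0.40 + TRUE_EFFECT for _ in range(n)) / n
    return treated - control

estimates = [one_study() for _ in range(20)]

print("single-study estimates range from "
      f"{min(estimates):+.3f} to {max(estimates):+.3f}")
print(f"mean across 20 replications: {statistics.mean(estimates):+.3f}")
```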
Second, and perhaps most daunting, is dealing with instances of dishonesty and misconduct in research. Here we might think about putting more stress on ethical practice and better mentoring in the training of young scientists. Research institutions, especially universities, might rethink the “publish or perish” dictum for researchers seeking promotion and tenure, and research grants. Does the pressure to produce go so far as to entice scholars into cutting corners in their research? Finally, and relatedly, research consumers should be extremely skeptical of any information produced by scholars who have a proprietary interest in the research outcomes. Two recent examples of the latter come to mind – scientists employed by cigarette manufacturing companies, and scientists employed by pharmaceutical companies. Lest anyone have any doubts about the potential harm from bad information produced by bad research, there you have it!