
Bioethics Forum Essay

Was This Job Market Study Ethical?

A paper titled “Social Media and Job Market Success: A Field Experiment on Twitter” posted on the Social Science Research Network in May has sparked criticism for lack of informed consent, use of deception, and potential harm to job candidates. While this kind of experiment isn’t an example of clinical research, we think that the ethical norms of clinical research are useful in considering and addressing the criticism of it.

The paper discusses an experiment in which researchers created an account on Twitter (now X) called Econ Job Market Helper and invited people looking for academic jobs in economics to submit a tweet of their job market paper to be posted by the account. Unbeknownst to the candidates, some tweets, selected at random, were assigned to be retweeted with a quote (a "quote tweet") by established economists ("influencers"). Candidates from underrepresented groups (women, racial and ethnic minorities, and LGBTQ+ individuals) had the greatest chance of having their tweet quote-tweeted. The researchers' goal was to assess whether social media promotion could improve employment outcomes, particularly for job applicants from underrepresented groups.

The results were striking. Tweets that were quote-tweeted by influencers (the intervention group) received about four times as many views and three times as many likes as those that weren’t (the control group). This increased Twitter activity appeared to translate into better outcomes: quote-tweeted job candidates secured one additional in-person interview and 0.4 additional job offers compared to those in the control group. Notably, women in the intervention group received 0.9 more job offers than their counterparts in the control group. Both findings were statistically significant.

Was this experiment ethically problematic? Or was it a useful study involving common practices on Twitter?

As research ethics scholars, we see this debate as an opportunity to shed some light on ethical issues in nonclinical research. We see two primary ethical concerns with the study: the deception of participants and the use of randomization. And we aim to show how established research ethics principles and frameworks may be helpful for working through them.

Deception and Informed Consent

An obvious ethical concern with the study is its use of deception. Job market candidates knew they were in a study (they completed surveys) and knew their tweets would be posted by Econ Job Market Helper. However, they were unaware of the quote-tweeting experiment and were not informed about the purpose of the study. This lack of transparency likely led candidates to develop false beliefs about the study's nature and precluded informed consent.

Deception and lack of informed consent in a research study are not necessarily a problem. Deception is sometimes necessary for research, and U.S. regulations allow waivers of informed consent (including deception) in certain circumstances, including if the research poses minimal risk or does not adversely affect participants’ rights and welfare.

Did this experiment meet these requirements? The researchers could have informed job market candidates that there was a second stage to the tweet promotion involving randomization, but it would have been difficult to ensure that candidates would not disclose information about the study publicly, intentionally or unintentionally undermining it. The key question is whether the experiment posed minimal risks, that is, whether "the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests."

On the one hand, prominent economists quote-tweet job market papers frequently, suggesting the risks associated with the study are similar to those that candidates ordinarily face. On the other hand, the academic job market season is highly consequential and has a zero-sum structure: an advantage for one candidate comes at the expense of others. The experiment introduced the risk of fewer "flyouts" or job offers for candidates in the control group (as well as for bystander candidates), which in turn carries the possibility of less desirable job offers, lower salaries, or unemployment. Given the high stakes of the academic job market, randomization of quote-tweets may pose more than minimal risk to candidates who don't get them.

Deception of third parties (in this case, Twitter users and search committee members) is not something that research regulations have fully addressed. These third parties do not meet the definition of human subjects because no identifiable information is collected about them. Yet they may have developed false beliefs that prominent economists were endorsing papers, or simply that their Twitter feeds were not shaped by an experiment. Public health and social science research need guiding principles to consider deception beyond the binary participant-researcher relationship.

Randomization

The second contentious feature of the study was the randomization of participants into intervention and control groups, with candidates from underrepresented groups given a two-thirds probability of being in the intervention group.

Randomization could be considered permissible if it is consistent with the investigator's duties. For example, suppose a medical researcher has a duty to provide participants with a specific treatment, such as a standard cancer therapy for participants with cancer who have volunteered for a study testing the safety and effectiveness of an experimental treatment. It is widely acknowledged that randomization is permissible when the relevant expert community is uncertain about whether one of these treatments is superior. In the economics job market study, if influencers' random quote-tweeting of job market papers is consistent with the researchers' professional obligations, randomization may be permissible.

What are the relevant professional obligations of this study's researchers? We think the answer is ambiguous, based on the American Economic Association's Code of Professional Conduct. It does not include norms regarding the use of social media, but the study's weighted randomization scheme would seem to be consistent with its responsibility for "supporting participation and advancement in the economics profession by individuals from all backgrounds, including particularly those that have been historically underrepresented." In addition, professional economists do not have a duty to use social media to promote job candidates, so not promoting all job candidates isn't an ethical shortcoming. Thus, although the randomization scheme poses risks to candidates in the control group and to bystanders, these risks are not the result of wrongful behavior and so are not morally problematic.

But the AEA also calls on economists to create a “professional environment with equal opportunity and fair treatment for all economists,” which arguably implies that they should avoid advantaging or disadvantaging job market candidates on arbitrary grounds. If, as the study team believes, prior evidence supports the claim that quote-tweets by prominent economists are likely to improve job market outcomes, any randomization scheme would appear to violate this norm. In other words, if this interpretation of the AEA code is correct, then the argument that randomization is permissible because it is consistent with professional norms will fail.

Some commenters on Twitter argued that the weighted randomization scheme offers a defensible balance of costs and benefits because it yields socially valuable knowledge and minimizes unfairness by giving candidates from underrepresented groups a higher probability of being in the intervention group (and perhaps even improves on the status quo in the economics job market). This argument could work when research has minimal risks. But proponents of this argument should be prepared to explain the social value of the knowledge gained from the experiment and to explain why it is sufficient to justify the unfair costs to candidates in the control arm (some of whom were from underrepresented groups).

Our goal in this essay is not to condemn the study but to raise ethical concerns about its use of deception and randomization and to show how principles and frameworks from research ethics may be used to work through them. We think there are two lessons to draw from this study. First, it lends support to the call for social science researchers to include structured ethics appendices in their papers, both to improve discussion of the ethics of individual studies and to clarify and improve the ethical norms governing how studies are conducted. Second, it may be worthwhile for experimental economists to consider the ethical dimensions of their experiments more systematically. The U.S. political science community offers a potential model here: the 2022 edition of the American Political Science Association's A Guide to Professional Ethics in Political Science includes a lengthy section outlining principles and guidelines for human subjects research.

Douglas MacKay, PhD, is an associate professor in the Department of Public Policy, the Center for Bioethics, and the Philosophy, Politics, and Economics Program at the University of North Carolina, Chapel Hill. @douglaspmackay

Katherine W. Saylor, PhD, is a fellow in the Ethical, Legal, and Social Implications of Genetics and Genomics at the Perelman School of Medicine at the University of Pennsylvania. @kwsaylor


Hastings Bioethics Forum essays are the opinions of the authors, not of The Hastings Center.
