
Further details about the Legal Academy & Theory Survey

 

This project is a survey of members of the legal academy. The survey collects members’ views about (i) the “centrality” of different areas of law within the legal academy and (ii) questions in legal theory. Here we detail the reasoning behind the decision to run the survey, the choice of questions and issues to include, and the formulation of the question and answer choices for each part of the survey.

 

Who is running the survey?

 

The study is led by Kevin Tobia (Georgetown Law) and Eric Martinez (MIT Cognitive Science). We hope to publish the results of the study within two years (by summer 2023). The study has been approved by the Georgetown University IRB. The study has been pre-registered on Open Science, and those materials will be made available at or before the time of publication. Please direct all questions, comments, or concerns to Professor Kevin Tobia.


What is the purpose of running the survey?


Broadly speaking, we are interested in learning more about the legal academy: What areas of law are seen as most central, and which legal theories do members of the academy endorse (or reject)?

 

One motivation of this study relates to the mismatch between the volume of legal theory scholarship and the dearth of documented academic consensus resulting from this scholarship. Academics have long debated natural law vs. positivism, realism vs. formalism, originalism vs. living constitutionalism, and many other theories. But there is no systematic account of the legal-academic community’s propensity to endorse or reject these views. By surveying legal academics on their beliefs regarding the most often-debated questions, we hope to begin to resolve this mismatch.

 

We think of this project primarily as a form of sociology of law: we aim to learn more about how legal scholars see the field of law and what they think about some of the big questions within it. Prior work in other fields has shown that academics often hold inaccurate sociological beliefs about the distribution of views among their peers (see Bourget & Chalmers, 2014), and answering these sociological questions may be of interest and benefit not only to future legal academics and legal historians but also to legal academics today.

 

There is also a psychological aspect to the project. The study aims to uncover evidence about why the legal community understands the field and its questions in particular ways. While we believe this psychological aspect is interesting and informative in and of itself, it may also inform a jurisprudential aspect of the project. Legal theory questions certainly should not be settled by appeal to whatever view 51% of law professors report accepting. Yet some theorists might take broad expert consensus in favor of one view to carry some weight in favor of that view.

 

Finally, the public may be curious to know more about what legal theory experts think about the nature of the legal system that surrounds and affects them on a daily basis. Thus, a legal theory survey can provide information that is of interest and benefit to a variety of audiences, both within and beyond legal academia.


Justification of the areas of law used in the study and the legal theory topics

 

The survey includes two parts, consisting respectively of questions about (i) the centrality of different areas of law within the legal academy and (ii) specific issues in law and legal theory. Together, the two parts are designed to provide the sociological, psychological, and jurisprudential insight needed to satisfy the aims laid out above.

 

With regard to the centrality part, we sought objective “smaller” and “larger” lists of areas of law. We relied on (a) the 18 areas reflected in Jotwell (https://jotwell.com/) and (b) the 107 areas listed by the Association of American Law Schools (AALS) in their FAR recruitment materials. We combined these lists, eliminating some redundant areas. Each participant is asked about all 18 areas from the “smaller” list and a random subset of 7 from the “larger” list. Each participant may also choose to evaluate one additional area (e.g. an area from the larger list that was not among the 7 randomly presented).
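The per-participant selection described above can be sketched as follows. This is an illustrative reconstruction, not the actual survey software; the area names are placeholders standing in for the Jotwell and AALS lists:

```python
import random

# Placeholder area names; the real survey uses the 18 Jotwell areas and the
# (deduplicated) AALS list of 107 areas described above.
smaller_list = [f"core_area_{i}" for i in range(1, 19)]    # 18 "smaller"-list areas
larger_list = [f"extra_area_{i}" for i in range(1, 108)]   # 107 "larger"-list areas

def areas_for_participant(rng=random):
    """Every participant rates all 18 core areas plus a random 7 from the larger list."""
    extras = rng.sample(larger_list, 7)  # sample without replacement
    return smaller_list + extras

assignment = areas_for_participant()
assert len(assignment) == 25  # 18 fixed + 7 randomly drawn
```

A participant who opts to evaluate one additional area of their choosing would simply append it to this list.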

 

Our reasoning behind using these two lists was threefold:

  • First, given that these two lists were established independently of this survey and with no knowledge of its hypotheses, using these lists reduces the potential for personal bias or prejudices in selecting areas arbitrarily.

  • Second, given the vast number of items on the combined lists, we are able to examine law professors’ beliefs across a wide range of areas, rather than only the main areas (as would be the case with a single smaller list).

  • Third, given that the two lists are well-known and respected within legal academia, we hope that using these lists is taken to be a reasonable choice by the legal-academic community.

 

Nevertheless, there are many other important areas that are not reflected in the smaller list (drawn from Jotwell) or the larger list (drawn from the AALS). The survey welcomes participants to include written feedback, including suggestions of areas to include in future iterations of the survey.

 

With regard to the legal theory part of the survey, we constructed an initial list of questions and answer choices, with some emphasis on breadth and diversity of issues reflected. The list was circulated to a diverse set of U.S. law professors, and we received and incorporated feedback from approximately 20 law professors, resulting in a final set of 25 questions. In designing the survey, we tried our best to incorporate questions that were of interest to and representative of a wide range of perspectives within legal theory. 

 

Some areas are over- or under-represented in the survey. Some areas of law did not lend themselves as well to questions with a sufficiently small list of common answers. For example, we solicited feedback on several property law questions, but ultimately none was viewed favorably by the group from whom we solicited feedback. The 25 questions include a relatively large number of criminal law questions, which tended to be more comprehensible to those who were not specialists in that field.

 

Despite our best efforts, we acknowledge that the survey is by no means perfect. The set of issues covered does not perfectly reflect all important perspectives and questions, particularly those which cannot be succinctly captured via brief labels (see the justification of question and answer format below). To the extent that the survey is in fact biased against certain topics or views, we hope to address this in future iterations of the survey; in the meantime, we encourage other theorists (with much greater expertise in such areas) to reach out to us with advice and feedback.


Justification of question and answer format

 

With regard to the format of the questions, we relied heavily on similar work pioneered in philosophy. For Part II, we follow Turri (2016), with a few deviations. Turri’s (2016) survey asked participants to rate their agreement, on a scale of 1 to 7, with the statement “This area is central to the discipline of philosophy” with respect to 10 different areas of philosophy.

 

In our own approach, we decided to break centrality into a normative and a descriptive component to avoid potential confusion among respondents. We also adopted a 0-10 point scale rather than a 1-7 point scale, both to measure more subtle differences in mean ratings between areas and because 11-point scales are generally perceived by survey respondents as better allowing them to express their feelings adequately (Preston & Colman, 2000).

 

For Part III, we follow Bourget & Chalmers (2014), who pioneered a similar study in philosophy. However, we deviate from that model by asking participants to rate each answer choice individually, rather than asking them to give a single answer to each question. For example, whereas Bourget & Chalmers’ (2014) questions took the following format:

 

Normative ethics: Consequentialism or deontology? 

(1) Accept Consequentialism; 

(2) Lean Consequentialism; 

(3) Lean Deontology; 

(4) Accept Deontology; 

(5) Other (with various options, such as “no fact of the matter”)

 

Our questions instead follow this format:

 

Constitutional interpretation

Originalism: (1) Reject; (2) Lean against; (3) Lean towards; (4) Accept [or other]

Living Constitutionalism: (1) Reject; (2) Lean against; (3) Lean towards; (4) Accept [or other]

 

In designing both the questions and answer choices, we tried to strike a balance between clarity and brevity; that is, we wanted to keep the answer choices as succinct as possible without sacrificing too much complexity or nuance. The obvious benefits of this approach are that (a) it is quicker to read and, presumably, to answer questions this way, hopefully easing participants’ burden in completing the survey, and (b) it is easier to report results clearly (e.g. “X% of survey participants answered that they ‘accept’ or ‘lean towards’ textualism”).
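One reason reporting is easy under this per-answer-choice format is that each view can be summarized independently. A minimal sketch of that aggregation, using invented responses rather than actual survey data:

```python
from collections import Counter

# Hypothetical responses for one answer choice (e.g. "Textualism").
responses = ["Accept", "Lean towards", "Reject", "Lean against",
             "Accept", "Other: question unclear", "Lean towards"]

def pct_endorsing(responses):
    """Share of respondents who 'Accept' or 'Lean towards' the view."""
    counts = Counter(responses)  # missing categories count as zero
    endorsing = counts["Accept"] + counts["Lean towards"]
    return 100 * endorsing / len(responses)

print(f"{pct_endorsing(responses):.1f}% accept or lean towards")  # 57.1%
```

Because each answer choice is rated separately, a respondent can endorse both Originalism and Living Constitutionalism, or neither, which the one-answer-per-question format cannot capture.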

 

One potential downside is ambiguity: terms like “realism,” “natural law,” or “originalism” may not mean the same thing to academic A as they do to academic B, even if both report accepting those views on the survey. This is, of course, a greater concern for some questions than others; where we thought it was a particular concern, we added clarifying language to a question or simply removed a proposed question altogether, so that the remaining question list would be as free from ambiguity as possible. We also allow participants to respond to particular answer choices by choosing “other: question unclear,” further minimizing this risk, though it remains important to be cautious about the interpretability of the results.


Will you run the study in the future, in other countries, or in other languages?


Concerning the future of the study, we hope to conduct further rounds (e.g. every 5 or 10 years) to learn about developing trends in the legal academy.


We would be excited for similar studies to be conducted in other languages and/or countries. The current (English-language) version is tailored to the U.S. in terms of the subject-category labels and legal theory topics. We would be happy to share our materials and discuss our experience in planning and preparing the study.

 

References

 

David Bourget & David J. Chalmers. 2014. What do philosophers believe? Philosophical Studies, 170(3), 465-500.

 

Carolyn C. Preston & Andrew M. Colman. 2000. Optimal number of response categories in rating scales: reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104(1), 1-15.

 

John Turri. 2016. Perceptions of philosophical inquiry: a survey. Review of Philosophy and Psychology, 7(4), 805-816.
