Michael A. Livermore, Felix Herron, Daniel N. Rockmore

Language Model Interpretability and Empirical Legal Studies

Section: Conference Article 2
Volume 180 (2024) / Issue 2, pp. 244-276 (33)
Published 16.07.2024
DOI 10.1628/jite-2024-0009
Summary
Large language models (LLMs) now perform extremely well on many natural language processing tasks. Their ability to convert legal texts into data may offer empirical legal studies (ELS) scholars a low-cost alternative to research assistants in many contexts. However, less complex computational language models, such as topic models and sentiment analysis, are more interpretable than LLMs. In this paper we highlight these differences by comparing LLMs with less complex models on three ELS-related tasks. Our findings suggest that ELS research will, for the time being, benefit from combining LLMs with other techniques to exploit the strengths of each approach.
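The interpretability contrast the summary draws can be illustrated with a minimal lexicon-based sentiment scorer. This is a hypothetical toy, not the models used in the paper: the lexicon, weights, and example sentence are invented for illustration. The point is that every prediction decomposes into per-word contributions a researcher can inspect, whereas an LLM's output offers no comparably simple trace.

```python
# Toy lexicon-based sentiment scorer for legal-flavored text.
# Every score is the sum of per-word weights, so each prediction can be
# traced back to the exact words that produced it -- the interpretability
# property discussed above. Lexicon and weights are illustrative only.

LEXICON = {
    "grant": 1.0, "affirm": 1.0, "favorable": 1.0,
    "deny": -1.0, "reverse": -1.0, "dismiss": -1.0,
}

def score(text):
    """Return (total score, per-word contributions) for a document."""
    tokens = text.lower().split()
    contributions = [(t, LEXICON[t]) for t in tokens if t in LEXICON]
    total = sum(weight for _, weight in contributions)
    return total, contributions

total, parts = score("We affirm the judgment and deny the motion to dismiss")
# `parts` lists exactly which words drove the score, and by how much.
```

Because the model is a transparent sum of weights, a disagreement with a human coder can be resolved by inspecting `parts`; an LLM classification would instead require indirect probing of the model's behavior.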