Issue 3

Parrots

By the time Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell’s paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” was published in March 2021, it had already been shaking up the artificial intelligence (AI) world for some time. Two of its authors had been fired from Google, and the natural language processing (NLP) community was grappling with the uncomfortable truths the paper had raised about the harms posed by algorithms that many claimed would change modern life. Those of us working in the computational humanities certainly took notice of the Parrots paper, too. While most humanists’ day-to-day research tasks may not directly involve large language models (LLMs), the concerns the authors raised about the ethical, social, and environmental risks of emerging technologies were familiar terrain. The paper also helped us revisit the humanistic principles we espouse in our approaches to text, language, and meaning, and to consider what happens when they intersect with tools emerging from a tech culture obsessed with chasing whatever is biggest and fastest.

The history of NLP in the humanities goes back further than most people think. There was a time, in the sixties and seventies, during the early days of humanities computing and the heyday of Chomskyan linguistics, when NLP more or less made sense to your average humanities scholar. This was because NLP was to a large extent rule-based. Computer scientists and linguists built complex systems on top of explicit rules (and exceptions, of course) that would, in theory at least, cover all the grammatical and syntactical structures of natural language. These systems could then be used in machine translation, text summarization, question answering, and similar tasks. Rule-based NLP made intuitive sense to humanists because this is how many of us, or at least those of us of a certain generation who went to school before the internet, learned foreign languages: by learning and following explicit rules. If you think of grammar as a system of declensions and conjugations that can be mastered only by group recitation and hard-core drilling, chances are you’ve been around the block quite a few times.
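For readers who have never seen one, here is a toy sketch in Python of what the rule-based paradigm looks like in practice: explicit, hand-written rules plus a table of exceptions, applied to a single small task (English pluralization). The rules and word list are invented for illustration and do not reproduce any historical system.

```python
# A toy illustration of rule-based NLP: hand-coded rules plus exceptions.
# Invented for illustration; not a reconstruction of any real system.

EXCEPTIONS = {"child": "children", "mouse": "mice", "person": "people"}

def pluralize(noun: str) -> str:
    """Apply hand-written morphological rules, checking exceptions first."""
    if noun in EXCEPTIONS:                          # exceptions override rules
        return EXCEPTIONS[noun]
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"                          # e.g. "box" -> "boxes"
    if noun.endswith("y") and noun[-2] not in "aeiou":
        return noun[:-1] + "ies"                    # e.g. "city" -> "cities"
    return noun + "s"                               # default rule

for word in ["book", "box", "city", "child"]:
    print(word, "->", pluralize(word))
```

The appeal of this paradigm is legibility: every decision the program makes can be traced back to a rule a person wrote down. The cost is that the list of rules and exceptions never ends.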

But the dominant approach in NLP these days has nothing to do with explicit rules; it is based instead on statistical models. Statistical NLP infers patterns from existing texts and annotations by converting words into vectors in a high-dimensional space. These models, impenetrable to the human mind, are then used to make predictions on new data. This works fairly well for certain types of tasks. But it also feels a bit like magic.
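To make that shift concrete, here is a minimal sketch of the statistical intuition: represent each word as a vector of co-occurrence counts drawn from a tiny invented corpus, and let geometric closeness stand in for similarity of meaning. Real language models learn their vectors with neural networks trained on billions of words; the corpus, the two-word context window, and the raw counts below are simplifying assumptions for illustration only.

```python
# A minimal sketch of the statistical paradigm: words as vectors,
# geometry as a stand-in for meaning. Toy corpus invented for illustration.
import math

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the senator proposed the bill",
    "the senator debated the bill",
]

# Build one vector per word by counting neighbors within a 2-word window.
vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
vectors = {w: [0] * len(vocab) for w in vocab}
for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                vectors[w][index[words[j]]] += 1

def cosine(u, v):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Words used in similar contexts end up geometrically close.
print(cosine(vectors["cat"], vectors["dog"]))      # relatively high
print(cosine(vectors["cat"], vectors["senator"]))  # lower
```

No rule anywhere says that “cat” and “dog” are alike; the similarity falls out of the counts. That is the sense in which the models infer patterns rather than follow rules, and also the sense in which they feel like magic.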

Most humanists come to quantitative analysis with a healthy dose of skepticism to begin with: we are trained to recognize that context is everything, that meaning is always irreducibly complex, that texts are often inherently contradictory, and that there is no such thing as ideology-free space. So it should come as no surprise that humanists are especially sensitive to the challenges of power dynamics, data availability, and domain specificity, as well as structural and representational bias in statistical language models based on large datasets. As we watch statistical models transform not just data-driven research, but the way the world economy functions, the way our healthcare is managed, and the way we are policed, we see the human and human experience being pushed further and further to the margins.

While most of us nodded vigorously while reading the Parrots paper, we also knew how difficult it can be to sustain substantive discussion across the disciplinary boundaries separating the humanities from data and computer science. How do we talk to each other in this age of intense academic overspecialization? How do we prevent our segregated disciplines from turning us into methodological, epistemological, and ideological loners? Is there a meaningful way to combine positivistic or empirical approaches to language with those rooted in humanistic idealism?

We wanted to bring varied perspectives together to discuss “On the Dangers of Stochastic Parrots” and consider how the humanistic view may help forge the path toward mitigating the risks posed by emerging technologies such as LLMs. We invited three leading digital humanists — Gimena del Rio Riande, Lauren Klein, and Ted Underwood — to share their thoughts, and asked two of the article co-authors — McMillan-Major and Mitchell — to respond during a live-streamed Zoom roundtable in late October 2021.

The three humanists’ position papers make up this issue of Startwords. In “Mapping the Latent Spaces of Culture,” Underwood acknowledges the serious risks posed by LLMs but also asks us to take a broad view of how language functions, how models produce meaning, and how disruptive technologies have always played a generative role in transforming cultural practices.

Del Rio Riande’s piece, “On Spanish-Speaking Parrots,” examines the role of language by foregrounding another vexed problem in NLP: its lack of linguistic diversity. She discusses the BERTIN project, which not only produced a monolingual LLM for Spanish but, as a collaborative and community-driven effort, exemplified a more ethical alternative to the technopositivist and resource-intensive models that concern the Parrots authors. (The “small batch” approach to creating annotated data and training models for new languages has also been the approach taken by the CDH in our NEH-funded New Languages for NLP: Building Linguistic Diversity in the Digital Humanities Institute, a collaborative project with DARIAH-EU.)

Finally, in “Are Large Language Models Our Limit Case?,” Klein asks provocatively whether all of us (in industry, in academia, or in our personal lives) who keep finding ourselves trying to work around these asymmetrical configurations of power and resources should perhaps embrace radical refusal and work toward changing our systems instead.

Spanish translation by David Rivera

The technology I need to discuss in this paper doesn’t yet have a consensus name.

Mapping the Latent Spaces of Culture

  • Ted Underwood
10.5281/zenodo.6567481

Spanish is the second most widely spoken language in the world as a mother tongue. Official reports, survey-based studies, and Wikipedia confirm it. And Google can predict it.

On Spanish-Speaking Parrots

  • Gimena del Rio Riande
10.5281/zenodo.6567850

How is it possible to go forward with large language models with the knowledge of just how biased, how incomplete, and how harmful — to both people and the planet — these models truly are?

Are Large Language Models Our Limit Case?

  • Lauren Klein
10.5281/zenodo.6567985

Credits

Editor Grant Wythoff

Technical Lead Rebecca Sutton Koeser

UX Designer Gissoo Doroudian

Spanish Editor David Rivera

Manuscript Editing Camey VanSant