Below The Screen Lurk Sub-Screenic Literacies
A look back at how software, including AI, might better be thought of as writing.
It has been twenty years since a group of scholars proposed a paradigm shift for how we understand literacy practices in the 21st century. Coining the term “multiliteracies,” the New London Group offered a framework for embracing multiple literacy practices, including the use of emerging digital technologies, as essential for students and teachers to “design social futures” (Cazden et al.).
In 2016, at the annual conference for the American Educational Research Association, two members of the famed literacy researcher collective, the New London Group, reflected on their influential paper. Bill Cope noted that with such rapid change in the technological sector, it is imperative that a new generation of scholars continues renewing the Group’s powerful yet aging theory. Much research uses multiliteracies as a theoretical starting point: digital literacies, new literacies, new media, and transliteracies are just a few of the better-known lines of inquiry. And yet, since 1996, there has remained a particular limitation to both the New London Group’s original paradigm and the ways research in multiliteracies has evolved.
Put simply, multiliteracies scholars have confined their research and theorizing of the relationship between new technologies and literacies almost exclusively to what occurs on and above the screen. That is, the rich work we as a field have conducted has focused on what it means to use digital technologies to compose and critique texts in the world, treating digital technologies as instruments akin to 21st-century pens, pencils, and typewriters. What we have not yet acknowledged is that digital technologies are themselves linguistic constructs. That is, below the screen, there exists a world of computational and human languages that mediate and shape the very kinds of social futures we can design above the screen with digital technologies. In what follows, I make a brief theoretical case for what goes on below the screen, which I call sub-screenic literacies, as a necessary space for literacy researchers to examine in the digital age.
Sub-screenic literacies refers to the computational and human languages that are used to create software—including AI. It refers to things like computer code and APIs (application programming interfaces), through which long strings of data are pushed to and pulled from systems. It also refers to human-facing texts that are essential to software, such as developers’ documentation and other communications between programmers. Sub-screenic literacies are not just “code.” In fact, the word code oversimplifies the complex sub-screenic assemblages that software theorists expose: as code/space, as the language of new media, as inherently unstable, and even computer code as itself a form of speech.
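To make the notion of data “pushed to and pulled from systems” concrete, here is a small sketch. The endpoint shape and field names are invented for illustration, not drawn from any actual platform:

```javascript
// Hypothetical sketch: the kind of JSON string an educational
// platform's API might return below the screen. Field names here
// are invented for illustration.
var apiResponse = '{"studentId": 4417, "course": "English 11", "progress": 0.62}';

// Above the screen, this might render as a progress bar;
// below it, the software sees only this parsed object.
var record = JSON.parse(apiResponse);
console.log(record.course);   // "English 11"
console.log(record.progress); // 0.62
```

Even this tiny exchange shapes what a learner can see and do above the screen: only what the data structure anticipates can be displayed.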
When multimodal researchers examine digital phenomena, it is not uncommon for the inquiry to focus on ways that digital technologies are used by people to communicate and create things in the world. What they tend to leave unexplored, however, is the way that sub-screenic literacies make possible what happens above the screen, what I refer to as super-screenic literacies. An illustration will be helpful. In a study I did several years ago on the design of an online English course used in New York City schools, I used methods of multimodal analysis similar to what scholars in the field typically use. For example, I analyzed how much of what users saw on the screen was related to curriculum (i.e., literary content), how much was administrative (i.e., tracking one’s progress), and how much was more functional (i.e., navigating to different sections of the course). I also examined the buttons users clicked to interact with the course. My analysis demonstrated how a particular online course provider positioned students as “managers” of their learning, limiting their learning experience to consuming content.
Had I, at the time, used more sub-screenic methods of analysis, I might have found myself asking different questions. For example, when one encounters a multiple-choice question in an online course—which is sadly typical—there are three main languages developers use to create what one sees on the screen. Hypertext Markup Language (HTML) controls the main text and the structural relationship between components on the webpage, looking something like this:
<!DOCTYPE html>
<html>
  <head>
    <title>Dynamic Quiz Project</title>
    <link rel="stylesheet" type="text/css" href="stylesheet.css"/>
    <link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Open+Sans"/>
  </head>
  <body>
    <div id="container">
      <div id="title">
        <h1>Moby Dick Quiz</h1>
      </div>
      <br/>
      <div id="quiz"></div>
      <div class="button" id="next"><a href="#">Next</a></div>
      <div class="button" id="prev"><a href="#">Prev</a></div>
      <div class="button" id="start"><a href="#">Start Over</a></div>
    </div>
  </body>
</html>
Cascading Style Sheets (CSS) determine how the raw elements of HTML will appear on a webpage (font types, colors, the design of the buttons, and the overall layout) and might read, in part, like this:
body {
  font-family: 'Open Sans';
}

h1 {
  text-align: center;
}

#title {
  text-decoration: underline;
}

#quiz {
  text-indent: 10px;
  display: none;
}

.button {
  border: 4px solid;
  border-radius: 5px;
  width: 40px;
  padding-left: 5px;
  padding-right: 5px;
  position: relative;
  float: right;
  background-color: #DCDCDC;
  color: black;
  margin: 0 2px 0 2px;
}
And then there’s JavaScript. JavaScript powers the elements and activity of the quiz itself (i.e., the text within the quiz, the logic that accepts some answers and not others, and what happens with the data generated from a student taking the quiz). It might resemble this:
(function() {
  var questions = [{
    question: "What is the first word in the book?",
    choices: ["Ishmael", "You", "Call", "Me", "Once"],
    correctAnswer: 2
  }, {
    question: "What is the month Ishmael uses to describe his soul?",
    choices: ["May", "September", "March", "July", "November"],
    correctAnswer: 4
  }, {
    question: "Better sleep with a sober ___ than a drunken Christian?",
    choices: ["cannibal", "whale", "pirate", "dog", "shadow"],
    correctAnswer: 0
  }];
  // ... rendering and answer-checking logic would follow here
})();
The answers in the JavaScript code above have to be predetermined by the developer. In most cases, the developer would rely on a content-area expert to provide the questions and “answers.” Does the fact that JavaScript in this case requires predetermined and circumscribed answers to questions (if it is to automate assessment) force curriculum experts to perpetuate multiple-choice-style questions? How does the cost of writing code like the example above affect the kinds of software-powered products companies produce? Are there other uses of programming languages that might be more expensive but could support students in posing their own questions and exploring them more authentically? These are just some of the questions that emerge when sub-screenic literacies are briefly theorized and explored.
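To see just how circumscribed those answers are, consider a minimal sketch of the answer-checking logic such a quiz implies. This is a hypothetical reconstruction, not the course provider’s actual code; the function name `grade` is invented for illustration:

```javascript
// Hypothetical sketch of multiple-choice grading logic: the "right"
// answer is fixed in advance as an index into the choices array.
var questions = [{
  question: "What is the first word in the book?",
  choices: ["Ishmael", "You", "Call", "Me", "Once"],
  correctAnswer: 2
}];

function grade(questionIndex, chosenIndex) {
  // Any response other than the predetermined index is simply wrong;
  // there is no room here for an unanticipated reading of the text.
  return chosenIndex === questions[questionIndex].correctAnswer;
}

console.log(grade(0, 2)); // true  ("Call")
console.log(grade(0, 0)); // false ("Ishmael")
```

A single equality comparison is doing all of the assessing: whatever interpretive work a student performs must collapse into one of five indices before the software can recognize it at all.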
When we engage with and research super-screenic literacies, which is overwhelmingly where multiliteracies researchers have focused their energy for the last two decades, we must acknowledge that such literacy practices are limited by the sub-screenic literacies that make software possible. This is increasingly important in an age of AI. More accurately, our super-screenic literacy practices are limited by those who produce and control sub-screenic literacies. We have spent 20 years mapping the tip of a linguistic iceberg. It’s time to suit up, dive in, and explore the unfamiliar depths of software space, so full of darkness and breadth and life.