Generative AI as Playable Media

Something I’ve been thinking about for a while involves not only situating generative AI as a digital collaborator (with its own degrees of agency), but fundamentally reorienting us to these technologies by situating them as playable media. Then last week I presented a poster session at EDUCAUSE 2023 on Generative AI and faculty development (i.e., how we integrate these technologies into the Digital Gardener Faculty Fellows program) and I kept coming back to this idea as one mode of entry. It’s still very much an in-progress bit of thinking on my end, but I wrote up about 1500 words or so and wanted to share it here.

“Generative AI as Playable Media”
Across higher education we have seen both awe and apprehension with the rapid advancement of generative AI tools like ChatGPT, Adobe Firefly, Soundful, and others. The ability of these platforms to generate remarkably coherent text, images, and even music from simple prompts seems to herald a new era of automated (or at least machine-assisted) creativity. Their speed of production coupled with their potential depth of content and expression is not only incredible but, for many in higher education, incredibly alarming—raising concerns about the future of assessment, the future of learning, and potentially even the future of higher education. This is especially prominent in relation to writing, whether involving the study of writing (composition), writing-to-learn, or writing-as-assessment. There even seem to be murmurings that “ChatGPT and friends” may be ushering in the death of writing in post-secondary learning. Don’t get me wrong, as a writing teacher I absolutely recognize that there are valid concerns with these tools, especially with their ability (currently) to spread misinformation, to enable harassment, to infringe on intellectual property, and the like. But our collective impulse to policy (and/or policing) has fostered something of a culture of fear around these technologies, and this is not only a matter I think we need to correct, but one that stems from a misrecognition of how these emergent platforms differ from their predecessors.

We need to start by recognizing that these are not just another tool, another computational element with a utilitarian function and purpose. At their base, generative AI technologies offer a different kind of experience than the common trope of “the calculator” because they invite (if not necessitate) a different kind of engagement. We experience them as having their own type of agency, which means they are more akin to a collaborative partner (i.e., co-authoring a paper with a colleague) than a tool to fix our writing (i.e., the squiggly lines in Microsoft Word). Put another way, our impulse with technologies (writing included) has long been one of a “will to mastery,” and what we need is a reorientation toward a “willingness to play.” That is, I think we are better served if we start talking of these mediating partners not as tools but as playable media.

The term “playable media” derives from game studies, where it refers to any software environment that offers an interactive experience and/or forms that “invite and structure play” (Harrigan and Wardrip-Fruin, xiii). Unlike static, linear media, playable media comes alive as people make choices that shape their experience in real time. The key elements of playable media, then, are not only a technological entity (as playable), but user/player agency, a space (or invitation) for co-creation, and open-ended possibilities (to explore, discover, co-generate). If we look at generative AI through this lens, we might readily recognize the participatory nature of the generative AI experience. The growing expanse of AI-powered technologies collectively offers productive opportunities through which “players” can engage and/or co-create unique outputs: from prompts to conversations, file sharing to modeling, generative content results not simply from generative AI, but from an emergent human-technology collaboration. Of course, this is actually true of lots of contemporary creative media, as making a TikTok video results from an emergent collaboration between a human and a technology (i.e., a human-technology dynamic). But we don’t situate the video editor and social media platform in the same way we do generative AI because the agency (or experience of creative agency) of those tools appears to be a passive kind of agency—this despite the fact that a range of video editing platforms automate many digital creative practices (as one example, see the ripple delete default function in Adobe Rush).

Part of why I prefer situating generative AI as playable media instead of tool or platform is that it foregrounds the value of the human element, the player. As I’ve argued elsewhere (see Post-Digital Rhetoric), what matters in human-technology assemblages is not simply what the mediating technologies can do (whether making video or generating writing or creating a photo from text), but specifically the abilities, knowledge, understanding, and the like that the human “player” brings to the table. (What is possible [i.e., the capacities of the engagement] is reducible to neither the human nor the technology alone.) Our job is not to remove the play experience from higher education, but to better prepare students to be digitally inflected players, to play responsibly/ethically with and across digital platforms, and to understand how to be successful in these emergent spaces and practices.

Along these lines, part of what makes generative AI so engaging to students (and so disconcerting in traditional academic spaces) is that it embodies the Amplification of Input principle central to video games (see Gee): type a five-word question, get two paragraphs or ten bullet points back. We all know that writing is hard work. But with AI, I ask a question, click a button, and quickly receive amplified output. Rather than ignoring the appeal of these practices, we need to think strategically about how we and students can use them to achieve learning outcomes and disciplinary success. Yes, in some clear ways they are offsetting the labor of writing, but in so doing they create new opportunities: in my discipline of writing studies, the shift in labor allows us to engage rhetoric more attentively—drilling into purpose, audience, style, voice. We can focus more time (and more explicitly) on higher-order considerations, and even use things like “word count” not as the minimum requirement but as the boundary limit.

Outside of my own interests in using these technologies in a writing class, perhaps the most valuable part of adopting a playable media (or even digital collaborative) perspective is that doing so opens us to (if not invites) an ethical orientation of responsible co-creation. Just as good game design balances player freedom with constraints that maintain coherence and care, and just as working with human partners carries an implicit ethical dynamic (peer-to-peer engagement), using generative AI well means guiding players and the technologies toward meaning, not exploiting capabilities. To this end, structuring engaging prompts, working iteratively on outputs, and filtering results with discernment and intentionality all contribute to the kinds of responsible cycles we might seek in the AI-human creativity dynamic. But there is also a need for openness to surprise and wonder, for flexibility and working productively with outputs when generative AI contributes something inspiring yet unexpected. The playful orientation invites receptivity and flexibility, not just instrumentalism.

Treating generative AI as playable media opens up rich possibilities for education. Students can learn about AI and creativity by tinkering with these tools to co-create projects spanning textual, visual, and auditory practices in courses ranging from composition to computer science, history to human sciences. By making the experience playful and participatory (for faculty and students alike), we can demystify these technologies while cultivating an ethics of practice (or at least valued habits of practice) regarding what we co-produce: i.e., the outputs of the engagements. In so doing, generative AI moves a bit further away from being an opaque black box dispensing fully formed results toward an intelligent creative partner that students learn with through engagement. And rather than displacing human creativity, these tools can augment and inspire human creativity when approached as playable media. Much like improvising jazz musicians build off of and in response to each other’s musical ideas, generative AI can help stimulate human creativity through unexpected exchanges and contributions. 

Of course, generative AI remains a technology requiring careful consideration (perhaps even governance) regarding issues like biases and misinformation. But excessive prohibitions risk losing out on the rich potentials of this playable media. We need to be attentive to both policy and practice considerations, but I think it better to foster a spirit of cultivation, where a breadth of diverse voices can learn, critique, and create with these technologies. Structuring forums centered on ethics of practice, as we’ve done at Indiana University, can guide this collective learning, as can webinars, workshops, and the like designed to get faculty and students to tinker with these technologies, to use ‘making’ and ‘doing’ as a way into ‘knowing.’

What might higher education conversations sound like if we reframed generative AI in terms of playable media? Beyond practical uses or dystopian risks, my hope is that we would start to discuss how to realize the possibilities for creative human flourishing in conjunction with these playful collaborators. Students would learn something of a generative AI play literacy alongside an ethics of practice. Faculty might start considering how these tools could enhance their practices and expand their own collaborative possibilities. Policy committees (and policy language) would weigh fostering innovation and joyful co-creation against regulation and restriction. By situating emergent generative AI as playable media and an inviting participatory experience, we start to shift the conversation toward fostering a culture of cultivation and building learning communities in and around the practices of creative engagement.

Works Cited

Gee, J.P. (2003). What video games have to teach us about learning and literacy. Palgrave Macmillan. 

Harrigan, P. and Wardrip-Fruin, N. (2007). Second person: Role-playing and story in games and playable media. The MIT Press. 

Hodgson, J. (2019). Post-digital rhetoric and the new aesthetic. The Ohio State University Press. 
