Rethinking Academic Excellence in the Age of AI

This week, my third-grade son was working on a presentation on Greek myths. He had to make a tri-fold poster on Zeus, king of the Greek gods, and when it came time to find a picture, we decided it might be fun to create one with AI. It became a family activity — one that revealed both the magical opportunities and the real limits of generative AI for all grade levels. It also became a clarifying moment for what it means to teach in the age of generative AI.

Zeus and Cubism

Our prompts were almost always simple. We started with “Please make me a picture of Zeus.” The first outcomes appeared in the style of a fantasy novel: stereotypical, bearded, and muscled. In our first few attempts, we tried specifying details, like how Zeus should hold his lightning bolt:

Our son loves soccer, so we tried some other directions: “imagine he is a soccer player” wearing his school uniform, and “make it more kid-friendly”:

My wife had the idea to ask for a cubist Zeus. We learned that ChatGPT can’t do cubism. “Make it more abstract,” we instructed. “Try it all in shades of brown”:

These didn’t work, so we took a brief detour to try to teach ChatGPT cubism. “Draw a face in the cubist style,” we typed. Not quite. But we saw some incremental progress with more explicit direction: “Show multiple perspectives of specific parts of the face, so we might see the nose from both sides.” This moved the AI in the right direction, but it still couldn’t quite capture or replicate the abstract style:

We returned to Zeus and third grade. How about Zeus in the style of stencil graffiti like Banksy? Or a Renaissance painting? Or a French impressionist? Not bad. Now we were getting somewhere interesting:

And, closer to grade level, we tried a crayon sketch, a Minecraft character, and a teenage version of Zeus as a cartoon character:

ChatGPT’s responses were astonishing in their range. All this took place in less than ten minutes. In that time, it produced complete works drawn from nearly a dozen styles.

Awash in the Generic, We Still Need Artists

But for all its astonishing range of style, ChatGPT was always generic in its composition. The results are indicative of the countervailing forces at work in generative AI: prompts might push image subject matter towards new ideas, but generative AI’s statistically-driven mode of creativity pushes style and composition back to the average.
 
Novelty in content, but normalcy in form. In the images above, most pictures have an easily identifiable style. As a result, it’s the content that pops, while the style sets a mood: crayons, cartoons, Minecraft, Renaissance, graffiti, and so on. The opportunity is that we can have high-quality, ready-made, on-demand images in almost any style. The drawback is that without significant human effort and customization, the work will be generic and won’t produce truly innovative outcomes.

In this way, generative AI turns basic creativity and productivity into a commodity. It is next-level, custom-made clip art. True innovation is still the province of artists and creators, who need not fear for their work — though their work will change.
 
Developing a new style or a groundbreaking work of art still requires expert creators using expert tools. GarageBand, the easy-access music-making app bundled with Apple computers for years now, has made recording and producing music easier and more accessible than ever. But it is not an expert tool, and very little of the professional music streaming on Spotify or Apple Music is produced start to finish in GarageBand. Professional artists use professional tools to create innovative and professional results. Tools that appeal to the general consumer — even generative AI — simply won’t achieve the same outcomes.
Great art comes from a human being’s ability to push a medium to greater and greater novelty and nuance. Doing this requires a tool — whether a paintbrush or a software program — that can deeply customize what is being created in order to go beyond what is normal. Generative AI, applied broadly, cannot do this. But professional tools will integrate generative AI technology within them in more sophisticated ways. Already, Adobe Photoshop uses generative AI to make more customized changes within an image, while still allowing the truly granular image manipulation that has always been the hallmark of the software.
 
The professional artist or writer or creator, through their specialized knowledge of domain tools and vocabulary, pushes composition to more divergent places. Generative AI, meanwhile, always pushes composition back towards the statistical mean.

A Boon to Those Who Struggle

Pushing quality to the mean is a gift to those whose work is below average. I mean this in a constructive way, because it refers to all of us in one way or another. People who struggle with writing, those who have difficulty drawing, those who aren’t sure how to make a good recipe, those for whom big tasks are hard to break into smaller ones: for anyone who is, in a given context, less capable than the average person, generative AI will enable a baseline level of competency in most thinking and creating tasks. This has already been borne out in academic research.

The result is a commodification of creativity and productivity. Anyone can now produce average work. Writers and artists should feel threatened only insofar as their work is below average and employers are looking for above-average levels of creativity and productivity. But average work has its place in the marketplace, which will still need people to manage the AI that produces it. Average artists have produced average work for centuries, and people have consumed it for centuries. Again, I don’t mean this in a bad way. Some visual artists just want a job and are happy to make letterhead and grocery-label designs. At this moment in history, those who struggle to reach the average will have the tools, and perhaps also the employment opportunities, to make art every day, which previously might have been out of reach. This is a great development.
 
The commodification of creativity and productivity unequivocally aids human beings in our personal and professional lives. It also creates a challenge that has been at the forefront of many educators’ minds recently, related to one particular kind of creativity and productivity: How do we teach kids to write in an age of AI? Or maybe: Why do we teach kids to write in an age of AI?

Writing and School

Writing is undergoing the same commodification as visual art. So why teach it?

We write and teach writing for two reasons: we write to explain and we write to explore. Writing to explain is the domain of analytical essays, lab reports, term papers, précis, and other writing for which we learn to organize and lay out our thinking. Writing to explore is the domain of journal entries, response papers, rough drafts, and other writing that helps us surface, sort out, and develop our unformed thoughts.

We teach these ways of writing without the aid of AI before we ask students to leverage AI to make the work easier. Why? Because students must be able to assess whether the AI’s work accomplishes what they wanted it to do.

We already do this in math, where we teach students arithmetic before we offload complex arithmetic to calculators, and we teach students to draw graphs before they use graphing calculators. Then, once the work is automated by machines, we teach students to assess what the machines produce and leverage it for more sophisticated learning. In math, students learn to analyze graphs to understand their meaning, and then they learn to manipulate the calculator’s inputs to produce different results.
 
So it is in English courses: we learn to write without AI so we can understand the forms and purposes of writing. Then we leverage AI to produce writing more quickly, assess it, and iterate on it to achieve the outcomes we seek. We learn how to change what we ask the AI to do in order to produce different results that meet our needs.
 
Leveraging AI brings us and all of our students to a competent baseline level of skill, if not to the level of excellence or originality. When most of the work we do is with commonly taught texts and topics, AI aids us in gathering our thoughts and laying them out in an organized manner. It helps us generate ideas that may be new to us, but not new to the thousands of students and scholars who have studied those texts before. AI helps by bringing what every student and teacher can do up to the average.
 
But what if our goal is excellence? What if our goal is producing students who drive original thought and who think creatively? What is the role of AI in service of these goals? To explore this, let’s return to cubism for a moment.

Excellence: Cubism, Novelty, and Breaking Boundaries

Why was cubism so difficult for the AI? One explanation might be that the sample size of cubist works in the AI training sets was too small. But the internet is vast, and AI training sets draw on an enormous share of it.

More likely, the revert-to-the-mean way in which generative AI produces its results makes it incapable, at present, of producing true novelty. In other words, AI doesn’t experiment. AI doesn’t ask, “what if I try this?” Instead, it asks, “what is most likely to meet the expectations of what I’ve been asked to do?” And it determines “what is most likely” by looking for the average of what it has in its model.
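
To make that “most likely” instinct concrete, here is a toy sketch in Python. It is an illustration only, not how any real image model works, and the style counts are invented: a system that always picks its highest-probability option will almost never surface a rare one.

    import random

    # Toy illustration only: invented counts for how often each "style" might
    # appear in a hypothetical training set. Not real data.
    style_counts = {"fantasy-realism": 700, "cartoon": 250, "cubism": 5}

    def most_likely(counts):
        # Greedy choice: always return the single most common option.
        return max(counts, key=counts.get)

    def sample_one(counts):
        # Weighted sampling: rare styles can appear, but common ones dominate.
        styles = list(counts)
        weights = list(counts.values())
        return random.choices(styles, weights=weights, k=1)[0]

    print(most_likely(style_counts))                      # "fantasy-realism", every time
    print([sample_one(style_counts) for _ in range(10)])  # mostly "fantasy-realism"

Even when the choice is sampled rather than picked greedily, the common styles dominate the output, which is one way to picture why our cubist Zeus kept sliding back toward fantasy-book realism.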

Cubism, on the other hand, doesn’t look for an average. It requires looking at something from new perspectives, or in a disjointed manner, or juxtaposing contrasting elements. These moves run fundamentally counter to finding an average or statistically normal representation, which makes creating cubist art something like the opposite of how generative AI produces its results. When the AI tries to produce a cubist image, it is likely torn between the prompt, which calls for something radical, and its generative programming, which pulls toward statistical likelihood.

Image generation right now can’t escape the average. If you ask for something very common to be excluded from an image that would typically include it, the results are sometimes laughable: see the attempts to get the AI to draw a sky with no clouds or a tree with no branches. In the tension between the average and the innovative, the AI maddeningly defaults to the average.

[Images: the AI’s attempts at “a sky with no clouds” and “a tree with no branches.” Because the AI defaults toward a statistical average, it is often unable to produce an image that is, statistically, un-average.]

This is not how creative individuals think. Asking “what is most likely from my knowledge to meet expectations” is not how excellent scholars, artists, students, mathematicians, scientists, or leaders look at a problem. Instead, creative excellence follows a question or idea into the unknown, into places where there isn’t an average to draw from.

AI does not yet appear capable of this kind of excellence, novelty, or innovation. It can rapidly expand our options and generate novelty based on our prompts, but that novelty will, for the most part, exist within the bounds of convention. Any true imagination will depend on the human being wielding the tool.

Making Meaning, Redefining Excellence, and How Teachers Can Move Writing Beyond AI

But even more important than assessing quality or driving imagination is making meaning.

AI can help students analyze text, identify detail in an image, and structure a work of writing — many people have done this before, so there are plenty of examples for AI models to draw from — but only the student can apply this understanding to her world. Only the student can integrate new understanding into his school community and personal relationships. Only the student can practice new habits based on new ideas and understanding enabled by AI.

Local levels of specificity are beyond the reach of AI. AI can produce a skilled work of writing about a common text, and it can produce statistically common insights, but it cannot access the details of our students’ specific world. So while AI can help students compose an analysis of the imagery in Shakespeare’s Othello, it is the student who must absorb the Bard’s investigation of race in society and then see relationships with his peers in a new light. 

Assignments designed to investigate personal experiences in a local context require active reflection and social engagement by students. AI can write an analysis — and even what looks like a reflection — but its analysis and reflection are not the same as students transferring understanding from books to their own lives and contexts. The more we ask students to make connections between texts and personal or local contexts, the more likely the work will belong to students — and the more likely it will have meaning for them, too.

I am reminded that a school leader once shared his regret that while students were once asked to explore authorial intention — a fundamental act of stretching one’s mind to understand and think like someone else — now students are increasingly asked to share personal responses. He framed it as a less sophisticated learning experience. I tend to agree, but I also believe that this isn’t an either/or scenario.  We must ask students to articulate other people’s perspectives and insights, and then make sense of these new perspectives and insights in the context of their own lives and communities.  AI can help more with the former, less with the latter.

In my teaching years, I would often ask students to write two-part essays: part one would focus on an explication of some element of a text or piece of data, and part two would focus on an element of the student’s life as seen through the lens of part one.  This works with primary sources in history, labs in physics, literature in English, and many other academic settings. Today, AI might help more with part one — and that’s ok — but part two entails a level of specificity that requires engagement and reflection and meaning making by the student.  And AI may yet help with developing and refining these ideas and the language they are couched in.

In an AI age, we might redefine excellence. Excellence across disciplines has often been typified by fluency of expression combined with domain-specific insight into content. Now both of these are commodities. Anyone, with the help of AI, can assemble fluent, well-organized essays and reports. With fluid writing and flexible thinking about core content at our students’ fingertips, we might redefine excellence to describe how well students make connections between what they’re learning and their lives and schools. AI can’t do this. One of technology’s greatest limitations is that it does not understand context even as well as a child does. In today’s age, excellence might focus on how well students connect content to context in thoughtful, precise, and specific ways. Excellence might focus on how students make meaning not in the abstract, but concretely, in their world. Fruitfully, this definition of excellence insists on teaching for transfer, which has always been a central goal of well-developed programs.
 
AI will do some of the heavy lifting in this work going forward, and that’s ok. Calculators do some of what was once thought of as the heavy lifting in math. As a foundation, competency begins with being able to assess the work done by calculators and AI. But more important is that calculators and AI can’t imagine the way humans do, and this — thinking with insight and originality — remains a grounding for academic excellence, just as it did before AI. Most significant, however, is that AI won’t be able to make meaning for us. This, which may be the heart of how we redefine excellence, we have to do in our own context, by ourselves, and with each other.

Peter Nilsson is a Senior Consultant with Aptonym.

His experience includes independent school leadership and teaching positions, among other roles focused on innovation in education. He is the founder of Athena, a resource and collaboration hub for teachers, and is the editor and curator of the newsletter The Educator’s Notebook. Peter also serves on the Advisory Board for SXSWedu and the Center for Curriculum Redesign.

In his own words, Peter is “an educator committed to learning and making the world a better place.” Connect with Peter on LinkedIn.

This post was originally published in its entirety on Sense and Sensation.