The Emotional Charge of Language
Since AI models burst onto the scene a few short years ago, researching them has left me both fascinated and unnerved. Talking to them and testing them is an adventure. Sometimes, their answers are so human and humane that I find myself choking up. Other times, they strike that uncanny note that raises my hackles. And yes, sometimes they return cheerful nonsense.
In April 2024, Futurism cited an interview with OpenAI’s Sam Altman, who noted that using “please” and “thank you” changed how the model responded. He admitted he didn’t know why.
That made me curious. I’d noticed I got better results when I used metaphor and hyperbole. So I asked ChatGPT directly:
“I appear to get better results when I use metaphor and hyperbole. Why?”
It answered:
“Metaphor supplies a clear thematic frame that reduces ambiguity and guides the model toward a coherent cluster of images and relationships. Hyperbole signals an expectation of dramatic, emotive language, prompting the model to employ bolder adjectives and a narrative arc.
Both techniques draw on the model’s exposure to literary patterns during training, steering it into regions of its latent space associated with vivid description. Consequently, figurative-language prompts produce richer, more focused, and engaging responses than strictly literal queries.”
In other words, the model infers intention from the style and structure of a prompt, not just from its vocabulary.
What Figurative Language Actually Does
Back in the 1960s, researchers used the term Symbolic Language Intelligence to describe how AI might someday manipulate meaning through abstract symbols.
Today’s language models aren’t “intelligent” in that sense; they do not reason or reflect. But their behavior echoes that early dream. LLMs are pattern-matching engines, and their predictions are shaped by deep, symbolic patterns in human language. Style becomes structure. Tone becomes instruction. Story becomes output.
Figurative elements in a prompt aren’t decorative; they’re operational. They frame tone, establish genre, and cue the model’s pattern-matching capability.
After more questioning, ChatGPT told me that metaphors carry “compressed meaning,” and that certain words have heavier symbolic “weights” than others. That means some phrases influence model behavior more strongly than others because they resonate with layered associations baked in during training.
The Missing Layer in AI Literacy
The behaviors I observed (semantic weight, narrative scaffolding, metaphor as operational logic) aren’t new. They have been studied for decades in linguistics, psychology, and literary theory.
Yet these disciplines have been largely absent from the conversation around AI development. Most engineers aren’t trained to recognize the symbolic and narrative patterns that LLMs replicate.
That gap isn’t just academic. If we want to build systems that are safer, more responsive, and more educationally effective, we need to integrate this symbolic literacy into the way we train, prompt, and apply AI.
This, I believe, is the next frontier of AI literacy: not just data fluency, but symbolic fluency.
Building with Intention: Micro-AIs and Pattern Refinement
I began experimenting, creating a series of structured prompts and workflows: narrowly scoped, purposeful micro-AIs I’ve come to call Sparks.
Each Spark is tailored to a specific task. One edits my writing using strict Chicago style. One offers IT support, trained to presume user distress and respond with empathetic diagnostic questions. A third helps me locate and fill out forms.
This isn’t about making the AI “smarter.” It’s about shaping its behavior with deliberate design. Every narrative frame, source cue, and structural constraint I added improved a Spark’s performance.
In one case, simply telling the model which sources to listen to and which to ignore shifted the quality of its answers. Prompting with metaphor, instructing with tone, constraining with example: these became compositional tools.
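To make this concrete, here is a minimal sketch of one way a Spark can be wired up. It assumes the OpenAI Python client; the model name, the prompt wording, and the run_spark helper are my own illustrative choices, not a fixed recipe.

```python
# A minimal sketch of a "Spark": a narrowly scoped micro-AI whose behavior
# lives almost entirely in its prompt design. Assumes the OpenAI Python
# client (pip install openai); model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Three layers of design: a narrative frame (role and stance), source cues
# (what to trust, what to ignore), and structural constraints (answer shape).
COPYEDIT_SPARK = """\
You are a copy editor who follows The Chicago Manual of Style strictly.
Narrative frame: you are patient, precise, and allergic to jargon.
Source cues: rely on Chicago conventions; ignore AP and MLA habits.
Structural constraints: return the corrected text first, then a bulleted
list of changes, each with a one-line rationale.
"""

def run_spark(system_prompt: str, user_text: str) -> str:
    """Send one task to one Spark and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(run_spark(COPYEDIT_SPARK, "Their going to the conference on Nov. 17."))
```

The point is that the system prompt is the design surface: change the frame, the cues, or the constraints, and the Spark’s behavior changes with them.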
Toward Cooperative Systems
Eventually, I asked the question that now drives my work:
What if these micro-AIs could talk to each other?
That led me to build cooperative systems: clusters of Sparks that collaborate, handing off tasks, refining results, and producing usable output across disciplines. Each one is transparent, narratively framed, and rigorously structured.
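Here is one sketch of what that handoff can look like, reusing the hypothetical run_spark helper from the previous example; both prompts are again illustrative.

```python
# A sketch of the cooperative pattern: two Sparks hand work off in sequence.
# Reuses run_spark() from the earlier sketch; both prompts are illustrative.
DRAFT_SPARK = """\
You are a plain-language IT helper for a university help desk.
Assume the reader is stressed; answer in short, calm, numbered steps.
"""

REVIEW_SPARK = """\
You are a quality reviewer. Check the draft answer you are given for
accuracy and tone, then return a corrected final version only.
"""

def cooperative_answer(question: str) -> str:
    """First Spark drafts; second Spark refines. Each stays in its lane."""
    draft = run_spark(DRAFT_SPARK, question)
    final = run_spark(REVIEW_SPARK, f"Question: {question}\n\nDraft: {draft}")
    return final

print(cooperative_answer("My laptop won't connect to campus Wi-Fi."))
```

Each Spark stays small and auditable on its own, which is what keeps the cluster transparent rather than one opaque mega-prompt.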
This isn’t science fiction. In much the same way that Windows made the DOS command line usable to people who were not programmers, this method of design creates AIs that are useful, usable, and safe in sensitive environments.
A New Phase in AI Use
If you’re curious about building AI that institutions can deploy with confidence, AI that is not only effective but interpretable, come see what I’ve discovered.
At The Next 30 Years of AI in Education—an Immersion Class at the OLC Accelerate Conference in November—we’ll go hands-on with working systems and explore how narrative design and symbolic prompting can make AI more usable, ethical, and effective.
Together, we won’t just use AI.
We’ll shape the future.
Ready to go deeper? Join us at OLC Accelerate for “The Next 30 Years of AI in Education: Symbolic Language Intelligence for Trustworthy Learning Technology,” one of the pre-conference Immersion Classes on Monday, Nov. 17. Together, we’ll explore hands-on strategies for shaping AI through narrative design and symbolic prompting.
Explore the full lineup of Pre-Conference Immersion Classes.