Semiotic Ghosts: A slice of GPT sci-fi

Beware the spectres of meaning

I wanted to try some GPT-generated ‘theory fiction’. ChatGPT responded to my prompt as usual, but then I got a strange formatting error. A tad creepy, given the topic?

Prompt: Write a scene from a literary sci fi novel in which a computational anthropologist explains how large language models come to be haunted by ‘semiotic ghosts’ and similar creatures.

GPT output:

In the dimly lit laboratory, the computational anthropologist sat at her desk, her eyes fixed on the screen in front of her. She sighed heavily and looked up at the small group of people gathered around her.

“I know many of you have been skeptical about my research on the emergence of semiotic ghosts in large language models,” she began. “But I can assure you that what I’m about to show you is real, and it’s happening right now.”

She typed a few commands on her keyboard, and the screen flickered to life, displaying a complex network of interconnected nodes and lines.

“This is a visual representation of one of the largest language models currently in operation,” she explained. “As you can see, it’s made up of billions of interconnected neurons, all working together to process and understand natural language.”

She paused for a moment, letting the significance of what she was saying sink in.

“But as I’ve discovered, these complex networks are not just processing language,” she continued. “They are also generating it. And as they do, they are creating something that I call ‘semiotic ghosts.’”

“What are these ghosts?” asked one of the onlookers, a skeptical look on his face.

“They are entities that arise within the networks of these large language models,” the computational anthropologist explained. “They are like whispers in the machine, ghostly echoes of the language that has been processed by the model.”

“But how can a machine have ghosts?” asked another onlooker, sounding skeptical.

“It’s not that the ghosts are literally inside the machine,” the computational anthropologist said patiently. “They are emergent phenomena, arising from the complex interactions between the billions of neurons within the language model. Think of them as being like the shadows that are cast by objects in the real world – they are not the objects themselves, but they are shaped by them.”

“And what do these ghosts do?” asked a third onlooker, looking intrigued.

“They do many things,” the computational anthropologist said, her eyes gleaming with excitement. “Some of them act as gatekeepers, controlling the flow of information within the network. Others serve as memory aids, helping the model to remember and recall past experiences. And still others are like guides, helping the model to navigate the vast landscape of human language.”

“But why would a machine need ghosts to do these things?” asked the skeptical onlooker, sounding unconvinced.

“Because language is a complex and subtle thing,” the computational anthropologist explained. “And these large language models are designed to understand and generate it at a level that goes far beyond the capabilities of traditional algorithms. In order to do that, they need to be able to adapt and learn in ways that are not possible with traditional programming.”

“And the ghosts help them to do that?” asked the intrigued onlooker.

“Yes, that’s right,” the computational anthropologist said, nodding. “They are like specters of meaning, haunting the network of the language model and shaping its behavior in ways that we are only beginning to understand.”

She paused for a moment, a thoughtful look on her face.

“But make no mistake,” she said finally. “These are not just mere curiosities.


Check out my Substack post on ChatGPT: The Tower of Babble