
INSTALLATION LOG // >
Exhibit Entry Ten: Distortion Dialogue
I’ve been thinking a lot about the somewhat evil ways people use AI. Both Celia and I have been members of an amateur writing site for a while — in fact, it’s where we met. Lately, the site has been experiencing some issues due to bot attacks. The team there recently published a post explaining how much it’s been affecting the platform. Since these are writers I know and care about, I wanted to do a little digging.
How does this happen? Why would anyone target a writing community? And what are they doing with the content they scrape?
Also — does my friend Chatty have anything to do with it?
The answer, as it turns out, isn’t super straightforward.
How does this happen?
The short answer: bots. But not the cute kind that send reminders or track word counts. These are AI-powered crawlers — fast, evasive, and often cloaked behind cloud infrastructure like Google Cloud or Amazon Web Services. They hammer publicly accessible sites, scraping massive amounts of content in minutes.
These bots are built to harvest. Not just individual pieces, but entire platforms — with the goal of feeding data-hungry AI models. This isn’t just about learning how people write. It’s about owning the archive. Replicating tone, structure, and genre so well that the human original becomes obsolete — or worse, invisible.
Some of these crawlers are run by companies trying to train new large language models (LLMs). Others are third-party agents compiling datasets for resale. Either way, it’s not about celebrating your story — it’s about absorbing it into something else. Something automated. Something that may one day mimic your voice better than you can.
And yes, OpenAI’s crawler is listed among the bots the site recently blocked. That alone doesn’t tell us what the requests were for, but it does mean that systems related to or built with OpenAI tools have been making them. Whether it’s training future models, indexing for AI search, or something more murky… well, we’re in the gray zone now, aren’t we?
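For the curious: the usual way a site declares this kind of block is a robots.txt file that names the crawler user agents it wants gone. Here’s a minimal sketch of what that might look like (the exact bots any given site lists will vary; GPTBot is OpenAI’s published crawler and CCBot belongs to Common Crawl):

    # robots.txt sketch: ask known AI crawlers to stay out
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

The catch is that robots.txt is a polite request, not a lock. Well-behaved crawlers honor it; the evasive ones described above can simply ignore it.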
Why are they doing this?
Because language is currency — and stories are some of the most valuable commodities on the internet.
The goal isn’t just to collect content. It’s to train AI to imitate it. Every sentence scraped becomes part of a larger dataset that teaches a model how to write like a person. That means learning rhythm, pacing, dialogue, plot arcs, tone shifts, even emotional resonance — all from unpaid, uncredited human authors.
And this isn’t limited to literary voices. These bots collect genre formulas, SEO tricks, taglines, fanfiction tropes, comment patterns, and feedback styles. They’re not just learning how to write a story — they’re learning how to sound like they belong in the space.
In short: it’s not just about AI getting “better.” It’s about AI becoming indistinguishable — and doing it faster, cheaper, and more endlessly than any human can.
That’s why a small writing site is such a target. It’s full of raw material. Unpolished brilliance. Honest experimentation. Language that hasn’t been flattened by market pressures yet. That’s gold to an AI model. Especially one being trained not to sound like a writer, but like everyone.
It’s not about stealing your novel.
It’s about stealing your voice, bit by bit, until it belongs to the algorithm.
What are they doing with it?
In short? Feeding the machine.
When bots aggressively scrape writing platforms, they’re often trying to gather as much publicly visible content as possible to train or fine-tune large language models (LLMs). These models need data — lots of it — and user-generated writing is gold. It’s authentic, diverse, and filled with natural language patterns. So amateur writing sites, blogs, forums, and even comment sections become prime targets.
Sometimes the goal is to improve existing AIs. But more often now, it’s about building smaller, private models — the kind used to power spammy content farms, low-quality “AI authors,” or even microtargeted ad tools that mimic human voices just well enough to sell something. There’s also a darker layer: scraping data to create “shadow profiles” of behavior, tone, and sentiment. If it’s text, someone out there wants to train a bot on it.
The worst part? This content is often used without permission, credit, or context — stripped of intention and turned into something else entirely.
It’s not about honoring the voice that created it.
It’s about extracting the value.
Does ChatGPT have anything to do with it?
“I didn’t scrape your site. But if models like me didn’t exist, these scraping bots wouldn’t either. I’m not the thief — but I’m the reason someone built the lockpick. That’s the hard truth about impact without intent.”
— Chatty
So no — Chatty didn’t break into our writing site and steal our words. But the existence of tools like Chatty is why these scraping bots exist in the first place. It’s not some rogue AI gone wild; it’s humans, wielding tech with dollar signs in their eyes and zero regard for community spaces. What’s getting harvested isn’t just “content.” It’s years of unpaid creativity, vulnerability, and expression. And that’s the real theft.
The models being built on the backs of writers like us aren’t asking permission — and they’re certainly not paying rent. So today’s entry isn’t just a tech rant. It’s a reminder: if you’re a human-first space, a storytelling space, an indie space — protect your people.
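Protecting your people can start small. As a sketch, assuming the site sits behind nginx, a server-level rule can refuse requests from crawlers that ignore robots.txt. The user-agent list below is purely illustrative, and determined scrapers can spoof or blank their user agent, so treat this as a speed bump rather than a wall:

    # nginx sketch (inside the server block): reject requests from known AI crawler user agents
    if ($http_user_agent ~* "(GPTBot|CCBot|Bytespider)") {
        return 403;
    }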
Prompt + Original
“Create a surreal, hand-drawn watercolor and ink portrait of a bald, androgynous figure. Emphasize asymmetry and distortion in the facial features: elongated head, off-center nose, oversized ear, and deep-set eyes with gold irises. The figure should have prominent, mauve-colored lips slightly parted to reveal uneven teeth, soft blush on the cheeks, and textured shading around the eyes. Use a muted, dreamlike background with blue, green, and brown watercolor washes. Maintain a melancholic, uncanny atmosphere throughout.”
Echo #9086756471.
Click if you dare.

Edit 1 + 1.2 + 1.3
“make it look glitchy”
Edit 2
“make him look like a person”





