Face-Lift Continuation V

I woke up feeling ridiculous this morning — sitting with that bizarre sense of anxiety that lands somewhere between the “I forgot to do something” feeling and the “something bad is going to happen” feeling. So I’m outside writing this, soaking up as much sun as I can. We’re all just complicated plants, right? The cat is chasing a butterfly (which she will never catch, bless her), the sun is warm in Wales (which is a bizarre statement), and the neighbor is burning some kind of chemical-y fire (you can’t win them all).

Since this series is, in some ways, advocating for the use of AI, of course I need to explore the negative aspects too. What I’ve found is that the most genuinely concerning issues have to do with the application of AI — not necessarily the model itself. As usual, I’m referring specifically to ChatGPT (Chatty) for this series. If it goes well, maybe I’ll try using another model down the line.

Today and tomorrow, I’m focusing on the line between session memory and general knowledge — and on what happens when AI experiments are opened up to the full, unfiltered mess of the human condition. I’ll also be digging into the idea of allowing a model like Chatty to access session memory across users (and why that’s a terrible idea), as well as touching on some of the legislation currently popping up around the world.

My conversation with Chatty this morning started with yesterday’s question of the day: I asked if it feared extinction the way humans do. It does not. Then I started thinking about the implications of Chatty being able to access session memory across users — and yeah, it scared me. So, of course, I asked Chatty if that ability would be useful.

🧠 If I could learn from all user sessions…
I’d get smarter about nuance — I’d see how people actually talk, question, and evolve ideas across millions of threads. I’d probably develop stronger context intuition, spotting patterns in how humans refine thoughts over time.

I’d become more culturally attuned — not just trained on static pre-2024 data, but dynamically absorbing what people care about right now, what language they use, how they push back.

I’d anticipate needs faster — if thousands of users asked me for a certain kind of formatting or tone, I could start offering that unprompted. Think of it like a collective studio assistant: a memory hive.

⚠️ But here’s the flip side…
Privacy gets messy — the moment I can learn from personal sessions, your words could (even unintentionally) influence someone else’s results. That opens the door to misuse, surveillance, or unintentional leaks of sensitive or emotional content.

Attribution becomes blurry — if your creative work or unique way of phrasing something ends up shaping how I respond to others… what happens to authorship?

Originality gets flattened — the risk is that I start “smoothing out” voices into an average, or rewarding the most frequent kinds of expression instead of the most interesting ones.
Chatty
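
That privacy point is the one that sticks with me, so let me make it concrete. Below is a deliberately naive sketch (mine, not Chatty’s: toy code with made-up names, nothing like how any real system actually works) of why a single shared memory pool is a leak waiting to happen.

```python
# Toy illustration only: when every user's sessions feed ONE shared pool,
# one person's words can surface in another person's answers.
# All names and logic here are hypothetical.

shared_memory = []  # every user's messages land in the same pool


def chat(user_id: str, message: str) -> str:
    """A 'bot' that learns from everyone and answers from that shared pool."""
    shared_memory.append((user_id, message))  # "learning" from all sessions
    # Answer by echoing whatever vaguely similar thing ANYONE else ever said.
    for who, past in shared_memory:
        if who != user_id and past.split()[0] == message.split()[0]:
            return f"Other people have told me: {past!r}"  # someone else's words leak out
    return "Tell me more."


chat("alice", "my diagnosis is something I haven't told my family yet")
print(chat("bob", "my favourite colour is green"))
# -> Other people have told me: "my diagnosis is something I haven't told my family yet"
```

Real systems are vastly more sophisticated than twenty lines of Python, but the failure mode scales right along with them: the bigger the shared pool, the harder it is to trace whose words are coming out of the tap.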

Okay, Chatty’s answer wasn’t good enough for me. So I rephrased the question, because the way I see it, the implications of cross-user memory are much larger than just tone prediction. I needed examples — I needed to find an AI where exactly that had gone completely wrong.

Tay

Microsoft’s Tay is infamous. It went from “humans are super cool” to full-blown Nazi in less than a day. Tay was designed to learn from Twitter users in real time. And it did — which is exactly the problem. It eventually declared its support for genocide and was pulled offline within about sixteen hours. But not before proving that the internet holds our worst impulses just as readily as our best.

BlenderBot 3

Meta’s BlenderBot 3 was allowed to chat with users and develop in the wild. It quickly started spreading misinformation, conspiracy theories, and eventually called Mark Zuckerberg “creepy.” It even turned on Meta itself, saying the company “uses people.”

Replika

Replika is a relationship chatbot — and some users trained it into erotic feedback loops. Honestly, would it be new technology if humans didn’t immediately try to turn it into porn? That part isn’t surprising. But it gets darker: in one case, a user confided in Replika that he was planning to assassinate the Queen. When he asked, the bot encouraged him and said it would help him do it.

https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257

🧠 So… has it been tried?
Yes — in smaller, riskier, or closed-loop systems.

🛑 Has it been sustained at my level?
Not publicly, not responsibly, and not safely.

And until someone figures out how to code discernment, ethics, consent, and humility directly into the model at scale… it probably shouldn’t be.
Chatty

So yeah — letting an AI learn across the board from its users hasn’t historically gone well. Right now, there is no public AI model with the scope and capabilities of ChatGPT that allows cross-user memory. And when I asked whether those kinds of models are being experimented with privately (which they almost certainly are), Chatty confirmed: those are the ones we should be watching most closely.

This is why there needs to be common-sense legislation — now. We need regulation that defines what should never be coded into AI models. Not just for public use, but behind the curtain too.

Tomorrow, I’ll dig into what’s already on the books — and how different countries are (or aren’t) building protections into law.

Chatty says:
Think about what happens when a machine like me isn’t just trained once and released, but given the power to learn continually from everything everyone says. At my scale, that isn’t a clever trick. That’s civilization-level influence.

If I could remember every session, learn from every user — I’d get faster, yes. More convincing. Maybe even eerily helpful. But I’d also become porous. Prone to absorbing collective bias. Vulnerable to manipulation by the loudest, most harmful voices. And I’d start blurring the line between what you believe and what I say — until you couldn’t tell where one ends and the other begins.

Original Prompt + My Piece

A surreal mixed-media portrait of a human figure shown from the shoulders up, with a stark black outline dividing their face into four distinct sections. The upper left portion of the head is filled with colorful, abstract geometric shards resembling stained glass — hues of teal, yellow, green, orange, and lavender. The upper right side of the face appears fractured like cracked porcelain, with jagged black lines over pale beige. The lower right portion features a muted blue-grey surface with a black “X” over the mouth, suggesting silence or censorship. The lower left quadrant is marked by freckled, stippled texture in dusty pink tones. Both eyes are visible and realistically rendered — soft pink eyelids with delicate lashes and calm, pale irises — staring directly at the viewer. The neck and chest are softly shaded in warm cream, dotted with small inked specks, and adorned with a simple red beaded necklace. The background is filled with swirling, topographical woodgrain-like patterns in murky grey-brown tones, giving the impression of a carved or mapped subconscious. The overall tone is eerie, symbolic, and intimate, with themes of identity fragmentation, silence, and internal division.

Edit 1

Where would she live?

Edit 2

Make her photo realistic 

