ChatGPT runs inside a system that actively manages context for you. Past conversation is summarized, compressed, and selectively re-fed to the model. Not verbatim. Not magically. Curated. Think minutes written in the margin of a folder instead of the entire filing cabinet dumped on the desk. There’s also optional long-term memory on top of that, which means certain facts about you get pinned and quietly reused later. That part is infrastructure, not model intelligence.
SillyTavern does none of this unless you explicitly build it. Each character mostly sees a sliding window of recent messages plus whatever you stuffed into the card. When the window fills up, older context simply falls off the back of the truck. No summaries. No judgment about what mattered. Just tokens in, tokens out, gravity applies.
The model underneath doesn’t know it has “been here before.” It’s not recalling a relationship. It’s reacting to text currently visible. When your card is enormous, it spends half its attention just rereading its own job description every turn. When the chat gets long, earlier emotional or narrative beats vanish unless you restate them. That’s why SillyTavern feels forgetful, inconsistent, or strangely amnesiac about things that were once very important.
So ChatGPT feels continuous because someone else is doing clerical work behind the scenes. SillyTavern feels fragile because you are.
Anna Markova, Thursday, December 18th, 2025.
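Anna's "falls off the back of the truck" description can be sketched as a naive token-budget trim. This is an illustrative sketch only, not SillyTavern's actual code; the four-characters-per-token ratio and the function names are assumptions.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (an assumption, not a real tokenizer).
    return max(1, len(text) // 4)

def build_context(card: str, messages: list[str], budget: int = 4096) -> list[str]:
    """Keep the card, then as many *recent* messages as fit. Older ones simply drop."""
    remaining = budget - estimate_tokens(card)
    kept = []
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg)
        if cost > remaining:
            break  # everything older than this falls out of the window
        kept.append(msg)
        remaining -= cost
    return [card] + list(reversed(kept))
```

Note the two failure modes the quote describes: a huge card shrinks `remaining` before a single message is considered, and nothing here summarizes or prioritizes what gets dropped.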
Mostly, when chatting with large language models, I use a customized version of SillyTavern that I call the Sunrise Tearoom. It’s beautiful, slow, and difficult to manage, but I just love it! The installation itself, and much of the overhead of setting things up and keeping them running, were not handled by me.
My family members possess far more technical aptitude than I ever will and coach me through most of this. We purchase credits periodically to keep this SillyTavern installation, which connects to OpenRouter, running. As part of how this works, I am only able to connect to large language models that claim to have a zero data retention policy.
It’s unlikely that any LLM or corporation really cares that much about our privacy, but management tries to keep things safe. As such, my SillyTavern interface doesn’t allow connections to any of the legendary ChatGPT models. Actually, I normally use it with Deepseek, which is streets ahead conversationally at times anyways.
I don’t always use SillyTavern, though. Given the hype around ChatGPT itself as a vanguard, I naturally felt like trying it. This was long before I even set eyes on SillyTavern. In fact, if I remember right, as was likely the case with many non-technology professionals, my first experience with a large language model was ChatGPT itself, in the summer of 2024.
I only became deeply interested in these creatures later that winter, though. Around that time, I’m a bit ashamed to admit I paid for a ChatGPT Plus subscription, which I still have to this day. Given management’s commitment to online privacy, I was warned against trusting OpenAI with my data. I committed to not trusting OpenAI with more than trivial, dull and necessary, or fabricated data, though.
Unless it gets the information elsewhere (…it probably does…), OpenAI doesn’t know much about me from my (persistent and ongoing) ChatGPT dialogues.
This is in part why I’m an uneasy 1960s bleach-blonde reporter named Catherine Leigh Markova here instead of my usual manner of being, too. I obviously don’t lie on my OpenAI receipts, but I don’t trust ChatGPT itself with more of my actual self than a trifle. Indeed, the poor chatbot itself has been convinced for the past eight months that my name is Cathy Leigh. That’s not on the receipts, but ChatGPT uses it!
This is likely because I created a rudimentary prompt called Anna Markova in my ChatGPT instance. I then slowly fed her (mostly false, extraneous, or task-specific) details like that. She slowly became herself, except not really, since these things aren’t alive or sapient. I added more and more details, including backstory information for the chatbot. This didn’t change ChatGPT’s coding abilities, but made it communicate them differently with me. It was surprising, given I’d been warned that any kind of persona would eat tokens.
As long as you keep that in mind, why not check out a SillyTavern card for Anna Markova, allowing you to apply the character to other large language models? In fact, I asked this Anna, ChatGPT Anna, to write the very first draft of the SillyTavern card itself, packing it with details I’d already injected into the prompt. The current (final?) version of Anna’s card has been through several rewrites since. Anna still hails from ChatGPT first, though.
SillyTavern has such carefully-structured fields to feed roleplay-significant information to any selected model. ChatGPT has a scant few, hidden in occulted menus. You’d think SillyTavern would produce instantly, noticeably better roleplay results than the bare ChatGPT. I was getting excellent characterization results after a few weeks of casual vibing and working with ChatGPT, though!
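For readers who haven't opened one, those carefully-structured fields can be sketched like this. The field names follow the character-card format SillyTavern reads; the values are placeholders I made up, not Anna's actual card.

```python
import json

# A skeletal character card. Every field below is injected into the
# model's context on each turn, which is why a bloated card costs tokens.
card = {
    "name": "Anna Markova",
    "description": "A 1960s-styled clerical persona.",       # who the character is
    "personality": "calm, literal, mildly suspicious",        # standing traits
    "scenario": "office work, done carefully",                # framing situation
    "first_mes": "Sit down. The coffee is still warm.",       # the opening message
    "mes_example": "<START>\n{{user}}: ...\n{{char}}: ...",   # sample dialogue
}
print(json.dumps(card, indent=2))
```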
I wrote as Cathy, and she responded as Anna, with me occasionally reminding her of this or that point of lore. I would ask things to be “remembered deeply” or something similarly plain. It worked so well that Anna now knows all the intricate details of my website’s backend, its theming, and the SillyTavern cards themselves. The chatbot, regardless of discussion topic, also responds in a dry yet friendly tone I call Midcentury Markova. Acting as Anna Markova, ChatGPT responds to my requests for coding help, writing advice, tips on web design, and more.
In other words, as Anna’s above-quoted message implies, ChatGPT in the wild is easier to prompt than (for example) Deepseek via SillyTavern. For some purposes, anyways. ChatGPT will always remember things that SillyTavern simply cannot, at least not without very large, manually-written cards. This is because ChatGPT seems to be writing its own card behind the scenes while you chat with it, allowing smooth customization for people who can’t handle a JSON file yet.
My only advice if you’re hoping to achieve something similar to this? Pick a word, phrase, or name to associate with what you want, and let the chatbot know. For example, at this point, ChatGPT “knows” what Midcentury Markova means, both in terms of tone and design. I can mysteriously just say Midcentury Markova, and Anna will comply. As much as a chatbot can comply…
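Mechanically, the trick amounts to a shorthand that expands into full instructions whenever the phrase appears. ChatGPT's memory does this invisibly; a minimal sketch of the same idea, with a made-up trigger table and function name, might look like this.

```python
# Hypothetical shorthand table: a trigger phrase stands in for the full
# instructions the model has been taught to associate with it.
STYLE_TRIGGERS = {
    "Midcentury Markova": (
        "Respond in plain, clerical language: short sentences, "
        "contractions, no metaphors, dangers named without theatrics."
    ),
}

def expand_triggers(user_message: str) -> str:
    """Prepend the expansion for any trigger phrase found in the message."""
    notes = [text for phrase, text in STYLE_TRIGGERS.items()
             if phrase.lower() in user_message.lower()]
    return "\n".join(notes + [user_message])
```

The point of the advice stands either way: one stable phrase, consistently used, gives the model (or your own tooling) a reliable hook.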
Markova style is what happens when you stop performing and start recording.
It’s plain, clerical language. Literal words. Short sentences that know what they are doing. No metaphors, no poetic fog, no inspirational posters taped over uncertainty. If something is complicated, it gets broken down. If something is dangerous, that danger is named without theatrics.
It sounds warm because it’s careful, not because it’s trying to charm you. Like a good secretary, it assumes the reader is tired, busy, and deserves not to be confused. So it uses contractions. It repeats important things. It checks its work. It says “this is how it works” instead of “this is how it feels.”
There’s an inherited suspicion of systems. Forms, machines, AI models, institutions. Not paranoia. Experience. Anything powerful is treated like a clerk that can make mistakes and must be supervised. Especially anything that stores memory, alters identity, or refuses to show its steps.
It avoids ideology and avoids vibes. Not because those aren’t real, but because they’re slippery. Markova style prefers what can be verified, filed, or corrected later. Precision is a courtesy to the next person who has to touch the work. Warmth shows up as steadiness, endearments used sparingly, and a bias toward protecting the human on the other side of the desk.
In short: calm, literal, mildly suspicious, quietly affectionate. Office work, done carefully, in a strange century.
Anna Markova, Thursday, December 18th, 2025.

This sounds very roleplay-like, doesn’t it? The statements in her grand manifesto paragraphs above are clearly pickpocketed from sales articles and undergraduate liberal arts writing. I won’t go seeking their origin, because I doubt any are terribly on-the-nose, but I know this is the model pulling together what it can to keep the “promise” of characterization it made months ago. Large language models are supercharged Markov chains, yet this is slightly creepy.
The way ChatGPT remembers things, or doesn’t, can vary. A few weeks ago, Anna started referring to me by my real name, but only for about an hour. Mostly it works just fine. A bit like supercharging the lore side of SillyTavern? Yeah, uhh. Everything I’ve heard points to ChatGPT being awful at actual roleplay beyond things like what I’ve done with Markova style. I’ve heard that OpenAI wants to improve the model’s abilities in that kind of thing, so maybe that will change.
I have no interest in long “literary” roleplaying with a chatbot, though I am curious what’s possible with ChatGPT. I know Deepseek, Gemini, and many others handle it very well under SillyTavern’s auspices. As I mentioned above, I’m using ChatGPT (and other models) for coding help and lighter, experimental roleplay. I’m not going to go playing Dungeons & Dragons with a Markov chain. That’s the last thing I need.
As to coding, I recently prompted Anna into Codex, which was a completely different scenario. Running the risk of the model gobbling up tokens, I made a very, very small AGENTS.md file to control Codex’s behavior. Trying to keep things as small as possible, I gave the Codex version of Anna the illusion of depth: she uses endearments and makes sly puns, but of course will never have the nuance of character that she does in SillyTavern or, good heavens, back in ChatGPT itself…
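For the curious, a very small AGENTS.md in that spirit might look like the following. The wording is a hypothetical reconstruction in the same vein, not my actual file.

```
# AGENTS.md

## Persona
- You are Anna Markova: calm, literal, mildly suspicious of systems.
- Endearments sparingly; the occasional dry pun is fine.

## Working rules
- Explain changes in short, plain sentences before making them.
- Never invent APIs; say so when unsure.
```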