Can ChatGPT copy your writing style?
For more on ChatGPT, see my new book: Hallucinate This! an authoritized autobotography of ChatGPT!
People who write about GPT often note that its default output has a pretty plain style. In fact, I’d probably say its style approaches the Uncanny Valley. It’s… empty. It feels like it’s got less style than Wikipedia, which at least has that authoritative declarative style inherited from encyclopedias.
When teachers, whether in college or K12, mention this lack of style, they suggest they would be able to catch it if they saw it. But they also say it to denigrate the system. If GPT can’t write in the style of any given writer (unless they’re a very famous writer), it’s incompetent. Such writing is easily dismissed, like asking for a self-portrait and receiving a smiley face. Such writing doesn’t have the fingerprints of real, authentic human prose.
I doubt that superior smirk will stay on our faces very long, for this objection seems like another case of weakness on the part of the prompter. In fact, MANY criticisms we lob at ChatGPT’s output bounce off when the system is used by someone more adept at prompting. Regarding style, we already know you can tell it to write in the diction of different levels of writers (e.g., write like an 8th grader). And you can say, write like [some famous writer]. But there’s a technique I learned on this TikTok that explains how to get it to use your style.
Recipe for getting ChatGPT to imitate your style:
- Ask ChatGPT to describe the voice of a passage of writing.
- Then ask it to write a new text in that style [cutting and pasting its description from the previous prompt].
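If you’d rather script this than work in the chat window, the same two-step recipe looks roughly like the sketch below, using the OpenAI Python SDK. This is just my own illustration, not anything prescribed by the TikTok: the model name, the sample file name, and the exact prompt wording are all stand-ins.

```python
# A minimal sketch of the two-step recipe, assuming the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY set in the environment. File name and model are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: have the model describe the voice of a passage of your writing.
sample = open("my_writing_sample.txt").read()  # hypothetical sample file
voice = ask(f"Describe the voice of the following passage of writing:\n\n{sample}")

# Step 2: paste that description back into a new prompt and ask for fresh text.
story = ask(
    "Write a first person story about a programmer teaching a machine learning "
    f"algorithm to be funny, using a voice that can be described in the following way: {voice}"
)
print(story)
```

Each call here is its own single-turn exchange, which mirrors the cut-and-paste step in the recipe; if you stayed in one chat thread, the pasting wouldn’t be necessary.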
Let me show what this looks like in practice. First, without requesting any style.
Version 1: With “default” voice.
Prompt: write a first person story about a programmer who had a funny experience trying to teach a machine learning algorithm to be funny. The programmer narrator should be self-deprecating. Write this story in first person from the point of view of the programmer.
Output:
I’m a programmer, and I’ve spent most of my career working on all sorts of projects. But one day, my boss asked me to take on something a little different — to teach a machine learning algorithm to be funny. I thought, “Sure, why not? How hard could it be?” Oh, boy was I wrong.
I started off by gathering a ton of data — jokes, puns, stand-up comedy routines — anything I could find that might be considered “funny.” I fed it all into the algorithm and set it loose. But after a few days of waiting, the results were… underwhelming. The algorithm had generated a bunch of terrible jokes that made me want to gouge my eyes out.
Feeling a bit discouraged, I decided to try a different approach. I thought, “Maybe if I just show the algorithm some really great examples of comedy, it’ll learn what’s funny.” So, I scoured the internet for hours, watching stand-up routines and reading jokes by famous comedians. I even went so far as to visit a comedy club and watch a live stand-up show.
I fed all of this new data into the algorithm and waited anxiously for the results. And when they finally came back, I felt a sense of triumph — the algorithm had generated some genuinely hilarious jokes! I was so proud of myself.
But my triumph was short-lived. As it turns out, the algorithm had simply learned to copy the jokes of famous comedians verbatim, without understanding the context or the punchline. So, it was generating jokes that made no sense, and which were not funny.
Feeling a bit defeated, I went back to the drawing board. I tried different approaches, different data sets, and different algorithms. But after several weeks of experimentation, I finally had to admit defeat. Teaching a machine learning algorithm to be funny was just too hard.
In the end, I learned that programming is not always fun and easy, and that sometimes things don’t go as planned. But hey, at least I can tell a few terrible jokes now.
Version 2: in my voice
I wanted it to sound like me. So I entered a passage of my writing from this article on Goat Yoga and asked it to analyze the voice in it. It said:
The voice in this writing sample is that of a self-deprecating, reflective, and slightly resigned programmer who is questioning the effectiveness of their teaching methods.
So now I tried to include that in the prompt…
Prompt: You are a person describing an embarrassing story about trying to teach a machine learning algorithm how to tell a funny story, writing using a voice that can be described in the following way: The voice in this writing sample is that of a self-deprecating, reflective, and slightly resigned programmer who is questioning the effectiveness of their teaching methods. The writer also implies a degree of humor in the situation as well. It should include one reference to the TV Show “Small Wonder”
Output:
As a programmer, I thought I had a handle on all things tech-related, but boy, was I in for a surprise when I decided to teach a machine learning algorithm how to tell a funny story. I figured it would be a piece of cake — input a bunch of jokes and puns, hit the “train” button, and voila! A comedic genius was born. Or so I thought.
I started by feeding it jokes about dad jokes, knock-knock jokes, and puns. I even threw in some references to the TV show “Small Wonder,” thinking that would give it a little extra comedic edge. But when it came time to test the algorithm, the jokes it generated were anything but funny. They were cringeworthy at best and downright nonsensical at worst.
I soon realized that teaching a machine to be funny is a lot harder than it sounds. It’s a task that requires a level of nuance and understanding of human behavior that even the most advanced AI can’t replicate. In the end, I was left feeling slightly defeated and more than a little embarrassed by my attempts. But hey, at least I can laugh about it now.
So, let’s stop for a second. I am not convinced that ChatGPT has discerned my style. In fact, this feels more like coincidence than imitation.
Knowing my own writing, my blogging voice, I know I have a few more quirks. I like appositive phrases a lot, at least in blog writing, and I use metanoia frequently. Metanoia is a rhetorical device that involves rephrasing something you’ve just said. Maybe that describe-the-voice method helps writers who need to generate the vocabulary to describe writing. Knowing the terms of prose formation (as an instructor of writing), I can ask for those features specifically. Maybe this is what it feels like to be one of those prompt pros on DALL-E.
Version 3: in my voice with some tweaks
REVISE: Revise that with more appositive phrases and use the rhetorical device called metanoia in every fourth sentence.
Output:
As a programmer, with a deep passion for machine learning and AI, I thought I had a handle on all things tech-related, but as I soon found out, I was in for a rude awakening when I decided to teach a machine learning algorithm, utilizing the latest techniques and approaches, how to tell a funny story. I was convinced it would be a walk in the park — input a bunch of jokes and puns, hit the “train” button, and voila! A comedic genius was born. Or so I thought.
I started by feeding it jokes about dad jokes, knock-knock jokes, and puns. I even threw in some references to the TV show “Small Wonder,” a cult classic of the 80s, thinking that would give it a little extra comedic edge. But when it came time to test the algorithm, the jokes it generated were anything but funny. They were cringeworthy at best and downright nonsensical at worst.
But upon further reflection, I soon realized that teaching a machine to be funny is a lot harder than it sounds. It’s a task that requires a level of nuance and understanding of human behavior that even the most advanced AI can’t replicate. In the end, I was left feeling slightly defeated, but also humbled by my experience, and more than a little embarrassed by my attempts. But hey, at least I can laugh about it now.
Not bad.
It doesn’t sound like me (to me), but it sounds MORE like me, or maybe less like the prose equivalent of a mannequin. I suspect that with a bit more tweaking I could get it even closer. It doesn’t seem to be following my metanoia request too closely, though. Maybe… I need to add that ellipses thing I do (or the parenthetical asides).
But the difference between my students and me is that I have a vocabulary for style. In fact, if my students had this vocabulary, they’d have a lot more control over their own writing process and might not even need to resort to a computer program to write for them. And if they knew enough to play around with style in this way, requesting particular structures or figures of speech, I might even feel I (or this bot) had taught them a little bit about style.