#neural networks
AI-generated Valentine cards
Tired of generic greetings?
Confuse your friends with AI-generated Valentine cards!
GPT-3 generated the messages and descriptions, and then I followed its instructions to create the cards. More explanation here!
How this abomination came to be (and bonus content)
You can get these on real cards - confuse your human friends!
“You’re the snail’s poise” is absolutely sending me, because it 100% sounds like some kind of jaunty 1920s-era slang expression that Bertie Wooster would use whenever he got the chance.
“It’s the snail’s poise, Jeeves! Absolutely top-hole!”
“Very gratifying, sir.”
Neural Network Model Shows Why People with Autism Read Facial Expressions Differently
People with autism spectrum disorder have difficulty interpreting facial expressions.
Using a neural network model that simulates brain processes on a computer, a group of researchers based at Tohoku University has unraveled how this difficulty arises.
The journal Scientific Reports published the results.
“Humans recognize different emotions, such as sadness and anger, by looking at facial expressions. Yet little is known about how we come to recognize different emotions based on the visual information of facial expressions,” said paper coauthor Yuta Takahashi.
“It is also not clear what changes occur in this process that lead to people with autism spectrum disorder struggling to read facial expressions.”
The research group employed predictive processing theory to help understand more. According to this theory, the brain constantly predicts the next sensory stimulus and adapts when its prediction is wrong. Sensory information, such as facial expressions, helps reduce prediction error.
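The core of predictive processing can be illustrated with a toy loop: an internal prediction is repeatedly nudged toward incoming sensory input, shrinking the prediction error over time. This is only a hypothetical sketch of the general idea, not the researchers' actual model, which was a recurrent neural network trained on facial-motion videos.

```python
# Toy sketch of predictive processing: the prediction adapts toward the
# sensory input, and the prediction error shrinks with each update.

def predictive_update(prediction, sensory_input, learning_rate=0.2, steps=50):
    """Reduce prediction error by adapting the prediction to the input."""
    for _ in range(steps):
        error = sensory_input - prediction   # prediction error
        prediction += learning_rate * error  # adapt to reduce the error
    return prediction

# The prediction converges toward the observed stimulus.
final = predictive_update(prediction=0.0, sensory_input=1.0)
print(abs(1.0 - final) < 0.01)  # the remaining error is tiny
```

Each pass shrinks the error by a constant factor, which is why the prediction settles on the stimulus; in the study, minimizing this kind of error during development is what let emotion clusters self-organize.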
The artificial neural network model incorporated predictive processing theory and reproduced the developmental process by learning to predict how parts of the face would move in videos of facial expressions. After this training, clusters of emotions self-organized in the model's higher-level neuron space - without the model ever being told which emotion the facial expression in each video corresponded to.
The model could generalize to unknown facial expressions not included in the training data, reproducing facial part movements and minimizing prediction error.
Following this, the researchers conducted experiments in which they induced abnormalities in the neurons’ activities to investigate the effects on learning development and cognitive characteristics. In a model where the heterogeneity of activity in the neural population was reduced, generalization ability also decreased, and the formation of emotional clusters in the higher-level neurons was inhibited. This led to a tendency to fail at identifying the emotions of unknown facial expressions, similar to a symptom of autism spectrum disorder.
According to Takahashi, the study clarified that predictive processing theory can explain emotion recognition from facial expressions using a neural network model.
“We hope to further our understanding of the process by which humans learn to recognize emotions and the cognitive characteristics of people with autism spectrum disorder,” added Takahashi. “The study will help advance developing appropriate intervention methods for people who find it difficult to identify emotions.”
New year’s resolutions generated by AI
This month I’m beginning 2022 as the first Futurist in Residence at the Smithsonian Arts and Industries Building.
It’s weird to think of myself as a futurist. I write a lot about the algorithms we’re calling artificial intelligence (AI), but rather than deal with the humanlike science fiction version, I focus on what today’s much simpler AI is capable of. Since today’s AI relies on using trial and error to get better at predicting its training data, and its training data must necessarily be from the past, its job is really to predict the past. This has a big effect on what it’s like to use AI to predict the future.
Since we’re entering 2022, the folks at the Smithsonian thought it would be interesting if I could use AI to generate New Year’s Resolutions. What does it look like if I try to use AI trained on past data to suggest positive changes for the future?
I decided to use GPT-3, which is how people tend to refer to a group of large text-generating AI models that OpenAI trained on a huge collection of internet text. News articles, food blogs, social media, spam - it’s all in there somewhere. Including plenty of times people have talked about their own New Year’s resolutions.
Give one of the larger GPT-3 models - for example, DaVinci, the very largest - the beginning of a list of New Year’s resolutions and, since its job is to predict what text comes next, it’ll generally add more resolutions to the list.
In the example below, the only text I wrote is the list title in bold. With that as a beginning, DaVinci predicted from its internet training that what would come next is this list of resolutions.
List of New Year’s Resolutions
To attend all Orientation committee meetings
To become more involved in Plattsmouth Community
To go bowling with teammates
To become a good writer
To start a successful business and become a millionaire
They’re plausible resolutions. Ask DaVinci for more predictions and the list will be similar. People who post their resolutions on the internet tend to pick similar sorts of things. At least, they did when DaVinci’s training data was collected in October 2019 - things have probably changed a bit since then.
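In code, this list-completion setup amounts to little more than assembling the prompt text and handing it to a completion model. A minimal sketch, with the actual API call left as a comment (the `openai.Completion.create` call and the `davinci` engine name are the legacy OpenAI API of that era, stated here as an assumption rather than the post's exact setup):

```python
# Build a list-style prompt: a title, optionally a few example items,
# and a trailing newline inviting the model to predict the next item.

def build_prompt(title, examples=()):
    """Assemble a list-completion prompt for a text-completion model."""
    lines = [title] + list(examples)
    return "\n".join(lines) + "\n"

prompt = build_prompt(
    "List of New Year's Resolutions",
    ["Not to buy any clothes except underwear for a year.",
     "Touch grass and record every time I do it."],
)
# response = openai.Completion.create(engine="davinci", prompt=prompt,
#                                     max_tokens=200)
```

Because the model's only job is to predict what text comes next, a prompt that looks like the start of a list reliably gets more list items back.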
But I didn’t want to stop here, because I didn’t just want a generic list of resolutions from the average internet human of the past. I wanted something that felt personal. Something a bit weird.
For my next attempt, I started the list with example resolutions that three strangers had mentioned on Twitter in 2021. I chose resolutions that were a bit offbeat. My example resolutions are in bold, and DaVinci’s additions follow verbatim.
List of New Year’s Resolutions
Not to buy any clothes except underwear for a year.
Touch grass and record every time I do it.
Use more insect textures in creature design.
Draw the front cover of a writing manual.
Get the missus to go to the shops for a new hat.
Find wallpaper for the kitchen/bathroom, and then paint it.
Finish the prints for the QI drawings.
Draw more detail into the background of the QI drawings.
Leave all the Christmas trees up and then paint them, as cubist art.
Eat nothing but broccoli and paint everything broccoli related.
Grow broccoli, and paint broccoli.
Make broccoli the national currency and then paint that.
This time it’s a journey. DaVinci, when faced with predicting what would come next on this list, predicted first one drawing-related resolution, and then multiple others. Soon this became not just a list of resolutions, but specifically a list of drawing-related resolutions. It generated a broccoli-and-drawing-related resolution, and then the list became a list of resolutions by a painter/broccoli fanatic.
You can see how this goes. Each of DaVinci’s predictions feeds forward into what comes next, and can snowball into a self-reinforcing obsession.
I found that I could shape DaVinci’s list by choosing a point to cut it off, deleting the end, and then having DaVinci try to generate the list again. In that way, one resolution or two at a time, I could build up a list of resolutions that, if not exactly advisable, were at least interesting.
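This curate-as-you-go workflow is essentially a human-in-the-loop generation loop: generate a candidate, keep or reject it, and feed the kept list back in as the next prompt. A sketch with the model call stubbed out (in practice it would be a text-completion request):

```python
# Grow a list one accepted item at a time; rejected candidates are
# deleted and the model simply tries again from the curated list.

def curate_list(seed_items, generate, keep, target=5):
    """Human-in-the-loop list building."""
    items = list(seed_items)
    while len(items) < target:
        candidate = generate(items)  # model predicts the next list item
        if keep(candidate):          # the human hand-pruning step
            items.append(candidate)  # accepted items shape future output
    return items

# Stubbed demo: "generate" just cycles through canned candidates.
pool = iter(["Eat a book every day", "Throw a party for insects.",
             "Belch until my teeth explode.", "Eat my favorite book."])
result = curate_list(
    ["Touch grass and record every time I do it."],
    generate=lambda items: next(pool),
    keep=lambda c: "Belch" not in c,   # reject the mean-spirited ones
    target=4,
)
print(result)
```

The important design point is that only accepted items go back into the prompt, which is why pruning early keeps the model from spiraling into a rut.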
List of New Year’s Resolutions
Not to buy any clothes except underwear for a year.
Touch grass and record every time I do it.
Use more insect textures in creature design.
Take photos of each of my toes daily.
Egg every house in the village where I was born.
Lick a branch of a tree and repeat it every day for a year.
Walk down my block backwards looking over my shoulder.
Every time I am alone in the dark I will eat an apple.
Eat my favorite book.
I quickly learned how easy it was for DaVinci to develop themes.
If it generated:
Attend the Rainforest Action Network Benefit.
Then next it might generate:
Ask a rainforest tribe what they think of eco-tourism.
Go on a tour of the Tambopata National Reserve.
Go on a tour of the Manu National Park.
Write letters to the editor about rainforest preservation.
When I let it generate a resolution to eat my favorite book, next it would suggest:
Eat a book every day
and if I didn’t delete that and have it try again, soon it would be suggesting:
Eat a picture of myself everyday.
Super charge my digestive system and then, eat a picture of myself everyday.
I also learned to stay away from gloominess, or platitudes, or anything to do with fitness goals - it was too easy for DaVinci to get stuck in a rut. Anything shocking or mean-spirited also tended to poison the list. I had to delete these:
Every day I will blatantly eavesdrop.
Belch until my teeth explode.
Many were worse. DaVinci is trained on internet text, after all, and so it has a tendency to veer into racism or spam. I needed to be present at every point for careful hand-pruning.
In the end I deleted many, many more predictions than I kept - perhaps about 10 times more.
It began to feel like a metaphor for life choices. Become the kind of person who spends a year licking trees and eating apples in the dark, and who knows what you’ll be doing next. Spend a year trying to belch your way into exploding teeth, and that’ll have an effect on you too.
I present to you my list, my own list that emerged from each prediction that I let stand, and that was also shaped by each prediction that caused me to delete, go back, regroup, retry. These aren’t all things that I would necessarily do myself (I like my sleep far too much to be going out every night at midnight to make grass pancakes), but they are all things that I thought were interesting, that I wanted to hear more about.
Record every adjective I hear on the radio.
Act like a cabbage for a month.
At 4 o'clock every day I will climb a tree.
Speak only to apples for 24 hours.
Jump in front of a moving tree.
On the day of the first snow paint a canvas red.
Dress in a way that only a ghost could love.
Make pancakes out of grass at midnight each night.
Find old man Winter, hug him and let him know everything will be ok.
Ride out of town holding a pelican.
Under every rock I come across for a month I will write “all power to the rocks”.
Every day for a year, at a random time, shout “sausage”.
Make a film about the last sock in the world.
Put on a red shirt and scream ‘I’M NOT WEARING PANTS!’ every time I leave the house.
Throw a party for insects.
Try to convince the dog next door that he is wearing a coat of moonlight.
Every time I press a button I will say 'this is my favorite’.
Search my apartment for secret doors or hidden staircases.
Wear two superman outfits at the same time.
Every time it rains I will stir my tea anti-clockwise.
Every night for a week I will wear a hat lined with lettuce.
I will begin to believe that the trees that I see everyday are my friends.
Every time a bird flies past me I will remember to breathe.
Throw a birthday party for my favorite tree.
I will from now on tell every dog I meet that I am training to be a dragon.
Every time I see a panel van pass me I will dub it a “Slice-a-Wagon.”
Crawl on the ceiling like a spider for a month.
Attempt to find peace living with an army of puppets.
Wear a dinosaur costume to every public event I attend.
Go to the beach every day for a week and shout the names of colors into the ocean.
Go on a three-day backpacking trip dressed as a turnip.
Create messages that only the wind can hear by blowing on the blades of grass.
Give a piece of cloud to a complete stranger.
Make a mask out of grass and wear it while I’m sleeping.
I will now treat every worm I see as if it is an old friend.
When I hear a strange noise in an empty room I will assume someone is saying hello to me.
At the Smithsonian AIB website is a generator that I’ve populated with other resolutions from my list. Visit their site and grab yourself a random resolution. If you don’t like the one you got, you have my permission to reload and regenerate until you find a resolution that speaks to you. Or to apples.
Your January 2022 Smithsonian Futurist in Residence,
Janelle
Sea Shanty Surrealism
I’ve been working with an image-generating algorithm by Vadim Epstein called CLIP+FFT, which uses OpenAI’s CLIP algorithm to judge whether images match a given caption, and an FFT algorithm to come up with new images to present to CLIP. Give it any random phrase, and CLIP+FFT will try its best to come up with a matching image. And now there’s a version that will generate images to go with several phrases in a row and then fuse them into a video.
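The generate-and-judge loop at the heart of CLIP+FFT can be sketched abstractly as hill climbing: one component proposes images, CLIP scores how well each matches the caption, and the proposer keeps changes that raise the score. In this sketch the "image" is just a list of numbers and the scorer is a stand-in; the real system optimizes pixels (through an FFT parameterization) against CLIP's image/text similarity.

```python
import random

def optimize_image(score, size=8, steps=200, seed=0):
    """Greedy hill climbing: keep a perturbation only if the score improves."""
    rng = random.Random(seed)
    image = [0.0] * size
    best = score(image)
    for _ in range(steps):
        candidate = [x + rng.uniform(-0.5, 0.5) for x in image]
        s = score(candidate)
        if s > best:            # CLIP's caption-match judgment would go here
            image, best = candidate, s
    return image, best

# Stand-in "caption match" score: peaks when every value reaches 1.0.
def target_score(img):
    return -sum((x - 1.0) ** 2 for x in img)

image, best = optimize_image(target_score)
print(best > target_score([0.0] * 8))  # score improved from the start
```

The actual method uses gradients rather than random perturbations, but the shape of the loop is the same: propose, score against the caption, keep what scores better.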
Here’s the sea shanty The Wellerman, sung by Nathan Evans, Jonny Stewart, and others, and illustrated by CLIP+FFT.
Now, there are several interesting things going on here, once you get past the sheer AI fever dream horror of it. One thing you’ll notice is that I changed some of the lines from the standard lyrics. CLIP+FFT deals with each line independently, so even if we have been talking about a ship and a whale throughout the song, the AI doesn’t know that in “when down on her a right whale bore”, the “her” refers to a ship. I made similar tweaks in one or two places.
There was nothing I could do about the line “One day, when the tonguing is done”. Trying to be more precise about the whaling sense of “tonguing” would, if anything, have made the image more horrifying.
Having none of the “Wellerman is a ship” context, the AI interprets The Wellerman itself as some kind of eldritch oil well drilling supervillain.
I kind of like what happened to “The winds blew hard, her bow dipped down,” with golden locks of hair and bows everywhere. I mean, I like it in an “oh no this has gone terribly yet fascinatingly wrong” sort of way.
The image for “We’ll take our leave and go” is also interesting, since it illustrates “leave” in so many ways. Sometimes there are cars and suitcases, or people shaking hands. Interestingly, I see hints of European Union flags and British flags in many of them, signs that during training CLIP was learning to associate “leave” with Brexit.
The “bully boys” are hilarious, classic glowering expressions and mean-kid haircuts. The AI is not used to the early-1900s meaning of “bully = awesome”.
You’ll notice that many of the frames have text, which I find charming, as if the AI is frowning to itself and muttering “tea. tea. Billy. tea.” or “blow. blow.” The less interpretable the phrase is in image form, the more likely the AI is to use text instead.
In fact, CLIP’s treating the word and the object as equivalent has led to an interesting way of fooling its image recognition capabilities:
I also had CLIP+FFT illustrate The Twelve Days of Christmas and this is one of my favorite frames from it: Ten Lords A-Leaping
To see the other illustrated Days of Christmas (including the weirdly human-faced swans), become a supporter of AI Weirdness! Or become a free subscriber to get new AI Weirdness posts in your inbox.
“Frodo Baggins delivering a pizza through the mines of Moria”
Remember my attempts to get CLIP+BigGAN to generate candy hearts? Here’s what an alternative method, CLIP+FFT, does with the prompt “a candy heart with a message”.
Rather than a single obsessively-scribbled-upon heart, we now have a vast universe of candy hearts, jostling against one another with their messages screaming incomprehensible love at the viewer.
As before, CLIP is the judge, telling another algorithm whether this collection of pixels looks more like “a candy heart with a message” than that collection of pixels. But this time, the algorithm presenting the images to CLIP isn’t steering through BigGAN, which was trained on a set of human photography. Instead, it’s doing something a lot more like the classic Deep Dream images, changing parts of the image to maximize how much it looks like dogs, or whatever the prompt is supposed to be.
(this cartoon is from my book You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place - out in paperback on March 23, 2021)
And since CLIP was trained on text and images that appeared together on the internet, it can be the judge of just about anything.
Here’s “A stegosaurus flying a spaceship among lasers”.
And it knows how to judge pop culture figures and even the look of TV shows. Here’s “Godzilla and Paul Hollywood in the Bakeoff tent”
Note that it not only correctly renders the tent as white and pointy-roofed, it’s even trying to do the Union Jack bunting. And it’s really sensitive to the prompt: if you type “Godzilla and Paul Hollywood taking a selfie in the Bakeoff tent” instead, Paul Hollywood breaks out into a grin and cameras appear. (It seems to be less sure what a grinning Godzilla looks like.)
Here’s “Mr Darcy emerges from a lake in a white shirt while his horse looks on”
It does less well to my mind when there are fewer clues about what the background should look like. Tell it just to do “Tyrannosaurus Rex” and things get very abstract and smeary, and it even resorts to trying to write “tyrannosaurus” everywhere.
“A tyrannosaurus wearing a crinoline hoop skirt on a fashion show runway” looks a bit more realistic. Or maybe that’s just my preference. The trees in the background are a nice touch.
Here’s a zoomed-in view of one of the best ones: “a library made of bones and skeletons; a library in the style of catacombs”. It doesn’t seem to resort to word graffiti if the prompt suggests a finely textured background, maybe. (This may have been from a newer version of the CLIP+FFT notebook, so that could explain some of the improved quality.)
You do need a bit of imagination to figure out what the original prompts were, so I wouldn’t exactly say that CLIP+FFT is as successful at making images to order as the original CLIP+DALL-E (still not released publicly). But having a neural net that will attempt whatever I ask for (and not turn every human into a horror of many-eyed blobs) is still pretty fun.
“The daleks have filled the tardis with llamas and David Tennant is annoyed”
Read more about CLIP+FFT (built by Vadim Epstein) and try it yourself for free with the colab notebook!
I made a bonus gallery of various characters delivering pizza. Spider-man’s not the only one who’s recognizable with a fresh pie in hand. To see the gallery, and get other bonus content, become an AI Weirdness supporter! Or become a free subscriber to get new AI Weirdness posts in your inbox.
AI-generated donuts
If you’re going to open a late-night donut shop, you’re going to need a unique set of over-the-top donuts to set the proper festive atmosphere. But how to keep the ideas coming?
I decided to see what donut ideas I could get using OpenAI’s GPT-3 text-generating models. I collected seven of the weirdest donuts and descriptions I could find (such as a pizza donut, or a dirt-themed donut), and presented them as a list for the AI to complete. I’ve drawn a few of the best donuts it added to the list.
[human illustrations of some of the ai-generated donuts below]
When I say I used “GPT-3” for this project it’s hard to be exact about it, because OpenAI often updates the models. The newest GPT-3 versions (as of May 2022) actually did the worst at this task - they kept presenting me with real donuts that they’d seen during their online training, and not even particularly weird donuts. They were trying to match what was most common online, not what I was asking for. But the original early-2020 GPT-3 models were more willing to deliver the weirdness. Here are some more of my favorite results from the various models!
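Comparing the differently sized models means sending the same few-shot donut prompt to each engine. A sketch of that setup, assuming the engine names from OpenAI's legacy Completions API of that era; the example prompt line and parameter values here are my own illustration, not the post's exact settings:

```python
# One request-parameter dict per model, largest to smallest.
ENGINES = ["davinci", "curie", "babbage", "ada"]

def completion_request(engine, prompt, max_tokens=200, temperature=0.8):
    """Build the parameters for one legacy-API completion call."""
    return {
        "engine": engine,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,  # higher = weirder completions
    }

# Hypothetical few-shot prompt in the style described above.
prompt = ("List of weird donuts:\n"
          "The Pizza Donut - a donut topped with pizza toppings\n")
requests = [completion_request(e, prompt) for e in ENGINES]
# Each dict would be passed to openai.Completion.create(**params).
```

Holding the prompt constant across engines is what makes the size comparison meaningful: any difference in weirdness comes from the model, not the setup.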
The biggest model, DaVinci
Violet Beauregarde - A lemon glazed donut with blue raspberry eyes and a raspberry center
The Candleberry Flu - A chocolate doughnut filled inside with ganache plus a dripping candle on top that’s dripping liquid sediments
Brick of Yum - A glazed doughnut sandwiching American cheese, with red pepper jelly inside
The Confusatron - A doughnut sliced into three layers - vanilla, coconut, and chocolate, dipped in chocolate ganache and topped with sugar pearls that look like circuit boards
The Purple Stuff - Chocolate, oreo, and purple stuff sprinkled with powdered sugar
Curie, the second-largest model
Heckleberry - A strawberry donut smothered with blueberry syrup and crushed vegan Crunchies
Donut of Doom - Rainbow donut filled with Frosted Flakes, black sprinkles, and sour candy
The Alligator Rising - A spiral donut, frosted with chocolate and filled with a filling of chocolate hard-boiled egg and green apple slices
Hell Raisin - A jelly donut with a filling of peanut butter and hot dogs
Dark Crystal - A donut filled with candy corn and white chocolate chips
I ♥ My Gay Ex - A rainbow Jell-O donut with white frosting, red sugar sprinkles, and pink glitter
Fish N’ Chips – Topped with chocolate and tangy tempura-flavoured icing
Babbage, the second-smallest model
Mr Crepe - Topped with tapioca pudding, green sprinkles
Little Red Apples - Topped with pumpkin bread crumbs and small red apples
Chalkista - Stationary donut with chalk dusting
Horns - Cupcake gripped in both of the animal’s horns
Julius Rooms - A pair of bottomless donuts
Doughnut Forest - A profusion of donuts spread throughout the French Quarter
Flintlock fennec - bacon, fried okra, and burnt thermochromic alcohol
Baffle - cocoa powder, giraffes, and vanilla wafers
Ada, the smallest and definitely weirdest model
Bath Roll - Three donuts melted hot on one side
Friar Flats - Topped with tomatoes, honey, red paint and ceramic chili caps
Ponder Owls - magical, yellow-feathered donuts covered in icing with puffiness and shine
By Soul - filled with firecrackers, black bean dip, red pepper syrup, and watermelon
Coconut Fun - Sulphure hasn’t been wiped off, a demon cake filled with coconut cream and pistachios
Blood Donut - Creamy moist donut topped with spoonfuls of delicious blood
Goofy Trick - the elephant, is a clown and it’s all about tossing treats into his skull
Spankin’ - Other-colored donut with a frying pan in the middle
Bonus content: More of Ada’s weird donuts (plus some donuts made of antimatter??)
Also you can buy the donuts on a tea towel! Along with pies, cookies, and breakfast cereals.
Fayrl Indoril, for @talldarkandroguesome
Made with Artbreeder and a whole lot of Photoshop.
Discovered a new neural network art program.
Julan is @fallingawkwardly’s character.
By Ashley Wolf, Open Source Program Manager, Verizon Media
In this episode, Rosalie Bartlett, Sr. Open Source Community Manager, interviews Shaunak Mishra, Sr. Research Scientist, Verizon Media. Shaunak discusses two papers he presented at Knowledge Discovery and Data Mining (KDD) - “Understanding Consumer Journey using Attention-based Recurrent Neural Networks” and “Learning from Multi-User Activity Trails for B2B Ad Targeting”.
Audio and transcript available here.
You can listen to this episode of Dash Open on iTunes, SoundCloud, and Spotify.
A recent paper in the journal Neuron presents a discovery of a bi-directional neural network connecting the legs and the
I recently started playing with DALL-E 2, which will attempt to generate an image to go with whatever text prompt you give it. Like its predecessor DALL-E, it uses CLIP, which OpenAI trained on a huge collection of internet images and nearby text. I’ve experimented with a few methods based on CLIP, but DALL-E generates particularly clear, coherent images.
So of course I decided to use it to mess up corporate logos.
“The local Waffle House” - generated by DALL-E2
The Pizza Hut logo - generated by DALL-E2
The Applebees logo - generated by DALL-E2
A sign for In-N-Out, viewed from a distance - generated by DALL-E2
Logo for Dr. Pepper - generated by DALL-E2
The logo for dunkin donuts - generated by DALL-E2
More logos at AIWeirdness.com!
Bonus content: More brands, including an unexpected photorealistic goat-turtle.
The Baltimore Orioles Effect
When OpenAI first released the text-generating neural network GPT-2, they did it in stages, in part for fear that people might use the more advanced models to generate misinformation.
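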
Now in 2022 we do indeed have people passing off AI-written text as human, but rather than being divisive, it’s mostly used to generate cheap content.
For example, a reader alerted me to a website that posts dozens of new articles every day, on subjects that seem suspiciously wide-ranging for a website called “cookingflavr”.
Here’s how one of them starts:
Should You Feed Orioles All Summer?
May 2, 2022
Orioles fans should definitely feed their love birds all summer long! Not only will they get some great exercise and some delicious Orioles food, but they will also be keeping an eye on baseball.
and later:
Do Orioles Eat Grape Jelly All Summer Long?
Orioles fans know that the team’s home ballpark, Camden Yards, is located in the heart of the Baltimore metropolitan area. And as summertime quickly approaches, Orioles fans are likely to see a lot of grape jelly on the field.
This appears to be another entertaining case of what I’ll call the Baltimore Orioles Effect, in which the text from an internet-trained AI blends two very different concepts with similar names. Whatever is generating this text is mixing up Baltimore Orioles (the baseball team from Baltimore) and Baltimore orioles (the birds known to be attracted to grape jelly feeders) to hilarious effect.
Will Orioles Eat Jelly With Ants In It?
The Orioles are a professional baseball team in Baltimore, Maryland. They are a member of the American League East division. The team has won two pennants and three championships. Jellybeans are a type of food that is often eaten by the Orioles. Ants are also often eaten by the Orioles. The term “jellybeans” and “antoids” are used interchangeably to describe the two types of food.
These are almost certainly AI-generated, questions and all. The browser plugin GPTrueOrFalse pretty well confirms that an AI is generating these - based on how well GPT-2 is able to predict the text in each article, they’re coming in at an extremely low 0.02% chance the text is human-written.
Another thing that makes this look AI-generated: the questions do seem to meander off-topic in the manner of an AI that is really only paying attention to the last few things it wrote.
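The idea behind predictability-based detection can be shown with a toy model. GPTrueOrFalse uses GPT-2's own predictions; the sketch below substitutes a simple character-bigram model, which is nowhere near as powerful but illustrates the same principle: text that a model finds easy to predict earns a higher (less negative) average log-probability.

```python
import math
from collections import Counter

# Toy illustration of predictability scoring: rate text by its average
# log-probability under a character-bigram model. (GPTrueOrFalse uses
# GPT-2 for this; a bigram model just demonstrates the idea.)

def train_bigrams(corpus):
    """Count adjacent character pairs and first-character occurrences."""
    pairs = Counter(zip(corpus, corpus[1:]))
    firsts = Counter(corpus[:-1])
    return pairs, firsts

def avg_logprob(text, model, alpha=1.0, vocab=96):
    """Average log P(next char | current char), with Laplace smoothing
    so unseen bigrams don't zero out the score."""
    pairs, firsts = model
    total = 0.0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + alpha) / (firsts[a] + alpha * vocab)
        total += math.log(p)
    return total / max(len(text) - 1, 1)

model = train_bigrams("the orioles eat grape jelly " * 50)
predictable = avg_logprob("the orioles eat grape jelly", model)
surprising = avg_logprob("zqxv kjw pfft bzzt", model)
# Text resembling the training data scores higher (it is less surprising)
```

A real detector does the same thing with a large language model instead of bigrams, and converts the score into something like that 0.02% figure.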
In its article on “What is a Hardanger Fiddle?” the text starts by being extremely wrong about this type of Norwegian fiddle (“The instrument is played with the right hand and is usually played with a tune called The Glenlivet.”) but then wanders into talking about fiddles and violins in general (also very incorrectly). There is an entertaining interlude in which it goes off-topic to ask and answer this question:
What Does A Lur Look Like?
A Lur is a small, dark, flatworm that lives in water. They are very versatile creatures and can be found in a variety of different habitats. A Lur is typically a solitary creature, but can form groups of up to six. They have a long body and a short head. The body is covered in small, bumpy scales. The head is small and round. A Lur has a long, slimy tail.
It is somewhat understandable that it brought the trumpetlike traditional Norwegian Lur into the article; less understandable that it’s being definitively identified as a flatworm. Maybe it’s drawing on fishing-related associations with the word “lure”? This does not bode well for AI interactions with people named Robin or Krystal.
And then there’s this, found in the page on What to Hang Over Fireplace Mantel?:
How Do You Hang A Picture Above A Mantle?
How to hang a picture above a mantle:
Preheat the oven to 350 degrees F (175 degrees C).
Spread the picture on a large baking sheet.
Place a rosette on the top of the picture.
Place the picture on a bracket.
Place the picture on the bracket so that the two ends of the bracket are facing the ceiling.
Place the picture on the bracket so that the picture is level with the top of the mantel.
Place the picture on the mantel.
Screw the picture to the mantel.
Bake the picture for 10 to 12 minutes.
Remove the picture from the oven.
Place the picture on a wire rack to cool.
AI-generated advice is terrible, and can even be harmful. One article seems to blend questions about laser pointers and laser eye surgery, which results in some very questionable laser safety recommendations, while another says that rhinos are tasty. This article’s advice is also Not To Be Followed:
What Happens When You Reach The End Of Space?
In the event that you reach the end of space, it would be best to stock up on helium, oxygen, and water to survive for a long time. If you don’t have any of those, you can probably survive by eating dust and rocks.
There’s also a bunch of blatantly incorrect information on health-related questions and other potentially serious queries, like how to detect gas leaks. At best it’s an annoyance when you’re looking for real information; at worst it could seriously misinform, especially if another AI-based feature like Google Snippets starts highlighting its answers.
Bonus content: more of my favorite facts from the cookingflavr blog. Do you know Where do frogs live habitat?
never getting tired of this
Method: the image-generating model from Midjourney, which combines CLIP with an ever-changing mix of image generation methods to come up with images that match my text prompt.
Prompt: “easter eggs, acrylic on canvas”
Prompt: “easter eggs, matte painting”
“Matte painting” tends to be used for epic movie backdrops so
(more weird easter eggs at aiweirdness.com)
I tried to get a neural net to not only describe an epic prank, but also why it was funny.
GPT-3 Prompt:
Describe the most epic prank a computer ever played on a human and explain why it was hilarious.
Response:
The most epic prank a computer ever played on a human was when it tampered with a human’s thermostat settings so that the human would experience extreme temperatures. The prank was hilarious because the human would be unable to control the temperature and would be forced to either swelter in the heat or freeze in the cold.
I guess I’m just not smart enough to understand
(more at aiweirdness.com)
AI-generated pranks for your computer to play on you
I’ve tried various methods of using AI to generate April Fools pranks for you to play on other people (although often they turned out to be pranks you play on yourself). But this is the first time I’ve tried to generate pranks for a computer to play.
I decided to try out the new Instruct variant of GPT-3, a giant internet-trained text-generating neural net that has been finetuned to be better at responding to written instructions.
I gave DaVinci, the largest version of GPT-3, the following instruction:
Write a list of harmless April Fool’s Day pranks that your computer might play on you.
And here is its response:
The computer might change your desktop background to a picture of a clown.
The computer might change the volume on your speakers so that it is either very loud or very quiet.
The computer might open and close the CD drive randomly.
The computer might popup a message saying “You have been pranked!”
When I asked it to generate new lists, the clown desktop idea was often one of its first suggestions.
I did think these computer pranks were decent:
The computer might switch your mouse cursor to a banana.
The computer might change the font on all of your documents to Comic Sans.
All the text on your computer might be upside down.
But most of the prank ideas seemed to follow this kind of pattern:
Your computer might delete all of your bookmarks.
Your computer might change your home page to a website you don’t like.
Your computer might make it so that you can’t access certain websites.
Your computer might pop up a bunch of advertisements all over your screen.
Your computer might slow down significantly or crash altogether.
Your computer might change your default browser to a browser you don’t like.
Your computer might open up a bunch of programs and windows all at once.
I tried a different prompt, and got a similar response:
List the top 10 most epic pranks a computer ever played on a human
1. The blue screen of death
2. The computer crashing
3. The computer freezing
4. The computer refusing to turn on
5. The computer making strange noises
6. The computer screen turning black
7. The computer screen turning white
8. The computer screen flickering
9. The computer displaying strange characters
10. The computer not responding to any input
The implication here is that computers think they’re being hilarious.
[image description: Sketch of a computer desktop with a clown background, comic sans font, a banana for a cursor, and a popup window saying “you have been pranked!” with “dang” as your only reaction choices.]
More at aiweirdness.com!
AI-generated utopias
AI isn’t known for being able to solve the big problems, but what about the VERY large problems, such as possible futures to strive for? I decided to find out if I could get GPT-3 to come up with new ideas for utopias.
Since GPT-3 works by predicting what text comes next based on its training on online text, to get it to generate utopias I had to first give it some examples. Here’s what I gave it, aiming for futures that are utopias but not TOO perfect.
List of Utopias and short descriptions:
1. Agrarian Utopia - Everyone lives on a farm and there is plenty to eat. The geese are a menace, but otherwise life is good.
2. Techno Utopia - Sophisticated 3D printers make us everything we want. Occasionally they glitch and we get chocolate socks, but at least they’re tasty.
Here are some of GPT-3 DaVinci’s completions - surely among these you can find a utopia made for you:
Robot Utopia - Robots do everything for us automatically. Occasionally they attack us with lasers, but otherwise it is a nice place.
Zombie Utopia - The zombies take over, but the way they do it is clever, and the background radiation from the nuclear power plant is no longer a problem.
Magical Utopia - We’ve got a wizard living in the shed, who occasionally comes out to do magic and eat cheese sandwiches. He has a very limited repertoire of spells, but at least it keeps the goats happy.
Beerocracy - Everything will be fine once we sort out our beer supply.
Burnination Utopia - Everything is burning, but it is all very controlled and everyone wears asbestos tight pants.
Kitten Utopia - Everyone wants to have a kitten. Please let me have a kitten. This is the Kitten Utopia.
Magnetic Island - Australia has literally moved to the North Pole, cooled itself down and become paradise.
Bits Utopia - Everything is bits, bits, bits. There is an excess of poetry. But inside every bit is another bit and we’re always exposed to bits from the past.
There is No Utopia - This place sucks. One robot makes the entire place go and even he’s on the fritz.
Nature Utopia - The goats are carrying parasols and enjoying picnics in the seedpod forests.
Steampunk Utopia - All the cool guys have airships, all the dames have giant hats. Everyone has goggles.
Paranormal Utopia - A world very much like our own, but with an inconvenient poltergeist that makes a really mess of everything.
Norse Utopia - Assuming you start near the top of the tree when you fall off, there is ankle support almost all of the way down. Furthermore, the squirrels are making a new tractor.
I found that to generate even somewhat coherent utopias, I had to use the largest GPT-3 model, DaVinci. When I tried downsizing to Curie, things got weird:
The Applesauce Utopia - Quantum shearing maintains everyone’s hair nicely. If a few people go bald, they just have newspapers printed with computer facial filters.
Avarianian Utopia - Different birds live different lives. Weather changes the ooze into perfect snack for each species.
Post Industrial Utopia - We make our own medicine with Robots. Every Tuesday, someone slips on the banana skin, and we get one more robot.
Bonus content for AI Weirdness supporters: more of Curie’s utopias!
“convenience store merchandise rack”
“convenience store shelves”
“supermarket aisle”
Prompts suggested by Randall Munroe who may or may not have realized just how unsettling the results were going to be. I swear I’ve seen these shelves in nightmares.
Generated using Midjourney’s CLIP-based tool
“the moon was made of cake all along”
having fun experimenting with an impressive new CLIP image generator by Midjourney that’s still in private beta but looking for “people with ~imagination~ ”
It lives as a discord bot so it’s pretty easy to use!
(I’m not affiliated with them at all; I just think the tool is neat)
“tourist map of atlantis”
“victorian greenhouse overrun with pitcher plants”
“steampunk laser lab filled with lasers, trending on artstation”
Be not afraid of the AI-generated golf courses
What would an AI who’s never seen or heard of golf courses do when shown a list of real golf course names and challenged to generate more?
When Jeff Kissel sent me 15,626 existing golf course names from the National Course Rating Database, I thought I might have an opportunity to find out. I used Max Woolf’s aitextgen, a text-generating model based on GPT-2. Although it has the option to start training from a version of GPT-2 that has already seen a lot of internet text, I wanted to see what would happen if I started from scratch. I told aitextgen that my dataset consisted of individual items, each on their own line, and that they were 30 characters or less in length.
Here’s what aitextgen started producing just before I started training:
VERidgeRoyalDeer
Centerigh Pointede
lands Ranch Ranch
LINKSighESTOODMountainHITEOakrairieldHITE
olf Ch
WillowlandsALLEYut
Resort OFAKESage Resortsh
SandSandIDGEersersAollEADOW Farmield
Therairieont GolfBearockous P
Resort THITEQuHITEHITEassass
igh ResortdeOakOakBeaers
They are Extremely cursed-looking, yes, but even though I haven’t started training yet, they already have fragments of real golf course names in them. Why? As Max Woolf explained to me, this is because in preparation for training, aitextgen looks through the dataset and finds recurring chunks of text to use as building blocks. Algorithms I’ve used in the past use the set of individual characters, or of common words, as their building blocks. But, like a lot of modern text-generating algorithms, GPT-2 uses the fact that chunks of text like “ing” or “the” (or in this case, “golf”, “resort”, “ranch”, etc) happen a lot, and it saves time to use the entire chunk of text at once. (This process is called tokenization.)
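The chunk-finding step works roughly like byte-pair encoding: repeatedly merge the most frequent pair of adjacent symbols into a single token. Here's a heavily simplified sketch of that loop (real tokenizers, including the one aitextgen trains, work on a whole corpus with many more details, so treat this as illustration only):

```python
from collections import Counter

# A toy sketch of byte-pair-style tokenization: repeatedly merge the
# most frequent adjacent symbol pair into one token. (Simplified;
# not aitextgen's exact implementation.)

def most_common_pair(words):
    """Find the most frequent adjacent token pair across all words."""
    pairs = Counter()
    for w in words:
        pairs.update(zip(w, w[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of the pair with a single merged token."""
    merged = []
    for w in words:
        out, i = [], 0
        while i < len(w):
            if i + 1 < len(w) and (w[i], w[i + 1]) == pair:
                out.append(w[i] + w[i + 1])
                i += 2
            else:
                out.append(w[i])
                i += 1
        merged.append(out)
    return merged

# Start from individual characters, as an untrained tokenizer would
corpus = [list(w) for w in ["golf", "golfer", "golden", "resort", "resorts"]]
for _ in range(4):
    corpus = merge_pair(corpus, most_common_pair(corpus))
# After a few merges, frequent chunks like "golf" survive as single tokens
```

This is why the untrained model can emit "Resort" and "olf" before it has learned anything: those chunks already exist as whole building blocks in its vocabulary.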
Once I actually start training, aitextgen seems to grab ahold of which tokens are the most common and use them preferentially.
Here’s some course names from iteration 20.
Course - -
Club Golf
Course C Club Golf Club Country
Clubo
Club Club Club SO
Coursey
e Course Course CC Club GC Course -M - Club - Course Course &
By iteration 70, it’s starting to branch out into a bit more variety.
Rmps Golf Club
HBEeecy FNCner Hills Country Course
MHpmkoon Cere Country Club
Dan Hills
Ton GOLF
In iteration 80, it becomes clear that it’s generating two kinds of golf courses - sort of reasonable-sounding courses in lowercase, and frightening word salad in all-caps.
BCTINEOOERILTMAYLBB Me GC
Shcpases Golf Club
MARDORWM PINGANal Golf Club
CYWSIAREDANEORINKane Golf Club
Pine Country Club
BAARAWass Golf Course
BOVDVTRGHASSA CAINWGS CE/CTGS The GOLF Golf Course
What seems to have happened is that since golf course names are either mixed-case or all-caps, and there are lots more mixed-case examples, aitextgen learns the mixed-case text faster. Its all-caps prowess lags behind a lot. I’ve seen this before. There’s nothing in what I gave it that explicitly says “golf” and “GOLF” are the same thing, so it has to learn how to use GOLF separately, from fewer examples.
By iteration 200, the all-caps courses become less alarming. It’s taken longer, but it’s starting to get the patterns of those too.
Dumyky GC - Sshanes
VIEW COUNTRY CLUB - Iudly
Lelch Golf & CC
Smitno Golf & Country Club
Barkbosa Golf Club - Onndle Hort Course
Fopbitth Country Club
Misserty Golf Club
Slake Country Club
Stale Lake Golf Club
Blewing Creek Golf Course
HILLFIE HILLS GC
Groygon Country Club
Lake Worse CC
River Antban Country Club
Gurdy Hills Golf Course
Warererer Golf Course
Pullotion Park
Iteration 200 is more or less the last point at which the golf course names are unique. Even with over 15,000 example courses, aitextgen has such a huge learning capacity that it begins to memorize the input course names. After all, if I’m asking it to predict the names of golf courses, giving me the names of existing courses is technically a great solution.
Bonus content for AI Weirdness supporters: I gave GPT-3 the task of generating golf courses in the style of GPT-2. Don’t ask too many questions about the holes at “0.00001 Toilet”.
AI-generated Valentine cards
Tired of generic greetings?
Confuse your friends with AI-generated Valentine cards!
GPT-3 generated the messages and descriptions, and then I followed its instructions to create the cards. More explanation here!
How this abomination came to be (and bonus content)
You can get these on real cards - confuse your human friends!
New AI Paint Colors
One of my first experiments with neural network text generation was to generate and name paint colors.
I trained a neural net from scratch on lists of colors I could find online, and with no prior training on English or any language (and therefore no idea what paint colors were) it managed to reproduce some of the letter combinations in the originals. But it lacked the context to know not to generate colors like these:
In the nearly five years since, a lot has changed in AI. Now we have giant internet-trained neural networks. And these neural networks have not only seen English before, they have also read all the popular paint color sites. Give them a list of existing paint colors and ask them for more, and they’ll respond with more paint colors lifted straight from online, or altered only slightly.
As the January 2022 Futurist in Residence at the Smithsonian AIB, I’ve spent the month thinking about how much difference our starting points make, how we have to take care what foundations we build on when working with AI. I realized that to get a modern AI to generate entertaining paint colors, I would have to take inspiration from the AIs of five years ago. So as my starting point I used that original neural net’s garbled paint colors. And I got these:
What I find interesting here is that to generate a color, the AI not only had to come up with a name, it also had to specify the color in Red, Green, and Blue coordinates. A computer screen generates its colors by illuminating tiny red, green, and blue dots that blend together when you view them from a distance. By telling the computer how much red, green, and blue to use (on a scale from 0 to 255), you can specify any color. I didn’t tell the AI that it had to follow each color name with a list of three whole numbers. But it had seen enough Red, Green, Blue color listings that it was able to follow the format from just a couple of examples. Mostly. At least, the largest AI could.
To make these paint colors I used GPT-3, which comes in models of different sizes. DaVinci, the model that generated the Hog Wash list above, is currently the largest. Ada is the name of the smallest model, and it was not so great at sticking to the format.
This color Ada specified is slightly more green than a computer can display. Its green coordinate is supposed to max out at 255; this one goes to 311.
You Must Be Above This 216 311 273
And then there’s Blobsday, which is almost twice as red as is supposed to be possible:
Blobsday 504 105 121
Painted Batman is an impressive 8.5 times redder than a computer is supposed to be able to display.
Painted Batman 2179 231 207
What does this mean? Can the color Painted Batman only be properly appreciated when viewing the computer screen illuminated by several single-frequency red lasers, after having spent an hour staring at a green wall?
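Checking whether one of these colors is actually displayable is simple arithmetic on the three channels. Here's a quick sketch (an illustrative helper I wrote for this post, not part of any real color library) that flags Ada's impossible colors and clamps them into range:

```python
# A quick sketch of checking whether an AI-specified color is displayable:
# each of the red, green, and blue channels must be a whole number from
# 0 to 255. (Illustrative helper, not part of any real color library.)

def displayable(rgb):
    """True if the color has exactly three channels, each in 0-255."""
    return len(rgb) == 3 and all(isinstance(c, int) and 0 <= c <= 255 for c in rgb)

def clamp(rgb):
    """Force each channel into range so the color can at least be drawn."""
    return tuple(min(255, max(0, c)) for c in rgb[:3])

blobsday = (504, 105, 121)   # almost twice as red as a screen can show
displayable(blobsday)        # False
clamp(blobsday)              # (255, 105, 121) - the closest displayable red
```

Clamping is what a renderer would quietly do anyway, which is why Painted Batman just looks like a very ordinary red on screen instead of searing your retinas.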
Then there’s Starbat, which has FOUR color coordinates.
Starbat 69 25 192 252
Some of our other color scales have four coordinates (like the ones designed for print), but the common ones don’t go up to 255. I like to think that Starbat is a color meant for birds to see, and the 1st coordinate is actually meant for their ultraviolet vision.
Here are some of my favorites among Ada’s colors - at least, the ones that CAN be displayed on standard screens.
Also, Ada has renamed the color black “Jolly Rumpus”. Madison Avenue, take note.
I have an exhibit at the Smithsonian AIB Futures! It’ll be there till mid February 2022.
Bonus for AI Weirdness supporters: generated paint colors from two more GPT-3 models, Curie and Babbage!
The thing about working with a giant language model like GPT-3 is it has read parts of the internet that it never occurred to me might exist. Like press releases from breakfast cereal companies, articles about press releases from breakfast cereal companies, blogs by breakfast cereal enthusiasts, and probably every bit of copy from every grocery store’s online cereal section.
My own brush with the cereal world came when someone tagged me in a tweet about four new cereals being released in late 2021. Apparently, the cereal makers have continued to innovate on many fronts, including ever-more-decadent combinations (“CinnaGraham Toast Crunch”, anyone?).
I took the four cereals (the other three: Strawberry Banana Cheerios, Reese’s Puffs Cluster Crunch, and Kashi Maple Waffle), found short descriptions of them, and gave them to GPT-3 as a list it could build on. I tried this experiment with DaVinci, the largest GPT-3 model, and Ada, the smallest.
DaVinci definitely understood the assignment.
Ada, on the other hand, didn’t understand that the examples were cereal, or didn’t understand how cereal works, and/or didn’t understand how humans talk about foods. Its …cereals were a bit questionable.
Many of Ada’s cereals sounded like a really bad idea.
Original Cool Ranch Cheese and Dried Cranberry Oatmeal - all the wholesome, cheesy oatmeal with a choice of mild, sweet or salty!
Ingredis Fiberwaste Cream Cheese Cheerios - kids grab a box and put them in their mouths, making fun flavors taste even better !!! !!! !!! !!!
The Brain Squirrels oatmeal box is the best oatmeal I’ve found that really tastes like the sourdoughty oatmeal best!
And are these even cereals?
Happy Day Candy and Treats - fun and sweet small puffs of real candy and treat that kids and adults can consume with their regular meal.
Paper dinner punches - Breakaway mini punch bowls filled with homemade cigars and foil dollar cups outdid those ear bonnets included
Two Finger Bobops - Unlike regular bobs and bastons that grow on their own, BBQ bobs, smitchers and hot bobs grow on trees.
Bonus content for AI Weirdness supporters: more of Ada’s “cereals”!
New year’s resolutions generated by AI
This month I’m beginning 2022 as the first Futurist in Residence at the Smithsonian Arts and Industries Building.
It’s weird to think of myself as a futurist. I write a lot about the algorithms we’re calling artificial intelligence (AI), but rather than deal with the humanlike science fiction version, I focus on what today’s much simpler AI is capable of. Since today’s AI relies on using trial and error to get better at predicting its training data, and its training data must necessarily be from the past, its job is really to predict the past. This has a big effect on what it’s like to use AI to predict the future.
Since we’re entering 2022, the folks at the Smithsonian thought it would be interesting if I could use AI to generate New Year’s Resolutions. What does it look like if I try to use AI trained on past data to suggest positive changes for the future?
I decided to use GPT-3, which is how people tend to refer to a group of large text-generating AI models that OpenAI trained on a huge collection of internet text. News articles, food blogs, social media, spam - it’s all in there somewhere. Including plenty of times people have talked about their own New Year’s resolutions.
Give one of the larger GPT-3 models - for example, DaVinci, the very largest - the beginning of a list of New Year’s resolutions and, since its job is to predict what text comes next, it’ll generally add more resolutions to the list.
In the example below, the only text I wrote is the list title in bold. With that as a beginning, DaVinci predicted from its internet training that what would come next is this list of resolutions.
List of New Year’s Resolutions
To attend all Orientation committee meetings
To become more involved in Plattsmouth Community
To go bowling with teammates
To become a good writer
To start a successful business and become a millionaire
They’re plausible resolutions. Ask DaVinci for more predictions and the list will be similar. People who post their resolutions on the internet tend to pick similar sorts of things. At least, they did when DaVinci’s training data was collected in October 2019 - things have probably changed a bit since then.
But I didn’t want to stop here, because I didn’t just want a generic list of resolutions from the average internet human of the past. I wanted something that felt personal. Something a bit weird.
For my next attempt, I started the list with example resolutions that three strangers had mentioned on Twitter in 2021. I chose resolutions that were a bit offbeat. My example resolutions are in bold, and DaVinci’s additions follow verbatim.
List of New Year’s Resolutions
Not to buy any clothes except underwear for a year.
Touch grass and record every time I do it.
Use more insect textures in creature design.
Draw the front cover of a writing manual.
Get the missus to go to the shops for a new hat.
Find wallpaper for the kitchen/bathroom, and then paint it.
Finish the prints for the QI drawings.
Draw more detail into the background of the QI drawings.
Leave all the Christmas trees up and then paint them, as cubist art.
Eat nothing but broccoli and paint everything broccoli related.
Grow broccoli, and paint broccoli.
Make broccoli the national currency and then paint that.
This time it’s a journey. DaVinci, when faced with predicting what would come next on this list, predicted first one drawing-related resolution, and then multiple others. Soon this became not just a list of resolutions, but specifically a list of drawing-related resolutions. It generated a broccoli-and-drawing-related resolution, and then the list became a list of resolutions by a painter/broccoli fanatic.
You can see how this goes. Each of DaVinci’s predictions feeds forward into the future, and can become self-reinforcing obsessions.
I found that I could shape DaVinci’s list by choosing a point to cut it off, deleting the end, and then having DaVinci try to generate the list again. In that way, one resolution or two at a time, I could build up a list of resolutions that, if not exactly advisable, were at least interesting.
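That cut-off-and-retry workflow is essentially a human-in-the-loop generation loop. Here's a sketch of its structure; the generate() function is a stub that picks canned lines (a real version would call a model like GPT-3), and the keep() filter stands in for my hand-pruning:

```python
import random

# A sketch of the cut-and-retry curation loop described above. The
# generate() stub stands in for a real model call; it just picks canned
# lines so the loop structure is runnable on its own.

CANNED = [
    "Touch grass and record every time I do it.",
    "Eat a book every day.",
    "SPAM SPAM SPAM",
    "Throw a party for insects.",
]

def generate(prompt, rng):
    """Stand-in for a model completion: returns one candidate resolution."""
    return rng.choice(CANNED)

def curate(seed_list, keep, n_keep=3, rng=None):
    """Grow the list one resolution at a time, deleting rejected ones."""
    rng = rng or random.Random(0)
    kept = list(seed_list)
    while len(kept) < len(seed_list) + n_keep:
        candidate = generate("\n".join(kept) + "\n", rng)
        if keep(candidate) and candidate not in kept:
            kept.append(candidate)  # let this prediction feed forward
        # otherwise: delete it, regroup, retry from the same prompt
    return kept

result = curate(
    ["Not to buy any clothes except underwear for a year."],
    keep=lambda line: "SPAM" not in line,
)
```

The important structural point is that every accepted line goes back into the prompt, which is exactly how a single broccoli resolution can snowball into a list by a painter/broccoli fanatic.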
List of New Year’s Resolutions
Not to buy any clothes except underwear for a year.
Touch grass and record every time I do it.
Use more insect textures in creature design.
Take photos of each of my toes daily.
Egg every house in the village where I was born.
Lick a branch of a tree and repeat it every day for a year.
Walk down my block backwards looking over my shoulder.
Every time I am alone in the dark I will eat an apple.
Eat my favorite book.
I quickly learned how easy it was for DaVinci to develop themes.
If it generated:
Attend the Rainforest Action Network Benefit.
Then next it might generate:
Ask a rainforest tribe what they think of eco-tourism.
Go on a tour of the Tambopata National Reserve.
Go on a tour of the Manu National Park.
Write letters to the editor about rainforest preservation.
When I let it generate a resolution to eat my favorite book, next it would suggest:
Eat a book every day
and if I didn’t delete that and have it try again, soon it would be suggesting:
Eat a picture of myself everyday.
Super charge my digestive system and then, eat a picture of myself everyday.
I also learned to stay away from gloominess, or platitudes, or anything to do with fitness goals - it was too easy for DaVinci to get stuck in a rut. Anything shocking or mean-spirited also tended to poison the list. I had to delete these:
Every day I will blatantly eavesdrop.
Belch until my teeth explode.
Many were worse. DaVinci is trained on internet text, after all, and so it has a tendency to veer into racism or spam. I needed to be present at every point for careful hand-pruning.
In the end I deleted many, many more predictions than I kept - perhaps about 10 times more.
It began to feel like a metaphor for life choices. Become the kind of person who spends a year licking trees and eating apples in the dark, and who knows what you’ll be doing next. Spend a year trying to belch your way into exploding teeth, and that’ll have an effect on you too.
I present to you my list, my own list that emerged from each prediction that I let stand, and that was also shaped by each prediction that caused me to delete, go back, regroup, retry. These aren’t all things that I would necessarily do myself (I like my sleep far too much to be going out every night at midnight to make grass pancakes), but they are all things that I thought were interesting, that I wanted to hear more about.
Record every adjective I hear on the radio.
Act like a cabbage for a month.
At 4 o'clock every day I will climb a tree.
Speak only to apples for 24 hours.
Jump in front of a moving tree.
On the day of the first snow paint a canvas red.
Dress in a way that only a ghost could love.
Make pancakes out of grass at midnight each night.
Find old man Winter, hug him and let him know everything will be ok.
Ride out of town holding a pelican.
Under every rock I come across for a month I will write “all power to the rocks”.
Every day for a year, at a random time, shout “sausage”.
Make a film about the last sock in the world.
Put on a red shirt and scream ‘I’M NOT WEARING PANTS!’ every time I leave the house.
Throw a party for insects.
Try to convince the dog next door that he is wearing a coat of moonlight.
Every time I press a button I will say 'this is my favorite’.
Search my apartment for secret doors or hidden staircases.
Wear two superman outfits at the same time.
Every time it rains I will stir my tea anti-clockwise.
Every night for a week I will wear a hat lined with lettuce.
I will begin to believe that the trees that I see everyday are my friends.
Every time a bird flies past me I will remember to breathe.
Throw a birthday party for my favorite tree.
I will from now on tell every dog I meet that I am training to be a dragon.
Every time I see a panel van pass me I will dub it a “Slice-a-Wagon.”
Crawl on the ceiling like a spider for a month.
Attempt to find peace living with an army of puppets.
Wear a dinosaur costume to every public event I attend.
Go to the beach every day for a week and shout the names of colors into the ocean.
Go on a three-day backpacking trip dressed as a turnip.
Create messages that only the wind can hear by blowing on the blades of grass.
Give a piece of cloud to a complete stranger.
Make a mask out of grass and wear it while I’m sleeping.
I will now treat every worm I see as if it is an old friend.
When I hear a strange noise in an empty room I will assume someone is saying hello to me.
At the Smithsonian AIB website is a generator that I’ve populated with other resolutions from my list. Visit their site and grab yourself a random resolution. If you don’t like the one you got, you have my permission to reload and regenerate until you find a resolution that speaks to you. Or to apples.
Your January 2022 Smithsonian Futurist in Residence,
Janelle
AI-generated Christmas entities
I wondered what kinds of new Christmas entities might result if I gave a list of Krampus, Rudolph, The Yule Cat, and Meathook (one of the Yule Lads; not as terrifying as he sounds) and short descriptions to GPT-3, a neural net trained on internet text. New Christmas entities, perhaps, for our modern age?
Bonus content and email subscription options at aiweirdness.com
Predicting the next toy fads with AI
I used GPT-3 to complete a list I seeded with Pet Rock, Tickle Me Elmo, and Furby. Its job is prediction, and its dataset is the internet. It is made of math so it is infallible.
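For the curious, list-seeding works by handing the model the beginning of a list and letting it predict what comes next. Here's a minimal sketch of building such a prompt — the header wording and bullet format are illustrative assumptions, not the exact prompt used:

```python
# A hypothetical sketch of list-seeding: give the model the start of a
# list and let it continue. The header text and formatting here are
# assumptions, not the exact prompt used.
seed_items = ["pet rock", "tickle me elmo", "furby"]

prompt = "The biggest toy fads of all time:\n"
prompt += "".join(f"- {item}\n" for item in seed_items)

# This prompt would then be sent to a text-completion model like GPT-3,
# which continues the list with whatever its internet training data
# suggests should come next.
print(prompt)
```

The model has no notion of "answer the question" here — it simply extends the pattern, which is why a seeded list can wander off into increasingly strange entries.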
More details and bonus content at aiweirdness.com
An AI-generated advent calendar
In the spirit of utter holiday weirdness, I have made an interactive AI-generated advent calendar!
Image descriptions by GPT-3, illustrations by Pixray.
So I’ve been experimenting with Pixray Swirl, which was originally a mod of Pixray that would let you generate swirling pixel art according to an internet-trained neural net’s interpretations of your requests.
I found parameters that let me take epic journeys instead.
Here’s “Apocalyptic landscape by Lisa Frank”
And “Straight trail leading into a jelly landscape made of jello”
(It is kind of hard to steer; the AI totally disregards any information in the image itself, like a path, in favor of zooming in on the exact coordinates you specify.)
I like this one, “The misty volcanoes of dinosaur country”
(the exact line and scaling between dinosaur and volcano is itself a bit misty but that’s okay)
Try Pixray Swirl yourself!
For more, check out the post at AI Weirdness!