The Save Deth videos, by Dave Poyzer and Seth Peterson, are fantastic time-capsules of yo-yo culture. Inspired by skate videos, Save Deth wanted to make high-quality videos that captured the style of each yo-yo player. I was really excited about recording my segment for Save Deth Volume 2, but when the day came to record in SF, everything was running later than expected and I had a hard stop I needed to get to. So I think my clip looks a little more rushed and sloppy than many of the other pros'… but I guess I'm also known for having a sloppier style, so maybe this captures the day just right?
Here it is:
If you view the video on YouTube, you can see a timeline showing which trick is playing during each part of the video. A few highlights for me are the "Gerbil Wheel" at :23 in, which might be the first time that counterweight move was documented. You can also see a bunch of my favorite moves documented here, like the Overhand 1.5 Whip, the Sabering Dismount, the Bicycle Kick Rejection, and some Astro-Style counterweight tricks.
At 1:03 I grab a coffee and start doing some one-handed tricks. This may seem kind of random, but I’ve been wanting to do a one-handed Artistic Performance routine in the World Yo-Yo Contest for many years. The idea would be I’d bring a cup of coffee on stage with me and do tricks with the non-coffee hand. I never fleshed it out much further than what you see though.
Artist: So Many Wizards
Song: "My Friends Are Nice"
Website: somanywizards.com
Myspace: www.myspace.com/somanywizards
Notes: The concept of this video was to show what goes into making a Save Deth video. Behind-the-scenes shots, bloopers, and really good yo-yoing. Yo-yos used are the Yes, Absolutely The End and YoYo Jam + Doc Pop Bolt. Shot in San Francisco, California, May 2009.
If you want to see the rest of the videos from Save Deth Volume II, you can go to http://www.savedeth.com/volumeii/
In this episode, I’m joined by Josh Yee and Connor Scholten to discuss the history of yo-yo binds. Our goal is to reveal the first time a yo-yo bind was documented, who was the first player to use one in a contest, and what was the first “bind return only” yo-yo.
This episode was a ton of fun to research, and I learned a lot. If you enjoyed it, be sure to watch our other video, The History of Slack Tricks.
Links mentioned in this episode:
Y’all know me, right? So you know, when I got access to generative art tools like DALL-E 2 and Midjourney, the first thing I tried using as a prompt was the word “yo-yo”. That’s a no-brainer, and you may have already seen my video about it.
While making that video, I had a strange realization that anytime I used the 🪀 (yo-yo emoji), I’d get a beautiful fantasy landscape that was filled with gorgeous pink and blue colors. Like this:
You see what I mean, right? Every time I use the prompt “🪀” (yo-yo emoji), I get images that feel like these beautiful science fiction landscapes. I wanted to test this out a little more, so here’s what happens when you try adding an extra 🪀 (yo-yo emoji) to the prompt:
Can you see it this time? Very distinct colors, outdoors, clouds… To me, it has the vibes of dawn in an N.K. Jemisin novel. Did you notice that several of these images have a figure facing away from the viewer, wearing a robe and a red hat? There’s one like that in the first batch of photos too. Hmmm…
Alright, let’s try three yo-yos:
I should mention that the 🪀 (yo-yo emoji) is one of the few emojis that hasn’t landed on a single default color yet. Depending on what browser you are using, you may see green, purple, red, or many other colors. I talk about that in this video:
Let’s add one more 🪀 (yo-yo emoji):
Let’s go crazy:
Still seeing towers and clouds, though it looks like the colors get slightly more orange when I add more yo-yos. Let’s try something completely different:
Okay, this is useful. I tried using 🤓 (nerd face emoji) as a prompt, and I felt like what I got was similar to what the yo-yo emoji generated. What happens when we try the 🤷 (shrug emoji)?
That’s interesting. Maybe this style of art is what happens anytime you input a single emoji as a prompt on Midjourney? Let’s try a different emoji to be sure:
Okay, that’s REALLY interesting! When I use the 🪀 (yo-yo emoji) or 🤓 (nerd glasses emoji), I don’t see anything in the AI generated images that looks like it understands the emojis, but when I use a 🥨 (pretzel emoji) I see a lot of pastries. There are still clouds and pastel colors, but there are also cookies, scones, danishes, eggs, whipped cream, and other delights. This is the first emoji that the AI seems to “understand”. Huge air-quotes on the word “understand”.
I thought this might be because the AI has seen more examples of the 🥨 (pretzel emoji) in its training, so it has an easier time pulling up relevant results. Considering the 🪀 (yo-yo emoji) isn’t extremely widely used, that could explain why I’m not seeing images with yo-yos in them. But looking at the statistics on emoji usage, I see that the 🤓 (nerd face emoji) is used far more frequently than the 🥨 (pretzel emoji), and I’m not seeing images of people wearing glasses when I use that one… so why is 🥨 (pretzel emoji) the only emoji so far that’s giving me results similar to the emoji itself?
What happens when we use ☕️ (coffee cup emoji)?
Okay, those both give me coffee vibes. It is worth pointing out that ☕️ (coffee cup emoji) and 🥨 (pretzel emoji) are used far less frequently than 🤓 (nerd glasses emoji), but they might get used in ways that are more consistent for the AI to generate images from. Let’s change things up. Let’s try using letters:
Oh wow, those “C” images look great! Did you notice the hooded figure again? They appear in the 🪀 (yo-yo emoji) prompts and in the letter prompts: a figure facing away from the viewer, wearing a long robe in a fantasy landscape setting. It’s almost like a ghost in the algorithm. I’m going to name them Aileen.
And what happens when we double the letters?
So what have we learned? Not much.
When we use a 🪀 (yo-yo emoji) as the singular prompt in Midjourney, we get a beautiful pink and blue image with clouds and spires that have nothing to do with the prompt.
Using other emojis like 🤓 (nerd glasses emoji) or 🤷 (shrug emoji) yields similar results.
Some emoji, like ☕️ (coffee cup emoji) or 🥨 (pretzel emoji), yield images that seem inspired by the emoji.
When we increase the number of emoji in the prompt, we tend to get fewer pink and blue colors. To me, the colors seem to have more orange. Adding multiple emoji seems to increase the number of humanoid characters in the AI generated image.
A common figure that appears in these images is a person in a robe facing away from the camera. This almost seems like a default character stuck in the AI. I call them Aileen.
When we use letters instead of emoji, we tend to see warmer images. There also seems to be about a fifty-fifty chance we’ll see that letter in the final image. So if we type “Z”, a “Z” is likely to appear in the final image about half the time.
Here’s my best guess: Midjourney’s AI was trained on a lot of art that looks the same. When you give the bot very little info to work with, it’s going to default to something that looks a lot like a fantasy landscape image. If you give the bot more to work with (i.e. add more emojis or use words it has more data on), then it will give you a more diverse set of results with different color palettes, objects, and landscapes.
In other words, the 🪀 (yo-yo emoji) is the least relevant thing to feed to Midjourney, so it resorts to a default set of images that are already gorgeous to look at. That’s my guess.
To test that theory, let’s try one more experiment. What happens if we insert a blank prompt? Will we get a pastel scene with a robed figure in the foreground and lots of clouds? Since Midjourney doesn’t allow a blank prompt, the closest thing I can try is an “_” (an underscore). Let’s try it out, shall we?
Well, that certainly looks similar to several of the earlier results, though less fantasy/sci-fi feeling. Maybe this is the closest thing Midjourney has to a default image when you don’t give it a prompt it can work with.
What are your thoughts? Do you think the “🪀” and the “_” prompts yield the same results? Is this what Midjourney cranks out when it doesn’t have enough info to work with? Let me know your theories below.
This episode of PopCast was so fun to make! I’ve been experimenting with generative art tools, like DALL-E 2 and Midjourney, and loving the results. I had the idea to make a video about using classic trick names as text prompts for Midjourney, and then I spent the whole weekend creating prompts, shooting footage, and editing it. For this episode, I used the names of classic yo-yo tricks as prompts for Midjourney to see what it would generate.
I love how this video turned out. It’s got some cool art and some footage from our yo-yo meetup in San Francisco. As an extra bonus, I’ve included all of my favorite images below, including several that didn’t make it into the video. Which one is your favorite?
In this episode of PopCast I talk with Patrick Dressel, the owner of Dressel Designs, about yo-yo design and working with small yo-yo companies. It’s a really informative conversation and we show some of the yo-yos that we are currently working on together.
Hey Bay Area yo-yo friends, let’s hang out this Saturday!
San Francisco Yo-Yo Meetup on July 16th. We will be by the bandshell in Golden Gate Park from 1-4 pm. If there is a concert happening, we’ll probably be closer to the big water fountain near the bandshell. This event is open to everyone, and we’ll be joined by Mr. Yoyothrower, the owner of Rain City Skilltoys. I hope to see you there!