Creating More Consistent Images with DALL-E 3

(This post was originally published on the blog Dana Leeds: Creator of the Leeds Method.)

As genealogists, we find ourselves at the intersection of history and technology. Today’s example comes from the evolving field of AI-generated imagery. Earlier today, Steve Little from AI Genealogy Insights highlighted a resource that addresses a common challenge with DALL-E 3: fine-tuning the generated images to fit our specific needs by using seeds.

In a private Facebook group for Steve’s NGS “Empowering Genealogists with Artificial Intelligence” course, someone shared a Twitter post by Rowan Cheung that sheds light on the concept of using ‘seeds’ to refine these images. While I had come across the term earlier in the week, it wasn’t until this demonstration that the methodology clicked for me—and it’s a game changer!

What’s the problem?

This and all following images generated using DALL-E 3

Often, when you try to make small changes to a generated image, it instead generates an entirely new image. And that can be frustrating!

For example, above is an image of a black cat with a sign that says “Trick or Treat” (with DALL-E 3’s notorious spelling mistakes).

I liked the cat and the scene, but I wanted to change the sign. So I prompted “Generate another image similar to #1 but have the sign say ‘Boo!’”

The new image changed a lot more than just the sign. It includes a different, younger-looking black cat and a different, though similar, background.

Using seeds can help us produce additional images that are much more similar to the original image.

What is a “Seed”?

To better understand the concept, I turned to ChatGPT for an explanation of what “seeds” mean in the context of AI. Here’s an analogy from the first part of its response:

“Alright, imagine you have a magic book that gives you a random page number every time you ask it. But sometimes, you want to get the same ‘random’ page number every time you ask, so you can show your friend the cool picture on that page. That’s kind of what a seed does in AI.”

So, if we tell DALL-E 3 we want to modify a certain seed or page number, it does a pretty good job of giving back a similar image with the modifications we asked for.
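To make the “magic book” analogy concrete, here’s a tiny Python sketch of how a seed works in any pseudorandom system. (This uses Python’s built-in random module, not DALL-E 3 itself—it’s just an illustration of the general idea; the seed value is borrowed from the seal example later in this post.)

```python
import random

# Seeding the generator with the same value yields the same "random"
# sequence, like asking the magic book for the same page every time.
random.seed(1122301494)  # seed value from the seal example below
first_run = [random.randint(0, 9) for _ in range(5)]

random.seed(1122301494)  # re-seed with the identical value
second_run = [random.randint(0, 9) for _ in range(5)]

print(first_run == second_run)  # → True: same seed, same "page"
```

A different seed would produce a different sequence—which is why, without a seed, every image request starts from a brand-new “page.”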

Using Seeds

After reading through the Tweet mentioned earlier, I started playing with seeds (again). This time, I understood the process a lot better. And it worked!

My first success was a baby seal. I used a series of prompts to get the desired image, a technique called “prompt chaining.”

Prompt #1: Draw an adorable seal with large eyes

Of the two seals DALL-E 3 generated, this was the “adorable baby” seal I chose to work with.

Prompt #2: What’s the seed for image 1?

Since the baby seal I wanted was the first of the two generated images, I asked for the seed for image 1. It responded “1122301494.” Remember, this is like knowing which “page” DALL-E 3 has the image on, so I can go back and modify that page!

Prompt #3: Modify the image with seed 1122301494: add a beach scene with a starfish -ar 7:4

When I’m asking DALL-E 3 to modify a seed, I start with the phrase “modify the image with seed [x].” Next, I can ask it to add, remove, or edit something. And finally, I can ask for a specific aspect ratio (-ar): 1:1 for square, 7:4 for wide, or 4:7 for tall. (I have also learned I can just use the words square, wide, or tall!) I love changing a square image into “wide” or “tall,” which actually expands the scene!

We now have the SAME ADORABLE BABY SEAL with a wider beach scene and a starfish. WOW!!!

And just to try it again…

Prompt #4: Modify the image with seed 1122301494: add his mother -ar 7:4

In hindsight, I don’t think I needed to specify the aspect ratio since it was already wide. But I’m still learning!

And once again we have the SAME ADORABLE BABY SEAL with his mom!

Breathing Life Into Family History

And now a bit more from ChatGPT based on my input:

Transforming images with seeds isn’t just about tweaking a picture until it’s perfect; it’s a gateway to visual storytelling that can vividly illustrate our family histories. Whether we aim to elevate a photograph from ‘like’ to ‘love’, or we wish to infuse static images with dynamic action, the possibilities are endless.

Take, for instance, the journey I embarked on with a single image: a photograph of a young Confederate soldier. The original image captured a moment, but I envisioned more. I wanted to broaden the narrative. So, I expanded the image—widening its scope to include additional figures, thereby crafting a richer tableau.

There was also the matter of authenticity. Through prompt chaining, the soldier’s uniform had faded to brown in the “photograph.” With careful editing, I restored the uniform’s gray hue, maintaining historical accuracy while breathing new life into the image.

Join me as I continue to explore the potential of using seeds in AI to create more accurate and personalized visual stories.