Use Generative AI for Art Effectively and Responsibly
Learning Objectives
After completing this unit, you’ll be able to:
- Set realistic goals for adding generated imagery into your workflow.
- Create prompts that effectively control generated output.
- Describe the ethical concerns of using generative AI artwork.
Bring Generated Art Into Your Projects
Whether you want to illustrate a concept for a presentation or show how your product looks when used in the real world, generative AI gives you the power to beautify your work with imagery. Using AI to create images is an art form of its own. With the right approach, you can generate images appropriate for your next project.
When generating imagery, remember that art is subjective. You might want the perfect picture to punctuate your point, but there is no perfect! What you find brilliant, others may not appreciate as much. So consider using the image you’re 95% happy with; the last 5% probably falls into the subjective zone anyhow.
Remind yourself of your goal for including imagery in your project. Your goal might be to break up your text with interesting images. But once you begin generating images, it’s tempting to let the goal shift to finding a perfect image. That narrower focus leads you to discard options that would meet your original goal of supplementing your content just fine.
This includes images that have small imperfections. If the artwork isn’t the focal point of the project, your viewer may not even notice anything amiss. For example, this picture is used in the Generative AI Basics badge.
Look closely and you can see that the table has five legs. The image isn’t perfect, but it still does the job well. In general, being flexible gets you to an acceptable result faster. It usually saves money as well, since a lot of generative AI tools are paid services. That said, flexibility doesn’t mean accepting whatever a model produces; you can be both flexible and smart about how you work with these tools.
The Art of Prompt Engineering
As you learn in the Prompt Fundamentals badge, prompts are how you interact with generative AI models. You give a model directions through text (and maybe an image or two), and it returns its best prediction of what you want. Usually, better prompts mean better output.
But what makes a good prompt? That’s a seemingly simple question that has sparked debate among digital artists. Since we can never fully understand the connections forged when a model is trained, there will always be uncertainty in how it responds. So we make an educated guess and hope for the best. But some guesses are better than others. This is the foundation of prompt engineering. That term grew out of the subculture of artists who first adopted generative AI as a tool for creating art.
Prompt engineering is about experimenting with prompts to see what happens. Through a lot of trial and error, early prompt engineers discovered techniques that work surprisingly well to influence generative AI output. Prompt engineering has evolved into a sophisticated craft, but there are some simpler, well-established techniques to get better results as you start using generative AI tools.
- Use style modifiers. From cave drawings to 3D renders, art has taken countless forms. Include a specific style of art, like Impressionism, or a specific artist, like Monet, in your prompt. Describe eras, geographic regions, or materials. Anything that’s frequently associated with a specific art style will be part of the model.
- Use quality descriptors. Although AI models don’t have opinions about what is beautiful, we humans sure do, and we’re not afraid to write them down! Those subjective notions end up becoming part of the model. So asking for a picture of a “beautiful, high-definition, peaceful countryside village” will probably generate something nice to look at.
- Repeat what’s important. It would be ridiculous to ask an artist to paint a “snowy snowy snowy snowy snowy snowy countryside village.” But generative AI models respond well to repetition (and won’t get annoyed by it). Anything repeated gets extra attention, even adjectives like “very” or “many.”
- Weigh what’s important. Some models allow you to directly control the importance of certain terms. For example, Stable Diffusion allows you to put a number value on a portion of the prompt. So “countryside village | snowy:10 | stars:5 | clouds:-10” would make for lots of fallen snow, but a clear and starry night. Not every model supports this kind of direct weighting, and those that do may use a different syntax, so investigate the nuances of the tool you’re using.
AI-generated image using DreamStudio at stability.ai with the prompt: “A very beautiful peaceful countryside village at night painted in the style of Impressionism | snowy:10 | stars:5 | clouds:-20”
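The techniques above can be combined into a single prompt string. As a sketch, here’s a small hypothetical helper (`build_prompt` is not part of any real library) that assembles quality descriptors, repetition, style modifiers, and pipe-separated term weights in the Stable Diffusion style shown earlier; other tools use different weighting syntax, so treat the formatting here as an assumption.

```python
def build_prompt(subject, styles=None, quality=None, repeat=None, weights=None):
    """Assemble a text-to-image prompt from the techniques above.

    styles:  list of style modifiers, e.g. ["Impressionism"]
    quality: list of quality descriptors, e.g. ["beautiful"]
    repeat:  (word, times) pair to duplicate an important word
    weights: list of (term, number) pairs, rendered as "| term:number"
             (the pipe-and-colon syntax follows the Stable Diffusion
             example in the text; other models differ)
    """
    words = []
    if quality:
        words.extend(quality)              # quality descriptors come first
    if repeat:
        word, times = repeat
        words.extend([word] * times)       # repetition adds extra attention
    words.append(subject)
    if styles:
        words.append("in the style of " + ", ".join(styles))
    prompt = " ".join(words)
    if weights:
        prompt += "".join(f" | {term}:{w}" for term, w in weights)
    return prompt

print(build_prompt(
    "peaceful countryside village at night",
    styles=["Impressionism"],
    quality=["very", "beautiful"],
    weights=[("snowy", 10), ("stars", 5), ("clouds", -20)],
))
# very beautiful peaceful countryside village at night in the style of
# Impressionism | snowy:10 | stars:5 | clouds:-20
```

Keeping prompts in code like this makes it easy to tweak one variable at a time, which is exactly the kind of trial-and-error experimentation prompt engineering relies on.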
Whether you call it an art, craft, or science, prompt engineering requires practice. Remember: there’s no perfect prompt, and there’s no perfect artwork. Be open to surprises as you create AI-generated images, and you’ll soon find imagery that works well for your next project.
Ethics of Generated Artwork
Advances in AI technology have raised several ethical questions. Although it’s hard to find answers to satisfy everyone, we can try to understand the concerns.
For many artists, plagiarism is the primary concern. If their work is used to train a model, then the model can replicate their style. In some cases, the imagery is an obvious derivation of existing work. In others, the style is so similar that the counterfeit could pass as original. Many artists want their work removed from training data, and thankfully curators of popular models are responding in good faith.
Impersonation is a less obvious, more insidious concern. You may be familiar with deepfakes, videos where AI is used to replace someone’s face with that of another. Sadly, deepfakes are often created without the consent of the person being imitated. At its most harmless, you get a funny video of a pop star saying something silly. But what if that star’s image is made to sell a product? Or if a politician is made to spread lies about an issue? This is just the tip of the iceberg. We must strengthen our skills in detecting fraud now that “seeing is believing” no longer holds true.
Generative AI is only as good as the data it’s trained on. If the data is biased, the generated output will also be biased. Historically, doctors have been depicted as men, so models could have a strong connection between “doctor” and “man”—even if that connection doesn’t reflect reality today. So even if you aren’t trying to perpetuate a stereotype, your model might do it for you. Consider using weighting to counteract biases.
Generative AI is always going to be derivative in some capacity. This might actually stifle genuine creativity. Would we have Cubism if Picasso had access to DALL-E? And as tomorrow’s AI is trained on today’s generated images, the same styles will repeat themselves. We really do need humans to contribute their own artistic vision as a form of human-in-the-loop.
Finally, if you plan to use generated imagery, consider acknowledging where it came from with something as simple as a watermark that states “AI generated.” Transparency builds trust. Labeling helps in another way, too: models can be programmed to skip clearly marked AI-generated works that would otherwise contribute to a feedback loop. There’s no single right way to attribute works as AI generated, but the Modern Language Association (MLA) has some guidelines.
Now that you know more about using generative AI effectively and responsibly, try adding generated imagery to your next project.
Resources
- Trailhead: Prompt Fundamentals
- Trailhead: Responsible Creation of Artificial Intelligence
- Website: Prompt Engineering Guide
- Website: MLA Style Center: How do I cite generative AI in MLA style?