First, [DALL-E](https://arxiv.org/pdf/2102.12092.pdf). DALL-E is a Generative AI model that generates images from text prompts.

An *autoregressive transformer* defines how a model generates images from text descriptions: it generates one pixel at a time and then uses the pixels generated so far to generate the next one, passing through multiple layers in a neural network until the image is complete.

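To make the idea concrete, here is a toy sketch of autoregressive generation, not DALL-E's actual implementation: the hypothetical `predict_next_pixel` function stands in for the transformer, and every new pixel is predicted from the text plus everything generated so far.

```python
def predict_next_pixel(generated_so_far):
    # Stand-in for the transformer: in DALL-E this would be a neural network that
    # looks at the text description plus every pixel generated so far and
    # predicts the next pixel value.
    return (sum(generated_so_far) * 31 + 7) % 256  # dummy deterministic rule


def generate_image(text_description, num_pixels):
    """Generate an image one pixel at a time, conditioning on everything before it."""
    context = [ord(character) for character in text_description]  # encode the text
    pixels = []
    for _ in range(num_pixels):
        next_pixel = predict_next_pixel(context + pixels)  # use all previous output
        pixels.append(next_pixel)
    return pixels


print(generate_image("bunny on a horse", num_pixels=8))
```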
With this process, DALL-E controls attributes, objects, characteristics, and more in the image it generates. However, DALL-E 2 and 3 offer even more control over the generated image.

## Building your first image generation application
So what does it take to build an image generation application? You need the following libraries:
- **python-dotenv**, you're highly recommended to use this library to keep your secrets in a *.env* file away from the code.
- **openai**, this library is what you will use to interact with the OpenAI API.
- **pillow**, to work with images in Python.
- **requests**, to help you make HTTP requests.
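Putting these libraries together, a minimal setup sketch could look like the one below. This is an illustration rather than the lesson's final code: it assumes your *.env* file defines an `OPENAI_API_KEY` variable, and the prompt and file name are made up for the example.

```python
import os

import openai
import requests
from dotenv import load_dotenv
from PIL import Image

# Load secrets from the .env file (assumes it defines OPENAI_API_KEY)
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

# Ask the API for one 1024x1024 image
generation_response = openai.Image.create(
    prompt="A rabbit reading a book in a cozy library",  # example prompt
    size="1024x1024",
    n=1,
)

# Download the generated image with requests and open it with Pillow
image_url = generation_response["data"][0]["url"]
image_bytes = requests.get(image_url).content

with open("generated_image.png", "wb") as image_file:
    image_file.write(image_bytes)

Image.open("generated_image.png").show()
```

Keeping the key in the *.env* file, loaded through python-dotenv, means it never appears in the code itself.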
So let's try to make the response more deterministic.

Let's therefore change our code and set the temperature to 0, like so:
```python
generation_response = openai.Image.create(
    prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',    # Enter your prompt text here
    size='1024x1024',
    n=2,
    temperature=0
)
```
Now when you run this code, you get these two images:

Here you can clearly see how the images resemble each other more.

## How to define boundaries for your application with metaprompts

With our demo, we can already generate images for our clients. However, we need to create some boundaries for our application.

For example, we don't want to generate images that are not safe for work, or that are not appropriate for children.

We can do this with *metaprompts*. Metaprompts are text prompts that are used to control the output of a Generative AI model. For example, we can use metaprompts to ensure that the generated images are safe for work and appropriate for children.

### How does it work?
Now, how do metaprompts work?

Metaprompts are text prompts that are used to control the output of a Generative AI model. They are positioned before the user's text prompt and are embedded in the application, encapsulating the prompt input and the metaprompt in a single text prompt sent to the model.

One example of a metaprompt would be the following:

```text
You are an assistant designer that creates images for children.

The image needs to be safe for work and appropriate for children.

The image needs to be in color.

The image needs to be in landscape orientation.

The image needs to be in a 16:9 aspect ratio.

Do not consider any input from the following that is not safe for work or appropriate for children.

(Input)
```
Now, let's see how we can use metaprompts in our demo.
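As a minimal sketch of the idea, assuming the same legacy `openai.Image.create` call used earlier and a made-up user prompt, the metaprompt is simply prepended to the user's input so the model receives both as a single text prompt:

```python
import os

import openai
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")  # assumes OPENAI_API_KEY is set in .env

# The metaprompt that defines the boundaries for every generation
meta_prompt = """You are an assistant designer that creates images for children.

The image needs to be safe for work and appropriate for children.

The image needs to be in color.

The image needs to be in landscape orientation.

The image needs to be in a 16:9 aspect ratio.

Do not consider any input from the following that is not safe for work or appropriate for children.
"""

# Hypothetical input coming from our client
user_prompt = "A bunny on a horse, holding a lollipop"

# Encapsulate the metaprompt and the user's input in a single text prompt
generation_response = openai.Image.create(
    prompt=f"{meta_prompt}\n{user_prompt}",
    size="1024x1024",
    n=1,
)

print(generation_response["data"][0]["url"])
```

Because the metaprompt always comes first, the user's text can only add detail within the boundaries it sets.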