# Advanced prompts
Let's recap some learnings from the previous chapter:
> Prompt _engineering_ is the process by which we **guide the model towards more relevant responses** by providing more useful instructions or context.
There are also two steps to writing prompts: *constructing* the prompt by providing relevant context, and *optimizing* it, gradually improving the prompt.
At this point, we have some basic understanding of how to write prompts, but we need to go deeper. In this chapter, you will go from trying out various prompts to understanding why one prompt is better than another. You will learn how to construct prompts following some basic techniques that can be applied to any LLM.
## Introduction
In this chapter, we will cover the following topics:
- Extending your knowledge of prompt engineering by applying different techniques to your prompts.
- Configuring your prompts to vary the output.
## Learning goals
After completing this lesson, you'll be able to:
- Apply prompt engineering techniques that improve the outcome of your prompts.
- Perform prompting that produces either varied or deterministic output.
## Prompt engineering
With *chain-of-thought* prompting, we show the model a worked example, including the intermediate calculations, before asking the actual question. Here's how:

> Lisa has 7 apples, throws 1 apple, gives 4 apples to Bart and Bart gives one back:
> 7 - 1 = 6
> 6 - 4 = 2
> 2 + 1 = 3
> Alice has 5 apples, throws 3 apples, gives 2 to Bob and Bob gives one back, how many apples does Alice have?

Answer: 1
Note how we write a substantially longer prompt, with another example, a calculation, and then the original prompt, and we arrive at the correct answer of 1.
As you can see, chain-of-thought is a very powerful technique.
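
If you're calling a model programmatically, chain-of-thought works the same way: the worked example and the real question go into a single prompt. Here's a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment (the model name is just an example):

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# One worked example, its calculation, and then the actual question,
# all in a single prompt.
prompt = """Lisa has 7 apples, throws 1 apple, gives 4 apples to Bart and Bart gives one back:
7 - 1 = 6
6 - 4 = 2
2 + 1 = 3
Alice has 5 apples, throws 3 apples, gives 2 to Bob and Bob gives one back, how many apples does Alice have?"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```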
### Generated knowledge

With *generated knowledge*, you include relevant facts or data in the prompt so the model has the context it needs to answer. A common way to do this is to construct the prompt from a template. A template contains a number of variables, denoted by `{{variable}}`, that will be replaced with actual values, for example from a company API.
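
Here's a minimal sketch of how such a template could be filled in, assuming illustrative product data rather than a real company API:

```python
# An illustrative template; the {{...}} placeholders are filled in at runtime.
template = """Insurance company: {{company}}
Insurance products (cost per month):
{{products}}

Please suggest an insurance given the following budget and requirements:
Budget: {{budget}}
Requirements: {{requirements}}"""

# In a real system these values would come from the company API.
values = {
    "company": "ACME Insurance",
    "products": "- Car, cheap, 500 USD\n- Home, cheap, 600 USD",
    "budget": "$1000",
    "requirements": "Car, Home",
}

prompt = template
for name, value in values.items():
    prompt = prompt.replace("{{" + name + "}}", value)

print(prompt)
```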
Here's an example of what the prompt could look like once the variables have been replaced with content from your company:
```text
Insurance company: ACME Insurance
Insurance products (cost per month):
- Car, cheap, 500 USD
- Car, expensive, 1100 USD
- Home, cheap, 600 USD
- Home, expensive, 1200 USD
- Life, cheap, 100 USD

Please suggest an insurance given the following budget and requirements:
Budget: $1000
Requirements: Car, Home
```
Running this prompt through an LLM will produce a response like this:
```text
, and Life insurance

Given the budget and requirements, we suggest the following insurance package from ACME Insurance:
- Car, cheap, 500 USD
- Home, cheap, 600 USD
- Life, cheap, 100 USD
Total cost: $1,200 USD
```
As you can see, it also suggests Life insurance, which it shouldn't. This result is an indication that we need to optimize the prompt to be clearer about what is allowed. After some *trial and error*, we arrive at the following prompt:
```text
Insurance company: ACME Insurance
Insurance products (cost per month):
- type: Car, cheap, cost: 500 USD
- type: Car, expensive, cost: 1100 USD
- type: Home, cheap, cost: 600 USD
- type: Home, expensive, cost: 1200 USD
- type: Life, cheap, cost: 100 USD

Please suggest an insurance given the following budget and requirements:
Budget: $1000 restrict choice to types: Car, Home
```
Note how adding *type* and *cost* and also using the keyword *restrict* helps the LLM to understand what we want.
Now we get the following response:
```text
Given the budget and requirements, we suggest the Car, Cheap insurance product which costs 500 USD per month.
```
The point of this example was to show that even though we're using a basic technique like *generated knowledge*, we still need to optimize the prompt in most cases to get the desired outcome.
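
One way to make the *trial and error* less manual is a small programmatic check that flags responses that violate your requirements, telling you the prompt needs another optimization pass. Here's a minimal sketch (the string check is illustrative; real validation could be stricter):

```python
ALLOWED_TYPES = {"Car", "Home"}
KNOWN_TYPES = ("Car", "Home", "Life")

def response_is_valid(response: str) -> bool:
    """Check that the response only mentions insurance types we asked for."""
    mentioned = {t for t in KNOWN_TYPES if t in response}
    return mentioned <= ALLOWED_TYPES

# The first (unoptimized) response fails the check, the optimized one passes.
print(response_is_valid("We suggest Car, Home and Life insurance"))   # False
print(response_is_valid("We suggest the Car, Cheap insurance product"))  # True
```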
### Least-to-most
The idea with least-to-most prompting is to break a bigger problem down into subproblems. That way, you help guide the LLM on how to "conquer" the bigger problem. A good example is data science, where you can ask the LLM to divide up a problem like so:
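For example, a least-to-most prompt for a data science problem could look like this (an illustrative prompt, not a prescribed one):

> I want to predict which customers are likely to churn. Break this problem down into smaller steps, and let's solve them one at a time, starting with the first step.

The LLM might respond with steps like collecting data, cleaning it, selecting features, training a model and evaluating it, and you then work through each step in its own prompt.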
Suppose we've asked an LLM the same question about the biggest risks to a business several times, and each time it answers:

> The two biggest risks are loss of life and loss of business.
At this point, the LLM is consistent and mentions "life" and "business" as the top two risks. We can now continue to the next step and feel fairly confident. However, we shouldn't trust the LLM blindly; we should always verify the output.
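
One practical way to apply this is to run the same prompt several times programmatically and check how consistent the answers are before trusting them. Here's a minimal sketch, assuming the OpenAI Python SDK (the model name and prompt are illustrative):

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Run the same prompt several times and see how consistent the answers are.
prompt = "Name the top two risks in a business continuity plan, in two words."
answers = [ask(prompt) for _ in range(5)]
print(Counter(answers).most_common(1))
```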
## Vary your output
LLMs are non-deterministic by nature, meaning that you will get different results each time you run the same prompt. Try the following prompt, for example:
> "Generate code for a Python Web API"
```text
# Import necessary modules
import flask
from flask import request, jsonify

# Create a Flask app
app = flask.Flask(__name__)

# Create a route for the API
@app.route('/api', methods=['GET'])
def api():
    # Get the data from the request
    data = request.args

    # Process the data
    result = process_data(data)

    # Return the result as JSON
    return jsonify(result)

# Function to process the data
def process_data(data):
    # Do something with the data
    result = data + 1

    # Return the result
    return result

# Run the app
if __name__ == '__main__':
    app.run()
```
Running the same prompt again generates a slightly different response:
```text
#import necessary packages
import flask
from flask import request, jsonify

#create the Flask application
app = flask.Flask(__name__)
app.config["DEBUG"] = True

#create a list of books
books = [
    {'id': 0,
     'title': 'A Fire Upon the Deep',
     'author': 'Vernor Vinge',
     'first_sentence': 'The coldsleep itself was dreamless.',
     'year_published': '1992'},
    {'id': 1,
     'title': 'The Ones Who Walk Away From Omelas',
     'author': 'Ursula K. Le Guin',
     'first_sentence': 'With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.',
     'published': '1973'},
    {'id': 2,
     'title': 'Dhalgren',
     'author': 'Samuel R. Delany',
     'first_sentence': 'to wound the autumnal city.',
     'published': '1975'}
]

#create a route that filters books by id
@app.route('/api/v1/resources/books', methods=['GET'])
def api_id():
    #check if an id was provided as part of the URL;
    #if no id is provided, display an error
    if 'id' in request.args:
        id = int(request.args['id'])
    else:
        return "Error: No id field provided. Please specify an id."

    #create an empty list for our results
    results = []

    #loop through the data and match results that fit the requested ID
    #IDs are unique, but other fields might return many results
    for book in books:
        if book['id'] == id:
            results.append(book)

    #use the jsonify function from Flask to convert our list of
    #Python dictionaries to the JSON format
    return jsonify(results)

app.run()
```
> So is the varied output a problem?
That depends on what you're trying to do. If you want a specific response, it's a problem. If you're OK with varied output, as with "Generate any 3 questions on geography", then it's not a problem.
### Using temperature to vary your output
OK, so we've decided we want to limit the output to be more predictable, that is, more deterministic. How do we do that?
Temperature is a value between 0 and 1, where 0 is the most deterministic and 1 is the most varied. The default value is 0.7. Let's see what happens with two runs of the same prompt with temperature set to 0.1:
> "Generate code for a Python Web API"
```python
#import necessary libraries
import flask
from flask import request, jsonify

#create an instance of the Flask class
app = flask.Flask(__name__)

#create an endpoint for the API
@app.route('/api/v1/endpoint', methods=['GET'])
def api_endpoint():
    #get the request data
    data = request.get_json()

    #process the data
    result = process_data(data)

    #return the result
    return jsonify(result)

#function to process the data
def process_data(data):
    #process the data
    result = {'result': 'success'}

    #return the result
    return result

#run the Flask app
if __name__ == '__main__':
    app.run()
```
Running the prompt again gives us this outcome:
```python
#import necessary libraries
import flask
from flask import request, jsonify

#create a Flask app
app = flask.Flask(__name__)

#create a route for the API
@app.route('/api', methods=['GET'])
def api_call():
    #get the data from the request
    data = request.get_json()

    #process the data
    result = process_data(data)

    #return the result
    return jsonify(result)

#define the process_data function
def process_data(data):
    #do something with the data
    result = data + 1

    #return the result
    return result

#run the app
if __name__ == '__main__':
    app.run()
```
There's only a tiny difference between these two outputs. This time, let's do the opposite and set the temperature to 0.9:
```python
# Import necessary libraries
import flask
from flask import request, jsonify

# Create a Flask app
app = flask.Flask(__name__)

# Create a route for the API
@app.route('/api', methods=['GET'])
def api_call():
    # Get the data from the request
    data = request.args

    # Process the data
    result = process_data(data)

    # Return the result
    return jsonify(result)

# Function to process the data
def process_data(data):
    # Do something with the data
    result = data + 1

    # Return the result
    return result

# Run the app
if __name__ == '__main__':
    app.run()
```
And here's the second attempt with 0.9 as the temperature value:
```python
import flask
from flask import request, jsonify

# create the Flask app
app = flask.Flask(__name__)
app.config['DEBUG'] = True

# create some test data
books = [
    {'id': 0, 'title': 'A Fire Upon The Deep', 'author': 'Vernor Vinge', 'first_sentence': 'The coldsleep itself was dreamless.', 'year_published': '1992'},
    {'id': 1, 'title': 'The Ones Who Walk Away From Omelas', 'author': 'Ursula K. Le Guin', 'first_sentence': 'With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.', 'published': '1973'},
    {'id': 2, 'title': 'Dhalgren', 'author': 'Samuel R. Delany', 'first_sentence': 'to wound the autumnal city.', 'published': '1975'}
]

# create an endpoint
@app.route('/', methods=['GET'])
def home():
    return '''<h1>Welcome to our book API!</h1>'''

@app.route('/api/v1/resources/books
```
As you can see, the results couldn't be more varied.
> Note: there are more parameters you can change to vary the output, like top-k, top-p, repetition penalty, length penalty and diversity penalty, but these are outside the scope of this curriculum.
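
Where do you set temperature? If you're calling a model through an API rather than a chat UI, it's typically a parameter on the request. Here's a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment (the model name is just an example):

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Generate code for a Python Web API"}],
    temperature=0.1,  # close to 0: mostly deterministic; close to 1: more varied
)
print(response.choices[0].message.content)
```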
## Good practices
There are many practices you can apply to try to get what you want. You will find your own style as you use prompting more and more.
Take the following Express server code:

```javascript
app.listen(3000, () => {
})
```
Use an AI assistant like GitHub Copilot or ChatGPT and apply the "self-refine" technique to improve the code.
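
If you're unsure where to start, a self-refine exchange could look like this (the prompts are illustrative):

1. Ask the assistant to critique the code: "Review this code and point out any bugs, security issues or bad practices."
2. Feed the critique back: "Now rewrite the code, applying the improvements you suggested."
3. Repeat until you're happy with the result.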