
Commit cc0009a

Chris committed: updating chapter

1 parent 1413c97 commit cc0009a

File tree

1 file changed: +309 -13 lines changed

5-advanced-prompts/README.md (+309 -13)
@@ -1,22 +1,25 @@
# Advanced prompts

Let's recap some learnings from the previous chapter:

> Prompt _engineering_ is the process by which we **guide the model towards more relevant responses** by providing more useful instructions or context.

There are also two steps to writing prompts: first *constructing* the prompt by providing relevant context, and second *optimizing* it, gradually improving the prompt.

At this point, we have a basic understanding of how to write prompts, but we need to go deeper. In this chapter, you will go from trying out various prompts to understanding why one prompt is better than another. You will learn how to construct prompts following some basic techniques that can be applied to any LLM.

## Introduction

In this chapter, we will cover the following topics:

- Extending your knowledge of prompt engineering by applying different techniques to your prompts.
- Configuring your prompts to vary the output.

## Learning goals

After completing this lesson, you'll be able to:

- Apply prompt engineering techniques that improve the outcome of your prompts.
- Perform prompting that is either varied or deterministic.

## Prompt engineering

@@ -92,7 +95,7 @@ Here's how:

Alice has 5 apples, throws 3 apples, gives 2 to Bob and Bob gives one back, how many apples does Alice have?"
Answer: 1

Note how we write a substantially longer prompt, with another example, a calculation, and then the original prompt, and we arrive at the correct answer: 1.

As you can see, chain-of-thought is a very powerful technique.
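
If you're calling a model from code, the same technique applies: you include the worked example in the prompt you send. Here's a minimal sketch using the `openai` Python client; the client setup, model name, and the worked example are illustrative assumptions, not part of the original chapter:

```python
# A minimal sketch of sending a chain-of-thought prompt from code.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and the worked example are illustrative.
from openai import OpenAI

client = OpenAI()

# The worked example teaches the model the reasoning steps,
# then the real question follows.
prompt = """Lisa has 7 apples, throws 1 apple, gives 4 apples to Bart and Bart gives one back:
7 - 1 = 6
6 - 4 = 2
2 + 1 = 3
Alice has 5 apples, throws 3 apples, gives 2 to Bob and Bob gives one back, how many apples does Alice have?"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # expected: reasoning steps ending in 1
```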

@@ -113,7 +116,7 @@ Requirements: {{requirements}}

Above, you see how the prompt is constructed using a template. In the template there are a number of variables, denoted by `{{variable}}`, that will be replaced with actual values from a company API.
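
As a sketch of what that substitution step could look like in code (the `values` dict stands in for data you'd fetch from your company API):

```python
# A minimal sketch of replacing {{variable}} placeholders in a prompt template.
# The values dict stands in for data fetched from a company API.

def render(template: str, values: dict[str, str]) -> str:
    # Replace each {{name}} placeholder with its concrete value
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return template

template = """Insurance company: {{company}}
Budget: {{budget}}
Requirements: {{requirements}}"""

prompt = render(template, {
    "company": "ACME Insurance",
    "budget": "$1000",
    "requirements": "Car, Home",
})
print(prompt)
```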

Here's an example of what the prompt could look like once the variables have been replaced with content from your company:

```text
Insurance company: ACME Insurance
...
Budget: $1000
Requirements: Car, Home
```

Running this prompt through an LLM will produce a response like this:

```text
, and Life insurance

Given the budget and requirements, we suggest the following insurance package from ACME Insurance:
- Car, cheap, 500 USD
- Home, cheap, 600 USD
- Life, cheap, 100 USD
Total cost: $1,200 USD
```

As you can see, it also suggests Life insurance, which it shouldn't. This result is an indication that we need to optimize the prompt to be clearer about what's allowed. After some *trial and error*, we arrive at the following prompt:

```text
Insurance company: ACME Insurance
Insurance products (cost per month):
- type: Car, cheap, cost: 500 USD
- type: Car, expensive, cost: 1100 USD
- type: Home, cheap, cost: 600 USD
- type: Home, expensive, cost: 1200 USD
- type: Life, cheap, cost: 100 USD

Please suggest an insurance given the following budget and requirements:
Budget: $1000 restrict choice to types: Car, Home
```
161+
162+
Note how adding *type* and *cost* and also using the keyword *restrict* helps the LLM to understand what we want.
163+
164+
Now we get the following response:
165+
166+
```text
167+
Given the budget and requirements, we suggest the Car, Cheap insurance product which costs 500 USD per month.
168+
```
169+
170+
The point of this example was to show that even though we're using a basic technique like *generated knowledge*, we still need to optimize the prompt in most cases to get the desired outcome.
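
One way to catch this kind of drift automatically is to verify the response in code before accepting it. Below is a minimal sketch; the forbidden-type check and the hard-coded `response` string are illustrative, not part of the chapter:

```python
# A minimal sketch of verifying an LLM response against the allowed types.
# The check and the hard-coded response below are illustrative.
ALLOWED_TYPES = {"Car", "Home"}
FORBIDDEN_TYPES = {"Life"}

def violates_restrictions(response: str) -> bool:
    # Flag the response if it mentions any insurance type we excluded
    return any(t.lower() in response.lower() for t in FORBIDDEN_TYPES)

response = "Given the budget and requirements, we suggest the Car, Cheap insurance product which costs 500 USD per month."
if violates_restrictions(response):
    print("Response mentions a forbidden insurance type; re-prompt or tighten the restriction.")
else:
    print("Response respects the type restriction:", ALLOWED_TYPES)
```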

### Least-to-most

The idea with least-to-most prompting is to break down a bigger problem into subproblems. That way, you help guide the LLM on how to "conquer" the bigger problem. A good example could be data science, where you can ask the LLM to divide up a problem like so:

@@ -261,6 +301,262 @@ The two biggest risks are loss of life and loss of business.

At this point, the LLM is consistent and mentions "life" and "business" as the top two risks. We can now continue to the next step and feel fairly confident. However, we shouldn't trust the LLM blindly; we should always verify the output.
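
Since we shouldn't trust the LLM blindly, it can help to check that consistency in code. Here's a minimal sketch, assuming a hypothetical `ask_llm` helper in place of a real API call:

```python
# A minimal sketch of verifying that an LLM's answer is consistent across
# runs before trusting it. `ask_llm` is a hypothetical stand-in for an API call.

def ask_llm(prompt: str) -> str:
    # Stand-in: replace with a real LLM API call
    return "The two biggest risks are loss of life and loss of business."

def is_consistent(prompt: str, keywords: list[str], runs: int = 3) -> bool:
    # Run the same prompt several times and require every answer
    # to mention all of the expected keywords
    answers = [ask_llm(prompt) for _ in range(runs)]
    return all(all(k in a.lower() for k in keywords) for a in answers)

if is_consistent("What are the top risks?", ["life", "business"]):
    print("Consistent across runs; proceed to the next step.")
else:
    print("Inconsistent answers; refine the prompt before continuing.")
```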

## Vary your output

LLMs are non-deterministic by nature, meaning that you will get different results each time you run the same prompt. Try the following prompt, for example:

> "Generate code for a Python Web API"

```text
# Import necessary modules
import flask
from flask import request, jsonify

# Create a Flask app
app = flask.Flask(__name__)

# Create a route for the API
@app.route('/api', methods=['GET'])
def api():
    # Get the data from the request
    data = request.args

    # Process the data
    result = process_data(data)

    # Return the result as JSON
    return jsonify(result)

# Function to process the data
def process_data(data):
    # Do something with the data
    result = data + 1

    # Return the result
    return result

# Run the app
if __name__ == '__main__':
    app.run()
```

Running the same prompt again generates a slightly different response:

```text
#import necessary packages
import flask
from flask import request, jsonify

#create the Flask application
app = flask.Flask(__name__)
app.config["DEBUG"] = True

#create a list of books
books = [
    {'id': 0,
     'title': 'A Fire Upon the Deep',
     'author': 'Vernor Vinge',
     'first_sentence': 'The coldsleep itself was dreamless.',
     'year_published': '1992'},
    {'id': 1,
     'title': 'The Ones Who Walk Away From Omelas',
     'author': 'Ursula K. Le Guin',
     'first_sentence': 'With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.',
     'published': '1973'},
    {'id': 2,
     'title': 'Dhalgren',
     'author': 'Samuel R. Delany',
     'first_sentence': 'to wound the autumnal city.',
     'published': '1975'}
]

#create an endpoint for the API
@app.route('/', methods=['GET'])
def home():
    return '''<h1>Books API</h1>
<p>A prototype API for retrieving books.</p>'''

#create an endpoint to return all books
@app.route('/api/v1/resources/books/all', methods=['GET'])
def api_all():
    return jsonify(books)

#create an endpoint to return a single book
@app.route('/api/v1/resources/books', methods=['GET'])
def api_id():
    #check if an ID was provided as part of the URL
    #if ID is provided, assign it to a variable
    #if no ID is provided, display an error
    if 'id' in request.args:
        id = int(request.args['id'])
    else:
        return "Error: No id field provided. Please specify an id."

    #create an empty list for our results
    results = []

    #loop through the data and match results that fit the requested ID
    #IDs are unique, but other fields might return many results
    for book in books:
        if book['id'] == id:
            results.append(book)

    #use the jsonify function from Flask to convert our list of
    #Python dictionaries to the JSON format
    return jsonify(results)

app.run()
```
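
You can observe this non-determinism from code, too. Here's a minimal sketch that runs the same prompt twice and compares the replies; `ask_llm` is a hypothetical stand-in for whichever API you use:

```python
# A minimal sketch showing non-determinism: the same prompt, run twice,
# can return different text. `ask_llm` is a hypothetical stand-in for an API call.
import random

def ask_llm(prompt: str) -> str:
    # Stand-in: replace with a real LLM API call
    return random.choice(["a Flask-based API", "a books API with test data"])

first = ask_llm("Generate code for a Python Web API")
second = ask_llm("Generate code for a Python Web API")
print("Same output!" if first == second else "Different outputs.")
```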

> So is the varied output a problem?

It depends on what you're trying to do. If you want a specific response, then it's a problem. If you're OK with varied output, like "Generate any 3 questions on geography", then it's not a problem.

### Using temperature to vary your output

OK, so we've decided we want to limit the output to be more predictable, that is, more deterministic. How do we do that?

Temperature is a value between 0 and 1, where 0 is the most deterministic and 1 is the most varied. The default value is 0.7. Let's see what happens with two runs of the same prompt with temperature set to 0.1:

> "Generate code for a Python Web API"

```python
#import necessary libraries
import flask
from flask import request, jsonify

#create an instance of the Flask class
app = flask.Flask(__name__)

#create an endpoint for the API
@app.route('/api/v1/endpoint', methods=['GET'])
def api_endpoint():
    #get the request data
    data = request.get_json()

    #process the data
    result = process_data(data)

    #return the result
    return jsonify(result)

#function to process the data
def process_data(data):
    #process the data
    result = {'result': 'success'}

    #return the result
    return result

#run the Flask app
if __name__ == '__main__':
    app.run()
```

Running the prompt again gives us this outcome:

```python
#import necessary libraries
import flask
from flask import request, jsonify

#create a Flask app
app = flask.Flask(__name__)

#create a route for the API
@app.route('/api', methods=['GET'])
def api_call():
    #get the data from the request
    data = request.get_json()

    #process the data
    result = process_data(data)

    #return the result
    return jsonify(result)

#define the process_data function
def process_data(data):
    #do something with the data
    result = data + 1

    #return the result
    return result

#run the app
if __name__ == '__main__':
    app.run()
```

There's only a tiny difference between these two outputs. This time, let's do the opposite and set the temperature to 0.9:

```python
# Import necessary libraries
import flask
from flask import request, jsonify

# Create a Flask app
app = flask.Flask(__name__)

# Create a route for the API
@app.route('/api', methods=['GET'])
def api_call():
    # Get the data from the request
    data = request.args

    # Process the data
    result = process_data(data)

    # Return the result
    return jsonify(result)

# Function to process the data
def process_data(data):
    # Do something with the data
    result = data + 1

    # Return the result
    return result

# Run the app
if __name__ == '__main__':
    app.run()
```

And the second attempt at a temperature value of 0.9:

```python
import flask
from flask import request, jsonify

# create the Flask app
app = flask.Flask(__name__)
app.config['DEBUG'] = True

# create some test data
books = [
    {'id': 0, 'title': 'A Fire Upon The Deep', 'author': 'Vernor Vinge', 'first_sentence': 'The coldsleep itself was dreamless.', 'year_published': '1992'},
    {'id': 1, 'title': 'The Ones Who Walk Away From Omelas', 'author': 'Ursula K. Le Guin', 'first_sentence': 'With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.', 'published': '1973'},
    {'id': 2, 'title': 'Dhalgren', 'author': 'Samuel R. Delany', 'first_sentence': 'to wound the autumnal city.', 'published': '1975'}
]

# create an endpoint
@app.route('/', methods=['GET'])
def home():
    return '''<h1>Welcome to our book API!</h1>'''

@app.route('/api/v1/resources/books
```

As you can see, the results couldn't be more varied; the second response is even cut off mid-line.
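
How you set temperature depends on the tool or API you're using. As a minimal sketch, assuming the `openai` Python client (the model name is an illustrative placeholder):

```python
# A sketch of setting temperature through the `openai` Python client.
# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Generate code for a Python Web API"}],
    temperature=0.1,  # close to 0: more deterministic; close to 1: more varied
)
print(response.choices[0].message.content)
```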

> Note: there are more parameters you can change to vary the output, like top-k, top-p, repetition penalty, length penalty and diversity penalty, but these are outside the scope of this curriculum.
## Good practices

There are many practices you can apply to try to get what you want. You will find your own style as you use prompting more and more.

@@ -293,7 +589,7 @@ app.listen(3000, () => {
})
```

Use an AI assistant like GitHub Copilot or ChatGPT and apply the "self-refine" technique to improve the code.
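
If you're not sure where to start, a self-refine exchange could look something like this (the exact wording is yours to choose):

```text
1. "Here's my Express app code: [paste code]. Suggest improvements to its error handling and security."
2. "Apply the improvements you suggested and show the updated code."
3. "Review the updated code once more for any remaining bugs."
```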

## Solution
