From 5ac97ecf715e489350e91452c648382916314649 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Alexander=20F=2E=20R=C3=B8dseth?= <52813+xyproto@users.noreply.github.com>
Date: Thu, 14 Nov 2024 10:41:59 +0100
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 9f04f2b0..f775a924 100644
--- a/README.md
+++ b/README.md
@@ -308,7 +308,7 @@ Using AI / LLMs / Ollama
 
 * The `ollama` server must be running locally, or a `host:port` must be set in the `OLLAMA_HOST` environment variable.
 
-Example use, using the default `tinyllama` model (will be downloaded at first use, the size is 637 MiB and it should run anywhere).
+For example, using the default `tinyllama` model (will be downloaded at first use, the size is 637 MiB and it should run anywhere).
 
 ```
 lua> ollama()
@@ -388,7 +388,7 @@ The experimental `prompt` format is very simple:
 * The first line is the `content-type`.
 * The second line is the Ollama model, such as `tinyllama:latest` or just `tinyllama`.
 * The third line is blank.
-* The rest of the lines is the prompt that will be passed to the large language model.
+* The rest of the lines are the prompt that will be passed to the large language model.
 
 Note that the Ollama server must be fast enough to reply within 10 seconds for this to work! `tinyllama` or `gemma` should be more than fast enough with a good GPU or on an M1/M2/M3 processor.
 
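
For reference, a minimal sketch of a file in the experimental `prompt` format that the second hunk touches. Only the line layout (content-type, then the Ollama model, then a blank line, then the prompt) and the `tinyllama` model name come from the README text above; the `text/plain` content-type and the prompt text are illustrative assumptions, not values taken from the project.

```
text/plain
tinyllama

Write a one-sentence welcome message for visitors of this page.
```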