Commit cb59851

Merge pull request #2 from Titan-Node/livepeer-doc-updates

Updated docs for Livepeer LLM integration

1 parent ccb8ed4, commit cb59851

File tree

4 files changed: +49 -20 lines changed


.env.example (+1 -1)

```diff
@@ -128,7 +128,7 @@ LARGE_HYPERBOLIC_MODEL= # Default: meta-llama/Meta-Llama-3.1-405-Instruc
 LARGE_AKASH_CHAT_API_MODEL= # Default: Meta-Llama-3-1-405B-Instruct-FP8

 # Livepeer configuration
-LIVEPEER_GATEWAY_URL= # Free inference gateways and docs: https://livepeer-eliza.com/
+LIVEPEER_GATEWAY_URL=https://dream-gateway.livepeer.cloud # Free inference gateways and docs: https://livepeer-eliza.com/
 LIVEPEER_IMAGE_MODEL= # Default: ByteDance/SDXL-Lightning
 LIVEPEER_SMALL_MODEL=
 LIVEPEER_MEDIUM_MODEL=
```
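The change above bakes a default gateway URL into `.env.example` rather than leaving the variable empty. A minimal sketch of the fallback behaviour this implies (the helper `resolveLivepeerGateway` is hypothetical, not part of the Eliza codebase):

```typescript
// Hypothetical helper (not from the Eliza codebase): pick the Livepeer
// gateway URL from the environment, falling back to the default that this
// commit writes into .env.example.
const DEFAULT_LIVEPEER_GATEWAY = "https://dream-gateway.livepeer.cloud";

function resolveLivepeerGateway(
  env: Record<string, string | undefined>
): string {
  const url = env.LIVEPEER_GATEWAY_URL?.trim();
  // Treat unset and empty values the same way: use the documented default.
  return url ? url : DEFAULT_LIVEPEER_GATEWAY;
}

console.log(resolveLivepeerGateway({})); // https://dream-gateway.livepeer.cloud
```

With the default now present in `.env.example`, a fresh checkout can reach a free gateway without any manual configuration.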

docs/docs/advanced/fine-tuning.md (+40 -18)

````diff
@@ -26,6 +26,7 @@ enum ModelProviderName {
     REDPILL,
     OPENROUTER,
     HEURIST,
+    LIVEPEER,
 }
 ```

@@ -272,24 +273,45 @@ const llamaLocalSettings = {

 ```typescript
 const heuristSettings = {
-    settings: {
-        stop: [],
-        maxInputTokens: 32768,
-        maxOutputTokens: 8192,
-        repetition_penalty: 0.0,
-        temperature: 0.7,
-    },
-    imageSettings: {
-        steps: 20,
-    },
-    endpoint: "https://llm-gateway.heurist.xyz",
-    model: {
-        [ModelClass.SMALL]: "hermes-3-llama3.1-8b",
-        [ModelClass.MEDIUM]: "mistralai/mixtral-8x7b-instruct",
-        [ModelClass.LARGE]: "nvidia/llama-3.1-nemotron-70b-instruct",
-        [ModelClass.EMBEDDING]: "", // Add later
-        [ModelClass.IMAGE]: "FLUX.1-dev",
-    },
+    settings: {
+        stop: [],
+        maxInputTokens: 32768,
+        maxOutputTokens: 8192,
+        repetition_penalty: 0.0,
+        temperature: 0.7,
+    },
+    imageSettings: {
+        steps: 20,
+    },
+    endpoint: "https://llm-gateway.heurist.xyz",
+    model: {
+        [ModelClass.SMALL]: "hermes-3-llama3.1-8b",
+        [ModelClass.MEDIUM]: "mistralai/mixtral-8x7b-instruct",
+        [ModelClass.LARGE]: "nvidia/llama-3.1-nemotron-70b-instruct",
+        [ModelClass.EMBEDDING]: "", // Add later
+        [ModelClass.IMAGE]: "FLUX.1-dev",
+    },
+};
+```
+
+### Livepeer Provider
+
+```typescript
+const livepeerSettings = {
+    settings: {
+        stop: [],
+        maxInputTokens: 128000,
+        maxOutputTokens: 8192,
+        repetition_penalty: 0.4,
+        temperature: 0.7,
+    },
+    endpoint: "https://dream-gateway.livepeer.cloud",
+    model: {
+        [ModelClass.SMALL]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
+        [ModelClass.MEDIUM]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
+        [ModelClass.LARGE]: "meta-llama/Llama-3.3-70B-Instruct",
+        [ModelClass.IMAGE]: "ByteDance/SDXL-Lightning",
+    },
 };
 ```
````
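The added `livepeerSettings` block follows the same shape as the other providers: a per-`ModelClass` map of model identifiers. A self-contained sketch of how such a map resolves a model string (the `ModelClass` enum and `modelFor` helper here are local stand-ins, not the actual Eliza implementation):

```typescript
// Local stand-in for the ModelClass enum used throughout these docs.
enum ModelClass {
  SMALL = "small",
  MEDIUM = "medium",
  LARGE = "large",
  IMAGE = "image",
}

// Model identifiers copied from the livepeerSettings block in the diff above.
const livepeerModels: Record<ModelClass, string> = {
  [ModelClass.SMALL]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
  [ModelClass.MEDIUM]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
  [ModelClass.LARGE]: "meta-llama/Llama-3.3-70B-Instruct",
  [ModelClass.IMAGE]: "ByteDance/SDXL-Lightning",
};

// Resolving the model for a requested class is a plain map lookup.
function modelFor(cls: ModelClass): string {
  return livepeerModels[cls];
}

console.log(modelFor(ModelClass.LARGE)); // meta-llama/Llama-3.3-70B-Instruct
```

Note that SMALL and MEDIUM deliberately point at the same 8B model; only LARGE moves up to the 70B Llama 3.3 checkpoint.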

docs/docs/core/characterfile.md (+1 -1)

```diff
@@ -92,7 +92,7 @@ The character's display name for identification and in conversations.

 #### `modelProvider` (required)

-Specifies the AI model provider. Supported options from [ModelProviderName](/api/enumerations/modelprovidername) include `anthropic`, `llama_local`, `openai`, and others.
+Specifies the AI model provider. Supported options from [ModelProviderName](/api/enumerations/modelprovidername) include `anthropic`, `llama_local`, `openai`, `livepeer`, and others.

 #### `clients` (required)

```
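Per the updated docs, selecting Livepeer in a character file is a one-field change. A hypothetical minimal character definition, where only the field names (`name`, `modelProvider`, `clients`) come from the characterfile docs and the values are illustrative:

```typescript
// Hypothetical minimal character definition; field names follow the
// characterfile docs, values are illustrative only.
const character = {
  name: "SampleAgent",
  modelProvider: "livepeer", // now a documented ModelProviderName option
  clients: [] as string[],
};

console.log(JSON.stringify(character));
```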
docs/docs/quickstart.md (+7 -0)

```diff
@@ -92,10 +92,17 @@ Eliza supports multiple AI models:
 - **Heurist**: Set `modelProvider: "heurist"` in your character file. Most models are uncensored.
     - LLM: Select available LLMs [here](https://docs.heurist.ai/dev-guide/supported-models#large-language-models-llms) and configure `SMALL_HEURIST_MODEL`,`MEDIUM_HEURIST_MODEL`,`LARGE_HEURIST_MODEL`
     - Image Generation: Select available Stable Diffusion or Flux models [here](https://docs.heurist.ai/dev-guide/supported-models#image-generation-models) and configure `HEURIST_IMAGE_MODEL` (default is FLUX.1-dev)
+<<<<<<< HEAD
 - **Llama**: Set `OLLAMA_MODEL` to your chosen model
 - **Grok**: Set `GROK_API_KEY` to your Grok API key and set `modelProvider: "grok"` in your character file
 - **OpenAI**: Set `OPENAI_API_KEY` to your OpenAI API key and set `modelProvider: "openai"` in your character file
 - **Livepeer**: Set `LIVEPEER_IMAGE_MODEL` to your chosen Livepeer image model, available models [here](https://livepeer-eliza.com/)
+=======
+- **Llama**: Set `XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo`
+- **Grok**: Set `XAI_MODEL=grok-beta`
+- **OpenAI**: Set `XAI_MODEL=gpt-4o-mini` or `gpt-4o`
+- **Livepeer**: Set `SMALL_LIVEPEER_MODEL`,`MEDIUM_LIVEPEER_MODEL`,`LARGE_LIVEPEER_MODEL` and `LIVEPEER_IMAGE_MODEL` to your desired models listed [here](https://livepeer-eliza.com/).
+>>>>>>> 95f56e6b4 (Merge pull request #2 from Titan-Node/livepeer-doc-updates)

 You set which model to use inside the character JSON file

```

Note that this hunk commits unresolved merge-conflict markers (`<<<<<<< HEAD` / `=======` / `>>>>>>>`) verbatim into quickstart.md, leaving both versions of the provider list in the published file.