diff --git a/docs/api/enumerations/ModelProviderName.md b/docs/api/enumerations/ModelProviderName.md index 1f6e6ce0b2f..197a65b15c7 100644 --- a/docs/api/enumerations/ModelProviderName.md +++ b/docs/api/enumerations/ModelProviderName.md @@ -111,3 +111,11 @@ #### Defined in [packages/core/src/types.ts:130](https://github.com/ai16z/eliza/blob/main/packages/core/src/types.ts#L130) + +### HEURIST + +> **HEURIST**: `"heurist"` + +#### Defined in + +[packages/core/src/types.ts:132](https://github.com/ai16z/eliza/blob/main/packages/core/src/types.ts#L132) diff --git a/docs/docs/advanced/fine-tuning.md b/docs/docs/advanced/fine-tuning.md index 1065bf3ba92..95cbc2bac7a 100644 --- a/docs/docs/advanced/fine-tuning.md +++ b/docs/docs/advanced/fine-tuning.md @@ -23,7 +23,8 @@ enum ModelProviderName { LLAMALOCAL, GOOGLE, REDPILL, - OPENROUTER + OPENROUTER, + HEURIST } ``` @@ -266,6 +267,31 @@ const llamaLocalSettings = { }; ``` +### Heurist Provider + +```typescript +const heuristSettings = { + settings: { + stop: [], + maxInputTokens: 128000, + maxOutputTokens: 8192, + repetition_penalty: 0.0, + temperature: 0.7, + }, + imageSettings: { + steps: 20, + }, + endpoint: "https://llm-gateway.heurist.xyz", + model: { + [ModelClass.SMALL]: "meta-llama/llama-3-70b-instruct", + [ModelClass.MEDIUM]: "meta-llama/llama-3-70b-instruct", + [ModelClass.LARGE]: "meta-llama/llama-3.1-405b-instruct", + [ModelClass.EMBEDDING]: "", // Add later + [ModelClass.IMAGE]: "PepeXL", + } +} +``` + ## Testing and Validation ### Embedding Tests diff --git a/docs/docs/api/enumerations/ModelProviderName.md b/docs/docs/api/enumerations/ModelProviderName.md index 461236918f6..55819cb77fa 100644 --- a/docs/docs/api/enumerations/ModelProviderName.md +++ b/docs/docs/api/enumerations/ModelProviderName.md @@ -109,3 +109,14 @@ #### Defined in [packages/core/src/types.ts:128](https://github.com/ai16z/eliza/blob/7fcf54e7fb2ba027d110afcc319c0b01b3f181dc/packages/core/src/types.ts#L128) + + +*** + +### HEURIST + +> 
**HEURIST**: `"heurist"` + +#### Defined in + +[packages/core/src/types.ts:132](https://github.com/ai16z/eliza/blob/4d1e66cbf7deea87a8a67525670a963cd00108bc/packages/core/src/types.ts#L132) diff --git a/docs/docs/guides/configuration.md b/docs/docs/guides/configuration.md index 3de1504fbf4..3e7ea20016c 100644 --- a/docs/docs/guides/configuration.md +++ b/docs/docs/guides/configuration.md @@ -71,6 +71,37 @@ TOGETHER_API_KEY= XAI_MODEL=meta-llama/Llama-3.1-7b-instruct ``` +### Image Generation + +Configure image generation in your character file: + +```json +{ + "modelProvider": "heurist", + "settings": { + "imageSettings": { + "steps": 20, + "width": 512, + "height": 512 + } + } +} +``` + +Example usage: + +```typescript +const result = await generateImage({ + prompt: "pepe_frog, meme, web comic, cartoon, 3d render", + width: 512, + height: 512, + numIterations: 20, // optional + guidanceScale: 3, // optional + seed: -1, // optional + modelId: "PepeXL" // optional +}, runtime); +``` + ## Character Configuration ### Character File Structure diff --git a/docs/docs/quickstart.md b/docs/docs/quickstart.md index e3824b3d2bb..ce76ef84ef8 100644 --- a/docs/docs/quickstart.md +++ b/docs/docs/quickstart.md @@ -53,7 +53,8 @@ Before getting started with Eliza, ensure you have: # Suggested quickstart environment variables DISCORD_APPLICATION_ID= # For Discord integration DISCORD_API_TOKEN= # Bot token - OPENAI_API_KEY= # OpenAI API key (starting with sk-*) + HEURIST_API_KEY= # Heurist API key for LLM and image generation + OPENAI_API_KEY= # OpenAI API key ELEVENLABS_XI_API_KEY= # API key from elevenlabs (for voice) ``` @@ -61,6 +62,9 @@ Before getting started with Eliza, ensure you have: Eliza supports multiple AI models: + - **Heurist**: Set `modelProvider: "heurist"` in your character file + - LLM: Uses Llama models (a full list of supported LLM models is available [here](https://heurist.mintlify.app/developer/supported-models)) + - Image Generation: Uses the PepeXL model (more info on available models 
[here](https://heurist.mintlify.app/developer/image-generation-api)) - **Llama**: Set `XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` - **Grok**: Set `XAI_MODEL=grok-beta` - **OpenAI**: Set `XAI_MODEL=gpt-4o-mini` or `gpt-4o`