Commit b225815 (1 parent: 07da475)

rm legacy vars + update some relevant docs

Removes the XAI_MODEL and XAI_API_KEY environment variables.

9 files changed: +14 −50 lines

.env.example (−4 lines)

@@ -88,9 +88,6 @@ TWITTER_TARGET_USERS= # Comma separated list of Twitter user names to
 TWITTER_RETRY_LIMIT= # Maximum retry attempts for Twitter login
 TWITTER_SPACES_ENABLE=false # Enable or disable Twitter Spaces logic
 
-XAI_API_KEY=
-XAI_MODEL=
-
 # Post Interval Settings (in minutes)
 POST_INTERVAL_MIN= # Default: 90
 POST_INTERVAL_MAX= # Default: 180
@@ -103,7 +100,6 @@ MAX_ACTIONS_PROCESSING=1 # Maximum number of actions (e.g., retweets, likes) to
 ACTION_TIMELINE_TYPE=foryou # Type of timeline to interact with. Options: "foryou" or "following". Default: "foryou"
 
 # Feature Flags
-IMAGE_GEN= # Set to TRUE to enable image generation
 USE_OPENAI_EMBEDDING= # Set to TRUE for OpenAI/1536, leave blank for local
 USE_OLLAMA_EMBEDDING= # Set to TRUE for OLLAMA/1024, leave blank for local

README_CN.md (−3 lines)

@@ -188,9 +188,6 @@ TWITTER_USERNAME= # Account username
 TWITTER_PASSWORD= # Account password
 TWITTER_EMAIL= # Account email
 
-XAI_API_KEY=
-XAI_MODEL=
-
 
 # For asking Claude stuff
 ANTHROPIC_API_KEY=

README_ES.md (+3 −6 lines)

@@ -54,15 +54,15 @@ To avoid conflicts in the core directory, it is recommended to add actions
 
 ### Run with Llama
 
-You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`
+You can run Llama 70B or 405B models by setting the environment variable for a provider that supports these models. Llama is also supported locally if no other provider is configured.
 
 ### Run with Grok
 
-You can run Grok models by setting the `XAI_MODEL` environment variable to `grok-beta`
+You can run Grok models by setting the `GROK_API_KEY` environment variable
 
 ### Run with OpenAI
 
-You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4o-mini` or `gpt-4o`
+You can run OpenAI models by setting the `OPENAI_API_KEY` environment variable
 
@@ -99,9 +99,6 @@ TWITTER_USERNAME= # Account username
 TWITTER_PASSWORD= # Account password
 TWITTER_EMAIL= # Account email
 
-XAI_API_KEY=
-XAI_MODEL=
-
 # For asking Claude
 ANTHROPIC_API_KEY=

agent/src/index.ts (−2 lines)

@@ -282,8 +282,6 @@ export function getTokenForProvider(
         settings.LLAMACLOUD_API_KEY ||
         character.settings?.secrets?.TOGETHER_API_KEY ||
         settings.TOGETHER_API_KEY ||
-        character.settings?.secrets?.XAI_API_KEY ||
-        settings.XAI_API_KEY ||
         character.settings?.secrets?.OPENAI_API_KEY ||
         settings.OPENAI_API_KEY
     );
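With the XAI entries removed, the token lookup reduces to a character-secrets-then-settings fallback chain per provider. A minimal self-contained sketch of that pattern, where `Character` and `Secrets` are simplified stand-ins rather than the real eliza types:

```typescript
// Simplified stand-ins for the project's real types (assumption).
type Secrets = Record<string, string | undefined>;

interface Character {
    settings?: { secrets?: Secrets };
}

// Resolve a provider token, preferring a character-level secret over the
// global settings for each provider, in the order shown in the diff above.
function resolveToken(
    character: Character,
    settings: Secrets
): string | undefined {
    const secrets = character.settings?.secrets;
    return (
        secrets?.LLAMACLOUD_API_KEY ||
        settings.LLAMACLOUD_API_KEY ||
        secrets?.TOGETHER_API_KEY ||
        settings.TOGETHER_API_KEY ||
        secrets?.OPENAI_API_KEY ||
        settings.OPENAI_API_KEY
    );
}
```

A character-level secret wins over the environment for the same provider, and earlier providers in the chain win over later ones.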

docs/README.md (+4 −10 lines)

@@ -59,15 +59,15 @@ To avoid git clashes in the core directory, we recommend adding custom actions t
 
 ### Run with Llama
 
-You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`
+You can run Llama 70B or 405B models by setting the environment variable for a provider that supports these models. Llama is also supported locally if no other provider is set.
 
 ### Run with Grok
 
-You can run Grok models by setting the `XAI_MODEL` environment variable to `grok-beta`
+You can run Grok models by setting the `GROK_API_KEY` environment variable to your Grok API key and setting grok as the model provider in your character file.
 
 ### Run with OpenAI
 
-You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4-mini` or `gpt-4o`
+You can run OpenAI models by setting the `OPENAI_API_KEY` environment variable to your OpenAI API key and setting openai as the model provider in your character file.
 
@@ -103,10 +103,6 @@ TWITTER_USERNAME= # Account username
 TWITTER_PASSWORD= # Account password
 TWITTER_EMAIL= # Account email
 
-X_SERVER_URL=
-XAI_API_KEY=
-XAI_MODEL=
-
 
 # For asking Claude stuff
 ANTHROPIC_API_KEY=
@@ -143,9 +139,7 @@ Make sure that you've installed the CUDA Toolkit, including cuDNN and cuBLAS.
 
 ### Running locally
 
-Add XAI_MODEL and set it to one of the above options from [Run with
-Llama](#run-with-llama) - you can leave X_SERVER_URL and XAI_API_KEY blank, it
-downloads the model from huggingface and queries it locally
+By default, the bot will download and use a local model. You can change this by setting the environment variables for the model you want to use.
 
 # Clients
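The Grok and OpenAI sections above both pair an API-key variable with a model provider named in the character file. As an illustrative sketch only (the lookup table below is an assumption, not the project's actual code), the pairing looks like:

```typescript
// Hypothetical provider-to-env-var mapping mirroring the docs above.
const tokenVarByProvider: Record<string, string> = {
    grok: "GROK_API_KEY",
    openai: "OPENAI_API_KEY",
};

// A minimal character-file fragment (illustrative, not the full schema).
const character = { name: "eliza", modelProvider: "openai" };

// The provider in the character file selects which env var supplies the token.
const tokenVar = tokenVarByProvider[character.modelProvider];
```

Setting `modelProvider: "grok"` instead would select `GROK_API_KEY`.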

docs/docs/api/index.md (+4 −10 lines)

@@ -56,15 +56,15 @@ To avoid git clashes in the core directory, we recommend adding custom actions t
 
 ### Run with Llama
 
-You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`
+You can run Llama 70B or 405B models by setting the environment variable for a provider that supports these models. Llama is also supported locally if no other provider is set.
 
 ### Run with Grok
 
-You can run Grok models by setting the `XAI_MODEL` environment variable to `grok-beta`
+You can run Grok models by setting the `GROK_API_KEY` environment variable to your Grok API key
 
 ### Run with OpenAI
 
-You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4o-mini` or `gpt-4o`
+You can run OpenAI models by setting the `OPENAI_API_KEY` environment variable to your OpenAI API key
 
@@ -101,10 +101,6 @@ TWITTER_USERNAME= # Account username
 TWITTER_PASSWORD= # Account password
 TWITTER_EMAIL= # Account email
 
-X_SERVER_URL=
-XAI_API_KEY=
-XAI_MODEL=
-
 # For asking Claude stuff
 ANTHROPIC_API_KEY=
 
@@ -147,9 +143,7 @@ Make sure that you've installed the CUDA Toolkit, including cuDNN and cuBLAS.
 
 ### Running locally
 
-Add XAI_MODEL and set it to one of the above options from [Run with
-Llama](#run-with-llama) - you can leave X_SERVER_URL and XAI_API_KEY blank, it
-downloads the model from huggingface and queries it locally
+By default, the bot will download and use a local model. You can change this by setting the environment variables for the model you want to use.
 
 # Clients

docs/docs/guides/configuration.md (−8 lines)

@@ -25,10 +25,6 @@ Here are the essential environment variables you need to configure:
 OPENAI_API_KEY=sk-your-key # Required for OpenAI features
 ANTHROPIC_API_KEY=your-key # Required for Claude models
 TOGETHER_API_KEY=your-key # Required for Together.ai models
-
-# Default Settings
-XAI_MODEL=gpt-4o-mini # Default model to use
-X_SERVER_URL= # Optional model API endpoint
 ```
 
 ### Client-Specific Configuration
@@ -74,11 +70,7 @@ HEURIST_API_KEY=
 
 # Livepeer Settings
 LIVEPEER_GATEWAY_URL=
-
-# Local Model Settings
-XAI_MODEL=meta-llama/Llama-3.1-7b-instruct
 ```
-
 ### Image Generation
 
 Configure image generation in your character file:

docs/docs/guides/local-development.md (−2 lines)

@@ -75,8 +75,6 @@ Configure essential development variables:
 ```bash
 # Minimum required for local development
 OPENAI_API_KEY=sk-* # Optional, for OpenAI features
-XAI_API_KEY= # Leave blank for local inference
-XAI_MODEL=meta-llama/Llama-3.1-7b-instruct # Local model
 ```
 
 ### 5. Local Model Setup

docs/docs/quickstart.md (+3 −5 lines)

@@ -92,9 +92,9 @@ Eliza supports multiple AI models:
 - **Heurist**: Set `modelProvider: "heurist"` in your character file. Most models are uncensored.
   - LLM: Select available LLMs [here](https://docs.heurist.ai/dev-guide/supported-models#large-language-models-llms) and configure `SMALL_HEURIST_MODEL`,`MEDIUM_HEURIST_MODEL`,`LARGE_HEURIST_MODEL`
   - Image Generation: Select available Stable Diffusion or Flux models [here](https://docs.heurist.ai/dev-guide/supported-models#image-generation-models) and configure `HEURIST_IMAGE_MODEL` (default is FLUX.1-dev)
-- **Llama**: Set `XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo`
-- **Grok**: Set `XAI_MODEL=grok-beta`
-- **OpenAI**: Set `XAI_MODEL=gpt-4o-mini` or `gpt-4o`
+- **Llama**: Set `OLLAMA_MODEL` to your chosen model
+- **Grok**: Set `GROK_API_KEY` to your Grok API key and set `modelProvider: "grok"` in your character file
+- **OpenAI**: Set `OPENAI_API_KEY` to your OpenAI API key and set `modelProvider: "openai"` in your character file
 - **Livepeer**: Set `LIVEPEER_IMAGE_MODEL` to your chosen Livepeer image model, available models [here](https://livepeer-eliza.com/)
 
 You set which model to use inside the character JSON file
@@ -103,8 +103,6 @@ You set which model to use inside the character JSON file
 
 #### For llama_local inference:
 
-1. Set `XAI_MODEL` to your chosen model
-2. Leave `X_SERVER_URL` and `XAI_API_KEY` blank
 3. The system will automatically download the model from Hugging Face
 4. `LOCAL_LLAMA_PROVIDER` can be blank
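The llama_local notes above describe a fallback: when no hosted-provider key is configured, the bot downloads a model from Hugging Face and runs it locally. A hypothetical sketch of that selection rule (the key list and the name-derivation convention are assumptions, not the project's actual logic):

```typescript
// Hosted providers we check for, in priority order (assumed list).
const hostedKeys = ["GROK_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"];

// Pick a model provider: first hosted provider whose key is set wins,
// otherwise fall back to local llama inference.
function pickProvider(env: Record<string, string | undefined>): string {
    const hosted = hostedKeys.find((k) => env[k]);
    if (hosted) {
        // e.g. "GROK_API_KEY" -> "grok"
        return hosted.replace("_API_KEY", "").toLowerCase();
    }
    // No keys set: download the model from Hugging Face and run locally.
    return "llama_local";
}
```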
