
Commit d6e84eb

Merge branch 'main' of https://github.com/ai16z/eliza

2 parents: 4eba093 + 39ce28e
File tree: 5 files changed, +194 −17 lines

README.md (+17 −12)
@@ -2,18 +2,23 @@

 <img src="./docs/static/img/eliza_banner.jpg" alt="Eliza Banner" width="100%" />

-_As seen powering [@DegenSpartanAI](https://x.com/degenspartanai) and [@MarcAIndreessen](https://x.com/pmairca)_
-
-- Multi-agent simulation framework
-- Add as many unique characters as you want with [characterfile](https://github.com/lalalune/characterfile/)
-- Full-featured Discord and Twitter connectors, with Discord voice channel support
-- Full conversational and document RAG memory
-- Can read links and PDFs, transcribe audio and videos, summarize conversations, and more
-- Highly extensible - create your own actions and clients to extend Eliza's capabilities
-- Supports open source and local models (default configured with Nous Hermes Llama 3.1B)
-- Supports OpenAI for cloud inference on a light-weight device
-- "Ask Claude" mode for calling Claude on more complex queries
-- 100% Typescript
+### [For Chinese Version: 中文说明](./README_CN.md)
+
+## Features
+
+- 🛠 Full-featured Discord, Twitter and Telegram connectors
+- 👥 Multi-agent and room support
+- 📚 Easily ingest and interact with your documents
+- 💾 Retrievable memory and document store
+- 🚀 Highly extensible - create your own actions and clients to extend capabilities
+- ☁️ Supports many models, including local Llama, OpenAI, Anthropic, Groq, and more
+- 📦 Just works!
+
+## What can I use it for?
+- 🤖 Chatbots
+- 🕵️ Autonomous Agents
+- 📈 Business process handling
+- 🎮 Video game NPCs

 # Getting Started

README_CN.md (new file, +169)
# Eliza

<img src="./docs/static/img/eliza_banner.jpg" alt="Eliza Banner" width="100%" />

## Features

- 🛠 Discord, Twitter and Telegram connectors
- 👥 Multi-agent support
- 📚 Easily import and interact with your documents
- 💾 Retrievable memory and document store
- 🚀 Highly extensible - define your own clients and actions to extend functionality
- ☁️ Supports many models, including Llama, OpenAI, Grok, Anthropic, and more
- 📦 Simple and easy to use

What can you do with Eliza?
- 🤖 Chatbots
- 🕵️ Autonomous agents
- 📈 Business process automation
- 🎮 Video game NPCs

# Getting Started

**Prerequisites (required):**

- [Node.js 22+](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
  - Node.js installation
- [pnpm](https://pnpm.io/installation)
  - Using pnpm

### Edit the .env file

- Copy .env.example to .env and fill in the appropriate values
- Edit the Twitter environment variables and enter your Twitter account and password

### Edit the character file

- Check out `src/core/defaultCharacter.ts` - you can modify it (see the sketch below)
- You can also load characters with `node --loader ts-node/esm src/index.ts --characters="path/to/your/character.json"` and run multiple bots at the same time.
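For illustration, a modified `src/core/defaultCharacter.ts` could carry per-character secrets like the sketch below. Only the `settings.secrets` keys are taken from this commit (see `core/src/cli/index.ts` further down); the other field names are assumptions.

```typescript
// Hypothetical character definition. Field names other than settings.secrets are
// assumptions; the canonical shape is whatever src/core/defaultCharacter.ts exports.
const exampleCharacter = {
    name: "ExampleAgent", // assumed field name
    settings: {
        secrets: {
            // Per-character key; getTokenForProvider falls back to the global
            // ANTHROPIC_API_KEY / CLAUDE_API_KEY settings when these are absent.
            ANTHROPIC_API_KEY: "<your-anthropic-key>",
        },
    },
};

export default exampleCharacter;
```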
After configuring the account and character file, start your bot with the following commands:
```
pnpm i
pnpm start
```

# Customizing Eliza

### Adding custom actions

To avoid Git conflicts in the core directory, we recommend adding custom actions to the custom_actions directory and configuring them in the elizaConfig.yaml file. See the examples in the elizaConfig.example.yaml file.
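As a rough sketch, a module dropped into custom_actions might export an action object along the lines below. The exact Action interface lives in core and is not shown in this commit, so every field name here is an assumption; elizaConfig.example.yaml remains the authoritative reference for wiring it up.

```typescript
// custom_actions/helloWorld.ts - illustrative only; field names are assumptions,
// not taken from this commit. Check the Action type in core for the real shape.
const helloWorldAction = {
    name: "HELLO_WORLD",
    description: "Replies with a friendly greeting.",
    // Decide whether this action applies to an incoming message.
    validate: async () => true,
    // Produce the action's response when it is selected.
    handler: async () => ({ text: "Hello from a custom action!" }),
};

export default helloWorldAction;
```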
## Configuring different large language models

### Configure Llama

You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`.

### Configure OpenAI

You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4o-mini` or `gpt-4o`.
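For example, a single line in your .env (using one of the model names above) is enough to switch models:

```
XAI_MODEL=gpt-4o-mini
```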
## Additional requirements

You may need to install Sharp. If you see an error on startup, try installing it with the following command:

```
pnpm install --include=optional sharp
```
# Environment setup

You will need to add environment variables to your .env file to connect to the various platforms:

```
# Required environment variables
DISCORD_APPLICATION_ID=
DISCORD_API_TOKEN= # Bot token
OPENAI_API_KEY=sk-* # OpenAI API key, starting with sk-
ELEVENLABS_XI_API_KEY= # API key from elevenlabs

# ELEVENLABS SETTINGS
ELEVENLABS_MODEL_ID=eleven_multilingual_v2
ELEVENLABS_VOICE_ID=21m00Tcm4TlvDq8ikWAM
ELEVENLABS_VOICE_STABILITY=0.5
ELEVENLABS_VOICE_SIMILARITY_BOOST=0.9
ELEVENLABS_VOICE_STYLE=0.66
ELEVENLABS_VOICE_USE_SPEAKER_BOOST=false
ELEVENLABS_OPTIMIZE_STREAMING_LATENCY=4
ELEVENLABS_OUTPUT_FORMAT=pcm_16000

TWITTER_DRY_RUN=false
TWITTER_USERNAME= # Account username
TWITTER_PASSWORD= # Account password
TWITTER_EMAIL= # Account email
TWITTER_COOKIES= # Account cookies

X_SERVER_URL=
XAI_API_KEY=
XAI_MODEL=

# For asking Claude stuff
ANTHROPIC_API_KEY=

WALLET_PRIVATE_KEY=EXAMPLE_WALLET_PRIVATE_KEY
WALLET_PUBLIC_KEY=EXAMPLE_WALLET_PUBLIC_KEY

BIRDEYE_API_KEY=

SOL_ADDRESS=So11111111111111111111111111111111111111112
SLIPPAGE=1
RPC_URL=https://api.mainnet-beta.solana.com
HELIUS_API_KEY=

## Telegram
TELEGRAM_BOT_TOKEN=

TOGETHER_API_KEY=
```
# Local setup

### CUDA setup

If you have a high-performance NVIDIA GPU, you can run the following commands to enable local acceleration with CUDA:

```
pnpm install
npx --no node-llama-cpp source download --gpu cuda
```

Make sure you have installed the full CUDA Toolkit, including cuDNN and cuBLAS.

### Run locally

Add XAI_MODEL and set it to one of the options from [Run with Llama](#run-with-llama) above.
You can leave X_SERVER_URL and XAI_API_KEY blank; the model will be downloaded from huggingface and queried locally.
# Clients

For how to set up a Discord bot, see Discord's official documentation.

# Development

## Testing

Commands for several testing methods:

```bash
pnpm test # Run tests once
pnpm test:watch # Run tests in watch mode
```

For database-specific tests:
```bash
pnpm test:sqlite # Run tests with SQLite
pnpm test:sqljs # Run tests with SQL.js
```

Tests are written with Jest and located in src/**/*.test.ts files. The test environment is configured to:
- Load environment variables from .env.test
- Use a 2-minute timeout for long-running tests
- Support ESM modules
- Run tests sequentially (--runInBand)

To create a new test, add a .test.ts file next to the code you want to test.
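A minimal sketch of such a test file, using a hypothetical `add` helper purely for illustration:

```typescript
// src/utils/add.test.ts - illustrative only; "add" and its path are hypothetical.
import { add } from "./add";

describe("add", () => {
    it("sums two numbers", () => {
        expect(add(2, 3)).toBe(5);
    });
});
```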

core/src/cli/index.ts (+2)
@@ -110,7 +110,9 @@ export function getTokenForProvider(
             );
         case ModelProvider.ANTHROPIC:
             return (
+                character.settings?.secrets?.ANTHROPIC_API_KEY ||
                 character.settings?.secrets?.CLAUDE_API_KEY ||
+                settings.ANTHROPIC_API_KEY ||
                 settings.CLAUDE_API_KEY
             );
         case ModelProvider.REDPILL:
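With this change, the Anthropic token is resolved from the character's secrets first and the global settings second, accepting ANTHROPIC_API_KEY ahead of the existing CLAUDE_API_KEY in each case. A condensed, self-contained sketch of that lookup order (types simplified for illustration):

```typescript
// Condensed sketch of the post-change lookup order; types are simplified.
type Secrets = { ANTHROPIC_API_KEY?: string; CLAUDE_API_KEY?: string };

function resolveAnthropicToken(
    characterSecrets: Secrets | undefined,
    globalSettings: Secrets
): string | undefined {
    return (
        characterSecrets?.ANTHROPIC_API_KEY || // per-character secret, new key name
        characterSecrets?.CLAUDE_API_KEY ||    // per-character secret, existing key name
        globalSettings.ANTHROPIC_API_KEY ||    // global setting, new key name
        globalSettings.CLAUDE_API_KEY          // global setting, existing key name
    );
}
```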

core/src/core/generation.ts (+2 −1)
@@ -113,7 +113,8 @@ export async function generateText({

         case ModelProvider.GROK: {
             prettyConsole.log("Initializing Grok model.");
-            const grok = createGroq({ apiKey });
+            const serverUrl = models[provider].endpoint;
+            const grok = createOpenAI({ apiKey, baseURL: serverUrl });

             const { text: grokResponse } = await aiGenerateText({
                 model: grok.languageModel(model, {
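Taken together with the models.ts change below, the Grok provider is now driven through an OpenAI-compatible client pointed at x.ai's endpoint, rather than the createGroq helper it used before. A standalone sketch of the same setup, assuming `@ai-sdk/openai` and `ai` are the packages behind `createOpenAI` and `aiGenerateText` (not confirmed by this diff):

```typescript
import { createOpenAI } from "@ai-sdk/openai"; // assumed source of createOpenAI
import { generateText } from "ai";             // assumed source of aiGenerateText

// Mirrors the post-change Grok path: an OpenAI-compatible client pointed at
// x.ai, using the renamed "grok-beta" model id from models.ts.
const grok = createOpenAI({
    apiKey: process.env.XAI_API_KEY,
    baseURL: "https://api.x.ai/v1", // models[ModelProvider.GROK].endpoint
});

const { text } = await generateText({
    model: grok.languageModel("grok-beta"),
    prompt: "Say hello from Grok.",
});

console.log(text);
```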

core/src/core/models.ts (+4 −4)
@@ -74,10 +74,10 @@ const models: Models = {
         },
         endpoint: "https://api.x.ai/v1",
         model: {
-            [ModelClass.SMALL]: "grok-2-beta",
-            [ModelClass.MEDIUM]: "grok-2-beta",
-            [ModelClass.LARGE]: "grok-2-beta",
-            [ModelClass.EMBEDDING]: "grok-2-beta", // not sure about this one
+            [ModelClass.SMALL]: "grok-beta",
+            [ModelClass.MEDIUM]: "grok-beta",
+            [ModelClass.LARGE]: "grok-beta",
+            [ModelClass.EMBEDDING]: "grok-beta", // not sure about this one
         },
     },
     [ModelProvider.GROQ]: {
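This is the table generateText consults above: every model class for the Grok provider now resolves to the same "grok-beta" id, alongside the x.ai endpoint. A small sketch of that lookup (the import paths and the `models` export are assumptions):

```typescript
import { models } from "./models";                   // assumed relative path and export
import { ModelProvider, ModelClass } from "./types"; // assumed location of the enums

// After this change, every model class for the Grok provider maps to "grok-beta".
const modelId = models[ModelProvider.GROK].model[ModelClass.LARGE]; // "grok-beta"
const endpoint = models[ModelProvider.GROK].endpoint;               // "https://api.x.ai/v1"

console.log(modelId, endpoint);
```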

0 commit comments
