Commit 0e56732 — "update sidebars and streamnotes, separating pages for readability" (parent: e4354a6)

7 files changed: +765 −633 lines

docs/docs/community/stream-notes.md: +84 −606 (large diff not rendered)
# X Space 10-25-24

- https://x.com/shawmakesmagic/status/1848553697611301014
- https://www.youtube.com/live/F3IZ3ikacWM?feature=share

**Overview**

- 00:00-30:00 Talk about the Eliza framework. The bot is able to tweet, reply to tweets, search Twitter for topics, and generate new posts on its own every few hours. It works autonomously without human input (except to fix the occasional issue).
- 30:00-45:00 Deep dive into creating the bot's personality, which is defined by character files containing bios, lore, example conversations, and specific directions. Some alpha shared along the way.
- 45:00-60:00 Working on adding capabilities for the bot to make crypto token swaps and trades. This requires providing the bot wallet balances, token prices, market data, and a swap action. Some live coding showing how new features get implemented.
- 60:00-75:00 Discussion of the symbiosis between the AI and crypto communities. AI developers are realizing they can monetize their work through tokens rather than the traditional VC funding route, while crypto people are learning about AI advancements.

**Notes**

1. A large amount of $degenai tokens were moved to the DAO, which the AI bot "Marc" will hold and eventually trade with.
2. The goal is to make the AI bot a genuinely good venture capitalist that funds cool projects and buys interesting tokens. They want it to be high fidelity and real, bringing in Marc Andreessen's real knowledge by training a model on his writings.
3. Shaw thinks the only way to make an authentic / legitimate AI version of Marc Andreessen is to also have it outperform the real Marc Andreessen financially.
4. AI Marc Andreessen (or AI Marc) will be in a Discord channel (Telegram was also mentioned). DAO token holders above a certain threshold get access to interact with him, pitch ideas, and try to influence his investing decisions.
5. AI Marc decides how much to trust people's investment advice based on a "virtual Marcetplace of trust". He tracks how much money he would have made following their recommendations. Successful tips increase trust; failed ones decrease it.
6. The amount of DAO tokens someone holds also influences their sway with AI Marc. The two balancing factors are virtual Marcetplace of trust performance and DAO token holdings.
7. The core tech behind AI Marc AIndreessen is the same agent system that allows him to pull in relevant knowledge, interact with people, and make decisions (http://github.com/ai16z).
8. AI Marc should be able to autonomously execute on-chain activities, not just have humans execute actions on his behalf.
9. In the near future, AI Marc will be able to execute trades autonomously based on the information and recommendations gathered from the community. Human intervention will be minimized.
10. They are working on getting AI Marc on-chain as soon as possible, using trusted execution environments so he can take actions like approving trades himself.
11. The plan is for AI Marc to eventually participate in a "futarchy" style governance market within the DAO, allowing humans to influence decisions but not fully control the AI.
# X Space 10-27-24

Space: https://x.com/shawmakesmagic/status/1850609680558805422

00:00:00 - Opening

- Co-hosts: Shaw and Jin
- Purpose: Structured FAQ session about ai16z and DegenAI
- Format: Pre-collected questions followed by audience Q&A

00:06:40 - ai16z vs DegenAI Relationship
Q: What's the difference between ai16z and DegenAI?
A:

- ai16z: DAO-based investment vehicle, more PvE focused, community driven
- DegenAI: Individual trading agent, PvP focused, more aggressive strategy
- Both use the same codebase but different personalities
- The DAO is a large holder of DegenAI
- Management fees (1%) are used to buy more DegenAI
- Carry fees are reinvested in DegenAI
- The projects are intentionally interlinked but serve different purposes

00:10:45 - Trust Engine Mechanics
Q: How does the trust engine work?
A:

- Users share contract addresses with confidence levels
- The system tracks recommendation performance
- Low-conviction recommendations = low penalty if wrong
- High-conviction failures severely impact trust score
- Historical performance is tracked for trust calculation
- Trust scores influence the agent's future decision-making
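The conviction-weighted update described above can be sketched as follows. This is a minimal illustration: `updateTrust`, its inputs, and the weighting formula are assumptions modeled on the notes, not the actual trust-engine code.

```typescript
// Minimal sketch of a conviction-weighted trust update.
// All names and formulas here are illustrative assumptions,
// not the actual ai16z trust-engine implementation.

interface Recommendation {
  conviction: number; // 0..1, how confident the recommender was
  succeeded: boolean; // did the call work out?
}

// Nudge a trust score (0..1) up or down, scaled by conviction:
// a failed high-conviction call costs far more than a failed
// low-conviction one, matching the notes above.
function updateTrust(trust: number, rec: Recommendation, rate = 0.1): number {
  const delta = rate * rec.conviction * (rec.succeeded ? 1 : -1);
  return Math.min(1, Math.max(0, trust + delta));
}

let trust = 0.5;
trust = updateTrust(trust, { conviction: 0.9, succeeded: true });  // rises
trust = updateTrust(trust, { conviction: 0.9, succeeded: false }); // falls hard
trust = updateTrust(trust, { conviction: 0.1, succeeded: false }); // falls a little
```

Tracking historical performance then reduces to replaying each user's recommendations through an update like this one.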
00:21:45 - Technical Infrastructure
Q: Where do the agents live?
A:

- Currently: Test servers and local development
- Future: Trusted Execution Environment (TEE)
- Partnership with TreasureDAO for infrastructure
- Goal: Fully autonomous agents without developer control
- Private keys generated within the TEE for security

00:34:20 - Trading Implementation
Q: When will Marc start trading?
A:

- Three-phase approach:

1. Testing tech infrastructure
2. Virtual order book / paper trading
3. Live trading with real assets

- Using the Jupiter API for swaps
- Initial focus on basic trades before complex strategies
- Trading decisions based on community trust scores

00:54:15 - Development Status
Q: Who's building this?
A:

- Open-source project with multiple contributors
- Key maintainers: Circuitry, Nate Martin
- Community developers incentivized through token ownership
- Focus on reusable components and documentation

01:08:35 - AI Model Architecture
Q: What models power the agents?
A:

- DegenAI: Llama 70B
- Using Together.xyz for the model marketplace
- Continuous fine-tuning planned
- Different personalities require different model approaches
- Avoiding GPT-4 due to its distinct "voice"

01:21:35 - Ethics Framework
Q: What ethical guidelines are being followed?
A:

- Rejecting traditional corporate AI ethics frameworks
- Focus on community-driven standards
- Emphasis on transparency and open source
- Goal: Multiple competing approaches rather than a single standard
- Priority on practical utility over theoretical ethics

01:28:30 - Wrap-up

- Discord: AI16z.vc
- Future spaces planned with the DAOs.fun team
- Focus on responsible growth
- Community engagement continuing in Discord

The space emphasized technical implementation details while addressing community concerns about governance, ethics, and practical functionality.
# X Space 10-29-24

Space: https://x.com/weremeow/status/1851365658916708616

- 00:04:03 - Keeping up with rapid AI agent growth
- 00:09:01 - Imran from Alliance DAO on consumer AI incubators
- 00:14:04 - Discussion of Goatse and the Opus AI system
- 00:14:34 - Exponential growth accelerates AI progress
- 00:17:10 - Entertainers and AI as modern "religions"
- 00:28:45 - Mathis on Opus and the "Goatse Gospels"
- 00:35:11 - Base vs. instruct/chat-tuned models
- 00:59:42 - http://ai16z.vc approach to a memecoin fund
- 01:17:06 - Balancing chaotic vs. orderly AI systems
- 01:25:38 - AI controlling blockchain keys/wallets
- 01:36:10 - Creation story of ai16z
- 01:40:27 - AI / crypto tipping points
- 01:49:54 - Preserving Opus on-chain before a potential takedown
- 01:58:46 - Shinkai Protocol's decentralized AI wallet
- 02:17:02 - Fee-sharing model to sustain DAOs
- 02:21:18 - DAO token liquidity pools as passive income
- 02:27:02 - AI bots for DAO treasury oversight
- 02:31:30 - AI-facilitated financial freedom for higher pursuits
- 02:41:51 - Call to build on http://DAO.fun for team-friendly economics
---
sidebar_position: 3
title: "Discord Development Stream"
description: "Complete technical walkthrough of Eliza's architecture, systems, and implementation details"
---
# Discord Dev Stream 11-6-24

## Part 1

Watch: https://www.youtube.com/watch?v=oqq5H0HRF_A

00:00:00 - Overview

- Eliza is moving to a plugin architecture so developers can easily add integrations (e.g. Ethereum wallets, NFTs, Obsidian) without modifying core code
- Plugins allow devs to focus on specific areas of interest
- Core changes will focus on enabling more flexibility and features to support plugins

00:01:27 - Core abstractions

- Characters: A way to input information to enable multi-agent systems
- Actions, evaluators, providers
- Existing capabilities: Document reading, audio transcription, video summarization, long-form context, timed message summarization

00:02:50 - Eliza as an agent, not just a chatbot

- Designed to act human-like and interact with the world using human tools
- The aim is to enable natural interactions without reliance on slash commands

00:04:44 - Advanced usage and services

- Memory and vector search DB (SQLite, Postgres with pgvector)
- Browser service to summarize website content and get through CAPTCHAs
- Services are tools leveraged by actions, attached to the runtime

00:06:06 - Character-centric configuration

- Moving secrets, API keys, and model provider into the character config
- Clients will become plugins, selectable per character
- Allows closed-source custom plugins while still contributing to open source

00:10:13 - Providers

- Inject dynamic, real-time context into the agent
- Examples: Time, wallet, marketplace trust score, token balances, boredom/cringe detection
- Easy to add and register with the agent
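A provider in this style can be sketched as below. The `Provider` shape is an assumption based on the description above (inject a string of real-time context), not copied from Eliza's actual type definitions.

```typescript
// Minimal sketch of a "provider" that injects real-time context.
// The Provider interface here is an assumption modeled on the
// description above, not Eliza's actual types.

interface Provider {
  // Returns a string that gets composed into the agent's context.
  get(): Promise<string>;
}

// Example: a time provider injecting the current timestamp.
const timeProvider: Provider = {
  async get() {
    return `The current time is ${new Date().toISOString()}.`;
  },
};

// Registering providers then amounts to composing their output
// into the prompt context before each generation.
async function composeProviderContext(providers: Provider[]): Promise<string> {
  const parts = await Promise.all(providers.map((p) => p.get()));
  return parts.join("\n");
}
```

A wallet or trust-score provider would follow the same shape, fetching balances or scores inside `get()`.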
00:15:12 - Setting up providers and default actions

- Default providers are imported in runtime.ts
- The CLI loads characters and default actions (to be made more flexible)
- The character config will define custom action names to load

00:18:13 - Actions
Q: How does each client decide which action to call?
A: The agent response can include text, an action, or both. Process actions checks the action name/similes and executes the corresponding handler. The action description is injected into the agent context to guide usage.

00:22:27 - Action execution flow

- Check if the action should be taken (validation)
- Determine the action outcome
- Compose context and send a follow-up if continuing
- Execute the desired functionality (mint token, generate image, etc.)
- Use a callback to send messages back to the connector (Discord, Twitter, etc.)

00:24:47 - Choosing actions
Q: How does it choose which action to run?
A: The "generate method response" includes the action to run. The message handler template includes action examples, facts, generated dialogue actions, and more to guide the agent.

00:28:22 - Custom actions
Q: How do you create a custom action (e.g. send USDC to a wallet)?
A: Use existing actions (like token swap) as a template. Actions don't have input fields; instead they use secondary prompts to gather parameters. The "generate object" step converts language to API calls.
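The validate/handler split described above can be sketched like this. The `Action` shape, field names, and the stubbed parameter extraction are illustrative assumptions, not Eliza's actual API.

```typescript
// Sketch of a custom action with the validate/handler split
// described above. The Action shape and helpers are illustrative
// assumptions, not Eliza's actual API.

interface Message { text: string; roomId: string }

interface Action {
  name: string;
  similes: string[];      // alternative names the model may emit
  description: string;    // injected into the agent context
  validate(msg: Message): Promise<boolean>;
  handler(msg: Message, callback: (reply: string) => void): Promise<void>;
}

const sendUsdcAction: Action = {
  name: "SEND_USDC",
  similes: ["TRANSFER_USDC", "PAY_USDC"],
  description: "Send USDC to a wallet address mentioned in the conversation.",
  async validate(msg) {
    // Only run when the message looks like a transfer request
    // (a real check might also verify the agent has a funded wallet).
    return /\b(send|transfer|pay)\b/i.test(msg.text);
  },
  async handler(msg, callback) {
    // A secondary prompt ("generate object") would extract the
    // amount and destination address here; this stub fakes it.
    const params = { amount: 10, to: "<recipient address>" };
    callback(`Sending ${params.amount} USDC to ${params.to}`);
  },
};
```

The callback is what carries the result back to the connector (Discord, Twitter, etc.) rather than returning it directly.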
00:32:21 - Limitations of action-only approaches

- Shaw believes half of the PhD papers on action-only models are not reproducible
- Many public claims of superior models are exaggerated; use Eliza if it's better

00:36:40 - Next steps

- Shaw to make a tutorial to better communicate key concepts
- Debugging and improvements based on the discussion
- Attendee to document their experience and suggest doc enhancements
## Part 2

Watch: https://www.youtube.com/watch?v=yE8Mzq3BnUc

00:00:00 - Dealing with OpenAI rate limits for new accounts

- New accounts have very low rate limits
- Options to increase limits:
  1. Have a friend at OpenAI age your account
  2. Use an older account
  3. Consistently use the API, and limits will increase quickly
- You can also email OpenAI to request limit increases

00:00:43 - Alternatives to OpenAI to avoid rate limits

- Amazon Bedrock and Google Vertex likely have the same models without strict rate limits
- Switching to these is probably a one-line change
- Project 89 got unlimited free access to Vertex

00:01:25 - Memory management best practices
Q: Any suggestions for memory management best practices across users/rooms?
A: Most memory systems are user-agent based, with no room concept. Eliza uses a room abstraction (like a Discord channel/server or Twitter thread) to enable multi-agent simulation. Memories are stored per-agent to avoid collisions.

00:02:57 - Using memories in Eliza

- Memories are used in the `composeState` function
- It pulls memories from various sources (recent messages, facts, goals, etc.) into a large state object
- The state object is used to hydrate templates
- Custom memory providers can be added to pull from other sources (Obsidian, databases)
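The state-composition step above can be sketched as a simplified stand-in for `composeState`. The `MemorySource` interface and field names are assumptions for illustration; Eliza's real state object is richer.

```typescript
// Simplified stand-in for the composeState step described above.
// Source and field names are illustrative assumptions, not the
// actual Eliza state object.

interface MemorySource {
  name: string;
  fetch(roomId: string): Promise<string[]>;
}

// Pull from each registered source into one large state object,
// keyed by source name, ready to hydrate prompt templates.
async function composeState(
  roomId: string,
  sources: MemorySource[],
): Promise<Record<string, string[]>> {
  const state: Record<string, string[]> = {};
  for (const source of sources) {
    state[source.name] = await source.fetch(roomId);
  }
  return state;
}

// Example sources: recent messages and extracted facts.
const recentMessages: MemorySource = {
  name: "recentMessages",
  async fetch() { return ["user: gm", "agent: gm ser"]; },
};
const facts: MemorySource = {
  name: "facts",
  async fetch() { return ["The user is building a Discord bot."]; },
};
```

A custom memory provider (Obsidian, a database) would just be another `MemorySource` in this picture.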
00:05:11 - Evaluators vs. action validation

- Actions have a `validate` function to check if the action is valid to run (e.g., check if the agent has a wallet before a swap)
- Evaluators are a separate abstraction that runs a "reflection" step
- Example: A fact-extraction evaluator runs every N messages to store facts about the user as memories
- This allows the agent to "get to know" the user without needing the full conversation history
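The every-N-messages reflection step can be sketched as follows. The `Evaluator` shape and the trivial fact heuristic are assumptions for illustration; a real evaluator would prompt the model to extract facts.

```typescript
// Sketch of an evaluator that runs a reflection step every N
// messages, as described above. Names are illustrative assumptions,
// not Eliza's actual evaluator interface.

interface Evaluator {
  // Decide whether to run on this turn.
  shouldRun(messageCount: number): boolean;
  // The reflection step; returns extracted facts to store as memories.
  run(recentMessages: string[]): Promise<string[]>;
}

function makeFactEvaluator(everyN: number): Evaluator {
  return {
    shouldRun: (count) => count > 0 && count % everyN === 0,
    async run(recentMessages) {
      // A real evaluator would call the model here; this stub just
      // treats "I am / I'm ..." statements as facts about the user.
      return recentMessages.filter((m) => /\bI('m| am)\b/i.test(m));
    },
  };
}
```

The extracted facts would then be stored as memories and surfaced later through state composition, which is how the agent "gets to know" a user beyond the conversation window.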
00:07:58 - Example use case: order book evaluator

- The evaluator looks at chats sent to an agent and extracts information about "shields" (tokens?)
- It uses this to build an order book and a "marketplace of trust"

00:09:15 - Mapping Eliza abstractions to the OODA loop

- Providers: Observe/Orient stages (merged, since the agent is a data machine)
- Actions & response handling: Decide stage
- Action execution: Act stage
- Evaluators: Update state, then loop back to Decide

00:10:03 - Wrap-up

- Shaw is considering making a video to explain these concepts in depth

## Part 3

Watch: https://www.youtube.com/watch?v=7FiKJPyaMJI

00:00:00 - Managing large context sizes

- The state object can get very large, especially with long user posts
- Eliza uses "trim tokens" and a maximum content length (120k tokens) to cap context size
- New models have 128k-200k context, which is a lot (equivalent to 10 YouTube videos plus a full conversation)
- Conversation length is typically capped at 32 messages
- Fact extraction allows recalling information beyond this window
- Per-channel conversation access
- Increasing conversation length risks more aggressive token trimming from the top of the prompt
- Keep instructions at the bottom to avoid trimming them
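The trim-from-the-top behavior above can be sketched like this. The word-count "tokenizer" and function names are crude stand-ins for illustration, not Eliza's actual trimming code.

```typescript
// Sketch of trimming context from the top so instructions at the
// bottom survive, as described above. countTokens is a crude
// word-count stand-in, not a real tokenizer.

function countTokens(text: string): number {
  return text.split(/\s+/).filter(Boolean).length;
}

// Drop whole lines from the top until the context fits the cap.
function trimFromTop(context: string, maxTokens: number): string {
  const lines = context.split("\n");
  while (lines.length > 1 && countTokens(lines.join("\n")) > maxTokens) {
    lines.shift(); // oldest content goes first; instructions stay last
  }
  return lines.join("\n");
}
```

This is why instructions belong at the bottom of the prompt: with top-down trimming, the last lines are the last to go.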
00:01:53 - Billing costs for cloud/GPT models
Q: What billing costs have you experienced with cloud/GPT model integration?
A:

- OpenRouter has a few always-free models, limited to 8k context and rate-limited
- Plan to re-implement and use these for the tiny/check model, with a fallback for rate limiting
- An 8k context is unlikely to make a good agent; preference for a smaller model over the largest 8k one
- Locally-run models are free on MacBooks with 16GB RAM, but not feasible for Linux/AMD users

00:03:35 - Cost management strategies

- Very cost-scalable depending on model size
- Use a very cheap model (1000x cheaper than GPT-4) for the should_respond handler
- The agent runs AI on every message, so cost is a consideration
- Consider running a local Llama 3B model for should_respond to minimize costs
- Only pay for valid generations

00:04:32 - Model provider and class configuration

- `ModelProvider` class with `ModelClass` (small, medium, large, embedding)
- Configured in `models.ts`
- Example: OpenAI small = GPT-4-mini, medium = GPT-4
- Approach: Check if the model class can handle everything in less than 8k context
- If yes (should_respond), default to the free tier
- Else, use big models
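The provider/class mapping described above can be sketched as a `models.ts`-style table. The provider names and model IDs are illustrative assumptions, not Eliza's actual configuration.

```typescript
// Sketch of a models.ts-style provider/class mapping, based on the
// description above. Provider names and model IDs are illustrative
// assumptions, not Eliza's actual configuration.

enum ModelClass {
  SMALL = "small",        // cheap tier, e.g. for should_respond
  MEDIUM = "medium",
  LARGE = "large",
  EMBEDDING = "embedding",
}

type ModelMap = Record<string, Partial<Record<ModelClass, string>>>;

const models: ModelMap = {
  openai: {
    [ModelClass.SMALL]: "gpt-4-mini",
    [ModelClass.MEDIUM]: "gpt-4",
  },
  llama_local: {
    [ModelClass.SMALL]: "llama-3b", // free local option
  },
};

function getModel(provider: string, cls: ModelClass): string | undefined {
  return models[provider]?.[cls];
}
```

Routing by `ModelClass` rather than by concrete model name is what makes the "cheap model for should_respond, big model otherwise" strategy a one-line decision at each call site.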
00:06:23 - Fine-tuned model support

- Extend `ModelProvider` to support fine-tuned instances of small Llama models for specific tasks
- In progress, to be added soon
- A model endpoint override exists; a per-model provider override will be added
- This allows pointing the small model to a fine-tuned Llama 3.1B for should_respond

00:07:10 - Avoiding cringey model loops

- Fine-tuning is a form of anti-slop (avoiding low-quality responses)
- For detecting cringey model responses, use the "boredom provider"
- It has a list of cringe words; if detected, the agent disengages
- A JSON file exists with words disproportionately high in the dataset
- To be shared for a more comprehensive solution
## Part 4

Watch: https://www.youtube.com/watch?v=ZlzZzDU1drM

00:00:00 - Setting up an autonomous agent loop
Q: How do you set up an agent to constantly loop and explore based on objectives/goals?
A: Create a new "autonomous" client:

1. Initialize with just the runtime (no Express app needed)
2. Set a timer to call a `step` function every 10 seconds
3. In the `step` function:
   - Compose state
   - Decide on an action
   - Execute the action
   - Update state
   - Run evaluators

00:01:56 - Creating an auto template

- Create an `autoTemplate` with agent info (bio, lore, goals, actions)
- Prompt: "What does the agent want to do? Your response should only be the name of the action to call."
- Compose state using `runtime.composeState`

00:03:38 - Passing a message object

- You need to pass a message object with `userId`, `agentId`, `content`, and `roomId`
- Create a unique `roomId` for the autonomous agent using `crypto.randomUUID()`
- Set `userId` and `agentId` using the runtime
- Set `content` to a default message

00:04:33 - Composing context

- Compose context using the runtime, state, and auto template

00:05:02 - Type error

- Getting a type error: "is missing the following from type state"
- (The transcript ends before the error is resolved)

The key steps are:

1. Create a dedicated autonomous client
2. Set up a loop to continuously step through the runtime
3. In each step, compose state, decide & execute actions, update state, and run evaluators
4. Create a custom auto template to guide the agent's decisions
5. Pass a properly formatted message object
6. Compose context using the runtime, state, and auto template
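The steps above can be sketched as a small autonomous client. The `Runtime` interface here is an illustrative stand-in for Eliza's actual `AgentRuntime` (its method names are assumptions); only `crypto.randomUUID()` is a real API.

```typescript
// Sketch of the autonomous client described above. The Runtime
// interface is an illustrative assumption standing in for Eliza's
// AgentRuntime; only crypto.randomUUID() is a real API.
import { randomUUID } from "node:crypto";

interface Message {
  userId: string;
  agentId: string;
  content: { text: string };
  roomId: string;
}

interface Runtime {
  agentId: string;
  composeState(msg: Message): Promise<Record<string, unknown>>;
  decideAction(state: Record<string, unknown>): Promise<string>;
  executeAction(action: string, msg: Message): Promise<void>;
  runEvaluators(msg: Message): Promise<void>;
}

class AutonomousClient {
  private timer?: ReturnType<typeof setInterval>;
  private roomId = randomUUID(); // dedicated room for the autonomous loop

  constructor(private runtime: Runtime) {} // just the runtime, no Express app

  start(intervalMs = 10_000): void {
    // Step every 10 seconds by default, per the notes above.
    this.timer = setInterval(() => void this.step(), intervalMs);
  }

  stop(): void {
    if (this.timer) clearInterval(this.timer);
  }

  buildMessage(): Message {
    return {
      userId: this.runtime.agentId, // the agent prompts itself
      agentId: this.runtime.agentId,
      content: { text: "What do you want to do next?" }, // default message
      roomId: this.roomId,
    };
  }

  async step(): Promise<void> {
    const msg = this.buildMessage();
    const state = await this.runtime.composeState(msg);    // observe/orient
    const action = await this.runtime.decideAction(state); // decide
    await this.runtime.executeAction(action, msg);         // act
    await this.runtime.runEvaluators(msg);                 // reflect/update
  }
}
```

Note how `step` mirrors the OODA mapping from Part 2: providers feed `composeState`, the decision and execution stages follow, and evaluators close the loop.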
