With verified inference, you can make your Eliza agent fully verifiable on-chain on Solana through an OpenAI-compatible TEE API. This proves that your agent's thoughts and outputs are free from human control, increasing trust in the agent.

Compared to [fully deploying the agent in a TEE](https://elizaos.github.io/eliza/docs/advanced/eliza-in-tee/), this is a more lightweight solution that verifies only the inference calls and requires just a single line of code change.

The API supports all OpenAI models out of the box, including your fine-tuned models. The following guide walks you through using the verified inference API with Eliza.

## Background
The API is built on top of [Sentience Stack](https://github.com/galadriel-ai/Sentience), which cryptographically verifies an agent's LLM inferences inside TEEs, posts those proofs on-chain on Solana, and makes the verified inference logs available to read and display to users.

Here's how it works:

1. The agent sends a request containing a message with the desired LLM model to the TEE.
2. The TEE securely processes the request by calling the LLM API.
3. The TEE sends back the `{Message, Proof}` to the agent.
4. The TEE submits the attestation with `{Message, Proof}` to Solana.
5. The Proof of Sentience SDK is used to read the attestation from Solana and verify it with `{Message, Proof}`. The proof log can be added to the agent website/app.

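The five steps above start with an ordinary OpenAI-style chat request from the agent to the TEE. The sketch below illustrates the shape of that request (step 1); the endpoint path and response fields here are assumptions for illustration, not the exact Galadriel API surface.

```typescript
// Sketch of step 1: the OpenAI-compatible request an agent sends to the TEE.
// The URL below is an assumed endpoint, not confirmed by this guide.
interface VerifiedInferenceRequest {
  url: string;
  headers: Record<string, string>;
  body: { model: string; messages: { role: string; content: string }[] };
}

function buildVerifiedRequest(
  apiKey: string,
  model: string,
  userMessage: string
): VerifiedInferenceRequest {
  return {
    url: "https://api.galadriel.com/v1/verified/chat/completions", // assumed
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: {
      model,
      messages: [{ role: "user", content: userMessage }],
    },
  };
}
```

The TEE's response then carries the message together with the proof and the `hash`/`tx_hash` identifiers referenced later in this guide.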
To verify the code running inside the TEE, follow the instructions [here](https://github.com/galadriel-ai/sentience/tree/main/verified-inference/verify).

## Tutorial
1. **Create a free API key on the [Galadriel dashboard](https://dashboard.galadriel.com/login)**
2. **Configure the environment variables**

```bash
GALADRIEL_API_KEY=gal-* # Get from https://dashboard.galadriel.com/
# Use any model supported by OpenAI
SMALL_GALADRIEL_MODEL= # Default: gpt-4o-mini
MEDIUM_GALADRIEL_MODEL= # Default: gpt-4o
LARGE_GALADRIEL_MODEL= # Default: gpt-4o
# If you wish to use a fine-tuned model you will need to provide your own OpenAI API key
GALADRIEL_FINE_TUNE_API_KEY= # starting with sk-
```
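To illustrate how the three model-tier variables and their defaults relate, here is a small sketch of the resolution logic. The function and variable names are illustrative, not Eliza's actual provider code.

```typescript
// Illustrative sketch: resolve a Galadriel model name for a tier, falling
// back to the documented defaults when the environment variable is unset.
type ModelTier = "small" | "medium" | "large";

function resolveGaladrielModel(
  tier: ModelTier,
  env: Record<string, string | undefined> = process.env
): string {
  const defaults: Record<ModelTier, string> = {
    small: "gpt-4o-mini",
    medium: "gpt-4o",
    large: "gpt-4o",
  };
  const value = env[`${tier.toUpperCase()}_GALADRIEL_MODEL`];
  return value && value.trim() !== "" ? value : defaults[tier];
}
```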
3. **Configure your character to use `galadriel`**

In your character file, set the `modelProvider` to `galadriel`:

```json
"modelProvider": "galadriel"
```
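For context, a minimal character file using this provider might look like the sketch below; all fields other than `modelProvider` are illustrative placeholders, not required values.

```json
{
  "name": "MyVerifiedAgent",
  "modelProvider": "galadriel",
  "clients": [],
  "bio": ["An agent whose LLM inferences are verified inside a TEE."]
}
```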
4. **Run your agent.**

A reminder of how to run an agent is [here](https://elizaos.github.io/eliza/docs/quickstart/#create-your-first-agent).
Use this to build a verified logs terminal for your agent's front end, for example:

6. **Check your inferences in the explorer.**

You can also see your inferences with proofs in the [Galadriel explorer](https://explorer.galadriel.com/). For specific inference responses, use `https://explorer.galadriel.com/details/<hash>`.

The `hash` param is returned with every inference request.

7. **Check proofs posted on Solana.**

You can also see your inferences with proofs on Solana. For specific inference responses, use `https://explorer.solana.com/tx/<tx_hash>?cluster=devnet`.

The `tx_hash` param is returned with every inference request.
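Putting the last two steps together, the two explorer links can be composed directly from the `hash` and `tx_hash` values returned with each inference request. A minimal sketch (variable and function names are illustrative):

```typescript
// Compose the Galadriel explorer link for a specific inference response.
function galadrielExplorerUrl(hash: string): string {
  return `https://explorer.galadriel.com/details/${hash}`;
}

// Compose the Solana explorer link for the on-chain attestation transaction.
function solanaExplorerUrl(txHash: string, cluster = "devnet"): string {
  return `https://explorer.solana.com/tx/${txHash}?cluster=${cluster}`;
}
```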