improvement: using strict types to avoid errors like issue 2164 #2220

Merged 3 commits on Jan 16, 2025
packages/core/src/generation.ts (18 changes: 10 additions & 8 deletions)
@@ -451,20 +451,23 @@ export async function generateText({
             const openai = createOpenAI({
                 apiKey,
                 baseURL: endpoint,
-                fetch: async (url: string, options: any) => {
+                fetch: async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
+                    const url = typeof input === 'string' ? input : input.toString();
                     const chain_id =
                         runtime.getSetting("ETERNALAI_CHAIN_ID") || "45762";
+
+                    const options: RequestInit = { ...init };
                     if (options?.body) {
-                        const body = JSON.parse(options.body);
+                        const body = JSON.parse(options.body as string);
                         body.chain_id = chain_id;
                         options.body = JSON.stringify(body);
                     }
+
                     const fetching = await runtime.fetch(url, options);
-                    if (
-                        parseBooleanFromText(
-                            runtime.getSetting("ETERNALAI_LOG")
-                        )
-                    ) {
+
+                    if (parseBooleanFromText(
+                        runtime.getSetting("ETERNALAI_LOG")
+                    )) {
                         elizaLogger.info(
                             "Request data: ",
                             JSON.stringify(options, null, 2)
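
The heart of the change is the fetch signature: the standard Fetch contract is (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>, so typing the wrapper that way lets the compiler check it against the same contract the SDK invokes it with, instead of letting `any` hide mismatches (the class of bug behind issue 2164). Below is a minimal, self-contained sketch of the pattern, not code from the repo: makeChainIdFetch is a hypothetical name, and its plain string/boolean arguments stand in for the runtime settings (ETERNALAI_CHAIN_ID, ETERNALAI_LOG) read in the diff.

    // Sketch only: illustrates the typed-fetch pattern from this diff.
    // makeChainIdFetch is a hypothetical name; the real code builds the
    // wrapper inline and reads its settings from the agent runtime.
    type CustomFetch = (
        input: RequestInfo | URL,
        init?: RequestInit
    ) => Promise<Response>;

    function makeChainIdFetch(chainId: string, log = false): CustomFetch {
        return async (input, init) => {
            // Normalize the Fetch union to a URL string, as the diff does.
            // (A Request input would stringify as "[object Request]"; the
            // diff assumes string or URL inputs.)
            const url = typeof input === "string" ? input : input.toString();

            // Copy init so the caller's RequestInit is never mutated.
            const options: RequestInit = { ...init };
            if (options.body) {
                // BodyInit is a broad union; the explicit `as string` makes
                // the JSON assumption visible instead of hidden behind `any`.
                const body = JSON.parse(options.body as string);
                body.chain_id = chainId;
                options.body = JSON.stringify(body);
            }

            if (log) {
                console.info("Request data:", JSON.stringify(options, null, 2));
            }
            return fetch(url, options);
        };
    }

Under those assumptions, a caller would pass the wrapper wherever a Fetch-compatible function is accepted, e.g. fetch: makeChainIdFetch("45762") in the createOpenAI options; if the wrapper's signature ever drifted from the Fetch contract, the assignment itself would now fail to compile.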
@@ -1102,7 +1105,6 @@ export async function splitChunks(
  * @param opts.presence_penalty The presence penalty to apply (0.0 to 2.0)
  * @param opts.temperature The temperature to control randomness (0.0 to 2.0)
  * @param opts.serverUrl The URL of the API server
- * @param opts.token The API token for authentication
  * @param opts.max_context_length Maximum allowed context length in tokens
  * @param opts.max_response_length Maximum allowed response length in tokens
  * @returns Promise resolving to a boolean value parsed from the model's response
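
For reference, the documented options form a plain settings object; the consuming function sits outside this hunk, so the shape below is a hypothetical illustration with made-up values, minus the opts.token field this diff removes from the docs.

    // Hypothetical example of the documented opts shape; values are
    // illustrative, not defaults from the repo.
    const opts = {
        presence_penalty: 0.0,      // 0.0 to 2.0
        temperature: 0.7,           // 0.0 to 2.0
        serverUrl: "http://localhost:3000",
        max_context_length: 4096,   // tokens
        max_response_length: 1024,  // tokens
    };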