Aktyn Assistant is an application that allows you to interact with an AI on various types of devices while performing regular tasks.
- The quick chat window can be easily activated with a keyboard shortcut.
- Users can configure its default behavior by simply describing it in the settings.
- The assistant can read responses aloud and will soon also be able to understand what the user is saying.
- The chat can be configured to include the history of previous messages for a more continuous conversation.
- A customizable and easy-to-manage system of tools allows users to extend the capabilities of the Assistant.
(Coming soon) It will be able to take a quick glance at your screen and answer questions about it.
By utilizing different types of AI models, it can perform various tasks such as generating images, holding real-time conversations, understanding image context, and more.
Go to the releases page and download the latest version for your platform.
Additional platforms may be supported upon request.
Upon first run, you will be prompted to enter your OpenAI API key.
If you already have an OpenAI account, you can generate an API key here.
Speaking relies primarily on the media players installed on your system.
If none of them is installed, the application will fall back to a less stable solution for playing audio.

Supported players:
- mplayer
- afplay
- mpg123
- mpg321
- play
- omxplayer
- aplay
- cmdmp3
`yarn install`

- yarn 4.2.2 or newer is recommended

Terminal interface:

- Run `yarn build:all` and then `yarn start:terminal` to start the application with the terminal interface.
- Some console features don't work inside turbo, which handles the development run.
  To make sure the console features work while developing the terminal app, you can run `yarn dev:packages` to watch changes only in packages/ and then `yarn run build && npx cross-env NODE_ENV=dev yarn start` from the apps/terminal directory.

Desktop interface:

- Run `yarn build:all` and then `yarn start:desktop` to start the application with the desktop interface.
- Run `yarn dev:packages` to watch changes only in packages/ and then `yarn dev:desktop`.

Distribution:

- `yarn build:all` and `yarn start:desktop` will build project binaries and prepare them for distribution (check the apps/desktop/out directory afterwards).
- Run `yarn build:all` and then `yarn publish:desktop` to build and publish the application to GitHub releases.
The assistant can be instructed to call a defined function when needed and use its output to provide an answer based on the data it has received.
This feature is called function calling and is supported by the OpenAI API.
Function calling can also be used as a way for the assistant to interact with the system using code provided by tools.
There are some built-in tools that can be used out of the box, but adding more is as simple as selecting a directory and the main file within it.
A tool can be any Node.js project that exports a function returning an array of objects (tools) with a compatible structure.
Example tool file:
```js
const toolSchema = {
  version: '1.0.0',
  functionName: 'get_current_weather',
  // The description tells the AI what the function does so it can decide whether to call it
  description: 'Get the current weather in a given location',
  parameters: {
    type: 'object',
    properties: {
      location: {
        type: 'string',
        description: 'The city and state, e.g. San Francisco, CA',
      },
      unit: { type: 'string', enum: ['celsius', 'fahrenheit'] },
    },
    required: ['location'],
  },
}

// This function could, for example, call an external API.
// The argument is an object with the parameters defined in the schema.
async function getCurrentWeather(data) {
  const { location } = data
  if (location.toLowerCase().includes('tokyo')) {
    return JSON.stringify({
      location: 'Tokyo',
      temperature: '10',
      unit: 'celsius',
    })
  } else if (location.toLowerCase().includes('san francisco')) {
    return JSON.stringify({
      location: 'San Francisco',
      temperature: '72',
      unit: 'fahrenheit',
    })
  } else if (location.toLowerCase().includes('paris')) {
    return JSON.stringify({
      location: 'Paris',
      temperature: '22',
      unit: 'fahrenheit',
    })
  } else {
    return JSON.stringify({ location, temperature: 'unknown' })
  }
}

// The main file must export a function that returns an array of tools
function index() {
  return [
    {
      schema: toolSchema,
      function: getCurrentWeather,
    },
  ]
}

exports.default = index
```
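To make the export format concrete, here is a sketch of how a host application might load such a tool file and invoke one of its functions. The helper name `runTool` and the loading flow are illustrative assumptions, not the application's actual implementation.

```javascript
// Load a tool module (in the format shown above), find the requested
// function by its schema name, and call it with the AI-provided arguments.
async function runTool(toolModule, functionName, args) {
  // The module's default export returns the array of tools
  const tools = toolModule.default()
  const entry = tools.find((tool) => tool.schema.functionName === functionName)
  if (!entry) {
    throw new Error(`Unknown tool function: ${functionName}`)
  }
  // `args` would be parsed from the JSON arguments the AI generated
  return entry.function(args)
}
```

For example, `await runTool(require('./my-tool'), 'get_current_weather', { location: 'Tokyo' })` (with a hypothetical file name) would resolve to the JSON string the tool function returns.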
- Speech synthesis and recognition
- Attaching screenshot or selected screen region to active chat with AI
- Real-time voice chat utilizing the capabilities of the GPT-4o model
- Support for multiple AI providers