I’m proud to announce that I’m working on something pretty cool! It uses Qwen’s 32B model and supports any OpenAI-compatible API. The source code will probably be released once it’s completed.
It can call itself to work through complex thought processes, execute Lua code to get precise math results, do facial expressions and emotes, jump, follow, turn, move to locations, welcome people, and even use vision (configurable custom context on objects, telling directions, etc.)
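The "call itself" part can be sketched roughly like this. This is a hedged illustration, not the real implementation: `call_model` is a stub standing in for the actual OpenAI-compatible API call, and the `THINK:`/`ANSWER:` markers are invented for the example.

```python
# Rough sketch of the self-call idea: when the model asks to think more,
# the agent recurses into itself with the sub-thought. All names and
# markers here are illustrative, not from the actual project.

def call_model(prompt):
    # Placeholder for a real POST to an OpenAI-compatible
    # /v1/chat/completions endpoint; here we just fake a reply.
    if "THINK:" in prompt:
        return "ANSWER: done thinking"
    return "THINK: break the task into steps"

def run_agent(task, depth=0, max_depth=3):
    """Let the agent call itself for multi-step reasoning, with a depth cap."""
    reply = call_model(task)
    if reply.startswith("THINK:") and depth < max_depth:
        # The model asked to think further: recurse with the sub-thought.
        sub_task = "THINK: " + reply[len("THINK:"):].strip()
        return run_agent(sub_task, depth + 1, max_depth)
    return reply

print(run_agent("greet the new player"))
```

The depth cap matters: without it, a model that keeps emitting `THINK:` would recurse forever (and burn API calls).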
That’s cool! Did you train the AI yourself, or did you use and customize something like the Google Gemini API or ChatGPT API? Although I think it’s probably custom trained.
Hi! It uses the Qwen 2.5 weights, but the prompts are handcrafted, and it’s divided into modules that each provide actions the agent can take or add passive context each time the API is called.
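The module setup described above could look something like this. It’s a minimal sketch with made-up module names and prompt formats, assuming each module carries a context string plus a table of action handlers:

```python
# Sketch of the module idea: each module contributes actions the agent
# can perform plus passive context folded into every API call.
# Module names, formats, and handlers are invented for illustration.

modules = []

def register(module):
    modules.append(module)

register({
    "name": "emotes",
    "context": "You can emote with <emote name>.",
    "actions": {"wave": lambda npc: f"{npc} waves"},
})
register({
    "name": "movement",
    "context": "You can move with <goto x y z>.",
    "actions": {"jump": lambda npc: f"{npc} jumps"},
})

def build_prompt(base):
    # Passive context from every module is appended to the system prompt.
    return base + "\n" + "\n".join(m["context"] for m in modules)

def dispatch(action, npc):
    # Route a parsed action name to whichever module registered it.
    for m in modules:
        if action in m["actions"]:
            return m["actions"][action](npc)
    return None

print(build_prompt("You are an NPC."))
print(dispatch("jump", "Bot"))
```

The nice property of this layout is that adding a capability (vision, pathfinding, etc.) is just one more `register` call; the prompt and dispatch logic don’t change.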
Unfortunately, we have other projects on our backlog (like Roblox servers, web servers, audio mixers, etc.), so it will take a while before development is completely finished.
Hi! Everything is done with string manipulation and a Qwen Instruct model. It would be very much possible to structure this more cleanly with support for tool calls and structured output.
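For anyone curious what "string manipulation" means here, a sketch of the parsing side might look like this. The `ACTION`/`SAY` line format is invented for the example; the real prompt format is different, and a cleaner design would use native tool calls or structured (JSON) output instead:

```python
# Sketch of the string-manipulation approach: the Instruct model is
# prompted to reply with lines like "ACTION jump" or "SAY hello", which
# we then parse by hand. The line format here is made up for illustration.

def parse_reply(text):
    commands = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("ACTION "):
            commands.append(("action", line[len("ACTION "):]))
        elif line.startswith("SAY "):
            commands.append(("say", line[len("SAY "):]))
        # Anything else is ignored, which keeps the agent from breaking
        # when the model adds extra chatter around its commands.
    return commands

reply = "SAY hi there!\nACTION wave"
print(parse_reply(reply))
```

The trade-off versus structured output: hand parsing works with any OpenAI-compatible backend, but you have to tolerate the model occasionally drifting from the format, whereas JSON-mode or tool-call APIs enforce the schema for you.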
The source code will be released when I have time, lol. There’s also pathfinding and other things still to add.