Did anyone else notice that the “inner workings” of the example place linked in the original post are kinda odd? Whoever made it decided to use proximity prompts only as a visual element WITHOUT binding any functions to their .Triggered event - instead, the interaction logic is weirdly handled in an obscure LocalScript under StarterPlayerScripts that listens to keyboard input and performs two distance checks (one for each character) to determine if the player is within range of the prompt. Which is why the prompts don’t react to mouse input and will only work when you actually press the E key… I don’t see any particular reason why one would do this. Seems like it was hastily put together by an inexperienced scripter.
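For comparison, the idiomatic pattern is just a few lines (the instance names here are made up for illustration):

```lua
-- Minimal sketch of the conventional approach: bind the interaction logic
-- straight to the prompt's Triggered event instead of listening for key input.
-- "NPC" and "DialoguePrompt" are hypothetical instance names.
local prompt = workspace.NPC.DialoguePrompt

prompt.Triggered:Connect(function(player)
	-- Triggered fires for keyboard, gamepad, and click/touch input alike,
	-- and the engine already enforces MaxActivationDistance, so no manual
	-- key listening or distance checks are needed.
	print(player.Name .. " triggered the prompt")
end)
```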
You’re thinking of very basic systems. What he’s picturing is an AI that interprets what is happening and responds with one or more complex commands for each system. Without an LLM, that’s not something you can do without a lot of complicated code.
I was thinking of something similar a while ago when ChatGPT was first introduced. If you can prompt an LLM to use a certain set of text commands to interact with its environment, and include enough information about said environment (i.e. a place with certain waypoints in Workspace) plus a directive or task to fulfill, what is stopping you from basically creating fully autonomous AI super soldiers? Might be a fun thing to work on, especially considering the decent roleplay capabilities that are essentially inherent to almost any LLM.
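A hypothetical sketch of that loop in Luau — the command vocabulary, the Waypoints folder, and the prompt wording are all assumptions, not anything from the actual API:

```lua
-- Hypothetical command protocol: the LLM is told to reply only with lines like
-- "MOVE_TO Alpha" or "SAY Hold position", which a server script parses and
-- dispatches. Everything here (prompt text, Waypoints folder) is made up.
local SYSTEM_PROMPT = [[
You control an NPC soldier. Reply ONLY with commands, one per line:
MOVE_TO <waypointName>   (waypoints: Alpha, Bravo, Charlie)
SAY <text>
]]

local function executeCommand(npc, line)
	local verb, arg = line:match("^(%S+)%s+(.+)$")
	if verb == "MOVE_TO" then
		local waypoint = workspace.Waypoints:FindFirstChild(arg)
		if waypoint then
			npc.Humanoid:MoveTo(waypoint.Position)
		end
	elseif verb == "SAY" then
		game:GetService("Chat"):Chat(npc.Head, arg)
	end
end

-- For every LLM reply, run each line through the dispatcher:
-- for line in reply:gmatch("[^\r\n]+") do executeCommand(npc, line) end
```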
I also noticed this and found it strange. Still, it’s a win, since most alternatives to this API (most notably Gemini) have rate limits, and raising them requires upfront payment that gets costly with larger concurrent player counts.
Unfortunately, the biggest issue is that in its current state it doesn’t stack up well against Gemini in terms of how “human” it sounds, but I’m hopeful that’ll change with more iterations and training.
Can y’all make it not repeat the same output 4x every time?
Must be really GPU-expensive with all of Roblox using it.
Looks interesting! I already have some ideas on how to use it without completely relying on it: I’m going to write a bunch of basic dialogue and have the AI change up the sentences based on the personality of the NPC (something like the sketch below).
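A rough sketch of that idea — `generateText` is just a placeholder, since the real API surface isn’t shown in this thread:

```lua
-- Sketch of the idea above: keep handwritten base dialogue and ask the model
-- only to restyle each line to match the NPC's personality.
-- generateText is a placeholder for whatever the real API turns out to be.
local function personalityPrompt(personality, baseLine)
	return string.format(
		"Rewrite this NPC line in the voice of a %s, keeping the same meaning: %q",
		personality,
		baseLine
	)
end

-- Usage (hypothetical):
-- local styled = generateText(personalityPrompt("grumpy old blacksmith", "Welcome to my shop."))
```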
First thing I’m doing: a working phone with a ChatGPT app.
This will be really useful for making the NPCs in my game feel alive.
I’m just going off of that one detective Steam game that used AI for its responses, and it was just impossible: the AI would always lie to you and you’d never get anywhere with it. The fact is, AI isn’t perfect, and it needs to be perfect when a single mistake means you completely misunderstand the story. Especially if it’s asked about something that wasn’t mentioned in its instructions, it will just hallucinate a response in full confidence.
Yes, as of right now that is all it’s capable of, but even when I try using it to help me, it makes mistakes on simple scripts.
Yeah, I started out looking at tutorials and other people’s scripts before taking any actual scripting classes.
I wonder what LLM this uses… is it Llama or some other fine-tuned LLM?
I wasted my time making a Gemini API wrapper just for this to come out… wth
It’s Llama 3.2; you can ask it, I think.
Ngl, as soon as this comes out, I’ll finally be able to make something like the best adventure game (D&D).
I like that with this, every NPC has their own character and expresses themselves through those attributes.
But despite all of that, mine outed itself fast.
I’m just hoping it won’t be used for weird stuff. I mean, have you seen what kids are doing nowadays with artificial intelligence?
How real-time is it? Will it stream entire sentences or individual words? And will there be timestamps for when those words were said?
Giving me Amazing World of Gumball “Gumball roleplaying as Akane-chan” vibes LOL