DeterminantAI - ChatGPT-powered NPCs that you can customize!

This is really cool. Nice work!

1 Like

I’ve messed around with the tool for a while and it works surprisingly well for a free plugin. Thank you and the rest of the community resource posters.

1 Like

:fire::fire: NEW FEATURE - COMMAND AI GO TO LOCATION :fire::fire:

We’ve launched a new feature that will allow AI NPCs to go to locations specified by the player. For example, a player can ask the NPC to “go to the bus stop” and the NPC would move there :running_man:. Say ‘Stop following me’ :stop_sign:, or ‘Leave me alone’ :no_good_man:, and the NPC will stop following you!

Behind the scenes, the NPC is “thinking” :brain: about what the right action is. It’s not keyword matching; instead it’s “understanding” the ask, which means if you tell the NPC to “go wait for the bus” it will also go to the bus stop.
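That “understanding” step can be pictured as intent classification: the model is asked to map the player’s message onto a small set of known actions. A minimal sketch in Luau (the action names and the `buildActionPrompt` helper are illustrative assumptions, not the plugin’s actual implementation):

```lua
-- Illustrative sketch of intent classification: the language model picks one
-- action from a fixed list instead of the code keyword-matching the message.
-- The action names and this helper are assumptions, not the plugin's real API.
local actions = { "goto_location", "follow_player", "stop_following", "chat" }

local function buildActionPrompt(playerMessage)
	return "You control an NPC. Reply with exactly one of: "
		.. table.concat(actions, ", ")
		.. "\nPlayer: " .. playerMessage
		.. "\nAction:"
end
```

The model’s one-word reply is then matched against the list, so “go wait for the bus” and “go to the bus stop” can both resolve to the same action.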

Check out this video for a demo: https://www.youtube.com/watch?v=RkMEVsUI1GI

The feature is still in beta, so please bear with us! Happy creating! :sparkles::sparkles:

3 Likes

This is some interesting code! I’m glad to see others are interested in the subject as well.

I integrated your AI into my system

image
These are 3 responses: one from the local chatbot I shared, the second from your API, and the third from an experiment I did combining the response from your API with Zephyr 7b.
I also integrated my Awareness module to create the same array as yours, except it grabs the 3 closest objects of each object type and combines them into one table.

function aware.get.nearestModels(root, radius, getone, mode) -- get the nearest objects of each type
	local objectmegaarray = {}
	for i, v in aware.near do
		local _, _, _, _, objectarray = aware.near[i](root, radius, getone, mode)
		objectmegaarray[i] = objectarray
	end

	local organizedobjects = {}
	for i, objectarray in objectmegaarray do
		for t, o in objectarray do
			if t <= 3 then -- keep 3 objects max per type
				table.insert(organizedobjects, o)
			else
				break
			end
		end
	end
	table.sort(organizedobjects, function(a, b) return a.distance < b.distance end)
	return organizedobjects
end

This was really interesting to dive into. If you’re interested in seeing some of the changes, I can send you a copy of the modified module.

One of the main things I did was implement multiple personalities based on the queried NPC. Each NPC has its own message table.

It was something like this. Basically, I turned all the local NPC variables into tables where the NPC’s name is the hash lookup for its state. If you want to see the modularized code, shoot me a PM.

function DeterminantAgent.followloop(npc)
	local heartbeatloop = nil
	local timeSinceLastUpdate = 0
	local updateInterval = 0.5 -- update every 0.5 seconds
	local isPlayerInWavingRange = false
	-- this loop makes the NPC follow the player
	heartbeatloop = RunService.Heartbeat:Connect(function(deltaTime)
		timeSinceLastUpdate = timeSinceLastUpdate + deltaTime

		if timeSinceLastUpdate >= updateInterval then
			timeSinceLastUpdate = 0 -- reset the timer
			local npcName = npc.Humanoid.DisplayName
			local hirer = hiredPlayer[npcName]
			local hasCharacter = hirer ~= nil and hirer.Character and hirer.Character:FindFirstChild("HumanoidRootPart")

			if currentstate[npcName] == npcStates.following then
				if hasCharacter then
					local playerPosition = hirer.Character.HumanoidRootPart.Position
					local npcPosition = npc.HumanoidRootPart.Position

					-- direction vector from the NPC to the player
					local direction = (playerPosition - npcPosition).Unit

					-- target position for the NPC, maintaining the follow distance
					local targetPosition = playerPosition - direction * followDistance

					-- only move when far enough away ('1' is a threshold to avoid jittery movement)
					if (targetPosition - npcPosition).Magnitude > 1 then
						npc.Humanoid:MoveTo(targetPosition)
					end
				end
			elseif currentstate[npcName] == npcStates.idle then
				if hasCharacter then
					local npcPosition = npc.HumanoidRootPart.Position
					local playerPosition = hirer.Character.HumanoidRootPart.Position
					local distance = (npcPosition - playerPosition).Magnitude
					if distance < 10 then -- assuming 10 studs as the waving distance
						if not isPlayerInWavingRange then
							-- player just entered the range: trigger the waving animation here
							-- (replace 'waveAnimationId' with the ID of your actual animation)
							--isPlayerInWavingRange = true
							--emoteBindableFunction:Invoke("wave")
						end
					else
						isPlayerInWavingRange = false
						-- face the player on the horizontal plane
						local direction = Vector3.new(playerPosition.X - npcPosition.X, 0, playerPosition.Z - npcPosition.Z)
						npc.HumanoidRootPart.CFrame = CFrame.lookAt(npcPosition, npcPosition + direction)
					end
				end
			elseif currentstate[npcName] == npcStates.leave then
				heartbeatloop:Disconnect()
				-- do something else
			end
		end
	end)
end
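The “variables become tables keyed by NPC name” pattern that the loop above relies on can be sketched minimally like this (the `registerNpc` helper is a hypothetical name for illustration):

```lua
-- Minimal sketch of per-NPC state: each former local variable becomes a table
-- indexed by the NPC's display name. registerNpc is a hypothetical helper.
local npcStates = { idle = "idle", following = "following", leave = "leave" }
local currentstate = {}
local hiredPlayer = {}

local function registerNpc(npcName)
	currentstate[npcName] = npcStates.idle
	hiredPlayer[npcName] = nil -- set when a player hires this NPC
end

registerNpc("Kahlani")
```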

I also inject the conventional awareness into the AI model, in addition to the position data for the function calls.
I use the same pipeline as Zephyr 7b, so it’s connected to Zephyr’s context window via its memories.

function registermemory(player, memory, npc)
	if player and npc then
		if cacheofmemories[player] == nil then -- register a table for the player
			cacheofmemories[player] = {}
		end
		if cacheofmemories[player][npc] == nil then -- create nested tables for the npc
			cacheofmemories[player][npc] = {}
			cacheofmemories[player][npc .. "memories"] = {}
			return ""
		end
		if memory ~= nil then
			local memorystring = cm.summarrization(memory) -- summary of the memory
			if memorystring then
				print("Created the memory")
				print(memorystring)
				table.insert(cacheofmemories[player][npc .. "memories"], memory) -- add the complete memory entry
				table.insert(cacheofmemories[player][npc], memorystring) -- add the summarized memory entry
			end
			task.delay(3, function()
				if #cacheofmemories[player][npc] > 3 then -- if summaries exceed 3
					local quantizedmemory = table.concat(cacheofmemories[player][npc .. "memories"], " ")
					local memorystring = cm.summarrization(quantizedmemory) -- summarize the accumulated memories
					if memorystring then -- if we got a response
						print("Quantized the memory")
						print(memorystring)
						if #cacheofmemories[player][npc .. "memories"] >= 7 then -- cap of 7 raw memories for Zephyr
							-- concat all old memories into one entry and keep the most recent ones
							local newtbl = { table.concat(cacheofmemories[player][npc .. "memories"]) }
							for i, v in cacheofmemories[player][npc .. "memories"] do
								if i > 2 then
									table.insert(newtbl, v)
								end
							end
							cacheofmemories[player][npc .. "memories"] = newtbl
						end
						-- clear the summary cache down to the two newest entries
						cacheofmemories[player][npc] = { cacheofmemories[player][npc][2], cacheofmemories[player][npc][3] }
						table.insert(cacheofmemories[player][npc], memorystring) -- the summary cache now has 3 entries
					end
				end
			end)
		end
		if #cacheofmemories[player][npc] > 0 then
			return "I remember " .. table.concat(cacheofmemories[player][npc], ". ")
		end
	end
end
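The memory string returned by `registermemory` can then be spliced into the model’s context window; a minimal sketch of that wiring (the function name and prompt layout are assumptions, not the module’s actual code):

```lua
-- Hypothetical sketch: prepend the summarized memories to the prompt so the
-- model "remembers" earlier conversations. The prompt layout is an assumption.
local function buildPrompt(systemMessage, memories, userMessage)
	local memoryLine = ""
	if #memories > 0 then
		memoryLine = "I remember " .. table.concat(memories, ". ") .. "\n"
	end
	return systemMessage .. "\n" .. memoryLine .. "Player: " .. userMessage
end
```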

I think this, in conjunction with the normal context window, could allow for long-term memories.
Also, in this registermemory function I am using a small 400M model on Hugging Face to summarize the conversation.

function cm.summarrization(inputq, mode)
	-- Summarize text with a hosted model
	-- https://huggingface.co/facebook/bart-large-cnn
	local API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn"
	if mode then
		API_URL = "https://api-inference.huggingface.co/models/slauw87/bart_summarisation"
	end
	local headers = {
		["Authorization"] = Bearerkey,
		--["Content-Type"] = "application/json"
	}

	-- encode the payload as a JSON string
	local payloadJSON = HttpService:JSONEncode({ inputs = inputq })

	local function request()
		return HttpService:RequestAsync({
			Url = API_URL,
			Method = "POST",
			Headers = headers,
			Body = payloadJSON,
		})
	end

	-- send the request and get the response
	local success, response = pcall(request)
	if not success then
		return inputq -- request failed; fall back to the unsummarized input
	end

	-- decode the response body as a JSON table
	local responseJSON = HttpService:JSONDecode(response.Body)

	-- if the model is still loading, the API reports an estimated wait time; retry once
	if responseJSON.error ~= nil and responseJSON.estimated_time ~= nil then
		task.wait(responseJSON.estimated_time)
		success, response = pcall(request)
		if not success then
			return inputq
		end
		responseJSON = HttpService:JSONDecode(response.Body)
	end

	-- the summary comes back as the first element of an array
	if responseJSON[1] and responseJSON[1].summary_text then
		return responseJSON[1].summary_text
	end
	return inputq -- no summary; fall back to the unsummarized input
end

I’m also reducing API usage by saving each response in a personality-specific database, though I may have to add the awareness to the direct query match (exact personality, exact query, and exact surroundings equals the same response), which would be interesting to see develop over time.
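That caching idea could look roughly like this, keying the saved response on personality, query, and a surroundings digest (all names here, `cacheKey` and `getOrStore`, are hypothetical):

```lua
-- Sketch of a personality-specific response cache: an identical personality,
-- query, and surroundings return the saved response instead of calling the
-- API again. cacheKey/getOrStore are illustrative names, not the real module.
local responseCache = {}

local function cacheKey(personality, query, surroundings)
	return personality .. "|" .. query:lower() .. "|" .. (surroundings or "")
end

local function getOrStore(personality, query, surroundings, produce)
	local key = cacheKey(personality, query, surroundings)
	if responseCache[key] == nil then
		responseCache[key] = produce() -- only hit the API on a cache miss
	end
	return responseCache[key]
end
```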

3 Likes

hey Magus!

thanks so much for trying our plugin and putting time into extending it! I’ve seen you around on the forums talking about AI-related things

I’ll read through your suggestions and come up with a more in-depth response

1 Like

Some main points: you can expand the maximum context range by using summarization and an algorithm for handling memories.
I have a library of about 100 expressive emotes, each labeled with a keyword description of the emote. The algorithm leverages synonyms, antonyms, and nouns to pick the likeliest emote (for example, synonyms = {“Hello”, “Hi”, “Hey”, “Greetings”}, antonyms = {“goodbye”, “farewell”}), scoring each entry while only counting one match per synonym group. Thus we can query the database with the sentence “Hey there, nice to meet you. I’m very excited to go on an adventure.” and the NPC would wave hello; then, as it says the next sentence, it would find the emotes tagged excited and adventure. Since both tags match equally well, adding noise to the algorithm gives it a 50/50 chance of executing an emote tagged with either adventure or excited.
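A stripped-down version of that scoring could look like this (the two emote entries below are made up for the example; the real library has roughly 100):

```lua
-- Illustrative emote scoring: +1 for any hit inside a synonym group (counted
-- once per group), -1 per antonym hit. The emote entries are examples only.
local emotes = {
	{ name = "wave", synonyms = { "hello", "hi", "hey", "greetings" }, antonyms = { "goodbye", "farewell" } },
	{ name = "cheer", synonyms = { "excited", "adventure", "yay" }, antonyms = {} },
}

local function scoreEmote(emote, sentence)
	local s = sentence:lower()
	local score = 0
	for _, word in ipairs(emote.synonyms) do
		if s:find(word, 1, true) then
			score = score + 1
			break -- only count one example per synonym group
		end
	end
	for _, word in ipairs(emote.antonyms) do
		if s:find(word, 1, true) then
			score = score - 1
		end
	end
	return score
end
```

Picking the highest-scoring emote, with a little noise to break ties, gives the 50/50 behavior described above.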

The main advice is that good AI requires good data!

I’d be willing to share the library of emotes I haven’t open-sourced to speed up your project. I’d understand if you didn’t want to use my chat module library to leverage all those elements of the English language (synonyms, antonyms, reflections, and nouns) to query such a database.
But the code is in the open-sourced module, it just doesn’t ship with the library, if you were interested in seeing how it was implemented.
This was done by displaying each sentence one at a time on a word-by-word basis (or character by character if FPS > 30), then processing the sentence to determine the emote (and now actions, based off my 170 action commands).
Here is a demonstration video of what I’m talking about. I also used a similar system to create a library of atmospheric particle effects and audio samples based on emotional tonality; here’s a demonstration video of that as well.

Also, I just ran an experiment where I inject the response as the starting string to Zephyr 7b and got this output.


   [["Kahlani: Good day to you, traveler! I am Duchess Kahlani; it's my honor to make your acquaintance. How may I assist you on this fine morning?  

 ArtStudios: I am in search of a rare artifact, rumored to be hidden in this very place. Do you happen to know anything about it?  

 Kahlani: I'm afraid I'm not privy to such information, traveler. However, I do know that there are a few chests scattered around this area, some of which may contain items of interest. Would you care to join me in exploring this island?  

 ArtStudios: That would be most gracious of you, Duchess. I would be honored to accompany you on this quest.  

 Kahlani: Very well, let us set off then. But first, let us take a moment to orient ourselves. Based on my instincts, I believe we are currently near a Broadleaf tree to the southwest, and a chest is nearby. There are also a couple of locked chests in the vicinity, but I'm afraid I don't have the key to them. Shall we begin our search?  

 ArtStudios: Absolutely, Duchess. Lead the way!  

 Kahlani: As you wish, traveler. Let us proceed with caution and vigilance, for we never know what dangers may lie ahead. But with your skills and my intuition, I'm confident we'll find what we're looking for! "]]

In this example I have Zephyr acting as a storyteller and start it with the response from your AI after it displays the response. It then simulates a conversation between the player and the NPC.

The conversation is then quantized into a memory for Zephyr, and perhaps it should also become a memory for the other model, so the two could be seamlessly integrated.

Roleplaying Zephyr 7B Luau API Documentation (Free API) System Message

Eliza Chatbot Ported to Luau [Open-Source]

Artificial Intelligence Model APIs FREE (Open Source) and Other Useful APIs

Also one modification I would make is something like this in the perception module!

if primaryPart then
	local pos = Vector3.new(math.floor(primaryPart.Position.X), math.floor(primaryPart.Position.Y), math.floor(primaryPart.Position.Z))
	local description = name .. " at " .. tostring(pos)
	table.insert(descriptions, description)
end

This rounds down the floating-point components of each position to reduce token usage. It may also make it easier for the AI to call functions with those coordinates.

2 Likes

This is so cool! Could you share how this was made? I kinda wanna dig into it and try to expand it, and possibly even provide my own API key to avoid rate limits as they pop up.

1 Like

If you encounter “Script Injection permissions” issues, please check the screenshot added in the first post

1 Like

I think it’s important to emphasize that you’re using your own Azure API for this, probably on a credit grant.

If the module gets used (or abused) a lot, you’ll end up with a bill sooner or later, which will break the whole module. Every game built on it won’t work anymore.

If you want your module to be future-proof, include an explanation of how to self-host the API and a rough cost estimate per month.

2 Likes

In addition, you could probably fine-tune a smaller model like Zephyr 7B or even TinyLlama if you save the responses from the endpoint. That would be a fruitful endeavor; data is very valuable in this day and age, even synthetic data.
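Collecting that data could be as simple as appending each exchange to a table that is later exported as training examples; the record layout below is an assumption for illustration, not an existing format:

```lua
-- Hypothetical fine-tuning data collector: each prompt/response pair is kept
-- with its personality and a timestamp so it can be exported later.
local dataset = {}

local function recordExchange(personality, prompt, response)
	table.insert(dataset, {
		personality = personality,
		prompt = prompt,
		response = response,
		timestamp = os.time(),
	})
end
```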

Just added long-term memories. :slight_smile:

if response == nil then
	local mem = registermemory(player.Name, nil, context) -- get the memories
	if mem == nil then -- no memory, so check if there is a memory token
		-- memories are paired by a key of NPC name and player name;
		-- once a memory is established there's no need to check the token
		mem = CheckToken(player.Name .. responses[1], memdir)
	end
	if mem ~= nil then
		if responses[6] ~= nil and responses[6] ~= "" then
			mem = registermemory(player.Name, responses[6], context) -- add the memory
			responses[6] = nil
		end
		CreateToken(player.Name .. responses[1], mem, memdir) -- update the memory token
		responses[7] = mem
	end
	response = cm.ZephyrStory(context, responses, player.Name, str)
	if response ~= nil then
		player.PlayerGui.Chatbot.JSONString.Value = response
		task.spawn(function()
			savezephyrresponse(str, response, dir, player, context)
		end)
	else
		print("AI Zephyr Failure")
	end
end

This has been executed very well! Interacting with these NPCs is much more conversational and makes for a great experience, and a pretty surreal one too.

What are your plans for funding this project going forward (particularly if this is used in games with high concurrent player counts)?

1 Like

The NPC’s messages won’t show up in the chat. Is there any way to fix this?

Also, when the NPC is on a seat it won’t get out of the seat.

Another issue I’m having is that my NPC thinks his name is Aeliana.

I’d also suggest maybe adding text-to-speech after all the issues are fixed?

1 Like

hi!
The seated issue seems like a bug in the code; working on a fix now.

For the name part, you can tell the NPC its name in the background section for now. We will send out a fix to use the character’s name in the plugin.

Thanks for trying it out! May I ask, are you creating a game right now?

So right now we’re in the early stages and are putting a limit on the number of requests per player per day (any individual player shouldn’t be hitting the limit). In the future, if we see it blow up, we would probably offer different tiers, with some being priced.

That’s a good point. The costs of serving large models are currently plummeting; our estimate is that this trend should make it increasingly feasible and affordable to use AI modules in games, even those with high player counts. The tools we’re utilizing help us study usage patterns, and we’re actively exploring cost-effective solutions for game developers.

Hi! I found another issue where he can bypass the chat filter and, for example, say the WW2 leader’s full name.

Content moderation is something we’ve been working on. Right now the endpoints are “auto-moderated”; one solution going forward could be a configurable content-filtering mechanism that gives you the flexibility to set a preferred severity level.
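Such a configurable filter might boil down to per-category severity thresholds the developer can tune; a minimal sketch (the category names and severity scale are invented for illustration, not a real API):

```lua
-- Hypothetical severity-based filter config: a response is blocked when any
-- flagged category exceeds the developer's threshold. Names/scale are made up.
local filterConfig = {
	violence = 2, -- higher = more permissive, 0 = block anything flagged
	profanity = 1,
}

local function passesFilter(flags)
	for category, severity in pairs(flags) do
		local threshold = filterConfig[category]
		if threshold ~= nil and severity > threshold then
			return false
		end
	end
	return true
end
```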

I’ve been experiencing an issue: whenever I try to talk to the NPC it just does a laughing emote and outputs this:

  ServerScriptService.MainModule:151: Failed to send request or receive response - MainModule:151

Please check if HTTP requests are enabled; you can find a screenshot in the first post.

1 Like

It’s definitely an interesting example of Zephyr 7b. Multiple models working on different tasks could be a great idea.