Giving LLM Text Vision and Self Awareness in Luau [OPEN SOURCE]

Ok, so I just put the top one in a ModuleScript and the bottom in a LocalScript or something?

And is the API key one for all of Hugging Face, or do I need to get one per model?

As I said earlier, the code should be run on the server to use HttpService. Then, if you want to fire the result to the client, you can use a RemoteFunction. Also, you only need one API key per model.
Also, this chat template works with Zephyr 7B and some other models, but it doesn't work with all of them.
"Place the code with your API key and the inference endpoint shown initially. Put it in a ModuleScript, require the module, and call it with Chatbotresponse = query(input, system_message).
This call can only be done on the server, so you can connect to it via:"

local chatbotmod = location -- replace "location" with the path to the ModuleScript
local query = require(chatbotmod).query
local system_message = "You are a helpful NPC." -- customize your system prompt

game.Players.PlayerAdded:Connect(function(Player)
	Player.Chatted:Connect(function(input)
		-- insert text filtering/moderation here
		-- insert additional chat logic here to customize
		local Chatbotresponse = query(input, system_message)
		local TextChat = game:GetService("TextChatService")
		local chatgroup = TextChat.TextChannels.RBXSystem

		-- pcall(function() chatgroup:DisplaySystemMessage("<font color=\"" .. rgbcolor .. "\">" .. npc.Humanoid.DisplayName .. ": </font> <font color=\"rgb(255,255,255)\">" .. str .. "</font>") end)
		if Chatbotresponse then
			chatgroup:DisplaySystemMessage("<font color=\"rgb(255,255,255)\">" .. Chatbotresponse .. "</font>")
		end
	end)
end)
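If you instead want the reply delivered to the client (the RemoteFunction/RemoteEvent route mentioned above), a minimal sketch might look like this. Note the RemoteEvent name "ChatbotResponse", its location in ReplicatedStorage, and the module path are my assumptions, not part of the released module:

```lua
-- Server script (sketch): forward the model's reply to the requesting client.
-- Assumes a RemoteEvent named "ChatbotResponse" exists in ReplicatedStorage.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local responseEvent = ReplicatedStorage:WaitForChild("ChatbotResponse")

local query = require(script.ChatbotModule).query -- assumed module location

game.Players.PlayerAdded:Connect(function(player)
	player.Chatted:Connect(function(input)
		local reply = query(input, "You are a helpful NPC.")
		if reply then
			responseEvent:FireClient(player, reply)
		end
	end)
end)
```

A matching LocalScript would then listen on responseEvent.OnClientEvent and display the message however you like.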

Can you like send me a screenshot of the setup and where everything goes?

Is this stuff already done within the script, and I just have to replace it with my API key?

And where do I put the new stuff?

I don't really know how to set this up and where to put everything. Can you condense it and tell me where to put things?

Chatbotsetup.rbxm (4.4 KB)

The client invoker is located inside the server script, so you can change it to suit your needs; the code is very simple. Good luck! It should work if you place it in ReplicatedFirst, the Workspace, or anywhere a script can run and the player can access the client invoker.
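For reference, here is a rough sketch of what the client side of such an invoker might look like. The RemoteEvent name and location are assumptions on my part; they depend on how the .rbxm is actually wired up:

```lua
-- LocalScript (e.g. in StarterPlayerScripts) -- sketch only.
-- Assumes the server fires a RemoteEvent named "ChatbotResponse"
-- in ReplicatedStorage; adjust the name/path to match your setup.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local TextChatService = game:GetService("TextChatService")

local responseEvent = ReplicatedStorage:WaitForChild("ChatbotResponse")

responseEvent.OnClientEvent:Connect(function(message)
	-- DisplaySystemMessage is intended to be called from the client
	local channel = TextChatService.TextChannels:WaitForChild("RBXSystem", 5)
	if channel then
		channel:DisplaySystemMessage(message)
	end
end)
```

If you see "attempt to index nil with 'OnClientEvent'", it usually means the RemoteEvent wasn't found at the path the LocalScript expects, so check the name and location first.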

Ok, it's all in place, I think. I put the LocalScript in StarterPlayer and the Script in ServerScriptService. Did I do it right? Also, how do I interact with it?

Also, it said "TextChannels is not a member of TextChatService".

It also says "attempt to index nil with 'OnClientEvent'" in the LocalScript.

Did it work for you when you tested it?

Yes, I just finished some awesome work on it: implementing chat history!
Also, the TextChannels code works for me:
local TextChat = game:GetService("TextChatService")
local chatgroup = TextChat.TextChannels.RBXSystem
New function with chat history capabilities! I will be making a new post on this soon.

local module = {}
local HttpService = game:GetService("HttpService")
local endpoint = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3" -- Replace with your actual endpoint
local apiKey = "Bearer " -- Append your actual API key after "Bearer "

local function format_response(str)
	-- Find the first "<|assistant|>" tag in the string
	local start = string.find(str, "<|assistant|>", nil, true)
	if not start then return str end -- if "<|assistant|>" is not found, return the string unchanged

	-- Find the last occurrence by scanning forward through the string
	local last_start = start
	while start do
		last_start = start
		start = string.find(str, "<|assistant|>", start + 1, true)
	end

	-- Extract everything after the last "<|assistant|>" tag (13 = #"<|assistant|>")
	return string.sub(str, last_start + 13)
end


function module.query(input, system_message, history)
	local system = "<|system|>\n " .. system_message .. "</s>\n"

	-- Prepend previous history (if any) as the opening user turn
	if history == nil or history == "" then
		history = ""
	else
		history = "<|user|>\n " .. history .. "</s>\n"
	end

	local npcdata = {
		inputs = system .. history .. "<|user|>\n " .. input .. "</s>\n<|assistant|>\n",
		parameters = { -- generation settings belong under "parameters" in the HF Inference API payload
			max_new_tokens = 512,
			do_sample = true,
			temperature = 0.7,
			top_k = 50,
			top_p = 0.95,
		},
	}
	
	local response = HttpService:RequestAsync({
		Url = endpoint,
		Method = "POST",
		Headers = {
			["Content-Type"] = "application/json",
			["Authorization"] = apiKey
		},
		Body = HttpService:JSONEncode(npcdata),
	})
	local function format_history(str)
		-- Everything after the first "<|user|>" tag is the conversation history
		local start = string.find(str, "<|user|>", nil, true)
		if not start then return "" end
		return string.sub(str, start + 8) -- 8 = #"<|user|>"
	end
	print(response)
	local result = HttpService:JSONDecode(response.Body)
	print(result)
	local reply = format_response(result[1].generated_text)
	local newHistory = format_history(result[1].generated_text)
	print(reply)
	print(newHistory)
	return reply, newHistory
end

return module

I might just not know how to use it and interact with it. Also, how do you get the API key from Hugging Face? It says I have 1 API key, but you said you need 1 per model. I went to Inference API and it brought me to a page like this.

The code is correct. I just tested it, and it works with previous chat history now. It returns the chat history, so every time you interact with the model you can inject the chat history.
Test it in the command line.
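For example, here is one way you might try it from the Studio command bar (the module name "Chatbot" and its location in ServerScriptService are assumptions; adjust the path and system message to match your setup):

```lua
-- Run in the Studio command bar (server context) -- sketch only.
local query = require(game.ServerScriptService.Chatbot).query

local system_message = "You are a friendly shopkeeper NPC."
local reply, history = query("Hello! What do you sell?", system_message)
print(reply)

-- Inject the returned history into the next call to keep context:
local reply2 = query("How much is the sword?", system_message, history)
print(reply2)
```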

This code is very good; I will likely be using it.
The Mistral model has about a 32k context window, which can hold a lot of text.

They changed their interface recently, but I think you click "Manage tokens". I use only 1 API key for all models. Oh, and I didn't say you need a different API key for each model.

Oh, ok. How do I do it in the command line? Also, I see something to do with text channels. Do I need that, since I'm trying to use a TextBox?

What command do I run to use it?

Also, is my setup correct, with the LocalScript in StarterPlayerScripts and the server Script in ServerScriptService?

Introducing my demo demonstrating all the components of the chatbot resources I open-sourced, implemented into a neat and user-friendly package! The API endpoint is interchangeable with Zephyr 7B.
Mistral 7b Chatbot Demo: Aware, Emojis, Emote, Memory, Music, Wiki, 32k Context [Open Sourced] Place file