Giving LLMs Text Vision and Self-Awareness in Luau [OPEN SOURCE]

I don't really know how to set this up or where to put everything. Can you condense it and tell me where to put things?

Chatbotsetup.rbxm (4.4 KB)

The client invoker is located inside the server script, so you can change it to suit your needs; the code is very simple. Good luck! It should work if you place it in ReplicatedFirst, the Workspace, or anywhere a script can run and the player can access the client invoker.

OK, it's all in place, I think. I put the LocalScript in StarterPlayerScripts and the Script in ServerScriptService. Did I do it right? Also, how do I interact with it?

Also, it said TextChannels is not a member of TextChatService.

It also says "attempt to index nil with 'OnClientEvent'" in the LocalScript.

Did it work for you when you tested it?

Yes, I just finished some awesome work on it: implementing chat history!
Also, the text channels work for me:

local TextChat = game:GetService("TextChatService")
local chatgroup = TextChat.TextChannels.RBXSystem
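
If you're seeing the "TextChannels is not a member of TextChatService" error, note that TextChannels is a Folder created under TextChatService at runtime rather than a property, so waiting for it is safer. A minimal sketch:

-- TextChannels is created at runtime, so WaitForChild avoids indexing it before it exists
local TextChatService = game:GetService("TextChatService")
local textChannels = TextChatService:WaitForChild("TextChannels")
local chatgroup = textChannels:WaitForChild("RBXSystem")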
New function with chat history capabilities! I will be making a new post on this soon.

local module = {}
local HttpService = game:GetService("HttpService")
local endpoint = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3" -- Replace with your actual endpoint
local apiKey = "Bearer " -- Replace with your actual API key, e.g. "Bearer hf_..."

local function format_response(str)
	-- Find the first "<|assistant|>" tag in the string
	local start = string.find(str, "<|assistant|>", nil, true)
	if not start then return str end -- If "<|assistant|>" is not found, return the string unchanged

	-- Walk forward through every match to find the last occurrence
	local last_start = start
	while start do
		last_start = start
		start = string.find(str, "<|assistant|>", start + 1, true)
	end

	-- Extract everything after the last tag ("<|assistant|>" is 13 characters long)
	return string.sub(str, last_start + 13)
end

-- Strip the system prompt so only the conversation remains; the result is what
-- you pass back in as the history argument on the next query
local function format_history(str)
	-- Find the first user turn in the generated text
	local start = string.find(str, "<|user|>", nil, true)
	if not start then return str end

	-- Extract everything after the first tag ("<|user|>" is 8 characters long)
	return string.sub(str, start + 8)
end

function module.query(input, system_message, history)
	local system = "<|system|>\n " .. system_message .. "</s>\n"

	-- history is a previous conversation returned by this function; wrap it as a
	-- user turn, or use an empty string on the first call
	if history == nil or history == "" then
		history = ""
	else
		history = "<|user|>\n " .. history .. "</s>\n"
	end

	local npcdata = {
		inputs = system .. history .. "<|user|>\n " .. input .. "</s>\n<|assistant|>\n",
		-- Sampling settings, nested under "parameters" as the Hugging Face
		-- Inference API expects
		parameters = {
			max_new_tokens = 512,
			do_sample = true,
			temperature = 0.7,
			top_k = 50,
			top_p = 0.95,
		},
	}

	local response = HttpService:RequestAsync({
		Url = endpoint,
		Method = "POST",
		Headers = {
			["Content-Type"] = "application/json",
			["Authorization"] = apiKey,
		},
		Body = HttpService:JSONEncode(npcdata),
	})
	if not response.Success then
		warn("Request failed:", response.StatusCode, response.StatusMessage)
		return nil
	end

	local result = HttpService:JSONDecode(response.Body)
	local reply = format_response(result[1].generated_text)
	local new_history = format_history(result[1].generated_text)
	return reply, new_history
end

return module

I might just not know how to use and interact with it. Also, how do you get the API key from Hugging Face? It says I have 1 API key, but you said you need one per model. I went to the Inference API and it brought me to a page like this.

The code is correct. I just tested it, and it works with previous chat history now. It returns the chat history, so every time you interact with the model you can inject that history back in.
Test it in the command bar.
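
A minimal command-bar sketch, assuming you saved the module above as a ModuleScript named "Chatbot" in ServerScriptService (the name and location are assumptions; adjust the path to match your setup):

-- Paste into the Studio command bar (HTTP requests must be enabled in game settings).
-- "Chatbot" is an assumed ModuleScript name; change the path to wherever you put the module.
local Chatbot = require(game:GetService("ServerScriptService").Chatbot)
local reply, history = Chatbot.query("Hello!", "You are a friendly NPC in a Roblox game.")
print(reply)
-- Pass the returned history back in so the model remembers the previous exchange
local reply2 = Chatbot.query("What did I just say?", "You are a friendly NPC in a Roblox game.", history)
print(reply2)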

This code is very good; I will likely be using it.
The Mistral model has about a 32k-token context window, which can hold a lot of text.

They changed their interface recently, but I think you click Manage Tokens. I use only one API key for all models. Oh, and I didn't say you need a different API key for each model.

Oh, OK. How do I do it in the command bar? Also, I see something to do with text channels. Do I need that, since I'm trying to use a TextBox?

What command do I run to use it?

Also, is my setup correct, with the LocalScript in StarterPlayerScripts and the server Script in ServerScriptService?

Introducing my demo demonstrating all the components of the chatbot resources I open-sourced, implemented together in a neat and user-friendly package! The API endpoint is interchangeable with Zephyr 7B.
Mistral 7b Chatbot Demo: Aware, Emojis, Emote, Memory, Music, Wiki, 32k Context [Open Sourced] Place file

Okay, so I created a token, but it's saying it's invalid?

Make sure it's a read-only token.

That's not even self-awareness.


Okay, also: how am I supposed to talk to it? What are the values input, system_message, and history supposed to be?

In my game, the way I use it is:

  1. Give the player's character narrative awareness about the environment by choosing a random entry.
  2. Give LLMs environmental awareness.
  3. Use it as a library for doing things with the environment and returning text input, primarily via the judge library and the near library.
    Using those, you can systematically build interactions with the environment. (I do this in my implementation, which uses the awareness library as a starting point and abstracts from it, for example to create an action-based chatbot.)
  4. Use it as data for a chatbot when given a user input (see the sketch after this list).
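
To make point 4 concrete, here is a hypothetical sketch; Awareness.describe is an assumed placeholder for the awareness library's output, not its actual API:

-- Hypothetical: fold an environment description into the system message before querying.
-- Awareness.describe is a stand-in for whatever your awareness code returns as text.
local Awareness = {}
function Awareness.describe(npc)
	-- Stub: a real implementation would scan the NPC's surroundings
	return "You are standing in a forest clearing next to a campfire."
end

local Chatbot = require(game:GetService("ServerScriptService").Chatbot) -- assumed location
local function respond(npc, playerInput, previousHistory)
	local system_message = "You are an NPC in a Roblox game. " .. Awareness.describe(npc)
	return Chatbot.query(playerInput, system_message, previousHistory)
end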

I have an update coming soon which implements machine learning to find areas of interest in the environment; for example, an enemy with higher hit points gets a higher weight than enemies with lower HP. I will be publishing it soon. It is complete; I just need to test it more. It's designed to algorithmically generate plans based on the environment, and it does so in a unique way which you will be able to read about in the module.
The awareness module is designed to work on any NPC.
If you want to talk to the awareness module, you can try out my chatbot demonstration, which gives any LLM text-vision awareness. In the video where I'm talking to it, I'm using my chat module, something kind of messy that I published a while ago; it applies synonyms, reflections, and nouns to a keyword-counting algorithm based on the weights of a word-frequency model I trained using the chat module library I published.
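
A hedged sketch of the hit-point-weighting idea mentioned above; the scoring scheme here is my assumption about the approach, not the published algorithm:

-- Score candidate enemies so higher-HP targets carry more weight,
-- then sort them into a descending areas-of-interest list.
local function weightTargets(enemies)
	local weighted = {}
	for _, enemy in ipairs(enemies) do
		local humanoid = enemy:FindFirstChildOfClass("Humanoid")
		if humanoid and humanoid.Health > 0 then
			table.insert(weighted, { model = enemy, weight = humanoid.Health })
		end
	end
	table.sort(weighted, function(a, b)
		return a.weight > b.weight
	end)
	return weighted
end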


The way I programmed the demo, there is one main function that handles the system_message input and the rest. I demonstrated how to use everything I published in the chatbot demo, and I tried to code it as neatly as I could at the time, to make it understandable for everyone. If you're interested in a custom implementation, reading the code in the chatbot demo should be very educational. It's written very well.
