[FREE] ROBLOX Mistral 7b AI Chatbot Agent: Aware, Infinite Agents, 2000+ Emojis, 100+ Emotes, Memories, Wikipedia, 32k Context [Open Sourced]

Uh, no, it just remembers everything until it fails :sweat_smile: I guess I should give it effectively infinite context by summarizing memories at some point. If you change the name of the character, it has different memories. To change how much it remembers, all you have to do is change the memory tokens value. To manage memories directly, you would need to query the endpoint or the Zephyrorpho module with a custom implementation. I would highly recommend you read the code, because it's designed to be simple to understand and is very organized.

All memories are located inside the memory module during runtime, but only the ones that are actually called are loaded. So a possible function could be [reset], which resets the memory of the chatbot.
I’ll be sure to make those updates concerning infinite context via summary and clearing the chat history when I get around to it!
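A [reset] command could be sketched like this. To be clear, this is a hypothetical illustration: the `MemoryStore` table layout and function names here are assumptions for the sketch, not the module's real structure:

```lua
-- Hypothetical sketch of a per-character memory store with a reset command.
-- The table layout and names are assumptions, not the actual module's API.
local MemoryStore = {}
MemoryStore.memories = {} -- [characterName] = { "memory 1", "memory 2", ... }

function MemoryStore.Remember(characterName, entry)
	local list = MemoryStore.memories[characterName]
	if not list then
		list = {}
		MemoryStore.memories[characterName] = list
	end
	table.insert(list, entry)
end

function MemoryStore.Reset(characterName)
	-- Dropping the character's table gives it a fresh start;
	-- a different character name keeps its own separate memories.
	MemoryStore.memories[characterName] = nil
end
```

A chat command handler could then call `MemoryStore.Reset(npcName)` whenever a player types [reset].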

Did you limit the number of characters it can say? It always cuts off mid-sentence.

Nice work, I love your progress so far, this is looking great.


max_new_tokens is set to 256; you should be able to set it to 512 and see a difference. Otherwise, a method I used was to resend the output to the model, which should extend the response.

Thanks for the positive feedback! If you make something cool with this feel free to share it on this thread to inspire others! :slight_smile:

How do I resend the output?


Also, I know why it's doing that. The chatbot itself is broken, not the Roblox part; I tested it in the playground.

And what do tokens do? Is it the more the merrier?

More tokens means more output, so raising the max_new_tokens value in the API endpoint function should get you longer potential outputs. Alternatively, you can insert this code to post twice, giving the model the output it just gave you.

local result = HttpService:JSONDecode(response.Body)
-- print(result)

-- Feed the generated text back in as the new input so the model continues it.
local continuation = {
	inputs = result[1].generated_text,
	max_new_tokens = 256,
}

local response = HttpService:RequestAsync({
	Url = endpoint,
	Method = "POST",
	Headers = {
		["Content-Type"] = "application/json",
		["Authorization"] = apiKey
	},
	Body = HttpService:JSONEncode(continuation),
})
local result = HttpService:JSONDecode(response.Body)

I’ve made a game, but I don’t know if it’s safe to release. I’ve tested it extensively, and it hasn’t done anything. However, I’m concerned that if someone uses a jailbreak prompt, they or I might get banned. Do you know if the game creator or the person using the jailbreak prompt is at risk of being banned?

You might want to look into implementing a text filter, perhaps using a function that splits the message into sentences and then checks each individual sentence for safety.

local TextService = game:GetService("TextService")

function module.Filter(message, player)
	-- FilterStringAsync only works in live games, so bypass the check
	-- for the developer's account when testing in Studio.
	if player.Name == "Magus_ArtStudios" then
		return true
	end
	local textObject
	local success, errorMessage = pcall(function()
		textObject = TextService:FilterStringAsync(message, player.UserId)
	end)
	if success then
		return true
	end
	print("Error generating TextFilterResult: ", errorMessage)
	return false
end

This is a proper text-filtering technique. It depends on the chat settings of the individual player, it only works in live games, and it returns not successful in Studio, which is why I included the condition for my player name. It's less effective for longer outputs, which is why you should separate them into sentences and check that each one passes before returning the output.
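The sentence-by-sentence approach could be sketched roughly like this. It assumes the `module.Filter` function above, and the splitting pattern is a simplification (it splits on `.`, `!`, and `?` only):

```lua
-- Sketch: split a long output into sentences and filter each one.
-- Assumes module.Filter(sentence, player) returns true when a sentence passes.
function module.FilterLongOutput(message, player)
	for sentence in string.gmatch(message, "[^%.%!%?]+[%.%!%?]?") do
		if not module.Filter(sentence, player) then
			return false -- reject the whole output if any sentence fails
		end
	end
	return true
end
```

Rejecting the whole output on any failed sentence is the conservative choice; you could instead drop only the failing sentences and return the rest.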

Hello! I wanted to create similar code using the Hugging Face website, but I can't find the endpoints or the model that you provided. Do you know how I could fix this?


local HttpService = game:GetService("HttpService")
local endpoint = "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3" -- Replace with your actual endpoint
local apiKey = "Bearer _my_key_" -- Replace with your actual API key

local function format_response(str)
	-- find the last assistant response in the string
	local start = string.find(str, "<|assistant|>", nil, true)
	if not start then return str end  -- if "<|assistant|>" is not found, return the string unchanged

	-- Find the last occurrence by searching backwards from the end of the string
	local last_start = start
	while start do
		last_start = start
		start = string.find(str, "<|assistant|>", start + 1, true)
	end

	-- Calculate the end of the last "<|assistant|>" tag
	local finish = string.len(str)

	-- Extract the response after the last "<|assistant|>"
	local response = string.sub(str, last_start + 13, finish)

	-- Return the response
	return response
end


function query(input, system_message,history)
	local system="<|system|>\n "..system_message.. "</s>\n"

	--	{
	if history==nil then
		history=""
	else history="<|user|>\n "..history	
	end

	local npcdata={
		inputs = system..history.."</s>\n<|user|>\n "..input.."</s>\n<|assistant|>\n",		
		max_new_tokens = 256,
		do_sample = true,
		temperature = 1.2,
		top_k = 30,
		top_p = 0.90
	}

	local response = HttpService:RequestAsync({
		Url = endpoint,
		Method = "POST",
		Headers = {
			["Content-Type"] = "application/json",
			["Authorization"] = apiKey
		},
		Body = HttpService:JSONEncode(npcdata),
	})
	local function format_history(str)
		-- find the assistant response in the string
		local start = string.find(str, "<|user|>")
		local finish = string.len(str)
		-- local finish = string.find(str, "</s>", start)
		local response = string.sub(str, start + 8, finish)
		-- return the extracted history text
		return response
	end
	print(response)
	local result=HttpService:JSONDecode(response.Body)
	--	print(result)
	local response=format_response(result[1].generated_text)
	local history=format_history(result[1].generated_text)
	--print(response)
	print(history, result, response)
	--print(response)
	return response,history--HttpService:JSONDecode(format_response(response.Body),
end


task.delay(1, function()
	query("What color is an apple?", " You are a companion who's avatar has an AI that is beside the player as their party member in a fantasy RPG game. ")
end)

Different Hugging Face models use different chat templates. Phi is pretty much the same, except you should include an end token at the end of the input:

```
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```

As for where the model is located, Mistral 7B v0.3 Instruct is at mistralai/Mistral-7B-Instruct-v0.3 · Hugging Face.

But I am using Mistral 7B, and the response table that I get in return says that the POST request didn't go through.


(I slightly edited the code in the meantime.)

local HttpService = game:GetService("HttpService")
local endpoint = "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3" -- Replace with your actual endpoint
local apiKey = "Bearer ffaasfdasfaf" -- Replace with your actual API key

local function format_response(str)
	-- find the last assistant response in the string
	local start = string.find(str, "<|assistant|>", nil, true)
	if not start then return str end  -- if "<|assistant|>" is not found, return the string unchanged

	-- Find the last occurrence by searching backwards from the end of the string
	local last_start = start
	while start do
		last_start = start
		start = string.find(str, "<|assistant|>", start + 1, true)
	end

	-- Calculate the end of the last "<|assistant|>" tag
	local finish = string.len(str)

	-- Extract the response after the last "<|assistant|>"
	local response = string.sub(str, last_start + 13, finish)

	-- Return the response
	return response
end


function query(input, system_message,history)
	local system="<|system|>\n "..system_message.. "</s>\n"

	--	{
	if history==nil then
		history=""
	else history="<|user|>\n "..history	
	end

	local npcdata={
		inputs = system..history.."</s>\n<|user|>\n "..input.."</s>\n<|assistant|>\n",	
		max_new_tokens = 256,
		do_sample = true,
		temperature = 1.2,
		top_k = 30,
		top_p = 0.90
	}

	local response = HttpService:RequestAsync({
		Url = endpoint,
		Method = "POST",
		Headers = {
			["Content-Type"] = "application/json",
			["Authorization"] = apiKey
		},
		Body = HttpService:JSONEncode(npcdata),
	})
	print(response)
	local result=HttpService:JSONDecode(response.Body)

	print(response)
	
	return response,history--HttpService:JSONDecode(format_response(response.Body),
end


task.delay(1, function()
	query("What color is an apple?", "" ," You are a companion who's avatar has an AI that is beside the player as their party member in a fantasy RPG game. ")
end)

Ooh, you are asking for the API endpoint? I already left that somewhere on this thread, and it's also in the source file.
I have a copy of that particular module in my workflow right now. The endpoint is

local endpoint = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3" -- Replace with your actual endpoint

Also, quickly delete your post, because you leaked your API key.
The Zephyr endpoint is

local endpoint= "https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta"

Zephyr has a much shorter context window, but it's a bit more charming than Mistral.

In my custom code I one-shot an answer from Zephyr, then complete the response with Mistral.
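That two-model chain could be sketched roughly like this. It assumes the `query(input, system_message, history)` function posted earlier in the thread, and the endpoint switching via an upvalue is a hypothetical simplification, not the real module's mechanism:

```lua
-- Sketch: draft an answer with Zephyr, then let Mistral complete it.
-- Assumes query() reads the `endpoint` variable as an upvalue in the same script.
local zephyrEndpoint = "https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta"
local mistralEndpoint = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3"

local function chainedQuery(input, systemMessage)
	endpoint = zephyrEndpoint
	local draft = query(input, systemMessage)
	endpoint = mistralEndpoint
	-- Feed Zephyr's draft back in so Mistral extends and finishes it.
	local completed = query(draft, systemMessage)
	return completed
end
```

Zephyr's shorter context makes it a cheap first pass, while Mistral's 32k window handles the longer continuation.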

This endpoint doesn't work either, sadly… I still get the "cannot POST" error.


For Mistral, you have to go to the model page I linked and agree to the terms of use for that model. Some groups, like Google, Facebook, and others, require that you accept the terms of use before you can use the model.
Looking at the documentation, endpoint URLs are constructed like this:

ENDPOINT = https://api-inference.huggingface.co/models/<MODEL_ID>

MODEL_ID is a placeholder for the model identifier shown above. The code worked when it was made, and so far everyone else has had no issues.
To clarify, the linked model is mistralai/Mistral-7B-Instruct-v0.3 · Hugging Face.

Be sure to agree to the terms of use to use that particular model.
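Filling in the template is just string concatenation; for example (illustrative, using the Mistral model id from above):

```lua
-- Build the inference endpoint URL from a model id.
local BASE_URL = "https://api-inference.huggingface.co/models/"
local modelId = "mistralai/Mistral-7B-Instruct-v0.3"
local endpoint = BASE_URL .. modelId
print(endpoint) -- https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3
```

Note that this is the api-inference subdomain, not the huggingface.co model page URL, which is the mix-up earlier in this thread.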

Ohh, OK, I must have used the wrong endpoint, I presume. Now I get way fewer errors; the only one left is an authentication error saying that my token is invalid.


Yeah, APIs are confusing at first, but it gets better. I left instructions on how to get a new token in the 2nd post on this thread. If that helps you, good luck with your project, and feel free to reach out.
If you are logged in to Hugging Face, you can use this link as a shortcut to the tokens page:
https://huggingface.co/login?next=%2Fsettings%2Ftokens
Make sure your token is read-only for this use case.

Hey! Have you ever made an AI fully in Roblox Studio?
I don't know if you have addressed that question before, sorry.

If so, what was it? And how did it work?