Wait, are you using a model you trained, or are you sending a request to the API and it sends the information back?
What I'm attempting to do is just have a text box that sends a message to the model, and it sends back text and prints it.
More like Apple's recent release: a local model powers short-form queries while an API handles long-form queries, receiving a description of the surroundings, the character's appearance and equipment, the response from the local chatbot, and their identity.
The neurons consist of weight matrices (Projected Accuracy, Repetition, Emotion) and a loss function. Accuracy is computed from a layer that makes inferences from a vector database while recognizing synonyms, antonyms, reflections, and nouns. It averages the sum of the whole conversation context and connects related outputs through a two-layered, non-repeating network of connections that learns the most relevant database in real time. Outputs are averaged so only the best outputs are shown first. Then the outputs are transformed via a hashed database of synonyms and phrases according to the player's chosen wording. For an entry like "You know what I think is awesome?", associating "awesome" with "great" turns that entry into
"You know what I assume is great?" Finally, emojis are inserted via a dataset consisting of emojis and their related words.
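As a rough illustration of that substitution-and-emoji step, here is a pure-Lua sketch. The table names (`synonymMap`, `emojiMap`) and the exact matching rules are made up for illustration, not the actual module's internals:

```lua
-- Hypothetical sketch of the synonym-swap and emoji-insertion steps.
local synonymMap = { awesome = "great", think = "assume" }
local emojiMap = { great = "👍", love = "❤️" }

local function restyle(sentence)
	-- swap each word for its mapped synonym, if one exists
	local out = sentence:gsub("%a+", function(word)
		return synonymMap[word:lower()] or word
	end)
	-- append an emoji for the first word that has an association
	for word in out:gmatch("%a+") do
		local emoji = emojiMap[word:lower()]
		if emoji then
			return out .. " " .. emoji
		end
	end
	return out
end

print(restyle("You know what I think is awesome?"))
-- "You know what I assume is great? 👍"
```

The real library reportedly uses a hashed database rather than plain tables, but the transformation it performs is the same shape as this.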
Context databases and functions that process text, such as the Eliza algorithm, are used to recognize search queries and playlist requests.
You can do that with the chatbox.
For a GUI to type in, make a text box. You could ask Bing or ChatGPT: "In the context of Luau in ROBLOX, write a script that generates a text box that you can send messages with."
The text box is the part I understand; I can do that. I just need to know how to get the chatbot stuff working. I don't know what I need for it to work or how to use your examples.
To use a chatbot API, use ChatGPT or a model from Hugging Face: go to the model page, click Deploy, and get your API key. Chatbot & LLM Artificial Intelligence Model API Code Documentation FREE (Open Source) and Other Useful APIs
To insert emojis into a string, you can use the small emoji model: SmallEmoji V1.2 - Insert Emoji to Sentence Algorithm [Open Source] Update: Sigmoid - #16 by Magus_ArtStudios
This next one is a bit more complicated: to use the animations, you have to import them and republish them.
This module returns an animation that can be played. I run it on a per-sentence basis to make the AI more expressive.
Require the Awareness library and it will set up your workspace with the categories it observes. If you want to use those, put your models in those directories, or edit the directory names in the table-handler location labeled like workspace.NPCS. Without those it will still make some observations. I use this to inject context into the system message during inference.
Finally, if you decide to create your own local chatbot, you can try your hand at the chat module. Lua Trainable Ai Chatbot Library, Emoji Generate ,Emotion Classifier Context Database,Math/Geometry Data Generator Geometry+Worded Math Solver RAG
It looks complicated, but the main function to use for a chatbot is CompleteQuery(str, filter: bool, complete: bool), where filter filters out non-important words and complete uses synonyms, antonyms, and nouns to make assumptions.
CompleteQuery is designed to be used on a table of strings: given an input, it returns the most conversational response. An example to use with it is Awareness.GetSurroundingObjects(), which returns a table of observations that can be queried with the CompleteQuery function.
This can be used for RAG to provide additional context to an LLM API. I use it for that, as an instant response to render while an API is processing output, and to handle short-form queries.
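Based on the description above, wiring the two libraries together might look something like this. The module paths are assumptions, and the exact CompleteQuery signature (in particular how the table of observations is supplied) should be checked against the linked post:

```lua
-- Sketch only: paths and the way observations are passed are assumptions.
local Awareness = require(script.Parent.Awareness)
local ChatModule = require(script.Parent.ChatModule)

local function answerLocally(playerInput)
	-- gather observation strings about the NPC's surroundings
	local observations = Awareness.GetSurroundingObjects()
	-- filter = true strips non-important words; complete = true lets the
	-- module use synonyms/antonyms/nouns to make assumptions
	local reply = ChatModule.CompleteQuery(playerInput, true, true)
	return reply, observations
end
```

The returned observations can then be appended to the system message for the remote LLM (the RAG use described above), while the local reply renders instantly.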
So in short,
- SmallEmoji takes (string, temperature) and returns the modified string. It requires no setup; just require and use the module.
- Intelligent Emotes from text: if you want to use it, you have to import some of the animations that are not made by Roblox and reupload them. Tools to do this are provided in that post.
- API Code Documentation provides examples of different AI APIs from Hugging Face. Hugging Face requires a Bearer key, which you can get for free by clicking Deploy → Inference Endpoint (Serverless); the Bearer key will be in the code provided.
- Awareness sets itself up and can be modified to suit your categories; any that do not exist have placeholders made.
- ChatModule, if you would like to use CompleteQuery to turn data into a chatbot. You can use it without weights, which makes it run a bit faster, or you can download the weights I use or create your own by using cm.PredictRun() and training it on a dataset. That will make the model more accurate at identifying the most important words in a sequence for use in retrieval-augmented generation. This is done by measuring and classifying synonyms, antonyms, nouns, and reflections, accumulating the sum of the inverse sigmoid of the text frequency, activating the result with math.log(), and retrieving the highest value.
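One possible reading of that scoring step, sketched in pure Lua. This is my interpretation of the description, not the library's actual code; the frequency table, the handling of negative sums, and the default value for unseen words are all assumptions:

```lua
-- Hypothetical sketch: sum the inverse sigmoid (logit) of each word's
-- frequency, then activate the sum with math.log and keep the highest score.
local function logit(p)
	return math.log(p / (1 - p))
end

local function scoreWords(words, frequency)
	local sum = 0
	for _, w in ipairs(words) do
		-- 0.5 is a neutral default: logit(0.5) == 0, so unseen words add nothing
		local f = frequency[w] or 0.5
		sum = sum + logit(f)
	end
	-- abs + 1 keeps math.log defined; the real library may differ here
	return math.log(math.abs(sum) + 1)
end
```

Rare words (frequency below 0.5) get negative logits and common words positive ones, so the accumulated sum emphasizes how distinctive a sequence's wording is before the highest-scoring entry is retrieved.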
I use all of these in my project.
If you need any coding help setting those up, you can definitely use Bing or ChatGPT. Or send me some screenshots or script outputs and I can provide technical assistance or address any bugs.
Another resource is this code of the Eliza chatbot that I ported to Luau.
Eliza Chatbot Ported to Luau [Open-Source] - Resources / Community Resources - Developer Forum | Roblox
Ok, all I want to do is have the player say something and the AI respond. I don't really need the Awareness library or anything else. I tried using ChatGPT's own API, but it said I had to pay when I set it up.
If that’s the case definitely look into this.
It demonstrates how to use Hugging Face APIs (free access with a daily limit). There are lots of open-source models to check out. I am currently using the local model I described, Zephyr 7B, and ChatGPT-4 together.
Which one do you recommend for what I want to achieve?
I use Zephyr, but I would recommend that or Mistral. mistralai/Mistral-7B-Instruct-v0.3 · Hugging Face
Mistral is more recent and has a larger context window. It is also a tool user that can make function calls, so I will likely be changing over to Mistral, since I have a bunch of tools already made for GPT-4 that Zephyr cannot use.
Ok, how do I incorporate this into Roblox?
I'm having some issues figuring it out right now due to the AI giving me a hard time, but the basic setup is this:
local module = {}
local HttpService = game:GetService("HttpService")
local endpoint = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3" -- Replace with your actual endpoint
local BearerKey = "Get your key from huggingface"
local apiKey = "Bearer "..BearerKey -- Replace with your actual API key (note the capitalization must match the variable above, or you get a nil concatenation error)
function module.query(input, system_message)
-- the messages table is unused here: the raw inference endpoint takes a plain "inputs" string
--local messages = {
--	{ role = "system", content = system_message },
--	{ role = "user", content = input }
--}
local npcdata = {
inputs =input,
max_new_tokens = 512,
do_sample = true,
temperature = 0.7,
top_k = 50,
top_p = 0.95
}
local response = HttpService:RequestAsync({
Url = endpoint,
Method = "POST",
Headers = {
["Content-Type"] = "application/json",
["Authorization"] = apiKey
},
Body = HttpService:JSONEncode(npcdata),
})
print(response)
return HttpService:JSONDecode(response.Body)
end
return module
Is "input" where everything that is sent to the API endpoint goes? Also, what are temperature, top_k, and top_p? Do I need those for what I'm trying to achieve?
One moment. So, I have figured it out and have potentially set it up so it can use tools, a system message, and input:
local module = {}
local HttpService = game:GetService("HttpService")
local endpoint = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3" -- Replace with your actual endpoint
local apiKey = "" -- Replace with your actual API key
local function format_response(str)
-- find the assistant response in the string (plain find, no pattern matching)
local start = string.find(str, "<|assistant|>", 1, true)
if not start then
return str -- tag not found; return the raw body
end
local finish = string.len(str)
-- local finish = string.find(str, "</s>", start)
-- #"<|assistant|>" == 13, so skip past the tag itself
local response = string.sub(str, start + 13, finish)
return response
end
local tools =[[{
["type"] = "function",
["function"] = {
name = "gotoLocation",
description = "Move to a location specified by coordinates, call it when instructed to move somewhere",
parameters = {
type = "object",
properties = {
x = {
type = "integer",
description = "The X coordinate"
},
y = {
type = "integer",
description = "The Y coordinate"
},
z = {
type = "integer",
description = "The Z coordinate"
}
},
required = {"x", "y", "z"}
}
}
},
{
["type"] = "function",
["function"] = {
name = "setFollow",
description = "Sets the 'follow' state to true or false as instructed",
parameters = {
type = "object",
properties = {
follow = {
type = "boolean",
description = "The follow state to set (true or false)"
}
},
required = {"follow"}
}
}
}
]]
function module.query(input, system_message)
-- the messages table is unused here: the prompt is built by hand in "inputs" below
--local messages = {
--	{ role = "system", content = system_message },
--	{ role = "user", content = input }
--}
local npcdata = {
-- {
inputs = "<|system|>\n "..system_message.. "</s>\n<|tools|>\n "..tools.. "</s>\n<|user|>\n "..input.."</s>\n<|assistant|>",
max_new_tokens = 512,
do_sample = true,
temperature = 0.7,
top_k = 50,
top_p = 0.95
}
local response = HttpService:RequestAsync({
Url = endpoint,
Method = "POST",
Headers = {
["Content-Type"] = "application/json",
["Authorization"] = apiKey
},
Body = HttpService:JSONEncode(npcdata),
})
print(format_response(response.Body))
return HttpService:JSONDecode(response.Body)
end
return module
I had some code lying around from working with a Zephyr chatbot, and it appears to work great!
"Yes, I have two tools available to me. The first one is 'gotoLocation'; it takes three arguments 'x', 'y', and 'z', which represent the coordinates of a specific location. This tool allows me to move to a specified location. The second tool is 'setFollow'; it takes a single argument 'follow' that can be set to either true or false. This tool allows me to enable or disable following another entity."
How do I edit this for what I'm trying to do, with just chatting and that's it?
temperature controls randomness (higher is more random). top_k and top_p both prune which tokens are considered: lower values are less random, higher values are more random (top_p maxes out at 1). But you should really look those up.
- Top_p (Nucleus Sampling): It selects the most likely tokens from a probability distribution, considering the cumulative probability until it reaches a predefined threshold “p”. This limits the number of choices and helps avoid overly diverse or nonsensical outputs.
- Top_k (Top-k Sampling): It restricts the selection of tokens to the “k” most likely options, based on their probabilities. This prevents the model from considering tokens with very low probabilities, making the output more focused and coherent.
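A pure-Lua sketch of how those two knobs prune the candidate list before a token is sampled. Real inference servers apply this over the model's whole vocabulary; the toy candidate list here is made up:

```lua
-- probs must be an array of { token = ..., p = ... }, sorted by p descending.
local function pruneCandidates(probs, top_k, top_p)
	local kept, cumulative = {}, 0
	for i, entry in ipairs(probs) do
		-- top_k caps how many tokens survive; top_p stops once the kept
		-- probability mass reaches the threshold
		if i > top_k or cumulative >= top_p then
			break
		end
		table.insert(kept, entry)
		cumulative = cumulative + entry.p
	end
	return kept
end

local candidates = {
	{ token = "hello", p = 0.5 },
	{ token = "hi",    p = 0.3 },
	{ token = "hey",   p = 0.15 },
	{ token = "yo",    p = 0.05 },
}
-- with top_k = 3 and top_p = 0.9, "yo" is always dropped
print(#pruneCandidates(candidates, 3, 0.9)) -- 3
```

The next token is then sampled only from the kept list (after renormalizing), which is why raising either value makes output more varied and lowering them makes it more deterministic.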
Like, what can I delete without the thing breaking?
You don't need the tools. system_message is very important, but you can just set that to a static value. You can change the <|user|> tag to the player's name, like <|ClientSide|>, and sometimes it's a good idea to start the assistant prompt with the name of the character:
Wait, so what can I do to make it fit my needs?
local module = {}
local HttpService = game:GetService("HttpService")
local endpoint = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3" -- Replace with your actual endpoint
local BearerKey = "Get your key from huggingface"
local apiKey = "Bearer "..BearerKey -- Replace with your actual API key (capitalization must match the variable above)
local function format_response(str)
-- find the assistant response in the string (plain find, no pattern matching)
local start = string.find(str, "<|assistant|>", 1, true)
if not start then
return str -- tag not found; return the raw body
end
local finish = string.len(str)
-- local finish = string.find(str, "</s>", start)
-- #"<|assistant|>" == 13, so skip past the tag itself
local response = string.sub(str, start + 13, finish)
return response
end
function module.query(input, system_message)
local npcdata = {
inputs = "<|system|>\n "..system_message.. "</s>\n<|user|>\n "..input.."</s>\n<|assistant|>",
max_new_tokens = 512,
do_sample = true,
temperature = 0.7, -- higher is more random
top_k = 50, -- only sample from the 50 most likely tokens; higher is more random
top_p = 0.95 -- cumulative probability cutoff; higher is more random
}
local response = HttpService:RequestAsync({
Url = endpoint,
Method = "POST",
Headers = {
["Content-Type"] = "application/json",
["Authorization"] = apiKey
},
Body = HttpService:JSONEncode(npcdata),
})
print(format_response(response.Body))
return HttpService:JSONDecode(response.Body)
end
return module
Replace the API key and the inference endpoint shown initially, and insert that code.
Place it in a ModuleScript, require(the module), and call it with Chatbotresponse = module.query(input, system_message).
This call can only be done on the server, so you can connect to it via:
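Concretely, that require-and-call step might look like this in a server Script. The ModuleScript name, its parent, and the system message are all assumptions, and the shape of the decoded return value depends on the endpoint (many Hugging Face text-generation endpoints return an array like { { generated_text = "..." } }):

```lua
-- Server Script; assumes the chatbot ModuleScript above is a child of this
-- Script and named "ChatbotModule" (both names are placeholders).
local Chatbot = require(script.ChatbotModule)

local system_message = "You are a friendly shopkeeper NPC. Keep replies short."

-- returns the decoded JSON from the inference endpoint
local result = Chatbot.query("Hello, who are you?", system_message)
```

Remember that HttpService requests only work on the server, and "Allow HTTP Requests" must be enabled in Game Settings.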
Player.Chatted:Connect(function(input)
--insert text filtering/moderation here
--insert additional chat logic here to customize
local Chatbotresponse = module.query(input, system_message)
local TextChat = game:GetService("TextChatService")
local chatgroup = TextChat.TextChannels.RBXSystem
-- pcall(function() chatgroup:DisplaySystemMessage("<font color=\""..rgbcolor.."\">"..npc.Humanoid.DisplayName..": </font> <font color=\"rgb(255,255,255)\">"..str.."</font> ") end)
if Chatbotresponse then
chatgroup:DisplaySystemMessage("<font color=\"rgb(255,255,255)\">"..Chatbotresponse.."</font>")
end
end)
Boom, there you have it: a very easy and simple system to create a chatbot using an LLM.
The chat message is shown in the player's text box, and messages are sent via chat.
To make it only show up in the local player's chat box, you could use a RemoteEvent and FireClient(player, Chatbotresponse),
and run:
ClientInvoker.OnClientEvent:Connect(function(Chatbotresponse)
local TextChat = game:GetService("TextChatService")
local chatgroup = TextChat.TextChannels.RBXSystem
if Chatbotresponse then
chatgroup:DisplaySystemMessage("<font color=\"rgb(255,255,255)\">"..Chatbotresponse.."</font>")
end
end)