ChatGPT AI Implementation in Roblox (OpenAI ChatGPT look-alike)

UPDATE: This topic is outdated. There is a newer model made specifically for chatting; it uses the same API but requires a few changes to the code in this topic. Here is the article: OpenAI API
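Concretely, switching to the chat model mostly means changing the endpoint and sending a `messages` array instead of a `prompt`. A rough sketch of the newer payload (the model name and values here are examples on my part, not prescriptions; see the linked article for the details):

```lua
local data = {
	["model"] = "gpt-3.5-turbo", --example chat model; check OpenAI's docs for current names
	["messages"] = {
		{["role"] = "system", ["content"] = "AI is helpful, creative, clever, and very friendly."},
		{["role"] = "user", ["content"] = "Hello, how are you?"},
	},
	["max_tokens"] = 35,
}
--POST this to https://api.openai.com/v1/chat/completions instead of /v1/completions;
--the reply text is then in decoded["choices"][1]["message"]["content"] instead of ["text"]
```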

In this topic, I will discuss my personal experience implementing a chat bot AI in Roblox using OpenAI’s API, which is, in my opinion, the most cost-effective and reliable option currently available. Although this tutorial is not comprehensive, I can expand it if there is interest: for example, how to analyze token usage, tips and tricks, monetization strategies, and how to make your bot remember messages without using too many tokens. While most of these concepts can be figured out on your own, organizing them in one resource could be useful. Kindly show your support to help me decide whether to expand this tutorial.

This resource is not revolutionary; it only involves a basic HTTP request to interact with OpenAI’s API, plus some prompt crafting on my part. To follow this tutorial, you will need an OpenAI account and an OpenAI API key (you can google how to get one). Additionally, you should have a basic understanding of Luau and the Roblox API. The full code is included, and the tutorial is in the comments. If you have any questions, please ask in the replies.

task.wait(3)

local Players = game:GetService("Players")
local TextService = game:GetService("TextService")
local HttpService = game:GetService("HttpService")
local Chat = game:GetService("Chat")


--You get this key from the OpenAI website; look up how to get an OpenAI API key and paste yours here
--Your API key has to be in this format: Bearer[space]pasteapikeyhere, example: Bearer sk-31UFEUFHUIAEAHEIA
local headers = {["Authorization"] = "Bearer API-KEY"}

--The AI's name
local botName = "AI"

--Starting string
local startmessage = ""

--What the AI will see the player as; if you name this Bob, the AI will know the player's name is Bob
local PlayerName = "Player"

--AI "backstory"; makes the AI respond to your messages in a certain way
local AIbackstory = botName.." is helpful, creative, clever, and very friendly."

local ChatService = require(game:GetService("ServerScriptService"):WaitForChild("ChatServiceRunner").ChatService)
local systemMsg = ChatService:AddSpeaker(botName)
systemMsg:JoinChannel("All")

--Character limits for remembered messages (roughly 4 characters per token)
local characterLimitForRememberingAI = 100
local characterLimitForRememberingPlayer = 80

--Character limit for player messages; for reference, a max Roblox message is 200 characters, so keep this low
local messageCharLimit = 120

--Keeps track of how many tokens the current payload will consume
local currentTokens = 0

--init
local part = Instance.new("Part")
part.Name = "AI"
part.Parent = workspace

local function sendMessageBot(Msg)
	local server = ChatService:GetSpeaker(botName)
	Chat:Chat(workspace:FindFirstChild("AI", true), Msg, "Blue")
	server:SayMessage(Msg, "All")
end

--Connects each player's Chatted event for the demo; change this for your game (e.g. also connect Players.PlayerAdded so players who join later are covered)
for _, player in ipairs(Players:GetPlayers()) do
	player.Chatted:Connect(function(message, _recipient)
		--[[ The message should be filtered like this, but for the tutorial we're not doing it; also don't forget to filter the bot's response
		local success,result = pcall(function()
			return TextService:FilterStringAsync(message, player.UserId, Enum.TextFilterContext.PrivateChat)
		end)]]

		--Adds the backstory in front
		startmessage = AIbackstory.." "

		if #message > messageCharLimit then
			local botErrorText = "ERROR: Message character limit is "..tostring(messageCharLimit)
			sendMessageBot(botErrorText)
			return false
		end

		local playerMessage = "\n\n"..PlayerName..": "..message
		local combinedmessage = startmessage..playerMessage.." \n "..botName..":"
		--[[The combined message, which is what we're sending to the API, looks something like this:
		
			AI is helpful, creative, clever, and very friendly.\n\nPlayerName: Hello how are you?\nAI:

		
			Basically what we're doing here is crafting a prompt. This specific way of formatting your prompt
			will result in an accurate response most of the time. I also recommend checking for and adding a period
			if nothing ends the player's sentence, so the AI knows for sure not to auto-complete the player's message;
			I haven't done that here for the sake of brevity.
		]]
		print(combinedmessage)
		local url = "https://api.openai.com/v1/completions"

		local data = {
			--[[text-davinci-003 is the best model, but curie is cheaper and still satisfactory depending on the prompt.
			 See what best suits you ]]

			["model"] = "text-curie-001",

			--The string we're sending
			["prompt"] = combinedmessage,

			--Max number of tokens the response can use (roughly 4 characters per token); 35 tokens is pretty good for Roblox
			["max_tokens"] = 35,

			--What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
			["temperature"] = 0.9,

			--An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
			["top_p"] = 1,

			--Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
			["presence_penalty"] = 0.3,

			--Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
			["frequency_penalty"] = 0.5,

			--We also send the player's UserId (the field only accepts strings) so OpenAI's moderation can step in and block the user if they're abusing the service
			["user"] = tostring(player.UserId)}
		local ok, response = pcall(function()
			return HttpService:PostAsync(url, HttpService:JSONEncode(data), Enum.HttpContentType.ApplicationJson, false, headers)
		end)
		if not ok then
			warn("OpenAI request failed: "..tostring(response))
			return
		end
		local decoded = HttpService:JSONDecode(response)

		--You can use this to check whether your prompt is getting too big (too much history being stored) and to decide when to wipe it; anything above 200 is a lot
		currentTokens = decoded["usage"]["total_tokens"]

		--The response to the message we sent
		local textResponse = decoded["choices"][1]["text"]
		print(textResponse)

		--The message you display to the player, make sure to filter
		--return textResponse
		sendMessageBot(textResponse)
	end)
end
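As the prompt-crafting comment in the code above suggests, you can make sure the player's message ends with punctuation so the model doesn't try to auto-complete it. A minimal way to do that (a sketch on my part, not part of the original script), placed before `playerMessage` is built:

```lua
--Append a period if the message doesn't already end in ".", "!" or "?"
if not message:match("[%.!%?]$") then
	message = message.."."
end
```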

If you encounter any issues with the code, please let me know and I’ll update the tutorial accordingly. Although the code is relatively simple and I believe I have explained it clearly, small errors are possible, so please don’t hesitate to notify me.
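Since the tutorial skips filtering for brevity, here is a rough sketch of how both the player's message and the bot's reply could be passed through TextService before being displayed (the helper name is mine, not from the script above):

```lua
local TextService = game:GetService("TextService")

--Returns a filtered version of text, or nil if filtering fails
local function filterForBroadcast(text, fromUserId)
	local ok, result = pcall(function()
		local filterResult = TextService:FilterStringAsync(text, fromUserId)
		return filterResult:GetNonChatStringForBroadcastAsync()
	end)
	if ok then
		return result
	end
	return nil --safest to show nothing if filtering errored
end
```

Both the player's message (before it goes into the prompt) and the bot's textResponse (before sendMessageBot) should go through something like this, using the player's UserId in both cases, since the bot's reply is derived from the player's input.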

Hello, seems like I'm getting some problems with the script.

I don't get why it does this.

I'm guessing we replace the text with the key correctly?

Yes, this is my expired key as an example:
"Bearer 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"
You get the key from the OpenAI website.

Does the “Bearer” need to be included?

Yes, the actual key is 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, but "Bearer" needs to be in front of it.

I have a question:
Is it possible to make it act exactly like ChatGPT's answers, and not use a custom bot name?
Or is it possible to reply with Lua code?

Because I really don't know how to use OpenAI's sandbox.

These are the playground settings you need for a conversational AI, which the script is already configured for; basically, use this for easier testing.

EDIT: Where it says "Human: ", that is the field where you write; then you press generate and it’ll give you a response.

I keep getting this error: HTTP 429 (Too Many Requests). Why's that?

The error is literal. You can only send a limited number of requests to OpenAI within a certain time.

You will eventually be able to send more requests, you just have to wait a bit.
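If you keep hitting 429s, one option is to retry with a growing delay instead of failing on the first attempt. A rough sketch (the helper name is mine, assuming the same HttpService setup as the tutorial):

```lua
local HttpService = game:GetService("HttpService")

--Tries PostAsync up to maxAttempts times, doubling the wait between failures
local function postWithRetry(url, body, headers, maxAttempts)
	local delay = 2
	for attempt = 1, maxAttempts do
		local ok, response = pcall(function()
			return HttpService:PostAsync(url, body, Enum.HttpContentType.ApplicationJson, false, headers)
		end)
		if ok then
			return response
		end
		warn("Request failed (attempt "..attempt.."): "..tostring(response))
		task.wait(delay)
		delay = delay * 2 --exponential backoff between retries
	end
	return nil
end
```

Note that if the key's quota itself is exhausted, retrying won't help; backoff only smooths out short-term rate limits.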

How long do you think it’s gonna take?

Did you just leak your key when the site says not to? LOL

Anyway, I'm getting HTTP 429 (Too Many Requests) from my key; I've only made 2 requests and none of them were successful.

Also, your issue is this, since you apparently spammed your API key:
https://help.openai.com/en/articles/6891829-error-code-429-rate-limit-reached-for-requests

Only 5 requests so far, please read next time.

I’m most likely missing something very obvious, but I get this warning from the script:

Infinite yield possible on 'ServerScriptService:WaitForChild("ChatServiceRunner")'

Any way to fix this?

Does this still work? Because I've been trying to make this.

It does still work from what I remember. If you copy the code, replace the API key, and test the game with HttpService turned on, you can chat in the game and it'll respond to you.

Sorry to bump this, still experiencing this problem. Any fix?

Edit: This is designed for the legacy chat service.

Are you running this code locally?

I fixed it! I had to change the TextChatService version to legacy. However, the HTTP 429 (Too Many Requests) still stops me.

I have not yet had a single successful response from the AI because it says I’m generating “Too Many Requests” despite only sending one.