Roblox Development Assistant GPT

Hey everyone,

I’m really excited to share something I’ve been working on: a GPT customized for Roblox development. It’s a specialized version of ChatGPT built specifically for Roblox developers. This isn’t just ChatGPT with some custom instructions; this GPT has access to the entirety of the Roblox documentation.

Here’s what makes this tool better than regular ChatGPT and Roblox’s own assistant:

  1. GPT-4o Powered: Built on GPT-4o, which is more advanced and handles complex queries better than Roblox’s built-in AI assistant.
  2. Loaded with Roblox Docs: What sets this apart from regular ChatGPT is its direct integration with the entire Roblox documentation. This means it can pull up accurate, relevant info straight from the source, reducing the chances of incorrect or misleading code.
  3. Tailored for Roblox: This tool is purpose-built for Roblox development. Whether it’s scripting, game design, or understanding platform-specific features, it’s got you covered.

I am currently working on some benchmarks in order to get an idea of how much better this GPT is compared to other AI tools, but from my preliminary testing it appears to be significantly more capable.

Open to Feedback: I’m really looking forward to your thoughts and feedback on this. It’s the first version, and I’m sure there’s room for improvement. I’ve got plans to add more features and make it even better, so your input would be invaluable.

What’s next? I’m working on an AI assistant that you can run locally on your own computer (a small, 7B-parameter model). This way, you won’t need a subscription, and your code remains private and secure. I’m still fine-tuning it on a Roblox code dataset, so stay tuned for updates on that front. I’m hoping to have it available within the next couple of days.

Can’t wait to see how this tool helps with your Roblox game development. I’m happy to answer any questions and hear your feedback. Thank you!

3 Likes

Have you considered using other APIs like those on Hugging Face?
TinyLlama/TinyLlama-1.1B-Chat-v1.0 · Hugging Face
I tested this model before, and it can write Lua code. It’s the smallest model of its kind, and it just came out two weeks ago.

Some things to consider about this small but powerful model:

Since it is a small model, it is run on slower hardware and thus has a slower inference time.

3 Likes

I have done some testing with TinyLlama and it certainly is impressive for being just 1B parameters.

The current model I am fine-tuning with the Luau dataset is mistralai/Mistral-7B-Instruct-v0.2 · Hugging Face, which is a bit larger but can still be run on consumer hardware as long as you have a GPU with some VRAM. I will certainly look into fine-tuning TinyLlama afterwards; if it works well, that could be even more powerful given how little compute it requires to run.

4 Likes

I don’t know if it’s just the hardware running TinyLlama’s inference, but it is much, much slower than Zephyr or Mistral v0.2; both are a lot faster for inference. This is likely due to it being a smaller model hosted on weaker hardware.

3 Likes

I have both Mistral and TinyLlama running locally on my Mac: Mistral 7B gets around 17 tokens/s while TinyLlama is closer to 64 tokens/s on my machine. I’m not exactly sure what safety mechanisms you are referring to (if you are running it via an API and not locally, then maybe the service providing the API is adding a safety layer on top). Also, I don’t think the TinyLlama model was created by Meta; it’s just based on the same architecture and tokenizer as Llama 2, which is why they gave it the Llama name.

5 Likes

Thanks for clarifying! That is good and very interesting to hear. I always get excited when we have a new and more powerful small model like this! I was testing it alongside Zephyr 7B, which has a much shorter context window. Since Zephyr uses the same prompt format as this model, the two can technically be run in sequence: TinyLlama can continue a Zephyr-generated response, expanding the effective context window to the 2048 tokens TinyLlama is capable of (Zephyr, I think, is maxed out at 1024 or 512 tokens).

When I tested it with my chatbot, it really did quite well in completing the Zephyr response!
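Here’s a minimal sketch of that handoff, under the assumption that both models are called through the Hugging Face Inference API (the `bearer` token and prompt are placeholders, and response parsing is simplified):

-- A minimal sketch of the Zephyr -> TinyLlama handoff (assumptions: Hugging Face Inference
-- API endpoints for both models, and a placeholder 'bearer' token; response parsing is
-- simplified, since generated_text may echo the prompt depending on the endpoint).
local HttpService = game:GetService("HttpService")
local bearer = "Bearer hf_..." -- placeholder: your Hugging Face token

-- Generic helper: POST a prompt to a hosted model and return its generated text
local function generate(modelURL, prompt)
    local payload = HttpService:JSONEncode({inputs = prompt})
    local ok, response = pcall(HttpService.PostAsync, HttpService, modelURL, payload,
        Enum.HttpContentType.ApplicationJson, false, {Authorization = bearer})
    if not ok then
        return nil
    end
    local okDecode, decoded = pcall(HttpService.JSONDecode, HttpService, response)
    if not okDecode or typeof(decoded) ~= "table" or not decoded[1] then
        return nil
    end
    return decoded[1].generated_text
end

-- Both models share the <|system|>/<|user|>/<|assistant|> chat format, so TinyLlama can
-- pick up where Zephyr left off by continuing the same prompt string.
local prompt = "<|system|>\nYou are an NPC storyteller.</s>\n<|user|>\nTell me a story.</s>\n<|assistant|>"
local zephyrText = generate("https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta", prompt)
if zephyrText then
    local continued = generate("https://api-inference.huggingface.co/models/TinyLlama/TinyLlama-1.1B-Chat-v1.0", prompt .. zephyrText)
    print(zephyrText .. (continued or ""))
end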

Example of TinyLlama’s coding abilities:

<|system|>
You are a ROBLOX studio Chatbot Specializing in Luau code.
<|user|>
Write me a function to calculate the first 10 digits of the fibonacci sequence in Lua and print it out to the CL.
<|assistant|>
Here’s a Lua function that calculates the first 10 digits of the Fibonacci sequence and prints them to the console:

function fibonacci(n)
  if n <= 1 then
    return n
  else
    return fibonacci(n-1) + fibonacci(n-2)
  end
end

-- Example usage
print(fibonacci(10)) -- Output: 55
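
One thing worth noting about the output: the prompt asked to print the first 10 numbers of the sequence, but the generated function returns only the 10th Fibonacci number (fibonacci(10) is 55). A version that actually matches the prompt would look like this:

-- Prints the first 10 Fibonacci numbers to the console
local function fibonacci(n)
    if n <= 1 then
        return n
    end
    return fibonacci(n - 1) + fibonacci(n - 2)
end

for i = 1, 10 do
    print(fibonacci(i))
end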

Example code to run TinyLlama through the Hugging Face Inference API:

local HttpService = game:GetService("HttpService")

-- 'bearer' is assumed to be defined elsewhere as your Hugging Face token, e.g. "Bearer hf_..."
function cm.TinyLlama(system, query)
    -- Define the API URL and the authorization header
    local API_URL = "https://api-inference.huggingface.co/models/TinyLlama/TinyLlama-1.1B-Chat-v1.0"
    local headers = {Authorization = bearer}

    -- Query the model and return the decoded response table (or nil plus an error)
    local function queryModel(input, temperature)
        -- Create a payload table with the input and the temperature, then encode it as JSON
        local payload = {inputs = input, temperature = temperature}
        local payloadJSON = HttpService:JSONEncode(payload)
        -- Send a POST request to the API URL; use pcall to catch any errors
        local success, response = pcall(HttpService.PostAsync, HttpService, API_URL, payloadJSON, Enum.HttpContentType.ApplicationJson, false, headers)
        if success then
            -- Decode the response into a table, again using pcall to catch any errors
            local decoded, responseTable = pcall(HttpService.JSONDecode, HttpService, response)
            if decoded then
                return responseTable
            else
                -- Return nil and the error message
                return nil, responseTable
            end
        else
            -- Return nil and the error message
            return nil, response
        end
    end

    -- Extract the assistant's reply from the generated text
    local function formatResponse(str)
        -- Find the assistant tag; the reply starts right after it
        local start = string.find(str, "<|assistant|>")
        if not start then
            return str -- guard: no assistant tag found, return the text unchanged
        end
        -- #"<|assistant|>" == 13, so the reply begins at start + 13
        return string.sub(str, start + 13, string.len(str))
    end

    -- Build the prompt in TinyLlama's chat format and query the model
    local input = "<|system|>\n" .. system .. "</s>\n<|user|>\n" .. query .. "</s>\n<|assistant|>"
    local temperature = 2
    local output = queryModel(input, temperature)
    if output then
        output = formatResponse(output[1].generated_text)
        print(output)
        return output
    else
        return nil
    end
end
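
For anyone trying this out, here’s a hypothetical usage sketch; the module path is a placeholder, `bearer` must hold your Hugging Face token, and HttpService requests must be enabled in Game Settings:

-- Hypothetical usage from a server Script; 'cm' is the module table containing TinyLlama.
-- The require path below is a placeholder: adjust it to wherever you put the module.
local cm = require(game:GetService("ServerScriptService").ChatModule)

local reply = cm.TinyLlama(
    "You are a ROBLOX studio Chatbot Specializing in Luau code.",
    "Write me a function that reverses a string in Luau."
)
if reply then
    print(reply)
else
    warn("TinyLlama request failed")
end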

I also wanted to share this example of engineering the system message to cultivate self-awareness, in addition to expanding the context window by using this model in the text-generation code below.

function cm.ZephyrStory(person, personal, Playername, quer)
    -- Define the API URL and the authorization header
    -- ('bearer' is assumed to be your Hugging Face token, defined elsewhere, e.g. "Bearer hf_...")
    local API_URL = "https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta"
    local headers = {Authorization = bearer}

    local HttpService = game:GetService("HttpService")

    -- Query the model and return the raw JSON response string (or nil plus an error)
    local function queryModel(input, temperature)
        -- Create a payload table with the input, temperature, and sampling settings
        local payload = {inputs = input, temperature = temperature, max_new_tokens = 1000, min_tokens = 250, top_k = 100, top_p = 0.11}
        -- Encode the payload table into a JSON string
        local payloadJSON = HttpService:JSONEncode(payload)
        -- Send a POST request to the API URL; use pcall to catch any errors
        local success, response = pcall(HttpService.PostAsync, HttpService, API_URL, payloadJSON, Enum.HttpContentType.ApplicationJson, false, headers)
        if success then
            -- Make sure the response decodes as valid JSON before returning it
            local decoded, responseTable = pcall(HttpService.JSONDecode, HttpService, response)
            if decoded then
                return response -- return the raw JSON string; callers decode it themselves
            else
                return nil, response
            end
        else
            -- Return nil and the error message
            return nil, response
        end
    end

    -- Unpack the personality table
    local personality = personal[1]
    local awarobserve = personal[2]
    local identity = personal[3]
    local timeod = personal[4]
    local insight = personal[5]
    local previousconversation = personal[6]
    local memory = personal[7]

    if previousconversation == nil then
        previousconversation = ""
    else
        -- Rebuild the cached conversation into one string to reduce the size of the system message
        local function RebuildConversation(tbl, response)
            response = response or "" -- guard: the call below only passes one argument
            local sum = ""
            for i, v in tbl do
                for t, o in v do
                    if t ~= "narrator" then
                        sum = sum .. " \n\n " .. t .. ": " .. o
                    else
                        sum = sum .. " \n\n " .. o
                    end
                end
            end
            sum = sum .. " \n\n " .. Playername .. ": " .. response
            return sum
        end
        previousconversation = RebuildConversation(personal[6][1])
    end

    -- Build the initial prompt in Zephyr's chat format
    local input = "<|system|>\n " .. identity .. timeod .. insight .. awarobserve .. memory .. ". Parse dialogues with " .. person .. ": and " .. Playername .. ": .</s>\n<|" .. Playername .. "|>\n " .. quer .. " </s>\n<|assistant|>" .. previousconversation
    local temperature = 2
    local output, Error = queryModel(input, temperature)
    print(output)

    local iterations = 0

    -- Rebuild the prompt from a previous response, shrinking the system message as iterations increase
    local function RebuildResponse(response)
        local tbl, speakers, str = cm.LocalZephyrDecode(response) -- decoder defined elsewhere in the module
        local sum = ""
        for i, v in tbl do
            for t, o in v do
                if t ~= "narrator" then
                    sum = sum .. " \n\n " .. t .. ": " .. o
                else
                    sum = sum .. " \n\n " .. o
                end
            end
        end

        local input = "<|system|>\n" .. identity .. memory .. awarobserve .. " Parse dialogues with " .. person .. ": and " .. Playername .. ": . </s>\n<|" .. Playername .. "|>\n " .. quer .. "</s><|assistant|>" .. sum
        if iterations == 2 then
            input = "<|system|>\n" .. memory .. awarobserve .. identity .. " Parse dialogues with " .. person .. ": and " .. Playername .. ": .</s>\n<|" .. Playername .. "|>\n " .. quer .. "</s><|assistant|>" .. sum
        elseif iterations == 3 then
            input = "<|system|>\n" .. identity .. "\n Parse dialogues with " .. person .. ": and " .. Playername .. ": .</s>\n<|" .. Playername .. "|>\n " .. quer .. "</s>\n<|assistant|>" .. sum
        end
        return input
    end

    if not Error then
        -- Feed the response back into Zephyr until the output stops changing (or after 3 iterations)
        local function iterateoutput(output)
            local checkedinput
            local previnput
            repeat
                iterations += 1
                previnput = HttpService:JSONDecode(output)[1].generated_text
                local loadoutput = queryModel(RebuildResponse(output))
                if loadoutput ~= nil then
                    checkedinput = HttpService:JSONDecode(loadoutput)[1].generated_text
                    if checkedinput then -- only update output if valid
                        output = loadoutput
                        print(output)
                    else
                        break
                    end
                else
                    break
                end
            until checkedinput == previnput or iterations >= 3
            return output
        end
        output = iterateoutput(output)
    end

    -- Flatten a decoded response back into dialogue text
    local function DecodeResponse(response)
        local tbl, speakers, str = cm.LocalZephyrDecode(response)
        local sum = ""
        for i, v in tbl do
            for t, o in v do
                if t ~= "narrator" then
                    sum = sum .. "\n\n" .. t .. ": " .. o
                else
                    sum = sum .. "\n\n" .. o
                end
            end
        end
        return sum
    end

    -- Continue the response with TinyLlama to take advantage of its longer context window
    -- (cm.TinyLlama here is assumed to be a variant of the helper shown earlier that accepts
    -- a third context argument and returns the raw JSON response)
    local function iterateoutputLLama(output)
        local checkedinput
        local previnput
        repeat
            iterations += 1
            previnput = HttpService:JSONDecode(output)[1].generated_text
            local loadoutput = cm.TinyLlama("" .. identity .. timeod .. insight .. awarobserve .. memory .. ". Parse dialogues with " .. person .. ": and " .. Playername .. ": .", quer, DecodeResponse(output))
            if loadoutput ~= nil then
                checkedinput = HttpService:JSONDecode(loadoutput)[1].generated_text
                if checkedinput then -- only update output if valid
                    output = loadoutput
                    print(output)
                else
                    break
                end
            else
                break
            end
        until checkedinput == previnput or iterations >= 3
        return output
    end

    local output2 = cm.TinyLlama("" .. identity .. timeod .. insight .. awarobserve .. memory .. ". Parse dialogues with " .. person .. ": and " .. Playername .. ": .", quer, DecodeResponse(output)) -- receive generated_text
    if output2 then
        iterations = 0
        local output3 = iterateoutputLLama(output2)
        if output3 ~= nil then
            output = output3
        else
            output = output2
        end
    end

    return output
end

This code takes advantage of both models, utilizing Zephyr’s strong ability to generate high-quality output and TinyLlama’s 2048-token context window to create longer responses.
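
For reference, a hypothetical call might look like the following; the table indices mirror how cm.ZephyrStory unpacks the `personal` table above, and every string is a placeholder:

-- Hypothetical usage; indices follow the unpacking order in cm.ZephyrStory.
local personal = {
    [1] = "A cheerful merchant. ",                      -- personality
    [2] = "You can see the town square around you. ",   -- awareness/observations
    [3] = "You are Marn, a shopkeeper NPC. ",           -- identity
    [4] = "It is currently evening. ",                  -- time of day
    [5] = "You enjoy bartering with travelers. ",       -- insight
    [6] = nil,                                          -- previous conversation (none yet)
    [7] = "You remember selling this player a sword. ", -- memory
}
local npcReply = cm.ZephyrStory("Marn", personal, "PlayerOne", "Hello! What do you sell?")
print(npcReply)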

2 Likes

OpenAI is slowly rolling out GPT-4o to free users, meaning soon you will be able to use the Roblox Development Assistant GPT without having to pay for ChatGPT Plus!

The new GPT-4o beats the previous GPT-4 Turbo model in both speed (2x faster) and performance, and I’m super excited that everyone will soon be able to use this GPT for free! I’m bringing some new updates soon, so keep an eye out!

I’m still also planning on releasing an open-source fine-tuned model powered by Llama 3 for those who want to run a model locally.

UPDATE: GPTs have rolled out to all free users. Feel free to check out the GPT for free here: https://chatgpt.com/g/g-XakZYqU7R-game-development-assistant

I’ve also updated the GPT and significantly enhanced the knowledge base with hundreds of additional pages of Roblox information. This means the Roblox Development Assistant is familiar with more than ever, including things like how to get verified, troubleshooting common issues, questions about the TOS, and more!

Also, a huge collaboration is coming soon. Stay tuned!