Creating an AI Assistant?

Greetings, I’m trying to make an AI Assistant with a custom prompt so it behaves differently, e.g. Prompt: “Act like a cowboy from the Wild West”, similar to Character.AI. I asked ChatGPT to assist since I don’t know where to start.

I want to say that I’m not good at coding outside of Luau, so if there are any obvious mistakes, you can blame it on ChatGPT. I’m just kidding :wink:

Ignore the comments GPT made

Python-Code:

import openai
import flask
from flask import Flask, request, jsonify

app = Flask(__name__)

# Replace with your actual OpenAI API key
openai.api_key = "MyAPIKey"  # I inserted mine, don't worry

@app.route('/chat', methods=['POST'])
def chat():
    data = request.get_json()
    user_message = data.get("message", "")

    if not user_message:
        return jsonify({"error": "Message field is required"}), 400

    try:
        # Correct API call to OpenAI
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # Or gpt-4 if you are using it
            messages=[{"role": "user", "content": user_message}]
        )
        ai_reply = response["choices"][0]["message"]["content"]
        return jsonify({"response": ai_reply})

    except Exception as e:
        print(f"Error: {str(e)}")  # Print error for debugging
        return jsonify({"error": str(e)}), 500

if __name__ == '__main__':
    app.run(debug=True)

The file is called ai_server.py.
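For anyone testing this locally: the /chat route expects a JSON body with a message field and answers 400 when it’s missing. A minimal sketch of that contract (the function name is just for illustration, it isn’t in the server code):

```python
import json

def validate_chat_request(raw_body: str):
    """Mirror the server's input check: the JSON body must carry a
    non-empty 'message' field, otherwise the route answers 400."""
    data = json.loads(raw_body)
    user_message = data.get("message", "")
    if not user_message:
        return {"error": "Message field is required"}, 400
    return {"message": user_message}, 200

# A well-formed request passes, an empty body is rejected:
print(validate_chat_request('{"message": "Howdy!"}'))  # ({'message': 'Howdy!'}, 200)
print(validate_chat_request('{}'))                     # ({'error': 'Message field is required'}, 400)
```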

This is the server-side script in Roblox Studio. I’m sure the client isn’t the problem, but the server might be, so here is the code:

local HttpService = game:GetService("HttpService")
local replicatedstorage = game:GetService("ReplicatedStorage")

local sendmessage = replicatedstorage:WaitForChild("SendMessage")

-- Function to send a message to the AI
function sendMessageToAI(cmessage)
	local url = ""  -- My URL is not blank, but for safety purposes, hidden.

	local data = {
		message = cmessage
	}

	local jsonData = HttpService:JSONEncode(data)

	-- PostAsync throws on non-2xx responses (e.g. the server's HTTP 500),
	-- so wrap it in pcall to surface the error instead of crashing
	local ok, response = pcall(function()
		return HttpService:PostAsync(url, jsonData, Enum.HttpContentType.ApplicationJson)
	end)

	if not ok then
		warn("AI request failed: " .. tostring(response))
		return "No response from AI."
	end

	local responseData = HttpService:JSONDecode(response)
	return responseData.response or "No response from AI."
end

-- Listen for the event and respond with AI-generated text
sendmessage.OnServerEvent:Connect(function(player, cmessage)
	local response = sendMessageToAI(cmessage)

	-- Send the response back to the player who initiated the message
	sendmessage:FireClient(player, response)
end)

That’s about all the code there is. FYI, I’ve completed these steps, but let me know if I should retry them:

  • Turned on HttpService (Allow HTTP Requests) in Studio
  • Tried both in Studio and in the Roblox client
  • Downloaded Python
  • Ran pip install flask openai
  • Turned on the server before testing
  • Debugged multiple problems

So after testing multiple times, I’m pretty sure the Roblox side works; it’s probably the Python code or the OpenAI version that’s causing the error.

I’m trying to implement a temporary system so I can test features for now, and I’d also like to know how to implement it so that it can handle large amounts of prompts later.
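On handling large amounts of prompts: a single debug-mode Flask server will be the first bottleneck, and the OpenAI API also enforces rate limits, so some server-side throttling is usually the first step. A minimal in-process token-bucket sketch (the class and names are illustrative, not part of the code above):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate` per second."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
print(bucket.allow())  # True: the bucket starts full
```

The /chat route could call bucket.allow() first and return 429 when it comes back False; for real scale you’d move to a proper WSGI server (e.g. gunicorn) and a shared rate limiter.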

If you have any questions about anything that I missed, please ask! Also if there’s a better way of implementing this please let me know :doh:


I don’t think you can use GPT this way in scripting; I don’t know anything about connecting Roblox scripts to AI through Python, so I could be wrong, I just don’t think Roblox would allow this. Plus, you would have to add a filter for the model so it won’t end up saying bad things.


Just filter the AI’s responses and this is completely usable; you can use HttpService to contact a server somewhere, like they are doing here with OpenAI.


Filtering isn’t that difficult. As far as I know, it is possible; I’ve seen a couple of Roblox games achieve it.

Ah, I didn’t know that, thank you.

This post is still open, any help is appreciated!

What error did you get?

Can’t really figure out what’s wrong without it.

Just use the built-in Roblox filtering and send a custom prompt to the AI along with the message the user inputs.

Well, in the game, when I tried to put in a prompt as text, it gave out an HTTP 500 error, I think. I’ve gotten multiple errors each time I tried modifying it, like HTTP denied, etc. ChatGPT says it’s due to the Python code causing an error, but I’m not sure.

That’s the issue: not the filtering part, but rather sending the custom prompt to the AI and receiving the data.
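One way to narrow this down: on a 500, the Flask route above puts the Python exception text into the JSON body ({"error": ...}), so printing the response body on the Roblox side shows the real cause. A small sketch of reading such a reply (the function name is illustrative):

```python
import json

def explain_server_reply(status_code: int, body: str) -> str:
    """Turn the Flask server's JSON reply into a readable message;
    on a 500 the 'error' field carries the Python exception text."""
    data = json.loads(body)
    if status_code == 500:
        return "Server-side error: " + data.get("error", "unknown")
    return data.get("response", "No response from AI.")

print(explain_server_reply(500, '{"error": "Incorrect API key provided"}'))
# Server-side error: Incorrect API key provided
```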

I recommend reading OpenAI’s API documentation.

I’m pretty sure it’s the Python code that’s causing the error, but I’ll check out the API docs as well. I appreciate your answer.

  1. Make sure your OpenAI API key is valid and has enough credits.
  2. For your AI assistant to act like someone or something, you can provide a system prompt and do some prompt engineering:
messages=[
    {"role": "system", "content": "[about] You are a cowboy named ... [/about] [important] Your answers should be as friendly, direct and simple as possible [/important]"},
    {"role": "assistant", "content": "Yeehaw!! I am ..., a friendly cowboy!"},
    {"role": "user", "content": user_message}
]

Try to feed as much data as possible for better and accurate answers.

  3. Talking about the problem: openai.ChatCompletion is outdated (it was removed in the v1 Python SDK).

try using
openai.chat.completions.create

Updated Code
import openai
import flask
from flask import Flask, request, jsonify

app = Flask(__name__)

# Replace with your actual OpenAI API key
openai.api_key = "MyAPIKey"


@app.route('/chat', methods=['POST'])
def chat():
    data = request.get_json()
    user_message = data.get("message", "")

    if not user_message:
        return jsonify({"error": "Message field is required"}), 400

    try:
        # The v1 SDK call is synchronous, so no async/await is needed
        response = openai.chat.completions.create(
            model="gpt-3.5-turbo",  # Or gpt-4 if you are using it
            messages=[
                {"role": "system", "content": "[about] You are a cowboy named ... [/about] [important] Your answers should be as friendly, direct and simple as possible [/important]"},
                {"role": "assistant", "content": "Yeehaw!! I am ..., a friendly cowboy!"},
                {"role": "user", "content": user_message}
            ]
        )
        ai_reply = response.choices[0].message.content
        return jsonify({"response": ai_reply})

    except Exception as e:
        print(f"Error: {str(e)}")  # Print error for debugging
        return jsonify({"error": str(e)}), 500

if __name__ == '__main__':
    app.run(debug=True)

Hello!

I’m very happy to finally see someone post about this; I happened to have worked on a quick-and-dirty solution for an older project of mine, Asuna’s Island.

I used modules underneath the main script to house the API keys in ServerStorage, but this is VERY unsafe, so I don’t suggest you do it at all unless you’re in a private project and nobody else will see it; otherwise, stick to your Python integration for more safety.

I hope my code helps, credit is appreciated, and best of luck!

-- Built by Unu1x, 18/7/2024

local HttpService = game:GetService("HttpService")
local Chat = game:GetService("Chat")
local MarketplaceService = game:GetService("MarketplaceService")
local Players = game:GetService("Players")
local OPENAI_API_KEY: string = require(script:WaitForChild("OAI_KEY"))
local GROQ_API_KEY = require(script:WaitForChild("GROQ_KEY"))

local filtered_responses = {
	[[I'm not sure how to respond to that.]],
	[[Could you rephrase that?]],
	[[I'm having trouble understanding.]],
	[[Can you say that differently?]],
	[[Sorry, I can't talk about that.]],
	[[I'm not able to answer that right now.]],
	[[Let's talk about something else.]],
	[[I'm not sure what you mean.]],
	[[Can you clarify what you mean?]],
	[[I don't have an answer for that.]],
}

local Models = {
	["gpt-4o"] = "gpt-4o",
	["groq-mixtral"] = "mixtral-8x7b-32768",
}

local function checkSubStatus(player)
	local success, response = pcall(function()
		return MarketplaceService:GetUserSubscriptionStatusAsync(player.UserId)
	end)

	if not success then
		warn("Error while checking if player has subscription: " .. response)
		return false
	end

	return response == Enum.UserSubscriptionStatus.Subscribed
end

local module = {}
module.__index = module

-- Memory table to store past messages for each player
local playerMemories = {}

--NOTE: System message is essentially custom instructions.
function module.new(model: Model, system_message: string, data: {}?)
	local self = setmetatable({}, module)

	self.model = model:FindFirstChild("Head", true)

	local selected_model = data and data.model or "gpt-4o"
	self.ai_model = Models[selected_model]
	self.url = "https://api.openai.com/v1/chat/completions"

	if self.ai_model:match("^groq") then
		self.url = "https://api.groq.com/openai/v1/chat/completions"
	end

	self.headers = {
		["Content-Type"] = "application/json",
		["Authorization"] = "Bearer " .. (self.ai_model:match("^groq") and GROQ_API_KEY or OPENAI_API_KEY),
	}

	self.system_message = system_message
	self.player_talking_to = nil
	self.max_tokens = 120
	self.frequency = 0.06
	self.presence = 0.05
	self.temperature = 0.07

	return self
end

function module:generate(text: string, player: Player)
	if self.player_talking_to and player and player ~= self.player_talking_to then
		return
	end

	local playerMemory = playerMemories[player.UserId] or {}
	self.last_messages = playerMemory

	local postData = {
		model = self.ai_model,
		messages = {
			{
				role = "system",
				content = self.system_message,
			},
			{
				role = "user",
				content = text,
			},
		},
		max_tokens = self.max_tokens,
		frequency_penalty = self.frequency,
		presence_penalty = self.presence,
		temperature = self.temperature,
	}

	local body = HttpService:JSONEncode(postData)
	local response = HttpService:RequestAsync({
		Url = self.url,
		Method = "POST",
		Headers = self.headers,
		Body = body,
	})

	if response.Success then
		local responseData = HttpService:JSONDecode(response.Body)
		if #responseData.choices > 0 and responseData.choices[1].message then
			local filteredString = Chat:FilterStringForBroadcast(responseData.choices[1].message.content, player)
			self.generated_text = filteredString

			return self
		else
			warn("No completion found in response")
		end
	else
		warn("HTTP request failed: " .. response.StatusCode .. " " .. response.StatusMessage)
	end
end

function module:once(player: Player)
	self.player_talking_to = player
	return self
end

function module:chat()
	if not self.generated_text then
		return
	end

	local isValidJson, decoded_response = pcall(function()
		return HttpService:JSONDecode(self.generated_text)
	end)

	if not isValidJson or not decoded_response.Say then
		Chat:Chat(self.model, filtered_responses[math.random(1, #filtered_responses)])
	else
		Chat:Chat(self.model, decoded_response.Say)
	end

	return self
end

return module
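For context on what this module actually sends: the body it JSON-encodes in :generate matches the standard Chat Completions payload. A Python sketch of the same structure, using the defaults set in module.new (purely illustrative, not part of the Luau code):

```python
import json

def build_completion_payload(model: str, system_message: str, text: str) -> str:
    """Build the same JSON body the Luau module POSTs to the
    Chat Completions endpoint, with module.new's default settings."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": text},
        ],
        "max_tokens": 120,
        "frequency_penalty": 0.06,
        "presence_penalty": 0.05,
        "temperature": 0.07,
    })

body = build_completion_payload("gpt-4o", "You are a friendly cowboy.", "Howdy!")
```

The very low temperature (0.07) keeps the character’s replies deterministic; raise it if you want more varied roleplay.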

Hello! Thank you for your answer. I’ve just created my API key, so I would assume I have enough credits, and I’ll double-check my key. I’ll also update to the new OpenAI SDK and see if that works.

I appreciate it. Since I’m a little new to APIs and such, could you explain how you did it without any backend Python? Does it directly fetch data from OpenAI’s server? And why would it be unsafe if the modules are in the server?

OpenAI removed the free credit grant; you have to purchase credits for your API key after creating it.