Steps:
-Go to your game’s Security settings and turn Allow HTTP Requests on
-Replace apiKey with your Gemini API key; get your Gemini API key here (you must be 18+ and in an available country, you’ll be redirected here if you’re not eligible)
-Change the prompt string to your prompt
-Do whatever you want with msgfinal, as it’s what Gemini returned
Recommendations:
-I recommend filtering the results with TextService:FilterStringAsync(), since you can’t guarantee the output meets Roblox TOS standards (see the filtering sketch after the main code).
-Only bring on trustworthy developers, as you have to trust them with your Gemini API key (see the sketch right after this list for one way to keep the key out of your scripts).
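Here is a minimal sketch of keeping the key out of the script itself, assuming you’ve added a secret (I’m calling it GEMINI_API_KEY, a name I made up) to your experience’s Secrets store on the Creator Hub. If I understand the Secrets API correctly, HttpService:GetSecret() returns a Secret that can be used as a header value in HttpService:RequestAsync(), and Gemini accepts the key via the x-goog-api-key header instead of the ?key= query parameter:

local HttpService = game:GetService("HttpService")

-- Hypothetical helper: posts a JSON body to Gemini, pulling the API key
-- from the experience's Secrets store instead of hard-coding it in the script.
local function postToGemini(body)
	return HttpService:RequestAsync({
		Url = "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent",
		Method = "POST",
		Headers = {
			["Content-Type"] = "application/json",
			["x-goog-api-key"] = HttpService:GetSecret("GEMINI_API_KEY"), -- assumes a secret with this name exists
		},
		Body = body,
	})
end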
Code down below:
local HttpService = game:GetService("HttpService")

local apiKey = "insert ur api key here"
local url = "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=" .. apiKey
local prompt = "insert ur prompt here"

-- Build the request body in the format the Gemini generateContent endpoint expects
local requestBody = HttpService:JSONEncode({
	contents = {
		{
			parts = {
				{ text = prompt },
			},
		},
	},
})

-- Send the request; pcall protects against HTTP errors (timeouts, bad status codes, etc.)
local success, response = pcall(function()
	return HttpService:PostAsync(url, requestBody, Enum.HttpContentType.ApplicationJson, false)
end)

-- Handle the response
if success then
	local decodedResponse = HttpService:JSONDecode(response)
	-- The generated text lives in the first candidate's first content part
	local msgfinal = decodedResponse.candidates[1].content.parts[1].text
	print(msgfinal)
else
	print("error: " .. response)
end
I’ve got a question: having not used HttpService much before, and considering the pings to external sources, how long does it take to receive the response? Is it something that still needs to improve before real-time use, or is the latency already fine for real-time usage?
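One way to answer this for your own server region is to measure it yourself. A hypothetical timing wrapper around the same PostAsync call (it assumes the url and requestBody from the code above; os.clock() just gives elapsed wall time):

local HttpService = game:GetService("HttpService")

local function timedPost(url, requestBody)
	local startTime = os.clock()
	local success, response = pcall(function()
		return HttpService:PostAsync(url, requestBody, Enum.HttpContentType.ApplicationJson, false)
	end)
	-- Print how long the round trip to Gemini took
	print(string.format("Gemini request took %.2f seconds", os.clock() - startTime))
	return success, response
end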
If the creator of this post doesn’t update it with how to include context in the model’s conversation history, I’ll take a look when I get around to incorporating this API into my AI stack.
It has a unique structure that will take some testing to figure out.
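On including context: based on the Gemini REST API docs, the contents array can carry the whole conversation history as alternating role = "user" and role = "model" turns, so you resend the history and append each reply before the next request. A rough sketch (the example turns are just illustrative):

local HttpService = game:GetService("HttpService")

-- Each entry is one turn; roles alternate between "user" and "model".
local history = {
	{ role = "user", parts = { { text = "What is Roblox?" } } },
	{ role = "model", parts = { { text = "Roblox is an online game platform." } } },
	{ role = "user", parts = { { text = "Summarize that in five words." } } },
}

local requestBody = HttpService:JSONEncode({
	contents = history,
})

-- Send requestBody with PostAsync as in the original code, then append the
-- model's reply as { role = "model", parts = { { text = msgfinal } } } before
-- the next user turn.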