Hello,
I’ve been exploring the idea of running a Transformer-based Language Model entirely within Roblox, and I’d like to share my project: RoLLM (Roblox Language Model). It’s an experimental codebase that uses OOP Lua modules to build a simplified GPT-like Transformer—complete with tokenizers, embedding layers, attention blocks, and more.
Why RoLLM?
I wanted to see if it was feasible to load a large (or even a smaller) LLM into Roblox Studio and have it generate text on the fly. The theoretical part works: you can define all the layers in Lua, manage tokenization, and even incorporate something like a GPT-2–style subword vocabulary. However…
The Problem
Unfortunately, performance and memory constraints in Roblox make it really tough to run a large-scale LLM in-engine. You need huge matrices, lots of floating-point operations, and robust memory management—none of which is trivial in Roblox’s sandbox. Even after optimizing, RoLLM still struggles with anything beyond tiny toy experiments (like a 2-layer, 32-dimensional Transformer).
In other words: RoLLM currently doesn’t produce coherent text the way GPT-2 or GPT-3.5 would, because we can’t realistically load or train big models in Lua. Right now, the code mostly yields random or semi-random outputs if you try anything large.
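To make the compute point concrete, here is a minimal sketch of the kind of dense matrix multiply every Transformer layer needs, written in plain Luau. The function name and row-table layout are illustrative, not RoLLM's actual internals:

-- Naive dense matrix multiply: C = A * B, where A is n x m and B is m x p.
-- Matrices are plain Lua tables of row tables.
local function matMul(A, B)
	local n, m, p = #A, #B, #B[1]
	local C = table.create(n)
	for i = 1, n do
		local row = table.create(p, 0)
		for k = 1, m do
			local a = A[i][k]
			local Bk = B[k]
			for j = 1, p do
				row[j] = row[j] + a * Bk[j]
			end
		end
		C[i] = row
	end
	return C
end

-- Even at the toy scale below (128 tokens, dModel = 32), a single projection
-- is 128 * 32 * 32 = ~131k multiply-adds, and a real layer has several such
-- projections per forward pass. The cost grows roughly cubically with size.

Doing this in interpreted table-based Lua, with no SIMD or GPU access, is why scaling past toy configurations hits a wall so quickly.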
Sample Code
-- HTTP requests must be enabled (Game Settings > Security > Allow HTTP Requests)
-- so the external GPT-2 vocab can be fetched.
local LLM = require(game.ReplicatedStorage.Packages.RoLLM)

local config = {
	dModel = 32, -- embedding dimension
	numHeads = 4, -- attention heads per block
	dFF = 64, -- feed-forward hidden size
	numLayers = 2, -- Transformer blocks
	maxSeqLen = 128, -- maximum context length
	tokenizerMode = "external",
	externalVocabURL = "https://huggingface.co/openai-community/gpt2/raw/main/vocab.json",
}

-- The first argument is raw textData for building a local vocab; we pass an
-- empty table here because the external vocab is used instead.
local myLLM = LLM.new({}, config)
print("LLM created with GPT-2 external vocab. Vocab size:", myLLM._tokenizer:getVocabSize())

-- Greedy (argmax) decoding:
local nextTok = myLLM:predict("Hello")
print("Argmax next token (external approach):", nextTok)

local genText = myLLM:generate("Hello", 5)
print("Argmax generated text:", genText)

-- Temperature sampling:
local nextTokTemp = myLLM:predictTemperature("Hello", 1.0)
print("Temp=1 next token:", nextTokTemp)

local genTextTemp = myLLM:generateTemperature("Hello", 10, 1.0)
print("Temp=1 generated text:", genTextTemp)
Why Release It Open Source?
I’m open-sourcing RoLLM in the hope that someone in the community might find clever workarounds—or at least learn from the code. Maybe you’ll figure out how to:
- Import smaller pretrained weights more efficiently,
- Use external GPU-accelerated services for big matrix ops (a rough sketch of this idea follows the list),
- Leverage a specialized approach to partial model loading, or
- Train a super tiny model offline and run it in Roblox.
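As a starting point for the GPU-offloading idea, here is a minimal sketch of shipping a matrix multiply to an external web service over HTTP. The endpoint URL and JSON shape are entirely made up; you would need to host something like this yourself:

local HttpService = game:GetService("HttpService")

-- Hypothetical endpoint that multiplies two matrices on a GPU and returns the result.
local MATMUL_URL = "https://your-gpu-service.example.com/matmul"

local function remoteMatMul(A, B)
	local response = HttpService:RequestAsync({
		Url = MATMUL_URL,
		Method = "POST",
		Headers = { ["Content-Type"] = "application/json" },
		Body = HttpService:JSONEncode({ a = A, b = B }),
	})
	if not response.Success then
		error("Remote matmul failed: " .. response.StatusCode)
	end
	return HttpService:JSONDecode(response.Body).result
end

The obvious catch is latency: a round trip per layer would dominate inference time, so you would want to batch aggressively, or offload whole forward passes rather than single multiplies.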
What’s Included?
- Tokenizer Modules: Both character-based and a placeholder for subword-based (BPE-like) approaches.
- Transformer Blocks: Multi-head attention, feed-forward, layer norms, etc. in OOP Lua (see the attention sketch after this list).
- Examples: Scripts showing how to chunk large text data, do basic inference, and unify everything under one init.lua.
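To give a feel for what those blocks involve, here is a self-contained sketch of single-head scaled dot-product attention in plain Luau. This is a generic textbook version, not RoLLM's actual module; matrices are tables of row tables:

-- Scaled dot-product attention for one head.
-- Q, K, V are seqLen x dHead matrices (tables of rows).
local function attention(Q, K, V)
	local seqLen, dHead = #Q, #Q[1]
	local scale = 1 / math.sqrt(dHead)
	local out = table.create(seqLen)
	for i = 1, seqLen do
		-- Scores: scaled dot product of query i with every key.
		local scores = table.create(seqLen)
		local maxScore = -math.huge
		for j = 1, seqLen do
			local s = 0
			for d = 1, dHead do
				s = s + Q[i][d] * K[j][d]
			end
			s = s * scale
			scores[j] = s
			if s > maxScore then maxScore = s end
		end
		-- Softmax, shifted by the max score for numerical stability.
		local sum = 0
		for j = 1, seqLen do
			scores[j] = math.exp(scores[j] - maxScore)
			sum = sum + scores[j]
		end
		-- Output row i is the attention-weighted sum of value vectors.
		local row = table.create(dHead, 0)
		for j = 1, seqLen do
			local w = scores[j] / sum
			for d = 1, dHead do
				row[d] = row[d] + w * V[j][d]
			end
		end
		out[i] = row
	end
	return out
end

A causal GPT-style block would additionally mask out scores for j > i before the softmax, and a multi-head layer runs several of these in parallel over slices of dModel.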
What Next?
- Check Out the Code: The project's full source is on GitHub: https://github.com/rustyspottedcatt/RoLLM
- Contribute: If you manage to optimize it or find a novel approach to GPU offloading, I’d love to see a pull request or a fork.
- Use an External API: If you just want high-quality text, an HTTP integration with OpenAI, Hugging Face, or another service is often simpler (a minimal sketch follows this list). But if you’re curious about the internals of a Transformer, RoLLM can be an educational toy.
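For completeness, here is roughly what the external-API route looks like from a Roblox server script. The endpoint, payload, and response shape below are placeholders that follow the general pattern of hosted text-generation APIs; check your provider's docs, and never ship an API key to the client:

local HttpService = game:GetService("HttpService")

-- Placeholder endpoint and key: adapt to your provider's actual API.
local ENDPOINT = "https://api.example.com/v1/completions"
local API_KEY = "YOUR_KEY_HERE" -- keep server-side and store securely

local function generateText(prompt)
	local response = HttpService:RequestAsync({
		Url = ENDPOINT,
		Method = "POST",
		Headers = {
			["Content-Type"] = "application/json",
			["Authorization"] = "Bearer " .. API_KEY,
		},
		Body = HttpService:JSONEncode({ prompt = prompt, max_tokens = 50 }),
	})
	if not response.Success then
		warn("Generation request failed:", response.StatusCode)
		return nil
	end
	return HttpService:JSONDecode(response.Body)
end

print(generateText("Hello"))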
Final Thoughts
I know it’s not working perfectly in-engine on large models, but I believe there’s value in seeing how Transformers can be structured in Luau. If you’re the adventurous type who loves experimentation, feel free to pick up RoLLM and tinker—maybe you’ll be the one to conquer the Roblox LLM frontier!
Start Contributing
Clone & Init ROJO with default.project.json :
git clone https://github.com/rustyspottedcatt/RoLLM
Or grab the prebuilt model file directly:
RoLLM.rbxm (54.4 KB)