RoTorch - Machine Learning Library for Roblox

Summary

RoTorch is a tensor and neural network library for Roblox. It has torch-like syntax and lets users create their own neural network layers and differentiable functions without having to dive into the module's source code. RoTorch also ships with a data loader plugin that lets you load datasets directly from your file system.

Documentation

torch functions:

--[[
	Creates a one-dimensional tensor of size steps whose values are evenly spaced from start_num to end_num, inclusive.
]]
function torch.linspace(start_num: number, end_num: number, steps: number, kwargs: kwargs.requires_grad): types.Tensor
--[[
	Constructs a tensor with input data.
]]
function torch.tensor(data: data, kwargs: kwargs.requires_grad): types.Tensor
--[[
	Returns a tensor of inputted dimensions filled with the value 0.
]]
function torch.zeros(size: {number}, kwargs: kwargs.requires_grad): types.Tensor
--[[
	Returns a tensor of the same size as input, except filled with 0s.
]]
function torch.zeros_like(input: types.Tensor, kwargs: kwargs.requires_grad): types.Tensor
--[[
	Returns a tensor of inputted dimensions filled with the value 1.
]]
function torch.ones(size: {number}, kwargs: kwargs.requires_grad): types.Tensor
--[[
	Returns a tensor of inputted dimensions filled with random numbers from a uniform distribution on the interval [0,1).
]]
function torch.rand(size: {number}, kwargs: kwargs.requires_grad): types.Tensor
--[[
	Returns a tensor of inputted dimensions filled with numbers from a normal distribution with mean 0 and variance 1.
]]
function torch.randn(size: {number}, kwargs: kwargs.requires_grad): types.Tensor
--[[
	Returns a tensor of inputted dimensions filled with fill value.
]]
function torch.full(size: {number}, fill: number, kwargs: kwargs.requires_grad): types.Tensor
--[[
	Returns a new tensor with the element-wise sum (input + other).
]]
function torch.add(input: types.Tensor, other: types.Tensor | number): types.Tensor
--[[
	Returns a new tensor with the element-wise difference (input - other).
]]
function torch.sub(input: types.Tensor, other: types.Tensor | number): types.Tensor
--[[
	Returns a new tensor with the element-wise product (input * other).
]]
function torch.mul(input: types.Tensor, other: types.Tensor | number): types.Tensor
--[[
	Returns a new tensor with the element-wise quotient (input / other).
]]
function torch.div(input: types.Tensor, other: types.Tensor | number): types.Tensor
--[[
	Returns a new tensor with the element-wise power (input ^ other).
]]
function torch.pow(input: types.Tensor, other: types.Tensor | number): types.Tensor
--[[
	Returns a new tensor with the same data as the input tensor but of a different shape.
]]
function torch.view(input: types.Tensor, ...: number): types.Tensor
--[[
	Performs the element-wise division of tensor1 by tensor2, multiplies the result by the scalar value and adds it to input.
]]
function torch.addcdiv(input: types.Tensor, tensor1: types.Tensor, tensor2: types.Tensor, value: number?): types.Tensor
--[[
	Returns a scalar tensor consisting of the maximum value in the input tensor.
]]
function torch.max(input: types.Tensor): types.Tensor
--[[
	Performs a matrix multiplication of the matrices input and mat2.
	If input is an (n×m) tensor and mat2 is an (m×p) tensor, the output will be an (n×p) tensor.
]]
function torch.mm(input: types.Tensor, mat2: types.Tensor): types.Tensor
--[[
	Returns the value of a scalar tensor as a number.
]]
function torch.item(input: types.Tensor): number
--[[
	Performs an elementwise square root operation on the inputted tensor.
]]
function torch.sqrt(input: types.Tensor): types.Tensor
--[[
	Runs a function in the no_grad context: gradients will not be calculated, and all tensors created in this context will have requires_grad=false.
]]
function torch.no_grad(fn: () -> ())
--[[
	Saves a tensor or table of tensors to a folder of module scripts with the inputted name
	
	[MUST BE RUN IN PLUGIN CONTEXT]
]]
function torch.save(obj: types.Tensor | {types.Tensor}, name: string?)
--[[
	Loads tensor(s) from a folder of _rdata modules and returns a table of the loaded data as tensors.
]]
function torch.load(location: Folder, kwargs: kwargs.requires_grad)
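
A quick sketch of working with the tensor functions above. The require path is hypothetical, and passing kwargs as a plain table (e.g. {requires_grad = true}) is an assumption based on the signatures:

local torch = require(script.Parent.RoTorch) -- hypothetical path; point this at your install

local a = torch.tensor({{1, 2}, {3, 4}}, {requires_grad = true}) -- kwargs shape assumed
local b = torch.ones({2, 2})

local sum = torch.add(a, b)                    -- element-wise (a + b)
local product = torch.mm(a, b)                 -- (2×2) matrix multiply
local biggest = torch.item(torch.max(product)) -- max value as a plain number
print(biggest) --> 7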

torch.nn functions:

--[[
	Holds submodules in a list that can be indexed like a regular Luau table.
]]
function nn.ModuleList(modules: {Module.Module}): {Module.Module}
--[[
	Holds submodules in a sequential container that, when called, automatically chains all submodules and returns the output of the final module.
]]
function nn.Sequential(modules: {Module.Module})
--[[
	Applies an affine linear transformation to the incoming data: y = xA^T + b
]]
function nn.Linear(in_features: number, out_features: number, kwargs: kwargs.bias)
--[[
	Applies the Softmax function to a 2d input Tensor.
]]
function nn.Softmax()
--[[
	Applies the rectified linear unit function element-wise y = max(0, x)
]]
function nn.ReLU()
--[[
	Randomly zeroes some of the elements of the input tensor with probability p (default: 0.5)
]]
function nn.Dropout(p: number?, training: boolean?)
--[[
	Randomly zeroes out entire channels of the input tensor with probability p (default: 0.5)
]]
function nn.Dropout2d(p: number?, training: boolean?)
--[[
	Randomly zeroes out entire channels of the input tensor with probability p (default: 0.5)
]]
function nn.Dropout1d(p: number?, training: boolean?)
--[[
	Applies a 1D max pooling over an input signal composed of several input planes.
]]
function nn.MaxPool1d(kernel_size: number, kwargs: kwargs.stride & kwargs.padding & kwargs.dilation & kwargs.return_indices & kwargs.ceil_mode) 
--[[
	Applies a 2D max pooling over an input signal composed of several input planes.
]]
function nn.MaxPool2d(kernel_size: {number}, kwargs: kwargs.stride & kwargs.padding & kwargs.dilation & kwargs.return_indices & kwargs.ceil_mode) 
--[[
	Applies a 1D convolution over an input signal composed of several input planes.
]]
function nn.Conv1d(in_channels: number, out_channels: number, kernel_size: number, kwargs: kwargs.stride & kwargs.padding & kwargs.dilation & kwargs.groups & kwargs.bias)
--[[
	Applies a 2D convolution over an input signal composed of several input planes.
]]
function nn.Conv2d(in_channels: number, out_channels: number, kernel_size: {number}, kwargs: kwargs.stride & kwargs.padding & kwargs.dilation & kwargs.groups & kwargs.bias)
--[[
	Computes the cross entropy loss between the input logits and target
]]
function nn.CrossEntropyLoss(weight: types.Tensor?, ignore_index: number?, reduction: "mean" | "none" | "sum")
--[[
	Converts a tensor to a trainable module parameter. Any tensor that is a parameter and stored in a module will also appear in the module:parameters() iterator function.
]]
function nn.Parameter(data: types.Tensor, kwargs: kwargs.requires_grad)
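
A minimal sketch of composing these layers into a model, assuming nn is exposed as torch.nn and that containers and layers are invoked by calling them (as nn.Sequential's description above implies):

local nn = torch.nn -- assumed to be exposed as a field of the torch module

local model = nn.Sequential({
	nn.Linear(784, 128),
	nn.ReLU(),
	nn.Dropout(0.5, true),
	nn.Linear(128, 10),
})

local logits = model(torch.rand({2, 784})) -- batch of 2 in, (2×10) logits out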

torch.nn.Module functions:

--[[
	Create a neural network layer with the name, constructor, and forward pass specified in model_info.
]]
function Module.create(model_info: model_info): ModuleConstructor
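
The exact shape of model_info isn't listed above, so the field names in this sketch (name, init, forward) are assumptions purely for illustration:

local MyScale = Module.create({
	name = "MyScale", -- assumed field
	init = function(self, features) -- assumed field: the constructor
		self.weight = nn.Parameter(torch.ones({1, features}))
	end,
	forward = function(self, input) -- assumed field: the forward pass
		return torch.mul(input, self.weight)
	end,
})

local layer = MyScale(16) -- construct and use like any built-in layer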

torch.nn.functional functions:

--[[
	Stateless version of nn.Linear
]]
function functional.linear(input: types.Tensor, weight: types.Tensor, bias: types.Tensor?): types.Tensor
--[[
	Stateless version of nn.ReLU
]]
function functional.relu(input: types.Tensor): types.Tensor
--[[
	Stateless version of nn.Conv1d
]]
function functional.conv1d(input: types.Tensor, weight: types.Tensor, bias: types.Tensor?, stride: number?, padding: number?, dilation: number?, groups: number?): types.Tensor
--[[
	Stateless version of nn.Conv2d
]]
function functional.conv2d(input: types.Tensor, weight: types.Tensor, bias: types.Tensor?, stride: {number}?, padding: {number}?, dilation: {number}?, groups: number?): types.Tensor
--[[
	Stateless version of nn.Dropout2d
]]
function functional.dropout2d(input: types.Tensor, p: number?, training: boolean?): types.Tensor
--[[
	Stateless version of nn.Dropout1d
]]
function functional.dropout1d(input: types.Tensor, p: number?, training: boolean?): types.Tensor
--[[
	Stateless version of nn.Dropout
]]
function functional.dropout(input: types.Tensor, p: number?, training: boolean?): types.Tensor
--[[
	Stateless version of nn.MaxPool1d
]]
function functional.max_pool1d(input: types.Tensor, kernel_size: number, stride: number?, padding: number?, dilation: number?, return_indices: boolean?, ceil_mode: boolean?): types.Tensor
--[[
	Stateless version of nn.MaxPool2d
]]
function functional.max_pool2d(input: types.Tensor, kernel_size: {number}, stride: {number}?, padding: {number}?, dilation: {number}?, return_indices: boolean?, ceil_mode: boolean?): types.Tensor
--[[
	Stateless version of nn.Softmax
]]
function functional.softmax(input: types.Tensor)
--[[
	Stateless version of nn.CrossEntropyLoss
]]
function functional.cross_entropy(input: types.Tensor, target: types.Tensor, weight: types.Tensor?, ignore_index: number?, reduction: "mean" | "none" | "sum")
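
The functional forms pair naturally with custom modules. A small sketch, assuming functional is exposed as torch.nn.functional:

local F = torch.nn.functional -- assumed location

local x = torch.rand({4, 16})
local w = torch.randn({8, 16}) -- (out_features × in_features), matching y = xA^T + b

local y = F.relu(F.linear(x, w)) -- same math as nn.Linear + nn.ReLU, with no stored state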

torch.nn.init functions:

--[[
	Returns the recommended gain value for the given nonlinearity function
]]
function init.calculate_gain(nonlinearity: "leaky_relu" | "tanh" | "relu" | "selu" | "sigmoid" | "linear" | "conv1d" | "conv2d" | "conv3d" | "conv_transpose1d" | "conv_transpose2d" | "conv_transpose3d", param: number?)
--[[
	Calculate the correct fan for the inputted tensor
]]
function init._calculate_correct_fan(tensor: types.Tensor, mode: "fan_in" | "fan_out")
--[[
	Calculate both the fan_in and fan_out for the inputted tensor
]]
function init._calculate_fan_in_and_fan_out(tensor: types.Tensor): (number, number)
--[[ 
	Fill the input Tensor with values drawn from the uniform distribution U(a,b).
	
	defaults:
	a = 0
	b = 1
]]
function init.uniform_(tensor: types.Tensor, kwargs: kwargs.a & kwargs.b)
--[[
	Fill the input Tensor with values using a Kaiming uniform distribution. The resulting tensor will have values sampled from U(−bound,bound) where
	bound = gain * sqrt(3 / fan_mode)
	
	defaults:
	a = 0
	mode = "fan_in"
	nonlinearity = "leaky_relu"
]]
function init.kaiming_uniform_(tensor: types.Tensor, kwargs: kwargs.a & kwargs.mode & kwargs.nonlinearity)
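
A short sketch of initializing a weight tensor in place, assuming init is exposed as torch.nn.init and kwargs are passed as a plain table:

local init = torch.nn.init -- assumed location

local weight = torch.zeros({64, 32})
init.kaiming_uniform_(weight, {a = math.sqrt(5), mode = "fan_in", nonlinearity = "leaky_relu"})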

torch.optim functions:

--[[
	Initializes an Adam optimizer
]]
function optim.Adam(params: iterable, kwargs: kwargs.lr & kwargs.betas & kwargs.eps & kwargs.weight_decay)
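
Construction from a module's parameter iterator (see nn.Parameter above); the table form of kwargs is an assumption:

local optim = torch.optim -- assumed location

local optimizer = optim.Adam(model:parameters(), {lr = 1e-3, betas = {0.9, 0.999}})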

torch.autograd functions:

--[[
	Initializes an autograd function with the specified forward pass, backward pass, and name.
]]
function Function.create(function_info: function_info): Function
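
As with Module.create, the exact fields of function_info aren't listed above, so the names in this sketch (name, forward, backward) are assumptions for illustration:

local Square = Function.create({
	name = "Square", -- assumed field
	forward = function(ctx, input) -- assumed signature
		ctx.saved_input = input -- assumed way to stash tensors for the backward pass
		return torch.mul(input, input)
	end,
	backward = function(ctx, grad_output) -- assumed signature
		return torch.mul(torch.mul(ctx.saved_input, 2), grad_output) -- d(x^2)/dx = 2x
	end,
})
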
Usage Example

model.rbxl (4.1 MB)
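
For reference, here is a condensed sketch of the training flow the example place follows. The optimizer and tensor method names (zero_grad, backward, step) and calling a loss module directly mirror torch conventions and are assumptions here:

local model = nn.Sequential({
	nn.Linear(2, 16),
	nn.ReLU(),
	nn.Linear(16, 2),
})
local criterion = nn.CrossEntropyLoss()
local optimizer = optim.Adam(model:parameters(), {lr = 1e-2})

for step = 1, 100 do
	local input = torch.rand({8, 2})
	local target = torch.zeros({8}) -- placeholder labels for the sketch
	local loss = criterion(model(input), target)

	optimizer:zero_grad() -- assumed method, as in torch
	loss:backward()       -- assumed method, as in torch
	optimizer:step()      -- assumed method, as in torch
end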

Install Library:

Install Data Loader Plugin:

Thank you for reading! If you have any questions, feel free to ask; I'll try my best to respond as quickly as I can.
