Example of Compressing and Efficiently Rendering Imposter Models While Using AI to Create 3D Models

I had a case where all of my wall tiles were created from images.

My custom renderer compresses the models into references to their source objects in the game’s library for maximum memory efficiency. It then creates imposter models from the images that were used to create the models.

I used Photoshop and Stable Fast 3D to create the models, in addition to an AI image generator:
Stable Fast 3D - a Hugging Face Space by stabilityai

Then I used the same images that created the wall tiles to create high-quality imposter models.
This project started off using Synty dungeon tilesets; then I created 11 tilesets of my own using AI.

If you wish to have a template of an example tileset, I have open-sourced the Egyptian-themed one here.
Egyptian Tile Set Mesh Pack (Original) Low Poly [OPEN-SOURCE] 21 Models Now with HD Textures - Resources / Community Resources - Developer Forum | Roblox

The algorithm I wrote to handle optimized rendering of the imposter models is this:

local distance = require(game.ReplicatedStorage.Zone.RenderDistance).WallRender * 3.5
local fakewalls = workspace.FakeWalls:GetChildren()
workspace.FakeWalls.ChildAdded:ConnectParallel(function(v)
	table.insert(fakewalls, v)
end)
local constructs = game.ServerStorage.ConstructedPlaceholders

while true do
	task.desynchronize()
	-- Snapshot every player's position once per pass.
	local positions = {}
	for _, player in game.Players:GetPlayers() do
		local root = player.Character and player.Character:FindFirstChild("HumanoidRootPart")
		if root then
			positions[player.Name] = root.Position
		end
	end
	for i, v in fakewalls do
		if v and v.CFrame then
			local pos2 = v.CFrame.Position
			local inRange = false
			for _, pos in positions do
				if (pos - pos2).Magnitude < distance then
					inRange = true
					break
				end
			end
			if not inRange then
				-- Out of range: compress the imposter part into a plain table
				-- holding only references to its source object.
				if typeof(v) ~= "table" and v.Parent then
					local sourcewall = v.SourceWall.Value
					if sourcewall and sourcewall.Parent then
						local dataconstruct = {CFrame = v.CFrame, SourceWall = sourcewall, Texture = v.Name}
						task.synchronize()
						v:Destroy()
						task.desynchronize()
						fakewalls[i] = dataconstruct
					else
						fakewalls[i] = nil
					end
				end
			else
				-- Back in range: rebuild the imposter part from its table record.
				if typeof(v) == "table" and v.SourceWall and v.SourceWall:FindFirstChild("LowPoly") then
					local template = constructs:FindFirstChild(v.Texture)
					if template then
						task.synchronize()
						local replacement = template:Clone()
						replacement.CFrame = v.CFrame
						replacement.SourceWall.Value = v.SourceWall
						v.SourceWall.LowPoly.Value = replacement
						fakewalls[i] = nil
						replacement.Parent = workspace.FakeWalls
						task.wait()
						task.desynchronize()
					end
				end
			end
		else
			fakewalls[i] = nil
		end
	end
	task.wait(0.1)
end

This compresses each object into a reference to its source object and stores it in a table, which reduces the memory load of all the textures.
Here is a running example of this in action: the walls within view of the player’s camera are rendered in mesh form, while the rest use imposter models, which are just parts with the textures applied.
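The compress-and-restore cycle above can be sketched in plain Lua without any Roblox APIs. The `compress`, `restore`, and `library` names here are hypothetical stand-ins for the real instances; the point is that the heavy mesh data is dropped when a wall goes out of range and looked up again from the shared library when it returns:

```lua
-- Collapse a heavy wall object into a small record that keeps only
-- references (position, source, texture name), dropping the mesh data.
local function compress(wall)
	return {position = wall.position, source = wall.source, texture = wall.texture}
end

-- Rebuild a full wall from a record by looking its mesh up in the
-- shared library, so the mesh is stored once, not per wall.
local function restore(record, library)
	local template = library[record.texture]
	return {
		position = record.position,
		source = record.source,
		texture = record.texture,
		mesh = template.mesh,
	}
end

local library = {Brick = {mesh = "BrickMesh"}}
local wall = {position = {0, 0, 0}, source = "SourceWall1", texture = "Brick", mesh = "BrickMesh"}

local record = compress(wall)          -- mesh dropped, references kept
local rebuilt = restore(record, library)
print(rebuilt.mesh)                    -- BrickMesh
```

The design choice mirrors the script above: while compressed, a wall costs only a few table fields, and the expensive data lives in one shared library entry per texture.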

On average these models have ~1200 polygons, while a part consists of just 12 polygons! So this technique has increased my framerate significantly.
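As a rough back-of-the-envelope check using the figures above, swapping distant meshes for parts cuts the triangle count by a factor of about 100 (the wall count here is a hypothetical example, not a number from the post):

```lua
-- Triangle budget for a scene of distant walls (per-object figures from the post).
local meshTris, partTris = 1200, 12
local wallCount = 500                      -- hypothetical number of distant walls

local before = meshTris * wallCount        -- all walls as full meshes
local after = partTris * wallCount         -- all walls as imposter parts

print(before, after, before / after)
```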

Player’s Perspective

The objects in the background are images and the objects in the foreground are not! This allows more objects in the game and more detailed scenes, and if you compress them like I do in this example, your memory usage is optimized.

In my game everything is rendered this way so I can create a massively scalable game using procedural generation. The imposter models are only there for structures like this; it was unintentional, or rather it was in the back of my mind, to be able to do this while implementing 3D tilesets.
The goal is running complex games on edge hardware!
This code is for memory-optimized rendering of imposter models.

Some key things left out of this post:

  1. The AI image generator I used to create these models.
  2. The prompts I used to generate the tilesets.

hey you shouldn’t use AI

the creators have said stuff like they NEED to steal work for it to be “viable” (not even true)


Are we still using this argument? I am an artist, so I could either make these myself, which takes 2-6 hours a piece, or I can generate them with AI, which takes 12 seconds. That was appropriate for this project due to its scale and nature: I created 300 models in 2 weeks using this method, and it’s super cool! But I should probably keep it a secret.
On the bright side, no one seems to care, because the link has 0 clicks.


In your view, every artist has stolen from others, since when an artist sees another artist’s picture, something changes in their brain, just like with AI.


That argument doesn’t apply when the OP themselves created and trained their AI on their own assets. What you’re referring to are public generative AIs made by companies that have scraped the internet and stolen work. Hugging Face is a website full of collections of indie-trained AIs, usually running an old offline version of Stable Diffusion XL that can’t even scrape the internet, as it doesn’t have access to it.


Great work, thank you for the information and resources; they will come in handy.

This is AWESOME. I’m a big fan of AI, and the only drawback has been the poly count, so thanks so much!
