RenderStepped improvements for custom audio system

Provide an overview of:

  • What does the code do and what are you not satisfied with?
    I am making a sound system. It sets the value of an equalizer and a reverb based on the distance between the local player and any of the speakers in the tables, and it uses RenderStepped to calculate what the sound levels should be. I am curious whether I can shave off any calculation time as a performance improvement.
    I am happy with the sound levels themselves, by the way.

At the beginning of the script, the code builds a table of all the speakers and checks whether each one is a bass, mid, or high speaker.
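(That classification code isn't shown here; the sketch below is a hypothetical version of it, assuming the speaker parts sit in a workspace.Speakers folder and carry a "Band" attribute - both names invented - just to ground the SpHigh/SpMid/SpLow tables the loop below reads.)

-- Hypothetical setup sketch; the original classification logic is not shown
local SpHigh, SpMid, SpLow = {}, {}, {}

for _, speaker in ipairs(workspace.Speakers:GetChildren()) do
	local band = speaker:GetAttribute("Band") -- assumed attribute
	if band == "High" then
		table.insert(SpHigh, speaker)
	elseif band == "Mid" then
		table.insert(SpMid, speaker)
	elseif band == "Low" then
		table.insert(SpLow, speaker)
	end
end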

local RunService = game:GetService("RunService")
-- plr, Eq, Rev, orig and the Sp* speaker tables are defined earlier in the script
local H = {} -- declared up here so a new table is not created every frame
local M = {}
local L = {}
local HD = 0
local MD = 0
local LD = 0
local DD = 0

RunService.RenderStepped:Connect(function() -- adjust sound
	-- reset all tables
	for k in pairs(H) do
		H[k] = nil
	end
	for k in pairs(M) do
		M[k] = nil
	end
	for k in pairs(L) do
		L[k] = nil
	end
	-- set eq
	for _, v in pairs(SpHigh) do
		table.insert(H, plr:DistanceFromCharacter(v.Position))
	end
	HD = math.min(math.min(table.unpack(H)), 2000)
	if HD < 90 then
		Eq.HighGain = -40 + (40 * math.cos((math.pi * (HD - 90)) / 400))
	else
		Eq.HighGain = -40 + (40 * math.cos((math.pi * (HD - 90)) / 1910))
	end

	for _, v in pairs(SpMid) do
		table.insert(M, plr:DistanceFromCharacter(v.Position))
	end
	MD = math.min(math.min(table.unpack(M)), 2000)
	Eq.MidGain = -3 - (37 * math.sin((math.pi * MD) / 4000))

	for _, v in pairs(SpLow) do
		table.insert(L, plr:DistanceFromCharacter(v.Position))
	end
	LD = math.min(math.min(table.unpack(L)), 2000)
	if LD < 10 then
		Eq.LowGain = 6 - ((6 / 10) * LD)
	else
		Eq.LowGain = 0 - (5 * math.sin((math.pi * (LD - 10)) / 4000))
	end

	-- set reverb
	DD = plr:DistanceFromCharacter(orig)
	if DD > 500 then
		Rev.WetLevel = -10
	else
		Rev.WetLevel = -80 + (DD * 70 / 500)
	end
end)
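For reference, one way the per-frame calculation could be trimmed (an untested sketch, same math as above): track the running minimum directly instead of filling a table and unpacking it, which removes the table resets and the table.unpack varargs entirely. nearestDistance is just a placeholder name:

local function nearestDistance(speakers)
	local best = 2000 -- same cap the original applies via math.min
	for _, v in ipairs(speakers) do
		local d = plr:DistanceFromCharacter(v.Position)
		if d < best then
			best = d
		end
	end
	return best
end

-- e.g. HD = nearestDistance(SpHigh), MD = nearestDistance(SpMid), ...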

What’s the relationship between sound and rendering that made you go for RenderStepped?

I’d have probably gone with Heartbeat to not hold up any of the rendering or physics pipeline.

RenderStepped fires before each frame is rendered and Heartbeat fires after.

Correct. That doesn’t answer my question about why sound equalising would need to be done before the render pipeline rather than after it and before physics, or after physics.

It will run at roughly the same frequency either way, but one delays rendering whilst the other does not.

You haven’t mentioned any sort of GUI display or anything that would explain the necessity for it to be done before, hence my question.

The distance between the player and the sound source will be done in the physics pipeline, so naturally I would put this system after the physics update, i.e. heartbeat. Another option could be to hook into the CFrame changed signal on the player of interest. My question is why you chose the method you chose.

There is a set of faders in the script that shows how far the song has progressed, but those can be split: EQ on Heartbeat and RenderStepped for the sliders.

“why you chose the method you chose.”
I saw that RenderStepped is calculated before every frame, and therefore I wanted the sound to reflect the updated position in my next frame (I thought it would be more realistic at the time).
I did not know it held back the other functions.

Okay, no worries. I just didn’t want to make a bunch of recommendations before understanding the reasoning behind it.

As a rule of thumb, use RenderStepped for anything that needs updating for that frame of rendering (e.g. camera position, GUI updates, etc.).

Use Stepped for anything that doesn’t need to be rendered this frame but is needed for the physics update - anything collision-dependent I’d probably put here to ensure it’s included in the physics calcs.

And finally, use Heartbeat for anything else, including things that you want to update between the physics step and the next frame’s render step without delaying that render step. Your EQ levels are a great example of this - they want to react to the physical change of the player’s or the sound emitter’s position, but you don’t want to delay other things unnecessarily for it.
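For the system above that’s just a matter of reconnecting the same callback - a minimal sketch, where updateSoundLevels stands in for the body of the RenderStepped function:

local RunService = game:GetService("RunService")

local function updateSoundLevels()
	-- distance checks plus the Eq.HighGain/MidGain/LowGain and
	-- Rev.WetLevel updates from the original callback
end

-- Heartbeat fires after the physics step, so this no longer holds up rendering
RunService.Heartbeat:Connect(updateSoundLevels)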

However, to cash in on the benefit of standing still, I would personally use GetPropertyChangedSignal on the character and on the sound emitter for their CFrame property, and I would call the function as part of those events.

That way if both are still, you aren’t doing any processing at all!

That’s interesting. Still playing with this.
But if both are moving, won’t it run the function twice?

Running into the next issue, by the way:

local plr = game:GetService("Players").LocalPlayer
local char = plr.Character or plr.CharacterAdded:Wait()

char:GetPropertyChangedSignal("Position")

does not fire (Need help with GetPropertyChangedSignal)

It will potentially run it twice if they are both moving, so there’s that to consider. However, it will be happening in the same part of the frame as Heartbeat, so it’s less of a concern in terms of performance.

And you’ll need to grab a part, probably HumanoidRootPart as the others move with idle animations.

char:WaitForChild('HumanoidRootPart'):GetPropertyChangedSignal(...)
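Putting that together, and coalescing the potential double call into a single update per frame - a sketch, with emitter and updateSoundLevels as stand-in names:

local hrp = char:WaitForChild("HumanoidRootPart")
local soundDirty = false

hrp:GetPropertyChangedSignal("CFrame"):Connect(function()
	soundDirty = true
end)
emitter:GetPropertyChangedSignal("CFrame"):Connect(function()
	soundDirty = true
end)

RunService.Heartbeat:Connect(function()
	if soundDirty then
		soundDirty = false
		updateSoundLevels() -- at most once per frame, even if both moved
	end
end)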

Whether you prefer this over a plain Heartbeat connection probably comes down to how frequently you expect both to be moving, and whether there are long periods of neither moving at all.