Globals lose their highlight when localized

Globals, when localized, lose their selected highlight color and are treated as normal locals.
To me this looks extremely ugly, and to my knowledge there is no setting to reverse it.

Also, type and typeof are no longer red when called, though this is just a pet peeve of mine.

Expected behavior

Globals, when localized, retain their selected highlight color.

As someone who would never override a global identifier with a local identifier, I’m curious why you are doing that. It seems like bad practice to me. The syntax highlighting you dislike helps make it clearer what’s going on when somebody does this kind of override, which is good in my view. I wouldn’t want this issue fixed.

Localizing globals gives the script a very slight performance boost because it skips indexing the global environment each time the global is retrieved. The same goes for localizing members of libraries, which skips indexing the library each time you want to use the function (table.insert > insert). It’s actually pretty good practice.
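For anyone unfamiliar with the pattern, “localizing” just means caching a global (or a member of a library) in a local variable before use. A minimal sketch (the variable names here are illustrative):

```lua
-- Cache the whole library, or a single member of it, in a local:
local tbl = table            -- whole library
local insert = table.insert  -- single member

local t = {}
insert(t, "a")     -- no table lookup at the call site
tbl.insert(t, "b") -- skips only the global-environment lookup
print(#t)          -- 2
```

Note that code elsewhere in this thread shadows the global name itself (local table = table), which is exactly the override that triggers the highlighting change being reported.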

I should not be punished with missing highlighting for wanting to macro optimize by localizing a library without localizing its individual members, because I don’t want to take up local scope space.

It never did this before. And I don’t like it.

Both of these claims are incorrect due to global-access optimizations and fastcalls:

It’s always possible to “localize” the global accesses by using local max = math.max , but this is cumbersome - in practice it’s easy to forget to apply this optimization. To avoid relying on programmers remembering to do this, Luau implements a special optimization called “imports”, where most global chains such as math.max are resolved when the script is loaded instead of when the script is executed.

From: Optimization - Luau

This is not true in Luau for built-in libraries as far as I am aware. Have you confirmed this yourself using a library like boatbomber’s Benchmarker or through analysis of the Luau bytecode?

I do believe localizing is useful for things like Vector3 and CFrame methods as I have observed performance differences with those. But they are not built-in to Luau. Things like table and print are.

You can check this for built-in libraries by looking at the bytecode produced by the Luau compiler (built from the GitHub sources and run with luau-compile --text <input_file>). In this case, it appears that built-in function calls are replaced with faster bytecode instructions when used “globally” rather than when they are localized.

Luau code & Bytecode
-- Test A
local function example()
   local t = {}
   table.insert(t, "test")
   print(t[1])
end
example()
-- Test B
local table = table
local print = print
local function example()
   local t = {}
   table.insert(t, "test")
   print(t[1])
end
example()
-- Bytecode A
Function 0 (example):
REMARK allocation: table hash 0
    2:    local t = {}
NEWTABLE R0 0 0
REMARK builtin table.insert/2
    3:    table.insert(t, "test")
FASTCALL2K 52 R0 K0 L0 ['test']
MOVE R2 R0
LOADK R3 K0 ['test']
GETIMPORT R1 3 [table.insert]
CALL R1 2 0
    4:    print(t[1])
L0: GETIMPORT R1 5 [print]
GETTABLEN R2 R0 1
CALL R1 1 0
    5: end
RETURN R0 0

Function 1 (??):
    1: local function example()
DUPCLOSURE R0 K0 ['example']
    6: example()
MOVE R1 R0
CALL R1 0 0
RETURN R0 0
-- Bytecode B
Function 0 (example):
REMARK allocation: table hash 0
    4:    local t = {}
NEWTABLE R0 0 0
    5:    table.insert(t, "test")
GETUPVAL R2 0
GETTABLEKS R1 R2 K0 ['insert']
MOVE R2 R0
LOADK R3 K1 ['test']
CALL R1 2 0
    6:    print(t[1])
GETUPVAL R1 1
GETTABLEN R2 R0 1
CALL R1 1 0
    7: end
RETURN R0 0

Function 1 (??):
    1: local table = table
GETIMPORT R0 1 [table]
    2: local print = print
GETIMPORT R1 3 [print]
    3: local function example()
DUPCLOSURE R2 K4 ['example']
CAPTURE VAL R0
CAPTURE VAL R1
    8: example()
MOVE R3 R2
CALL R3 0 0
RETURN R0 0

Moreover, boatbomber’s Benchmarker plugin also reveals the “global” way to be more efficient.

Benchmarked code
--[[
This file is for use by Benchmarker (https://boatbomber.itch.io/benchmarker)

|WARNING| THIS RUNS IN YOUR REAL ENVIRONMENT. |WARNING|
--]]

local table_local = table

return {
	ParameterGenerator = function()
		return
	end,

	BeforeAll = function() end,
	AfterAll = function() end,
	BeforeEach = function() end,
	AfterEach = function() end,

	Functions = {
		["A"] = function(Profiler)
			
			local t = {}
			for Index = 1, 1000 do
				table.insert(t, Index)
			end
			
		end,

		["B"] = function(Profiler)
			
			local t = {}
			for Index = 1, 1000 do
				table_local.insert(t, Index)
			end
			
		end,
	},
}

If you are micro-optimizing (this stuff is not called “macro”-optimizing), you should be measuring these things yourself and making sure that you are actually getting the desired results :slight_smile: If you are not doing that, then all your effort is wasted.

I don’t appreciate all of your passive-aggressive replies. Run this for yourselves, then change os_clock to os.clock. Localized will almost always beat unlocalized, solely because it skips the global environment and library indexing. Test multiple times to be sure. :slightly_smiling_face:

local os_clock = os.clock
local clock = os_clock()
for i = 1, 1e7 do 
	os_clock() -- replace the _ in both of these with . for testing
	os_clock()
end
print(os_clock() - clock)

So I tried running the benchmark you sent multiple times in Studio’s command bar with os.clock() and os_clock() like your comments suggested. I got values in the range 0.41-0.43 seconds for both of them, which means if there is a difference in performance it seems to be less than the general amount of variance in doing the work (to the human eye).

I also tried putting your benchmark into the Benchmarker plugin, and adjusting the number of iterations (I was able to do 1e2, 1e3, 1e4, and 1e5). Again, I got no clear pattern for which one is better (e.g. for 1e3 what the plugin reports as the fastest version is constantly changing with difference < 1 microsecond).

Benchmark code
--[[
This file is for use by Benchmarker (https://boatbomber.itch.io/benchmarker)

|WARNING| THIS RUNS IN YOUR REAL ENVIRONMENT. |WARNING|
--]]

local os_clock = os.clock

return {
	ParameterGenerator = function()
		return
	end,

	BeforeAll = function() end,
	AfterAll = function() end,
	BeforeEach = function() end,
	AfterEach = function() end,

	Functions = {
		["A"] = function(Profiler)
			
			for i = 1, 1e3 do 
				os.clock() -- replace the _ in both of these with . for testing
				os.clock()
			end
			
		end,

		["B"] = function(Profiler)
			
			for i = 1, 1e3 do 
				os_clock() -- replace the _ in both of these with . for testing
				os_clock()
			end
			
		end,
	},
}
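Outside the plugin, a plain os.clock harness can also time both forms in a single run. A sketch (absolute numbers vary by machine, and as noted above, in Luau the “global” form may compile to a fast import anyway, so the two timings should be very close):

```lua
-- Minimal harness (no plugin needed): time both call forms in one run.
local os_clock = os.clock
local N = 1e6

local t0 = os_clock()
for _ = 1, N do
	os.clock() -- global access each iteration
	os.clock()
end
local global_time = os_clock() - t0

local t1 = os_clock()
for _ = 1, N do
	os_clock() -- cached local each iteration
	os_clock()
end
local local_time = os_clock() - t1

print(("global: %.3fs | local: %.3fs"):format(global_time, local_time))
```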

The bytecode for the two versions is different, but it appears they have identical performance.

It may be possible that you are getting different results on your machine, but if that is the case you have failed to provide any detailed information that we can work with, such as measurements.

Both the measurements (provided by myself) and the official documentation (provided by @Judgy_Oreo) disagree with you here. We are not being passive-aggressive – we are politely correcting you because you are wrong.

Please read this part of the documentation again:

Luau implements a special optimization called “imports”, where most global chains such as math.max are resolved when the script is loaded instead of when the script is executed

That is all I will say on this. Good luck on your projects :+1:

In nearly all relevant cases, localizing is better than not localizing, at least within the scope of Roblox. Replacing os.clock with CFrame.new, Color3.new, or similar makes the difference far more noticeable.

Regardless, the lack of highlighting on localized globals is disgusting to me, and there’s no way to make it go back to normal. Optimization is not the main focus of this bug report, even though efficiency/simplicity is one of the only reasons I do this. I should not have to endure the highlight disappearing simply because I localize a library.

Thanks for your report. We’ve identified this as a duplicate of a previously reported bug that is triggered by a recent improvement to our syntax highlighter.

For updates, check out the existing thread here.
