Release Notes for 435

Notes for Release 435


Client Difference Log

API Changes

Added Class UICorner : UIComponent
	Added Property UDim UICorner.CornerRadius

Added Property bool Studio.Enable Internal Features
Added Property bool Studio.Show CorePackages
Added Property bool Studio.Show FileSyncService

Added Function void WorldRoot:BulkMoveTo(Objects partList, Array cframeList, Enum<BulkMoveMode> eventMode = "FireAllEvents")

Added Event GuiService.NativeClose() {RobloxScriptSecurity}

Added Enum BulkMoveMode
	Added EnumItem BulkMoveMode.FireAllEvents : 0
	Added EnumItem BulkMoveMode.FireCFrameChanged : 1
	Added EnumItem BulkMoveMode.FireNoEvents : 2

Added EnumItem HttpError.SslVerificationFail : 11

Changed the value of EnumItem HttpError.Unknown from 11 to 12

(Click here for a syntax highlighted version!)


YES! No more having to add 50 more characters to my super long table indexing operation!
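The feature being celebrated here is Luau's compound assignment. A quick sketch with hypothetical names (Luau-only syntax, so this won't parse in stock Lua):

```lua
-- Before: the long table-indexing expression has to be written twice.
stats.players[userId].score = stats.players[userId].score + 10

-- After: Luau compound assignment writes it once.
stats.players[userId].score += 10
```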

You may or may not be telling the truth, zeuxcg!


The link isn’t working for release notes?


Fixed, link was malformed.


Yeah, I love the BulkMoveTo method of WorldRoot! Now my scripts can be much shorter!

But I don’t know what this means, exactly:

20% faster, perhaps?


Really, REALLY excited to have compound assignments, but I’m more curious about BulkMoveTo. Is there any particular use case this is designed for beyond moving parts with great performance? Lack of undo/not firing changes/not replicating to client seem like some pretty major tradeoffs.


Class is visible and insertable in Studio but it doesn’t seem to have any effect yet. Anyone know what flag I need to flip to enable this?


I think it means hashtable elements take up less space now. So a larger table with tons of key-value pairs would be about 20% smaller in memory than before.


Yeah, previously the size of a table with N hash keys (assuming N is a power of two) was ~40N+80, and now it’s ~32N+72.
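Plugging those formulas in for a concrete N shows where the ~20% figure comes from (the per-slot and fixed costs below are just the numbers quoted above, not exact engine internals):

```lua
-- Approximate hash-part memory cost of a Lua table with n slots
-- (n assumed to be a power of two), per the numbers in this thread.
local function oldSize(n) return 40 * n + 80 end
local function newSize(n) return 32 * n + 72 end

local n = 1024
print(oldSize(n))                    -- 41040 bytes
print(newSize(n))                    -- 32840 bytes
print(1 - newSize(n) / oldSize(n))   -- roughly 0.2, i.e. ~20% smaller
```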


With compound operators coming could we perhaps in the future expect new operators or even backport some like floor division // and bitwise operators? Lua 5.3 has them.


So, plans can change but right now we don’t have a desire to introduce either. There’s an occasionally updated summary of the features that exist in later Lua versions; we’ve taken most library features, but as far as language features are concerned, no recent addition feels truly worthwhile.


The big reason why Lua has both is that it has first class integers. First class integers sound nice, but they can actually slow the interpreter down due to the need to handle multiple core numeric types (this cost can be recovered to a large extent through some duplication in the opcode space, but overall this doesn’t seem very interesting). The only place where our interpreter right now takes a tiny performance hit due to absence of integers is table lookup by index.

Integers are also a bit of an open question wrt compatibility. I don’t recall what Lua’s semantics are for them, but I’d have a lot of questions around the behavior of existing code, handling of various overflow conditions, etc.

Integers probably make sense for “small Lua” when you run it on microcontrollers with no good floating-point environment, or so little memory that you use a 32-bit floating-point type, which only gives 24 bits of integer precision. But we effectively have 53-bit integers for “free”.

Once you introduce integers, of course you need two division operators (unless you want to be like C-derived languages and just say that 1/2 == 0, which is obviously wrong but somehow we’ve accepted this for decades), and it’s tempting to introduce bit manipulation as operators.

For us though, given the lack of integer support, it’s not clear what purpose these operators would serve. Sometimes you might want to get the integer part of a division, but math.floor(a/b) works just as well. Bit operators are used very rarely in Lua code (justifiably) - you should use them when you need to manipulate bits, for example for cryptography or binary data manipulation, but this doesn’t come up very often.
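To make the floor-division point concrete, here is what a hypothetical a // b would compute, spelled with the existing library call:

```lua
-- math.floor(a/b) matches Lua 5.3's // for these cases:
print(math.floor(7 / 2))    -- 3
print(math.floor(-7 / 2))   -- -4 (floors toward negative infinity, like //)
print(7 % 2)                -- 1, the matching remainder
```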

For each operator like this we pay the cost of extending the parser, extending the type system, extending the VM with new bytecodes and adding new metamethods. This isn’t that big, but it all adds up.

For bit operations in particular, not all common operations are even representable cleanly without function calls! For example, bit32.rrotate is a very common operation, so what do we do - do what Lua 5.3 did and just remove the bit manipulation library, and require users to synthesize it as something like (v << 7) | (v >> 25)?
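For illustration, here is what synthesizing a 32-bit rotate looks like with neither operators nor bit32, using only arithmetic plain Lua already has - a sketch, not production code:

```lua
-- 32-bit rotate-right without bitwise operators: the top 32-k bits
-- shift down, and the bottom k bits move to the top.
local function rrotate(v, k)
    k = k % 32
    local high = math.floor(v / 2 ^ k)       -- v >> k
    local low  = (v % 2 ^ k) * 2 ^ (32 - k)  -- low k bits << (32 - k)
    return high + low
end

print(rrotate(1, 1))  -- 2147483648 (the low bit wraps to the top)
```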

So - I don’t expect that either of these operators will be added any time soon. Of course time will tell, and if there’s a lot of demand, we can always reevaluate. Compound assignments have been a feature that comes up all the time, and it’s ubiquitous in basically all modern programming languages, so we felt like it is the right thing to do, but with these other operators it doesn’t seem as clear-cut.


BulkMoveTo will be extremely useful for me. Looking forward to getting some sweet performance improvements in my placement code. I’m interested in how replication will work… Obviously Anchored parts can probably be desynced entirely but this already is possible by moving the part client-side. How will constrained parts behave? And will the part’s CFrame update when pushed server side? What happens if the parts position is locked on Heartbeat? Can the client continue to simulate the part if they have network ownership? Can a client override this behaviour?


I think that you’ve given a great explanation and a great justification for the lack of a need for bitwise operators in Lua. Personally, though, the main reason I would prefer bitwise operators over the bit32 library, even though I hardly use them anywhere, is mainly 1. character count (e.g. 12 vs. 2 characters for a right shift), and 2. being pretty used to languages which offer bitwise operators (JS, C-like languages, etc.), so I think far better when reading a >> b vs bit32.rshift(a, b). It feels far more natural to me to read it as an operator vs a function call, because function calls are very “meaty” and it takes me a minute to put everything in the call together, such as the actual order of the shift.

That is actually one of the biggest reasons I have yet to fully port my compression algorithm (which is now about a year in progress and looking very neat, yay!) from string manipulation to bitwise operations… Which is a shame, because even with my currently very well optimized system, porting to use bitwise operators is (give or take) looking to be roughly 100-200x faster (my almost-kinda-sorta-working code went from ~100ms to ~0.5-1ms for a megabyte of gibberish to be compressed, and roughly 60-70% of that for decompression due to one less loop).

Edit: Speaking of the algo, I think that you had asked if I might submit a version of the algorithm to go into Luau testing at one point… I’ve sort of hit a standstill with my optimization/organization so far, so now might be a decent time for me to actually submit it.


I don’t disagree, but we need to balance the language simplicity (will a novice programmer know what a >> b is, and how to google for it?), the implementation simplicity (parser/typechecker/compiler/bytecode/VM - the entire stack), performance and readability of the resulting code.

So the question isn’t “is a >> b more readable”, but “what’s the cost of making this change versus the benefit that it provides”.

I don’t know offhand why Lua introduced these in later versions, but I’m somewhat surprised they did. It could be that for Lua the motivation was “function calls are slow”, but we don’t have this limitation - we routinely run bit manipulation code much faster than Lua does, last time I checked, and there’s also a small optimization for bit32 in the backlog that’ll make bitwise integer manipulation code a bit (ha) faster still.


Wanted to add to the bitwise talk here. I would prefer the operators over functions because of character count and readability. Sometimes when unpacking data you need to chain several of these operations in a row.
Here’s an example used on Roblox a bit:

bit.lshift(bit.band(f8, 0x7F), 4) + bit.rshift(f7, 4)


((f8 & 0x7F) << 4) + (f7 >> 4)

At least for me, personally, it’s significantly harder to see what’s going on in the function example.


I agree with your example especially… This is actually the exact situation I’ve gotten myself into atm. One call isn’t as bad, but the nested ones are the worst. I have a “maybe not really working” bit32 version of my algo, but something is just very, very slightly off in my implementation: extra 0 bits (in basically arbitrary amounts) are being placed at the end of the compressed segment, which means that somewhere the compression code is generating the right data but improperly escaping the segment, and I just can’t see where, because everything looks like nested function calls and gibberish to me.


Perhaps, but:

bit32.extract(f8 * 256 + f7, 4, 11)

says “extract an 11-bit exponent starting from bit position 4”. (out of the top 16 bits of the double-precision value; it would be more convenient if the double came as fewer, larger chunks, perhaps two 32-bit halves, but presumably that’s harder on calling code)
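The same extraction can be mirrored with plain arithmetic, shown here on the top two bytes of the double 1.0 (f8 = 0x3F, f7 = 0xF0), whose biased exponent field is 1023 - a sketch for illustration:

```lua
-- bit32.extract(x, 4, 11) pulls 11 bits starting at bit 4;
-- arithmetically that's floor(x / 2^4) % 2^11.
local f8, f7 = 0x3F, 0xF0          -- top two bytes of the double 1.0
local x = f8 * 256 + f7            -- 0x3FF0
local exponent = math.floor(x / 2 ^ 4) % 2 ^ 11
print(exponent)                    -- 1023, the biased exponent of 1.0
```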


Sorry I’m running late today, I’ve been busy with work related stuff.
The API changes should be live and viewable now.


Why not replicate to the client? This seems kind of unprecedented, in that usually it’s the client that has to do something for there to be a difference from the server, not the other way around.

Of course, one can always implement replication themselves, but it seems kind of weird that this would not replicate to clients.


Looking at the API changes, it appears you can control this behaviour with a third argument, the BulkMoveMode enum. The default is “FireAllEvents”; the others are “FireCFrameChanged” and “FireNoEvents”. I’d assume the first is the most expensive, the second less so, and the last won’t replicate/fire anything, so it’s the least expensive.
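Based on the enum values in the diff at the top, a hypothetical usage sketch (the folder name and offset are illustrative, and this only runs inside Roblox):

```lua
-- Sketch only: assumes a Folder workspace.Bricks containing BaseParts.
local parts, cframes = {}, {}
for i, part in ipairs(workspace.Bricks:GetChildren()) do
	parts[i] = part
	cframes[i] = part.CFrame + Vector3.new(0, 10, 0)
end

-- The third argument picks how much event work is done; FireNoEvents
-- should be the cheapest per the discussion above.
workspace:BulkMoveTo(parts, cframes, Enum.BulkMoveMode.FireNoEvents)
```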