Good (IR) bitcode/bytecode instruction set for optimization & obfuscation?

Currently I’m trying to create an RBX Lua optimizer and obfuscator. After doing a bit of research, it seems this is most commonly done at the intermediate representation level. I’ve already built a parse tree, so converting to an IR shouldn’t be too difficult, but right now I’m stuck on which operations to include. I mention bitcode because I want this to be somewhat low level, as that is where most optimization is done (I haven’t taken a look at solutions such as Facebook’s Prepack yet, so I may be wrong). At the moment I’m leaning towards something similar to LLVM: particularly an SSA register paradigm.

Apparently I suck at googling, though, because I cannot find a list of LLVM’s bitcodes. I’ve also looked at Lua 5.1’s bytecodes (why are they called opcodes?) (page 4 of this PDF), but I’m worried they may be too high level (especially NEWTABLE and the actual Lua table; one thing I would especially like to do is optimize table usage).
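For concreteness, here’s the rough kind of register-style encoding I’m picturing for a small constructor like local t = {x = 1, 2}, written as plain Lua data so it runs anywhere; the opcode names (newtable, loadk, settable, setlist) are made up for illustration, not taken from any existing VM.

-- Hypothetical IR sketch: each instruction is a plain record.
local ir = {
	{ op = "newtable", dst = "%1", narray = 1, nhash = 1 },
	{ op = "loadk",    dst = "%2", k = 1 },
	{ op = "settable", args = { "%1", "x", "%2" } },  -- t.x = 1
	{ op = "loadk",    dst = "%3", k = 2 },
	{ op = "setlist",  args = { "%1", 1, "%3" } },    -- t[1] = 2
}

-- Dump the listing to see the shape of it.
for i, ins in ipairs(ir) do
	print(i, ins.op, ins.dst or "-", ins.k or (ins.args and table.concat(ins.args, " ")) or "")
end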

So is Lua’s instruction set sufficient? Does a list of LLVM’s bitcodes even exist? Should I spend more time researching AST-based optimizers such as Facebook’s Prepack?

Thanks


LLVM is usually used as a backend to compile to native, such as x86. Here’s a list of their operations.
https://www.felixcloutier.com/x86/

I think you might be reading it wrong. Lua’s bytecode is higher level but that’s because of the nature of the language and its compactness; you might also be referring to NEWTABLE but that on its own is not considerably high level.

Also, yes, they’re called opcodes everywhere. The opcode is simply the “operation code”, such as Lua’s LOADK or x86’s mov, and its arguments are called operands. Together, an opcode and its operands form an instruction.
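If it helps make the split concrete, here’s a minimal sketch of pulling the opcode and operand fields out of a raw Lua 5.1 instruction word (field widths as in lopcodes.h), using plain arithmetic since vanilla 5.1 has no bit library; this is only illustrative, not something your IR needs.

-- Lua 5.1 packs an instruction into 32 bits: opcode = bits 0-5, A = 6-13, C = 14-22, B = 23-31.
local function decode(word)
	local op = word % 2^6
	local a  = math.floor(word / 2^6)  % 2^8
	local c  = math.floor(word / 2^14) % 2^9
	local b  = math.floor(word / 2^23) % 2^9
	return op, a, b, c
end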

You might want to look into making your own Lua implementation, because as far as I know there aren’t many toolsets already in place for what you’re doing. Also, extensive research on how languages are made/built might come in handy.


I know; it’s just that (from what I’ve seen) I really like their intermediate representation and I want to copy their optimizations.

Yeah, I meant NEWTABLE; how is the table type in Lua not high level? It’s an untyped vector + hash structure with metamethods.

My own implementation of bitcodes? I’m just looking for an (implementation-independent) list of opcodes to define my intermediate representation.

edit:

That’s not LLVM’s, is it? (For one, it’s platform independent.) Their intermediate representation is very similar to assembly code, but it isn’t identical.
gtg :frowning:

It’s just how Lua is designed. Having separate instructions for allocating and freeing memory would most certainly bloat the bytecode and gain little, given how often they’d need to be emitted, as opposed to coupling that functionality into instructions like NEWTABLE.

You might have a hard time finding this; if you’re looking for Lua functionality then Lua bytecode is the best we have (short of LuaJIT’s bytecode format @ The LuaJIT Project).

LLVM is often used to compile to native code such as x86. As for IL/IR, you’ll want to research more on that yourself, because there are many ways of implementing one.


I mean I want to optimize table objects themselves and try to minimize indexing and usage of them as much as possible (and thus have the appropriate opcodes for this).
e.g. I want to optimize {{exp0},{exp1}} into {exp0,exp1},
so I want to make sure I don’t need any extra opcodes for these kinds of things. (Also, I should note I haven’t reached the point where I start having my own thoughts about this; I’m still researching and thus may be asking silly questions lol.)

LuaJIT opcodes are a good idea, I’ll look into it, thanks.

xd I asked about the specific LLVM opcodes and you said that those were them;
I know what LLVM is used for.

Not quite sure what you mean by this as an optimization. These look like two very different operations.

A slightly more complex optimization, but perhaps clearer:
If I had an array of ‘objects’ such as bullets to render/whatever (say each one was of the form {Instance=workspace.Bullet,InceptionTime=1,InitialDirection=Vector3.new(0,0,-1),Type='light'}), then I want to first optimize the individual object hashes into arrays, and afterwards remove the actual object table and merge it into the base array of bullets:
{Instance0,InceptionTime0,InitialDirection0,Type0,Instance1,InceptionTime1,InitialDirection1,...}
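To make that concrete, here’s a hand-written before/after of the layout I mean (the field names and workspace.Bullet are just taken from the example above, not anything a tool would generate):

-- Before: an array of per-bullet hash tables.
local bullets = {
	{ Instance = workspace.Bullet, InceptionTime = 1, InitialDirection = Vector3.new(0, 0, -1), Type = 'light' },
}

-- After: one flat array with a fixed stride of 4 fields per bullet.
local STRIDE = 4
local flat = { workspace.Bullet, 1, Vector3.new(0, 0, -1), 'light' }

-- Reading bullet i's InceptionTime (the 2nd field) in each layout:
local i = 1
local beforeTime = bullets[i].InceptionTime
local afterTime  = flat[(i - 1) * STRIDE + 2]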

edit: After looking a little at LuaJIT (not done yet), its representation seems perfect because it’s both SSA and typed (which I hadn’t given much thought before, but now I realize it’s great for optimizing commutativity and associativity, since it isn’t safe to assume those for metatable-attached tables).
edit 2: After looking a little more (specifically at the SSA IR page), I think I want something even more explicit, especially since the doc itself states ‘To preserve higher-level semantics and to simplify alias analysis they are not decomposed into lower-level operations’.

That’s not an optimization, at least on the Lua end. Allocations are often more expensive than indexing, so it’s better to simply stick with Lua’s bytecode format.

Also, even with a tracing JIT there’s only so much you can do while still retaining functional Lua. Lua is simply too volatile for what you’re trying to do, not to mention performance might actually be negatively affected since this is being implemented from the Lua end itself.

I don’t understand: the optimized version does fewer allocations, has better cache locality, and uses the array part rather than the hash part… it only needs more operations on delete (swapping the last object into this slot is 4 ops instead of 1 on the Lua end (the GC ends up having more work on the C end though)).
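To show what I mean by the delete cost (my own rough count, assuming the 4-field layout from before):

-- Array-of-tables: swap-remove is one move plus a clear.
local function removeObject(objects, i)
	local n = #objects
	objects[i] = objects[n]
	objects[n] = nil
end

-- Flattened layout: every field of the last object has to be moved individually.
local STRIDE = 4
local function removeFlat(flat, i)
	local last = #flat / STRIDE
	local dst, src = (i - 1) * STRIDE, (last - 1) * STRIDE
	for f = 1, STRIDE do
		flat[dst + f] = flat[src + f]  -- 4 moves when STRIDE = 4
		flat[src + f] = nil
	end
end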

I don’t see how you can’t do everything traditional compilers do (except for hardware-specific optimizations, such as taking advantage of a module that does 8 multiplies at a time).

Mobile isn’t letting me quote, so: what do you mean by Lua is volatile? It’s a programming language that by definition is designed to be predictable.

Also, I would probably end up making the IR compile back into Lua itself rather than building an interpreter, although I don’t know yet.

Lua is 99% unpredictable. Roblox Lua is a bit more predictable, but Lua altogether can do vastly different things when running the same exact script in different environments. Also, what you mentioned is an optimization that isn’t possible due to this volatility, not to mention it’s two completely different operations that aren’t interchangeable.

Cache locality and other such concerns also don’t map well onto Lua because of how the VM is made.

It’s probably a lot of work for less than expected results.

He’s talking about high-level optimizations that can be done with Lua source code. This is more relevant to Roblox Lua, which doesn’t use LuaJIT and doesn’t allow bytecode loading.

The example is switching all uses of a hash table to an array, which is generally faster; it’s the kind of thing developers will write manually at the cost of reduced readability.

-- Original
object = {foo = 'bar'}
print(object.foo)

-- Faster
object = {'bar'}
print(object[1])

As long as the operations are contained, there’s no logical difference. The trouble comes in when deciding whether the object “escapes”. That is, if this is a library being optimized, then the compiler has to either avoid optimizing the object if it is passed to external code, or do some kind of reflection to ensure that the library’s API remains correct, which may end up being less performant depending on the usage. It’s a challenging problem.
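A tiny sketch of the difference (the function names here are hypothetical):

-- Safe to rewrite: the table never leaves this function, so changing the key
-- `foo` to slot [1] cannot be observed from outside.
local function localOnly()
	local object = { foo = 'bar' }
	return object.foo
end

-- Not safe without more work: the table escapes to external code, which may
-- still index it as object.foo, so the key cannot simply be renamed away.
local function escapes(externalCallback)
	local object = { foo = 'bar' }
	externalCallback(object)
	return object.foo
end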


I’m not sure how you can claim that Lua is unpredictable when you’re guaranteed execution of your source code in a particular way (probably part of the definition of programming/scripting languages and/or Turing machines).

Sure, there may be external factors such as metatables being attached (which can be accounted for), but these in no way affect the deterministic structure of a programming language.

The mention of LLVM and bytecode threw me off then.

But yes syncing with external code and other dynamic operations would mess with this.

Nope. Lua is completely impossible to fully predict, in terms of what the user is trying to do. Here’s what to watch out for:

  • Environments can have metatables, which means the code can be stopped or redirected any time

  • More vanilla oriented, the debug library exists, which can edit core parts of current execution

  • Code that is obfuscated or unclear might not be possible to analyze in a clear manner and may cause ambiguity

  • A variable or register can hold any value type at any given time during execution

  • Lua’s primitives can have metatables in vanilla, which permits overloading of things like arithmetic on strings

  • Lua is weakly typed, which rules out most of the optimizations that compilers for strongly typed languages rely on.

You can optimize, but you can’t go too far. LuaJIT is a prime example of a good implementation, but it’s not as powerful as it could have been due to Lua’s style.
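For instance, a short vanilla Lua 5.1 snippet showing the environment-metatable and string-metatable points (Roblox locks both of these down, but plain Lua does not):

-- 1) Arithmetic on strings via the shared string metatable, so `x + y` no longer implies numbers.
getmetatable("").__add = function(a, b) return #a + #b end
print("abc" + "de")  --> 5

-- 2) An environment metatable that silently redirects every global lookup.
setfenv(1, setmetatable({}, {
	__index = function() return function() end end,
}))
print("this never prints")  -- `print` now resolves to a do-nothing stub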


Complete semantic analysis (like type systems, constant propagation and folding, dead code elimination) is undecidable in languages with recursion and/or loops. The crux of the issue is that in those cases, data can manipulate code execution in infinite ways, causing slight differences in value that can propagate and make very large differences in program execution down the road. Most languages handle this by either requiring annotations from the programmer (like the explicit types in C and Java; programmers know more about the use cases than the compiler does), using runtime analysis of execution (JIT for scripting languages), or not analyzing the code. Optimizing or analyzing recursive functions and loops without additional information is undecidable and ultimately turns into a version of the halting problem.
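As a small illustration (io.read here just stands in for any value the compiler can’t see ahead of time):

-- A compiler can fold this to 12 by purely local reasoning.
local x = 3 * 4

-- Whether this loop terminates for every possible input is the Collatz conjecture;
-- in general, deciding what follows a loop (dead code elimination) or what value it
-- produces (constant propagation) runs straight into the halting problem.
local function collatz(n)
	while n ~= 1 do
		n = (n % 2 == 0) and n / 2 or 3 * n + 1
	end
	return n
end

print(x, collatz(tonumber(io.read()) or 7))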

Other features of Lua, mentioned by @ANSI_C, have the same effect as the more general conditions above. To make matters worse, type inference is difficult in Lua because of operator overloading (val_1 + val_2 doesn’t tell you much about what the values are). The one advantage RBXLua has is that if you analyze an entire Lua program at once, it exists in a defined environment with known types and expected values (userdata properties and methods).
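To put the val_1 + val_2 point into code (the Vec type is made up for this example):

-- `+` can be overloaded, so the expression alone pins down neither the operand types nor the result type.
local Vec = {}
Vec.__index = Vec
Vec.__add = function(a, b) return setmetatable({ x = a.x + b.x }, Vec) end

local function sum(val_1, val_2)
	return val_1 + val_2  -- numbers? coercible strings? Vec instances? Unknowable statically.
end

print(sum(2, 3))                                                          --> 5
print(sum(setmetatable({ x = 2 }, Vec), setmetatable({ x = 3 }, Vec)).x)  --> 5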

I’ve thought a lot about Lua optimization and typing. The more I do, the more I want to write another language. But here is the kicker: even if you add type annotations to Lua or even design a whole new language, you can still only perform high-level optimizations on Roblox. Eventually everything must be turned back into Lua source code to run, or be run by an interpreter written in Lua. Indeed, the future isn’t very bright for those of us interested in doing low-level work like this on the Roblox platform.

As for optimization outside Roblox, I really prefer ASTs because they can easily be extended to hold additional information. The fundamental difference is pointers: you can edit one node without modifying the positions of the others, or even create graphs, which cannot be represented in an instruction-based form without including “references” that would basically be pointers. ASTs are closer to what should actually exist in RAM while processing, while instruction-based forms require interpretation before manipulation. But if you are optimizing outside Roblox, I’d reuse an existing format so that you can also reuse its optimization engine.
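A quick sketch of what I mean by extending nodes and sharing them by reference (the field names are arbitrary):

-- Table-based AST nodes: annotations are just extra fields, and shared subtrees
-- are ordinary references, since Lua tables already behave like pointers.
local one = { kind = "number", value = 1 }
local node = {
	kind  = "binop",
	op    = "+",
	left  = one,
	right = one,  -- the same node referenced twice: a DAG, not strictly a tree
}

-- An optimization pass can decorate nodes in place without shifting anything else.
node.inferredType  = "number"
node.constantValue = node.left.value + node.right.value
print(node.constantValue)  --> 2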

I mentioned what I think the future of Roblox Lua should be here: Get user's script font size? - #6 by IdiomicLanguage
and again here: Proper Way To Preload Assets - #15 by IdiomicLanguage, where Saranok said, “Completely disagree. […] This is a core tenant of how we operate.” The future for work like this is very dim indeed, or at least it is to me.

All of what you point out stems from having an incomplete model of Lua.
I still don’t understand where you get volatility from.

I’m not sure why this has turned into a debate about what optimizations can be done in Lua.

The OP says that I’m looking for a good instruction set for my intermediate representation, not asking what can and cannot be done to optimize Lua.

Also, I will look at Facebook’s Prepack tonight to learn more about AST-based optimization.

The original post says this is the end goal for the IR:

And @ANSI_C superbly summarized the responses:

I would add that some basic optimizations can be done, but they would all be minor and have minimal impact.

Trust me, I don’t like the situation that causes this either…

Not to mention that you’ll probably be hard pressed to find someone with experience with multiple IRs on this forum.


I think it’s as much about the journey as it is the goal… I’ve learned a lot already, and there is definitely more to learn.
I only mention the (perhaps an?) end goal because I don’t want replies asking what I’m trying to do.

Yeah, it sucks that Roblox is such a limited platform, but that forces me to try these cool things myself (even for marginal gains) instead of copy-pasting some existing library or not worrying about this at all.

On finding people with experience: this post has already been helpful for me because I’ve been pointed to LuaJIT, and I’m sure there’s more value to be found here too.
