Alright, so there are a few things you need to do then.
First is functionNames. This will be easiest if you add every valid function to a list by hand (otherwise you'd need a much more robust implementation anyway to generate them from your source code). See the small sketch below for what that list could look like.
Now, to understand the string as a segment of code, you need to be able to split it into individual components with meaning: things like function names, strings, parentheses, commas, and operators. This is called tokenizing. A very simple example that handles a basic subset of function calls is below, but it isn't feature complete (it doesn't handle math operators or more complex syntax).
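For example, something like this could serve as that list (just a minimal sketch; printMessage and addScore are placeholder names for whatever functions you actually want to expose):

-- Hand-written whitelist mapping the names users are allowed to call
-- to the real implementations. These entries are just placeholders.
local functionNames = {
    printMessage = function(msg) print(msg) end,
    addScore = function(amount) print("score + " .. amount) end,
}

-- Check the list before running anything so unknown names get rejected
local name = "printMessage"
if functionNames[name] then
    functionNames[name]("hello")
else
    error("Unknown function: " .. name)
end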
Tokenizer example
Note: I generated this with GPT, but it has been tested and works for the subset it handles.
function tokenize(code)
    local tokens = {}
    local token = ""
    local in_string = false
    local escape = false
    local string_char = ""
    for i = 1, #code do
        local char = code:sub(i, i)
        if char == "\\" and in_string and not escape then
            escape = true
            token = token .. char
        elseif (char == "'" or char == '"') and not escape then
            if in_string and char == string_char then
                in_string = false
                token = token .. char
                table.insert(tokens, token:match("^%s*(.-)%s*$")) -- Insert token and trim whitespace
                token = ""
            elseif not in_string then
                in_string = true
                string_char = char
                token = token .. char
            end
        elseif not in_string and (char == '(' or char == ')' or char == ',') then
            if #token > 0 then
                table.insert(tokens, token:match("^%s*(.-)%s*$")) -- Trim whitespace and add token
                token = ""
            end
            table.insert(tokens, char)
        else
            if escape then escape = false end
            token = token .. char
        end
    end
    if #token > 0 then
        table.insert(tokens, token:match("^%s*(.-)%s*$")) -- Trim whitespace from the last token
    end
    return tokens
end

-- Example usage
local code = "func('arg1', 'arg2')"
local parsed_tokens = tokenize(code)
for _, token in ipairs(parsed_tokens) do
    print("'" .. token .. "'") -- Print tokens with quotes to show trimmed whitespace
end
This next step is somewhat optional and somewhat not: generating an Abstract Syntax Tree (AST). An AST adds more information about what every token is (is it a function name? a string?) and records structure and scope, so you can tell what order everything runs in. It isn't strictly necessary, but the barebones version of the concept is: you need to be able to run functions in the correct order while passing the correct types. An AST is essentially just precalculating that data so you don't have to do extensive parsing for every line of code you try to run.
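As a rough sketch of that barebones version (assuming the tokenize function above and a functionNames table like the earlier one; runTokens is just a made-up helper, and it only handles flat calls with literal string arguments, no nesting or operators):

-- Walks the token list, checks each call against the whitelist, and runs it.
-- Only understands name('arg', 'arg', ...) with literal string arguments.
local function runTokens(tokens, functionNames)
    local i = 1
    while i <= #tokens do
        local name = tokens[i]
        if functionNames[name] and tokens[i + 1] == "(" then
            local args = {}
            i = i + 2 -- skip past the "("
            while tokens[i] ~= ")" do
                if tokens[i] ~= "," then
                    table.insert(args, tokens[i]:sub(2, -2)) -- strip the surrounding quotes
                end
                i = i + 1
            end
            functionNames[name](table.unpack(args)) -- use unpack() instead on Lua 5.1
        else
            error("Unknown function or unexpected token: " .. tostring(name))
        end
        i = i + 1 -- move past the ")"
    end
end

runTokens(tokenize("printMessage('hello')"), functionNames)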
I would go into a bit more detail about ASTs, but that link I sent about interpreters probably does already, and I ran out of time.
Basically, if libraries exist that handle these things, you really should be looking for them. It's very likely some or all of this is already handled by something somewhere, but you could build it yourself too.