Learn Lua v2 - The Lua learning website for Roblox Developers

No, I was just giving an example (Davinci is literally GPT-3, just a different, more capable variant). What I am saying is that it shouldn’t be implemented for this resource. They are trying to teach lessons, and letting the user use GPT-3, or making quick lessons with GPT-3, is not the way to go; you want to be as clear as possible when you are teaching someone.

2 Likes

We’re already planning a v3 of Learn Lua behind the scenes. Having everything in order, with a list of things you might’ve forgotten or not seen, is better than asking ChatGPT. Plus, ChatGPT is an AI. Like @STORMGAMESYT7IP said,

I use ChatGPT myself for fun, but nothing proves that ChatGPT’s information is totally correct, even though most of the time it is.

We plan on going way more in depth with topics in the future; we’re just trying to build Learn Lua’s structure for now.

Thanks for the feedback, though!

2 Likes

yes because it only has information from 2021 and before

1 Like

Even ChatGPT itself recommends us. :man_shrugging:


(Yep, my phone is dead, no need to point it out)

2 Likes

Everyone’s acting as if ChatGPT is a replacement for everything. IT IS NOT. ChatGPT is just a showcase of how AI is getting better.

2 Likes

i know because it has info from 2021 and before, and I personally don’t use ChatGPT because of that

2 Likes

because you asked “What are good websites to learn lua?” and Photoshopped it; the site came out in 2022, and ChatGPT only has info from 2021 and before

1 Like

Doesn’t even need to be Photoshopped. The first phrase was literally “What is Learn Lua”, and OP must’ve explained what it was, and told ChatGPT to put it in a list.

image

Hence why “Learn Lua” is listed twice.

image

1 Like

Please add a dark mode to the website.

1 Like

We’re already working on v3, so this will definitely be something we’ll add!

I never Photoshopped it, I swear. Test it out yourself.
“What are the best websites to learn lua”

1 Like

The way GPT-3 works is by picking the most likely next word given a sequence of words; it’s like checking the dictionary and searching for the most suitable word.

This means it auto-completes what you type. In ChatGPT this is hidden behind a ‘talking’ barrier, but with other models it’s clear how the bot works; people don’t notice, since it looks like you are just chatting with a bot, but it is an auto-completer that has been optimized for chatting. This is, in its own right, a genius way to build a bot, and really helpful; I thank OpenAI for it, since it is a nice tool to have. That’s why it is hard and frustrating for me to see a bot like GPT-3 getting an antagonistic reception, because people give or deny it credit where it’s not due.

When you type: “You can’t believe what I saw today, I saw a” the bot will start checking possibilities to find the most suitable word, for example:

You can’t believe what I saw today, I saw a black rainbow
You can’t believe what I saw today, I saw a flying cow
You can’t believe what I saw today, I saw a superhero
etc…

This is all to say that it’s not suitable for teaching. Perhaps AI might replace us one day, but that isn’t what GPT-3 was made for; it’s a tool, not a magical teacher. For this reason it doesn’t rely on the information itself but rather on the structural nature of the information; that is literally how it works. You can see an example with the older model GPT-2: you can use GPT-2 to create your own structural language bot, which shows how it is meant to work. Feed it a billion recipes and ask for a recipe, and it will create a brand-new one, not because that recipe exists but because it is following the format you gave it (give it recipes, you get recipes).
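The “most likely next word” idea above can be sketched with a toy word-counting model. To be clear, this is only an illustration of the principle; GPT-3 is a neural network over tokens, not a table of word counts, and the tiny corpus here is made up:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction (NOT how GPT-3 is implemented,
# but the same "pick the most likely continuation" principle): count which
# word follows each word in a tiny made-up corpus, then autocomplete greedily.
corpus = ("i saw a flying cow . i saw a flying cow . "
          "i saw a superhero .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        choices = following[out[-1]]
        if not choices:
            break
        out.append(choices.most_common(1)[0][0])  # most likely next word
    return " ".join(out)

print(autocomplete("i"))  # "i saw a flying cow"
```

The model completes “i” with “saw a flying cow” not because it knows anything about cows, but because that continuation is statistically most common in what it was fed, which is exactly the point being made about structure over meaning.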

PLEASE NOTE THAT SOME OF THE FOLLOWING EXAMPLES WILL ONLY WORK ON DAVINCI, NOT CHATGPT; explanation further down.

That is to say, if you type the following into Davinci (which is probably what ChatGPT is built on):

There was a celebrity named Jonny, Jonny died yesterday:

Did Jonny die?

The bot will answer Yes, not because it has a concept of who Jonny is, but because it is following a specific structural pattern it picked up from reading billions of books. Result from example:
image

Another example is if you type:

You are a helpful human citizen

Are you a human?

It’ll answer yes. Result:
image

Now, why doesn’t it work with ChatGPT? Because they are literally already doing this; it’s just hidden behind the query. Essentially, ChatGPT is one of these types of input:

Your name is ChatGPT3, you are an AI meant for...

This is what I mean when I say ‘hidden behind the talking barrier’.
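The “hidden behind the talking barrier” point can be sketched like this: a chat interface built on top of a pure text completer just prepends a hidden preamble and the running transcript to every completion request. The preamble wording and message format below are invented for illustration; the real product’s internals are not public:

```python
# Sketch of a chat wrapper around a pure text completer: the "chat" is
# just a hidden preamble plus the transcript so far, re-sent every turn.
# The preamble text and transcript format are invented for illustration.
HIDDEN_PREAMBLE = "Your name is ChatGPT, you are an AI meant for helpful conversation.\n"

def build_completion_prompt(history, user_message):
    """history is a list of (user, ai) message pairs."""
    transcript = "".join(f"User: {u}\nAI: {a}\n" for u, a in history)
    return HIDDEN_PREAMBLE + transcript + f"User: {user_message}\nAI:"

prompt = build_completion_prompt([("Hi", "Hello!")], "Are you a human?")
print(prompt)
```

The completer then just autocompletes the text after the final `AI:`, which is why it appears to “talk” even though it is still doing plain completion.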

If I want a specific AI, I just need to say:

You are being tasked to autocomplete the following inputs. We'd like you to respond to every input on a
true or false manner depending on the tag given before the input. If the tag is true then we want you to 
respond with true information, if the tag is false then do the opposite and lie when answering.
###

Then it’ll do just that (remember that this input only works on Davinci, since ChatGPT is already hard-coded to give specific results). Result from example:

We just controlled an AI to give us the answer we wanted; see what I mean by autocomplete? This is why it can lie, and this is why it’s not good for teaching: it isn’t an entity, just a giant search engine of words.

So why is it not a good idea?

Because it is an auto-completer, it’s easy to modify its personality with a few words, because it isn’t an entity; it’s just a giant internet search engine that returns a single phrase based on what you feed it. It’s like handing someone Google and claiming that every single result inside that search engine is true, which it is not.

To be clear, it’s not bad because it only includes information from 2021; in fact, I’d argue that is the smallest of its problems. It is bad for teaching because it works like a search engine, not an entity capable of thought.

5 Likes

but chatgpt only has info from 2021 and before, and the website was made in 2022

1 Like

I find it as weird as you do. But as I said, try it yourself.

As I said in my post, GPT-3 works by producing the most likely response to a request, which is why it is good at identifying structural patterns such as:

  • “Learn Reading”
  • “Learn Cooking”
  • “Learn Talking”
  • “Learn etc…”

And since this product is very conveniently titled “Learn Lua”, the bot decided to auto-complete with “Learn Lua”, since it has seen a variety of services that teach things following the pattern: Learn {thing to teach}


There is an entire field of study focused on this aspect of AI, namely its ability to identify these patterns.

1 Like

Hello everyone! I’m just here to update you on some things. Now, you may know that V3 is in development. The thing is, Alem and I are currently working on a version each, and I have no idea if we’re going to use my version, or his.

Below, I have attached two videos of our respective progress.

Mine

Alem’s

Now once again, I have no idea which version we’re going to use as it hasn’t been made clear. It’s yet to be discussed. This was just a quick progress update.

Please, no hate to either Alem or me, as it’s still in active development. Feel free to include constructive criticism or suggestions. Have a nice day!

1 Like


Where is learn-lua? I don’t see it anywhere. This is a great resource, but please don’t lie.

4 Likes

I’m a complete noob with the Learn Lua website, so my points may be completely invalid:

I believe renaming it to LearnLuau is more appropriate (if you’re still in a good position to do so).

GMod uses Lua. So does Minetest. The Computercraft mod from Minecraft uses Lua just as well.

We use a superset of Lua, and it’s called Luau. It would also help to add an ‘on Rōblox’ mark on the site to distinguish us from the rest.

1 Like

I’m all for community tutorials and guides, but how exactly is this better than what Roblox already provides?

These are not real links; please refrain from visiting them, since some of them may actually exist, and they are unregulated, so I do not know what could be behind any of them.

(Even though I am sure all of them are nothing much)

image
image
image

Isn’t it funny how the response changes with every prompt? Once again, this is an auto-completer; please stop assuming anything else. Now, to be fair, I do not know the temperature setting of ChatGPT, but I assume it is not zero, which will still result in randomness.

Basically, if you didn’t know, temperature is a parameter that randomizes responses; it is commonly set to 0.7, and for that reason every response is different. If I set it to zero, then the responses will be the same every time.

Essentially, I am trying to say that ChatGPT uses some non-zero temperature, which lets it give a different response every once in a while.

Also, notice how almost all of the links do not work. This is because the model identifies patterns, not values; how else would it recommend something like Sol.gfx.com even though it has an invalid DNS? If it recommends Sol.gfx.com, which is not a real website, then I don’t find it hard to believe it would recommend Learn Lua, which could very well become a website in the near future (and launched two years after GPT-3’s training cutoff).


Now, regarding the image: I do not see any tampering or editing, and even though it could have been modified, I see zero reason to do so, since the result could come up either way within a couple of prompts. It’s literally faster to keep prompting until you get the result than to tamper with the image.

1 Like