I don’t know if this is a bug, but it appears that the AI doesn’t work within a scope. If I have, for example, 100 lines and I want to implement something, I have to put it outside every scope, and that just doesn’t help out a lot.
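To show what I mean (a rough sketch of my own, not the AI’s output):

```lua
-- Completions trigger fine when the cursor sits at the top level:
local points = 0 -- suggestions appear here

local function addPoints(player, amount)
	-- ...but with the cursor inside this function's scope,
	-- nothing gets suggested, at least in my experience.
end
```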
Just wondering, how did you get the script AI to work? I was under the impression it was a prompt-based thing, but others said it would predict code as you wrote it. Even so, while writing up a simple kill brick, nothing related to the AI happened.
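For context, the kill brick I was typing is just the usual Touched handler, roughly this (my own code, nothing the AI suggested):

```lua
-- Minimal kill brick: parent this script to a Part.
local part = script.Parent

part.Touched:Connect(function(hit)
	-- Look for a character with a Humanoid above the touching part
	local humanoid = hit.Parent and hit.Parent:FindFirstChildOfClass("Humanoid")
	if humanoid then
		humanoid.Health = 0
	end
end)
```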
The code writing aspect is cool and scary and junk! It doesn’t work out of the beta box! I want a refund! It just keeps outputting commented variations of the comments that are already in the script. I went from excited to furious in 5 minutes! Junk!
It’s a marketing video. Companies and developers do this all the time. They show off an example of what their end goal looks like, not what their product can currently do. This is mainly to gain investor support or community “hype”.
I wouldn’t say it’s a bad thing, but it can definitely be misleading quite often. Game developers, especially indie, have to do this as well. It’s called a vertical slice demo. Although, in the case of this video, none of it was “real” yet.
I would like to mention a few issues with the system and some suggestions about how it could be improved.
For code completion:
It would be nice if the daily limit were changed, since it doesn’t allow a consistent workflow when interacting with the AI. I’d suggest an hourly limit or something similar.
Additionally, the tool doesn’t let us see the limit. A widget dedicated specifically to this tool, where we could change the AI’s settings, view statistics, and rate the job it does, would be nice.
The tool also sometimes doesn’t finish what it’s generating, leaving missing end statements and missing code in general. Could this be changed?
The tool also suggests bad coding practices, such as using the second parameter of Instance.new (see the sketch below). It would be nice if it were trained specifically not to do that, rather than making us put it into the prompt each time.
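To illustrate (my own sketch, not AI output): the second argument parents the instance before its properties are set, so every property change afterwards happens on a live instance. Setting Parent last is the recommended pattern.

```lua
-- What the AI keeps suggesting: the discouraged second argument.
-- The part is parented to workspace first, so each property set
-- below fires events and replicates against a live instance.
local part = Instance.new("Part", workspace)
part.Anchored = true
part.Size = Vector3.new(4, 1, 4)

-- Recommended pattern: configure first, parent last.
local betterPart = Instance.new("Part")
betterPart.Anchored = true
betterPart.Size = Vector3.new(4, 1, 4)
betterPart.Parent = workspace
```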
For material generation:
Server errors happen consistently. The tool doesn’t let us easily access the image for further editing (a built-in editor to offset the images would be nice). The images are also usually not top-down, which is likely a problem with the data the AI is trained on. I find this AI much less usable than the code completion AI, since I’ve only gotten around two materials that might actually suit my needs.
The limitations of materials in general, such as the lack of transparency and depth, are also a problem. Maybe an AI that can actually build structures from a prompt would be more useful?
Blame the people who made the free models that have that in their scripts.
Code completion is a great addition, but it’s kind of misleading and way too creative, going around and creating new commands I never wanted to create at all.
The only thing I wanted was `local targetplayer = args[1]`
It’s also inventing new services, such as a banservice with its own new functions… which concerns me, to say the least.
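For the record, everything I needed fits in a few lines, assuming the usual setup where args holds the words typed after the command name (the names here are mine, purely illustrative):

```lua
local Players = game:GetService("Players")

-- Hypothetical command handler: args[1] is the target player's name.
local function runCommand(speaker, args)
	local targetplayer = Players:FindFirstChild(args[1])
	if targetplayer and targetplayer:IsA("Player") then
		targetplayer:Kick("Removed by " .. speaker.Name)
	end
end
```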
I’ve done literally the same thing. I’ve found the AI is capable of learning from a template command, and it can create new commands for you by using it as a reference. This lets it create more commands with ease, and the more finished examples it has to work from, the more confidently it actually produces new ones. See the sketch below for the kind of template I mean.
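By “template command” I mean leaving one finished command in the script as a reference (hypothetical names, just a sketch):

```lua
local Players = game:GetService("Players")
local commands = {}

-- Finished "template" command the completion can learn the pattern from.
commands.kick = function(speaker, args)
	local target = Players:FindFirstChild(args[1])
	if target and target:IsA("Player") then
		target:Kick(args[2] or ("Kicked by " .. speaker.Name))
	end
end

-- With the reference above in the file, typing just the next signature
-- is often enough for the completion to fill in a matching body:
commands.freeze = function(speaker, args)
	-- ...
end
```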
Dunno how I feel about this. On one hand, it helps people with less scripting knowledge get a boost. But on the other hand, it defeats the point of scripters and scripting in general.
Having used it, I can say that it provides no more help than the dev hub does, at least in my experience. The only difference is you don’t have to search for the information on the website. At the rate it’s going, it will be years before this could replace a human, and by then the large AI companies like OpenAI will likely already be doing that.