This is an excellent initiative, congratulations! Unfortunately, this method only shows changes within the lines of code; in practice, after some time even the programmers themselves cannot quickly explain what was done.
As I mentioned in my previous post, the best form of release note would be two texts written by the programmers who made the changes: one with the internal technical details, and another explaining to end users what has changed. This would help both Roblox internally and the end users.
The tool does show API changes; look at API-Dump.txt.
You may be looking at a comparison which doesn't have any API changes.
Compare 445 to 446
Scroll down to API-Dump.txt if you want readable changes to the API
I'm with you there. Whether admitted or not, there are a lot of problems with how they communicate what's being changed. It's all JIRA-driven, and they haven't put nearly as many resources into coordinating documentation with engineers as I'd like to see. It has definitely improved over the years, but you've made some good points on where it's lacking.
They do the best they can given the resources they have, but I think it could be a lot better if more time and resources were invested into it. Their team was small when I was an intern, and I can't imagine it has grown much since. It's on us to voice to leadership that we collectively want more resources put into these things. No single employee has influence over these decisions alone; they can only provide context within the status quo they work in.
This is part of the reason why I created my client tracker. Roblox on its own is very opaque about the changes they're making to the engine. My goal was to fill that information void with useful clues for power users to tinker with the engine (assuming you've picked up enough information to tamper with Roblox Studio; my mod manager is a good catalyst for this). Through a combination of automated data mining and GitHub's diffing tools, I'm able to provide enough information for power users to dig into the code changes that Roblox has shipped.
Because Roblox ships to multiple varying codebases at differing intervals (due to app store approval delays), they ship new changes in a disabled state and enable them on the fly using flags toggled from a remote endpoint. These flags are the source of truth for whether a given code change is enabled or not, and they can be patched locally. They usually have names that provide context about what they change, and over the years I've been able to make pretty good inferences about what Roblox is changing simply from these flags.
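To illustrate, a local override is just a small file of flag names and values that the client reads at startup. This is a purely hypothetical sketch - both flag names are made up, and the exact file name and location depend on your install (for Studio it's typically a ClientSettings/ClientAppSettings.json next to the executable):

```json
{
    "FFlagMyHypotheticalFeature": "True",
    "DFIntMyHypotheticalRolloutPercent": "100"
}
```

Roughly speaking, the prefixes encode the flag type - FFlag for a boolean fast flag, DFFlag for one that can be flipped at runtime without a restart, FInt/FString for numeric and string settings - which is a big part of why the names alone are so informative.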
This is only the tip of the iceberg; I encourage you to dig deeper if you're interested. But in summary, yes, I agree with you wholeheartedly, and I've held a stance like this for many years now. Unless more people show support for these causes, though, it's unlikely to change soon.
I asked about it internally - it looks like release note tracking isn't integrated into JIRA workflows for the team that works on script editor, debugger and output window improvements. On the list to fix.
This may be a misconception - actually, most companies that work on live products don't share full release notes on a per-commit basis; e.g. check Safari or Windows.
But we do have a process for this - the release notes you're seeing aren't a single person going around and asking people "what has changed". It's also not driven from source control history - the commit messages there are for engineers who write and read the code, not for outside consumers; they tend to be more detailed and often contain internal information that's not relevant / not meaningful / not safe to share outside of the company.
Here's an example of the commit that maps to the "string.sub" change from today's release:
```
This change is motivated by improving performance of a Lua JSON parser
and improving VM to be friendlier for first-class vectors.

Specifically, this is trying to solve two problems:

- namecall (foo:bar) was only specialized for tables and userdatas
- string.sub didn't have a fastcall variant

JSON parser hits these issues because it uses str:sub(..) to extract
individual characters from a string - which isn't great for performance,
but the fixes are pretty generic so might as well make them.

First, this change restructures the OP_NAMECALL implementation - the
table path is exactly the same as before (but it got deindented so
disable whitespace to confirm), and the path for other types now handles
__index lookup as well as __namecall in a generic fashion.

This helps in a few ways:

- str:foo() now gets resolved inside the VM, which makes json bench ~10%
  faster
- :foo() can now work on other types, notably vectors, and supports both
  __namecall and __index fast paths, which will make vector:Dot() faster
  once vector becomes builtin.

Of course this still requires a full function call; eliminating that via
fastcall provides more performance gains if the code is changed to use
string.sub directly, resulting in further ~20% gains.
```
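To make that last point concrete, here's what the two call forms look like in Luau - the snippet is just an illustration, not code from the change itself:

```lua
local s = "hello"

-- method-style call: goes through OP_NAMECALL, which after this change resolves
-- string methods inside the VM instead of taking the slow generic path
local a = s:sub(2, 3)

-- direct library call: the form that can use the new string.sub fastcall path,
-- which is where the extra ~20% in the JSON parser benchmark came from
local b = string.sub(s, 2, 3)

assert(a == b) -- both are "el"
```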
While I'm sure some of you would appreciate a full commit log like this, it's actually hard to digest and often not safe to share. Because of this we have a separate mechanism to attach notes to changes. This is not done in version control because there are often multiple changes corresponding to a single release, we may want to edit the messages post-factum (which Git, our internal version control system, doesn't allow), and we need to track a "Pending" status, which in our pipeline is done through JIRA.
Thanks for bringing this up, though, because we discovered that we had several JIRA projects where the workflow didn't properly include release notes, as I mentioned above. We're going to fix that so that release notes can come from all teams, but of course it's still up to the engineers' discretion to know when to write a release note, and to write it in a way that explains the change well.
Changed the way Humanoid state replicates from the network owner of the Humanoid. Currently uses physics replication, will use property replication. This may change the timing of state change events for remotely owned Humanoids.
I'm concerned that this will break my current game's behavior. Can we have a rundown of the differences between before and after this change?
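For context, the kind of code I'm worried about looks roughly like this (simplified - the fall check is just an example), where the timing of StateChanged on remotely owned Humanoids is what matters:

```lua
local Players = game:GetService("Players")

local function watchHumanoid(humanoid)
	-- StateChanged fires with (oldState, newState); for Humanoids owned by
	-- another peer, this is the event whose timing the release note says may change
	humanoid.StateChanged:Connect(function(oldState, newState)
		if newState == Enum.HumanoidStateType.Freefall then
			print(humanoid.Parent, "started falling at", os.clock())
		end
	end)
end

Players.PlayerAdded:Connect(function(player)
	player.CharacterAdded:Connect(function(character)
		watchHumanoid(character:WaitForChild("Humanoid"))
	end)
end)
```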
The changes to the LocalizationTable API have been reverted as of an update today (0.445.1.410643), so that's that, I guess.
Anyone at Roblox want to give us a rundown of how the API got removed by accident?
This reminded me… Oftentimes I hear (and often advise) not to use Magnitude in resource-heavy loops, since it uses a square root, which is one of the slowest mathematical operations. Over time this has certainly changed, but I am sort of wondering: would you say this would actually noticeably impact performance when you're doing it a lot?
For example, say you have a game with roughly a few thousand entities, maybe 1-2k, and you are using .Magnitude for each one on Heartbeat. Is it worth implementing the Pythagorean theorem yourself? And is it worth implementing a rootless distance check (e.g. dx^2 + dy^2 + dz^2 <= distSqr)?
I'm curious because I've heard that rootless checks have positively impacted people's code before, but personally I've never bothered to look into the specifics too much, and, given that there are optimizations to Vector objects coming, I'm also curious how the situation might compare before and after.
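For reference, the two variants I'm comparing look like this - a minimal sketch, with the 50-stud radius being a made-up number:

```lua
local RANGE = 50
local RANGE_SQ = RANGE * RANGE

-- variant 1: Magnitude, which involves a square root
local function inRange(a, b)
	return (a - b).Magnitude <= RANGE
end

-- variant 2: rootless check, comparing squared distance against the squared radius
local function inRangeSquared(a, b)
	local d = a - b
	return d.X * d.X + d.Y * d.Y + d.Z * d.Z <= RANGE_SQ
end
```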
This is certainly the case. I've been able to produce early releases of my games several months before a feature is released, and even allow the game to automatically put the feature into use once it's enabled, solely thanks to your tools.
A great example of this would be attributes. Attributes still aren't released yet (I'm not actually sure why at this point; as far as I know they haven't been updated, so I suppose they're probably backlogged as a low-priority feature), yet I have fully functional attribute code already in use in one of my projects. It simply uses a fallback if the feature isn't enabled, and if the functionality somehow changes or the feature is cancelled, I can continue using the fallback until I repair the behaviour.
And with your tool, it was entirely possible for me to reverse engineer the attribute format and write code to process and produce attribute contents. Without that sort of tool in existence, it'd be unlikely for me to produce updates like that, and there is so much useful information in FFlags and new features. Sometimes a couple of older or internally-used FFlags are even useful for me simply for use within Studio. It's quite awesome.
What is your use case for attributes?
I'd just like to take the time to echo what you said. I too have a similar use case for attributes, and I have done similar work since the advent of TweenService.
Although I'd like to note you need to be careful when implementing these systems. Because these features have yet to be released, they can be subject to sudden and drastic API changes which may break your games in the long run. I personally prefer to leave my code ready but enable it manually as these changes come in.
Well, probably what you'd expect, I'd assume. I use them to link metadata to instances; for example, I can store entity data such as health or storage capacity. And since they support tables, I can also store complex data on my entities.
This also allows for "prefab" entities in my game, which isn't easy to do otherwise, i.e. I can add settings to an entity in Studio, and since attributes serialize with the instance, when the instance is loaded into the game it can be treated exactly as I intend. This is a lot less finicky than using Configuration instances with Value objects in them, for example (which I find bulky and annoying), but it also simplifies pretty much everything about the setup; it just makes it less visual.
Another way I might end up using this is to store data on scripts. I might have a single ModuleScript or Script, dispatch a copy of it with custom attributes, and let the script modify and store a "state" on its own instance. This is a pretty performant (and nice) way to do it too, since storing metadata on a script isn't exactly easy otherwise; you end up needing to create lots and lots of instance objects or use holder modules, both of which feel bulky/hacky to me (and aren't nearly as performant - what if I'm accessing that stuff hundreds of times a frame?)
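To make the prefab idea concrete, it's roughly this shape (simplified, with made-up attribute names, and assuming the Instance:GetAttribute/SetAttribute API as it's been previewed):

```lua
local function spawnEntity(prefab, parent)
	local entity = prefab:Clone()

	-- defaults are authored on the prefab in Studio; code only fills in gaps
	if entity:GetAttribute("MaxHealth") == nil then
		entity:SetAttribute("MaxHealth", 100)
	end
	entity:SetAttribute("Health", entity:GetAttribute("MaxHealth"))

	entity.Parent = parent
	return entity
end
```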
That's 100% true. My safety mechanism tends to first use a pcall to catch errors, and if an error occurs in the API call I assume the feature isn't enabled or was changed (which I differentiate using the error message, for debugging purposes, so it can notify me when the failure isn't just the feature being disabled, i.e. it's actually broken). Then I have additional sanity checks after groups of API calls to ensure that they behaved as intended, and if they didn't, I automatically switch to using a fallback globally.
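Stripped down, that mechanism looks something like this - the ValueObject branch is just a stand-in for whatever your actual fallback is:

```lua
-- probe the attribute API once; if it errors, everything routes through the fallback
local attributesSupported = (function()
	local ok, err = pcall(function()
		local probe = Instance.new("Folder")
		probe:SetAttribute("Probe", true)
		assert(probe:GetAttribute("Probe") == true, "sanity check failed")
		probe:Destroy()
	end)
	if not ok then
		warn("Attribute API unavailable or changed, using fallback:", err)
	end
	return ok
end)()

local function setMeta(instance, key, value)
	if attributesSupported then
		instance:SetAttribute(key, value)
	else
		-- stand-in fallback: a child value object holding the data
		local holder = instance:FindFirstChild(key) or Instance.new("StringValue")
		holder.Name = key
		holder.Value = tostring(value)
		holder.Parent = instance
	end
end
```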