Release Notes for 659

I get that the release page ‘pending’ means “code/fix is deployed but feature flag is turned off”. Does the release page Live/Pending box update if a FFlag is switched after the release notes are posted?

2 Likes

Any idea if the terrain editor will be improved? The old one was better for our developers… ever since they were forced to migrate to the new one, it’s been a lot harder to create advanced terrain.

3 Likes

Well, as far as I know, there should be improvements to the terrain editor, but sadly it's on hold.

1 Like

What would be the use case for editable sounds? I can't think of anything you could do with them that you couldn't do with normal sounds.

4 Likes

Ask yourself the same question for a similar feature: what is the use case of editable meshes? A shared use case is procedurally generating meshes, images, and potentially audio. Based on how the others work, it would also allow you to modify existing audio. That last point can be done with audio effect instances to a degree, but not with full flexibility.

3 Likes

I asked myself that question before posting; it didn't really lead me anywhere.

Editable meshes are actually useful because they have abilities that normal meshes don't, such as real-time vertex creation and removal. The same goes for editable images, with their ability to blend different images onto one another, move pixels around, or even create something completely new.

I just can't think of a single ability editable audio would have that isn't already possible with normal sounds. Any time I think I've found something that could be an editable-audio feature, I instantly realize I could just do it with normal sounds.

To me, it would be a waste of time building an update for something you can already do perfectly well, especially when there is better stuff to be working on.

2 Likes

These use cases translate to audio as well.

Sure, you can upload hundreds of SFX or tones and synthesize audio by playing them over one another while hoping it sounds okay. The same inefficient-workflow argument applies to EditableImage and EditableMesh: technically you don't need EditableImage, because you could render a bunch of Frame GuiObjects, and you don't need EditableMesh, because you could use a bunch of triangle parts.

Obviously the points I made above are silly in regards to EditableMesh and EditableImage. These instances let you do something that was already possible, but more efficiently and with a clean API.


An immediate application I can think of for an EditableAudio instance would be any game with a focus on creating music dynamically. It would be either extremely boring to always have the same music possibilities or extremely tedious for the developers to upload hundreds of different tracks.

A simpler example of the above application would be if you wanted to create a piano. This has already been achieved, yes, but it requires statically uploading a sound for each note.

3 Likes

If you know you want to combine sound effects, why would you ever upload them as separate sounds? Why not combine them in an SFX editor first?

Not really; editable images still have other use cases, like reading pixel data from other images (from image ids or from CaptureService), which would not be possible without them.

Editable meshes also have uses that wouldn't be possible without them, such as reading UV data, normal data, and other meshes' data, plus the ability to publish custom meshes.

Or, instead of uploading a new sound for each note, you could upload all the notes as one sound and jump to the note that was played: `sound.TimePosition = time`.
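As a rough sketch of the `TimePosition` trick above (the asset id, the note set, and the one-second spacing are all hypothetical):

```lua
-- One uploaded Sound containing every note back to back,
-- here assumed to be one second per note, in this order.
local noteOffsets = { C4 = 0, D4 = 1, E4 = 2, F4 = 3, G4 = 4 }

local sound = Instance.new("Sound")
sound.SoundId = "rbxassetid://0000000" -- placeholder asset id
sound.Parent = workspace

local function playNote(note)
	sound.TimePosition = noteOffsets[note] -- jump to that note's offset
	sound:Play()
end
```

One caveat: a single Sound instance plays from one position at a time, so overlapping notes (chords) would need clones of the Sound.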

Maybe, but I don't really see this being used anywhere. Not only would procedurally making music that sounds good be really difficult, it also wouldn't beat uploading a few human-made tracks that better fit the game and probably sound better, then transitioning between them for specific situations.

2 Likes

What if you make a game centered around creating your own music? What if the player wants to create their own sounds to use in that music? It would be cool to synthesize audio in game.

What if the way the sounds are mixed is dynamic? Then it wouldn't be possible to make just one sound for it.


I have actually thought of making a game where you can create your own music tracks. I would have to add all the sounds for all the instruments, meaning I could need hundreds of samples uploaded to Roblox. It would be significantly easier to store all of the sound data in a file and load it into editable sounds when the game runs.

Another potential use case would be making emulators of systems that have their own sound chips, although that would require generating the audio in real time.

6 Likes

For more clarification, say you have a dynamic sound system for exploring and fighting, with three exploring tracks and two fighting tracks. You could combine all the exploring tracks into one sound and do the same with the fighting tracks. When the player is not fighting, the exploring music plays; when the player starts attacking, you fade out the exploring music and fade in the fighting music. I know that isn't "one" sound, but that is also doable with a little more work: combine all your music into one sound, and when it's time to change tracks, clone the sound, skip the clone to the new track's position, and fade it in while fading the old one out.
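A minimal sketch of the fade described above, assuming the exploring and fighting music live in two separate Sound instances (the names, fade length, and target volume are illustrative):

```lua
local TweenService = game:GetService("TweenService")

local FADE = TweenInfo.new(2) -- two-second crossfade

local function crossfade(from, to)
	to.Volume = 0
	to:Play()
	TweenService:Create(from, FADE, { Volume = 0 }):Play()
	TweenService:Create(to, FADE, { Volume = 0.5 }):Play()
	task.delay(FADE.Time, function()
		from:Stop() -- stop the old track once it is silent
	end)
end

-- crossfade(exploringMusic, fightingMusic) -- when combat starts
```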

Simple: that's basically the same as the piano example. Upload all your sounds as one and time-skip to the right position with `sound.TimePosition = time`. It would probably be easier to store the positions of the sounds than the actual sounds, too.

Or you could just have it as one sound and use `TimePosition`.

Or you could just download those sound effects instead of trying to recreate them? That's like me trying to create my own water.

That wouldn't work, though. The "sound effects" aren't predetermined; they are generated live.

No, I’m talking about generating new samples in game.

4 Likes

When will editable meshes be fully replicated to the client?

2 Likes

To give you some examples:

  • The MS Surface is a Windows device that may or may not have a keyboard & mouse
  • The ROG Ally is a Windows device with a gamepad as its main input and a touch screen
  • An iPad set up on a desk with a mouse and keyboard attached should not use Touch …

Please don’t try to “guess the platform”.
The goal is to support capabilities for input and output, no matter what device you run on.

7 Likes

I understand and agree that it is important and much better to adjust UI, controls, and whatnot based on input type, and that this will result in a better user experience. However, I don’t see there being any harm in allowing developers to tell what device type a user is on.

It is also easier for developers who don’t have the time or knowledge to set up a system to change everything based on input type, and because Roblox aims to be beginner friendly this would really help everyone.
There can also be other reasons to know what device a user is on beyond just controls, such as graphics and UI size and layout. It would open up great opportunities for developers, and there is really no reason not to allow it.

5 Likes

As an example: When we added Quest VR support, a large number of experiences didn’t work because of platform assumptions. A Quest VR headset is an Android mobile device with a “tracked gamepad” emulating a “mouse pointer”. Any assumption about specific devices goes out of the window and we will continue to add support for new devices in the future.

set up a system to change everything based on input type,

There are far fewer input types than device types, so I would flip this around: no developer has the time to change everything based on all device types.

such as graphics and UI size and layout.

We are thinking this through from multiple angles, e.g. expanding FlexLayout, but also a possible DPI API.

5 Likes

The main issue is the screen size on, say, mobile versus desktop devices. Mobile has a tiny resolution, and my game was designed for desktop, with many UI features that simply wouldn't work on or fit a phone or tablet. My game has a separate mobile version of the UI to make it simpler and more ergonomic for mobile users.

There are many other games that do this too.

It would be incredibly useful, and a very simple solution for us developers, to be able to detect these device types directly. This can even be useful outside of UI work.

6 Likes

The solution here is to not consider platform, only viewport size. You can still have different versions of the UI, chosen based on the viewport size.

Platform should have nothing to do with this. Even on desktop, the resolution is not guaranteed to be large because the window size can be changed.

Side note: you can also still choose UI based on the selected/last input type. It's just not a good idea to assume that something like `TouchEnabled` always means the device is mobile; those are actually unrelated.
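To illustrate, a sketch of choosing a layout from viewport size while tracking input type separately (the 800 px breakpoint and the layout names are arbitrary):

```lua
local UserInputService = game:GetService("UserInputService")
local camera = workspace.CurrentCamera

-- Pick a layout from the viewport, not the platform.
local function currentLayout()
	if camera.ViewportSize.X < 800 then
		return "Compact" -- small window or small screen, not necessarily mobile
	end
	return "Full"
end

camera:GetPropertyChangedSignal("ViewportSize"):Connect(function()
	print("Layout:", currentLayout()) -- re-select UI when the window resizes
end)

-- Track the last input type separately, e.g. for control hints.
UserInputService.LastInputTypeChanged:Connect(function(inputType)
	print("Input:", inputType)
end)
```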

8 Likes

I almost replied earlier asking for precisely this, it’s been requested multiple times. This would be great!

1 Like

The DPI option would be great. Being able to query the real physical size of a GUI object on screen would be the best option for touch input, so we could make buttons with consistent sizes (and size limits) on all touch-enabled platforms.

5 Likes

Oh yes, it's so much better. I love simplicity and efficiency.