Release Notes for 659

This is also why the desktop app from roblox.com doesn’t support touch input at all (and thus requires the user to install the Microsoft Store version for touch support): plenty of UI devs still rely on TouchEnabled to decide which UI their experiences will use.

The method and event mentioned above give more accurate results, since the reported input type updates each time the user changes their input device, covering mouse, keyboard, touch, and even gamepad.

5 Likes

Sorry if this doesn’t help much, but I personally recommend using GetLastInputType and/or LastInputTypeChanged, either as an additional check or as a full replacement. TouchEnabled alone isn’t 100% reliable: some users on touch-enabled Windows devices run the touch-supported Microsoft Store version of Roblox, and tablets are increasingly replacing laptops, since modern tablet SoCs are more power-efficient and compete with average laptops while staying as compact as they usually are.
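To make the suggestion concrete, here is a minimal sketch of switching UI based on the last input type rather than the static TouchEnabled flag. The layout branches are placeholders for your own UI logic:

```lua
local UserInputService = game:GetService("UserInputService")

-- Pick a UI layout from the most recent input type, not device capability flags.
local function applyLayout(inputType: Enum.UserInputType)
	if inputType == Enum.UserInputType.Touch then
		-- switch to the touch-friendly UI
	elseif inputType == Enum.UserInputType.Gamepad1 then
		-- switch to the gamepad UI
	else
		-- default to the mouse/keyboard UI
	end
end

-- Apply once at startup, then react whenever the user switches devices.
applyLayout(UserInputService:GetLastInputType())
UserInputService.LastInputTypeChanged:Connect(applyLayout)
```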

5 Likes

Appreciate your response. We got in touch with some Roblox staff members through messages and they’ve got us situated.

It seems like this issue wasn’t directly caused by UserInputService.TouchEnabled, but by a shadow patch / unmentioned change from this patch, which causes GuiObject.TouchTap to break.

This patch doesn’t mention anything about the TouchTap being changed, but we’ve resolved our UI issue by replacing the event with GuiObject.Activated.
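For anyone hitting the same breakage, here is a hedged sketch of the replacement described above: GuiObject.Activated fires on GuiButton objects for clicks, taps, and gamepad presses alike. The `button` reference is a placeholder for your own GuiButton:

```lua
-- Assumed: this script is parented to a TextButton or ImageButton.
local button = script.Parent

-- Activated covers mouse clicks, touch taps, and gamepad activation,
-- so the old TouchTap handler logic can move here unchanged.
button.Activated:Connect(function(inputObject, clickCount)
	-- same handler logic that previously lived in the TouchTap connection
end)
```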

I’d like Roblox to avoid shadow patching their changes, unless this was a mistake on their part. We’ve spent the past couple of hours trying to fix this issue, just to realize that one of the listeners is no longer working. We’d rather spend that time working on new features and updates that benefit both the platform and our experience.

We’re working on reporting this bug formally, but at this point in time, we are exhausted.

6 Likes

You aren’t supposed to. This is the problem with using things they aren’t meant to be used for.

This can most likely be fixed with a check though.

3 Likes


I had no clue there was Chromebook support at all (first time reading release notes).

2 Likes

I agree. If they do this, we should get access to other methods that reliably report the device type, such as UserInputService:GetDeviceType(). I don’t see why we aren’t allowed to use this.

8 Likes

If it’s not documented, or it’s not functional, then it’s assumed to be released in an upcoming update, or in rare cases may be pulled before release.

However, it doesn’t make for a great user experience at all. I ran into this when trying math.lerp, before it was flagged to me that it’s an upcoming feature that wouldn’t be implemented for at least two weeks, so other folks relying on IntelliSense will run into the same issue I did.

2 Likes

I get that ‘pending’ on the release page means “code/fix is deployed but the feature flag is turned off”. Does the release page’s Live/Pending box update if an FFlag is switched after the release notes are posted?

1 Like

Any idea if the terrain editor will be improved? The old one was better for our developers… ever since they were forced to migrate to the new one, it’s been a lot harder to create advanced terrain.

3 Likes

As far as I know, there should be improvements to the terrain editor, but they’re sadly on hold.

1 Like

What would be the use case of editable sounds? I can’t think of anything for them that you wouldn’t be able to do with normal sounds.

4 Likes

Ask yourself the same question for a similar feature: what is the use case of editable meshes? A shared use case is procedurally generating meshes, images, and potentially audio. Based on how the others work, it would also allow you to modify existing audio. That last point can be done with audio effect instances to a degree, but not with full flexibility.

3 Likes

I had asked myself that question before posting; it didn’t really lead me anywhere.

Editable meshes are actually useful because they have abilities that normal meshes don’t, such as real-time vertex creation and removal. The same goes for editable images, with their ability to blend different images onto one another, move pixels around, or even create something completely new.

I just can’t think of a single ability editable audio would have that isn’t possible with normal audio. Any time I think I have a feature that would need editable audio, I instantly realize I could do it with normal audio.

To me, it would just be a waste of time building an update for something you can already do perfectly well, especially when there is better stuff to be working on.

2 Likes

These use cases translate to audio as well.

Sure, you can upload hundreds of SFX or tones and synthesize audio by playing them over one another while hoping it sounds okay. The same inefficient-workflow argument applies to EditableImage or EditableMesh: technically you don’t need EditableImage, since you could just render a bunch of Frame GuiObjects, and you don’t need EditableMesh, since you could use a bunch of triangle parts.

Obviously the points I made above are silly in regards to EditableMesh and EditableImage. These instances allow you to do something that is already possible, but more efficiently and with a clean API.


An immediate application I can think of for an EditableAudio instance would be any game with a focus on creating music dynamically. It would be either extremely boring to always have the same music possibilities or extremely tedious for the developers to upload hundreds of different tracks.

A simpler example of the above application would be if you wanted to create a piano. This has already been achieved, yes, but it requires statically uploading a sound for each note.

2 Likes

If you know you want to combine sound effects, why would you ever upload them as separate sounds? Why not just combine them in an SFX editor first?

Not really; editable images still have other use cases, like reading pixel data from other images (such as image IDs and captures from CaptureService), which would not be possible without them.

Editable meshes also have uses that wouldn’t be possible without them, such as reading UV data, normal data, and other mesh data, plus the ability to publish custom meshes.

Or, instead of uploading a new sound for each note, you could upload all the notes as one sound and jump to the note that was played: sound.TimePosition = time
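The “one upload, many notes” idea can be sketched roughly like this; the note offsets and the note length are assumptions about how the combined sound was laid out, not real values:

```lua
-- Assumed layout: each note occupies a fixed region of one uploaded Sound.
local noteOffsets = { -- seconds into the combined upload (placeholder values)
	C4 = 0.0,
	D4 = 1.0,
	E4 = 2.0,
}

-- Clone the template so overlapping notes don't cut each other off,
-- seek to the note's region, play, then clean up after the note ends.
local function playNote(template: Sound, noteName: string, noteLength: number)
	local sound = template:Clone()
	sound.Parent = template.Parent
	sound.TimePosition = noteOffsets[noteName]
	sound:Play()
	task.delay(noteLength, function()
		sound:Destroy() -- stop before playback runs into the next note's region
	end)
end
```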

Maybe, but I don’t really see this being used anywhere. Not only would procedurally generating music that sounds good be really difficult, it just wouldn’t beat uploading a few human-made tracks that better fit the game, probably sound better, and can be transitioned between for specific situations.

2 Likes

What if you make a game centered around creating your own music? What if the player wants to create their own sounds to use in the music? It would be cool to synthesize audio in-game.

What if the way the sounds are mixed is dynamic? Then it wouldn’t be possible to pre-bake just one sound for it.


I have actually thought of making a game where you can create your own music tracks. I would have to add all the sounds for all the instruments, meaning I could need hundreds of samples uploaded to Roblox. It would be significantly easier to store all of the sound data in a file and load it into editable sounds when the game runs.

Another potential use case would be making emulators of systems that have their own sound chips, although this would require generating the audio in real time.

4 Likes

For more clarification: say you have a dynamic sound system for exploring and fighting, with three exploring music tracks and two fighting music tracks. You could combine the exploring tracks into one sound, and do the same with the fighting tracks. When the player is not fighting, the exploring music plays; when the player starts attacking, you fade out the exploring music and fade in the fighting music. I know this isn’t “one” sound, but that is also doable with a little more work: put all of your music tracks into one sound, and when it’s time to change the music, clone the sound, skip to the new track’s time position, and fade the new track in while fading out the old one.
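The fade-out/fade-in step described above can be sketched with TweenService; the fade duration and target volume here are assumptions, not values from the post:

```lua
local TweenService = game:GetService("TweenService")

local FADE = TweenInfo.new(2) -- assumed 2-second fade

-- Fade `from` down to silence while fading `to` up, then stop the old track.
local function crossfade(from: Sound, to: Sound)
	to.Volume = 0
	to:Play()
	TweenService:Create(from, FADE, { Volume = 0 }):Play()
	local fadeIn = TweenService:Create(to, FADE, { Volume = 0.5 })
	fadeIn.Completed:Connect(function()
		from:Stop()
	end)
	fadeIn:Play()
end
```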

Simple: that’s basically the same as the piano example. Upload all your sounds as one and skip to the right time (sound.TimePosition = time). It would probably also be easier to store the positions of the sounds than the sounds themselves.

Or you could just have it as one sound and use TimePosition.

Or you could just download those sound effects instead of trying to recreate them? That’s like me trying to create my own water.

That wouldn’t work, though. The “sound effects” aren’t predetermined; they are generated live.

No, I’m talking about generating new samples in game.

2 Likes

When will editable meshes be fully replicated to the client?

2 Likes

To give you some examples:

  • The MS Surface is a Windows device that may or may not have a keyboard and mouse
  • ROG Ally is a Windows device with a (main input) gamepad and touch screen
  • An iPad that is set up on a desktop with a mouse and keyboard attached should not use Touch …

Please don’t try to “guess the platform”.
The goal is to support capabilities for input and output, no matter what device you run on.

2 Likes