Here’s how NextBot NPCs from Valve’s Source Engine work; I frequently see this same structure used in Roblox AI:
(If you’re thinking about Roblox, just read “NPC” wherever it says “NextBot” below.)
Locomotion
This factor handles how a NextBot moves around in its environment. For example, if a NextBot were programmed to flee after being injured, it would rely on this factor to move to a different position on the map.
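To make the idea concrete, here’s a minimal sketch of what a locomotion factor could look like on its own. This is not the actual Source SDK interface or any Roblox API; the class and method names (Locomotion, approach, update) are invented for illustration, and movement is simplified to a flat 2D step toward a goal point:

```python
import math

class Locomotion:
    """Illustrative locomotion factor: moves the bot toward a goal each tick."""
    def __init__(self, speed=5.0):
        self.position = (0.0, 0.0)
        self.goal = None
        self.speed = speed  # units per second (assumed)

    def approach(self, goal):
        """Other parts of the bot call this to request movement; they never move the bot directly."""
        self.goal = goal

    def update(self, dt):
        """Step toward the current goal, if any."""
        if self.goal is None:
            return
        gx, gy = self.goal
        px, py = self.position
        dx, dy = gx - px, gy - py
        dist = math.hypot(dx, dy)
        if dist < 1e-6:
            self.goal = None
            return
        step = min(self.speed * dt, dist)
        self.position = (px + dx / dist * step, py + dy / dist * step)

# A "flee when injured" behavior only has to pick a point and call approach().
loco = Locomotion()
loco.approach((10.0, 0.0))
for _ in range(30):
    loco.update(0.1)
print(loco.position)
```

The point is that higher-level code only ever says “go here” and lets the locomotion factor worry about how.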
Body
This factor handles the animations of a NextBot. Using the same “on injured” example, a NextBot would rely on this factor to play a flinching animation.
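Sketched the same way (again with invented names, not a real engine API), the body factor just maps high-level animation requests onto whatever the model supports, so the rest of the bot never touches animation details:

```python
class Body:
    """Illustrative body factor: owns the animation state of the bot's model."""
    def __init__(self, available_animations):
        self.available = set(available_animations)
        self.current = "idle"

    def play(self, name):
        """Play a named animation if the model has it; fall back to idle otherwise."""
        self.current = name if name in self.available else "idle"

body = Body({"idle", "run", "flinch"})
body.play("flinch")   # e.g. called when the bot is injured
print(body.current)
```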
Vision
This factor handles how a NextBot sees certain entities in its environment. The field-of-view and line-of-sight functions mainly reside in this factor.
Keep in mind that this factor is not required for NextBots to work. A Skeleton in Team Fortress 2, for example, will find and attack enemies regardless of whether or not it sees them.
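Here is a rough sketch of the field-of-view and line-of-sight checks described above. The names and the 2D math are assumptions made for the example; a real engine would test against world geometry instead of the placeholder los_blocked hook:

```python
import math

class Vision:
    """Illustrative vision factor: FOV cone plus a pluggable line-of-sight test."""
    def __init__(self, fov_degrees=90.0, los_blocked=lambda a, b: False):
        self.half_fov = math.radians(fov_degrees) / 2.0
        self.los_blocked = los_blocked  # assumed hook for checking world geometry

    def can_see(self, eye_pos, facing, target_pos):
        """Target is visible if it is inside the FOV cone and LOS is not blocked."""
        dx, dy = target_pos[0] - eye_pos[0], target_pos[1] - eye_pos[1]
        angle_to_target = math.atan2(dy, dx)
        facing_angle = math.atan2(facing[1], facing[0])
        delta = abs((angle_to_target - facing_angle + math.pi) % (2 * math.pi) - math.pi)
        if delta > self.half_fov:
            return False
        return not self.los_blocked(eye_pos, target_pos)

vision = Vision(fov_degrees=90)
print(vision.can_see((0, 0), (1, 0), (5, 1)))   # roughly in front -> True
print(vision.can_see((0, 0), (1, 0), (-5, 0)))  # behind -> False
```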
Intention
This factor is where the actual AI of a NextBot resides. The Intention factor manages the different behaviors a NextBot might have, and it is responsible for switching between those behaviors depending on the event.
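As a toy illustration of that role (behavior objects are replaced by plain strings, and the event names are made up), an intention factor boils down to “hold the current behavior, and swap it when an event says so”:

```python
class Intention:
    """Illustrative intention factor: holds the active behavior and reacts to events."""
    def __init__(self, initial_behavior):
        self.behavior = initial_behavior  # a real implementation would hold Behavior objects

    def on_event(self, event):
        """The decision-making layer: swap behaviors when an event calls for it."""
        if event == "injured":
            self.behavior = "Flee"
        elif event == "enemy_spotted":
            self.behavior = "Attack"

    def update(self):
        print("running behavior:", self.behavior)

brain = Intention("Wander")
brain.update()              # running behavior: Wander
brain.on_event("injured")
brain.update()              # running behavior: Flee
```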
Behavior
A Behavior contains a series of Actions, which it will perform when the Intention factor chooses it.
A Behavior can be thought of as the NextBot equivalent of a schedule, since both are lists of actions the AI needs to perform.
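A minimal sketch of that idea, with the actions reduced to plain callables that report “running” or “done” (nothing here mirrors the real NextBot classes):

```python
class Behavior:
    """Illustrative behavior: an ordered list of actions run one after another."""
    def __init__(self, actions):
        self.actions = list(actions)
        self.index = 0

    def update(self, bot):
        """Run the current action; advance to the next one once it reports "done"."""
        if self.index >= len(self.actions):
            return "done"
        if self.actions[self.index](bot) == "done":
            self.index += 1
        return "running"

# Stand-in actions: plain functions that would normally be Action objects.
def pick_retreat_point(bot):
    print("picking a retreat point")
    return "done"

def move_to_retreat_point(bot):
    print("moving to the retreat point")
    return "done"

flee = Behavior([pick_retreat_point, move_to_retreat_point])
while flee.update(bot=None) == "running":
    pass
```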
Action
An Action contains the actual AI code of a NextBot, which runs when its parent Behavior is run by the Intention factor. An Action can also have a child Action, which runs at the same time as its parent.
An Action can be thought of as the NextBot equivalent of a task, since both contain the core logic that drives the AI itself.
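And a matching sketch of an Action with an optional child that updates in the same tick as its parent; again, the names are invented and this is only meant to illustrate the parent/child relationship described above:

```python
class Action:
    """Illustrative action: a unit of AI code with an optional child run alongside it."""
    def __init__(self, name, child=None):
        self.name = name
        self.child = child

    def update(self, bot):
        print("updating", self.name)  # the action's actual AI code would go here
        if self.child is not None:
            self.child.update(bot)    # the child action runs in the same tick as its parent

# e.g. chase the enemy while a child action keeps aiming at them
chase = Action("ChaseEnemy", child=Action("AimAtEnemy"))
chase.update(bot=None)
```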
Source: NextBot - Valve Developer Community