How to Make Control Rigs Smart (Unreal)

https://www.youtube.com/watch?v=7nbVJzjiPfw


Hey guys, given the competition, I'm really happy with how full this room is, so thank you for coming. That's great. I'm Helge Mathee, Principal Animation Engineer, and I'm the dev owner of Control Rig, Unreal Engine's solution to character rigging. Dev owner in this case means I oversee all the design and architecture decisions for the system and work very closely with the product management team and the user experience team, but I don't manage, so I'm still coding on the tool every day.

We'll talk about smart control rigs today. And to address the elephant in the room, since I've been asked about this multiple times already at the convention: this is not about AI for once. It's really about rigs that come with their own logic, custom logic on top of the posing, and that offer additional smart tooling.

So let's talk agenda. We'll cover why we're even here today, what this is about, and what Control Rig is in a nutshell; I'll try to cover that quickly. In particular, I'm going to spend a bit of time covering how it's different from other systems. We'll take a look at some hopefully smart-looking examples, that's the idea, and then we'll identify benefits. We'll look at challenges and issues, because there are some. And we'll also discuss what a potential future for the system looks like. No promises in this talk, but I want to give you an idea of the agenda and the vision of where we're headed.

All right, let's take a step back for a second and talk about why I'm even doing this talk. The rigging and animation workflow hasn't changed much in commercial offerings over the last 25 years. There are some exceptions to this, obviously, but in general the way you create animation is still very, very similar to what you did 25 years ago. Execution models such as the dependency graphs in something like Maya or Houdini still define what character rigging means for most people today, and even more so, that's how people actually think about rigging in general. Tooling tends to be built on top of Python in DCCs, and because of that it's difficult for companies to switch: they have all this investment of time and money in the Python stack they built. Of course there's progress. Proprietary tools that you may have seen from large visual effects or CG companies have evolved way beyond the commercial DCC offerings. And on top of that, there are smart individuals in the industry; just look at what Raf Anzovin is doing with ephemeral rigging. So there are some verticals that are quite extreme, but in the general, off-the-shelf case, that's not available to you.

The folks at Epic, this group I'm in, have worked on many of the well-known systems in the industry, so we have a bunch of experience there: things like mGear, which we've contributed to, Xeno, Premo, and many others come to mind. When we looked at this and started this endeavor about seven years ago, we realized that the number one limiting factor in these systems is how they run. It seems odd, but the graphs and things aren't the problem; it's the underlying backend, what's actually executing. On top of that, when we started the design, we also knew that we needed to cater to very different groups of people with their respective needs. Games have very different requirements than film: you want to offer uncompromised performance while also supporting really complex characters.
Let's take a moment to discuss what Control Rig is and how it's different. Control Rig is first and foremost Epic's answer to character rigging. It was originally developed because the animation graph at the time, the Animation Blueprint, didn't provide granular enough access to build custom character rigs, and Blueprint, as an alternative, wasn't fast enough as a VM. Control Rig has multiple stacks of technology, from a graph that may be familiar to some of you, all the way to a high-level stack for modular rigging that allows you to build characters out of large pieces, like an arm and a leg.

Before I show you anything, I really want to make sure you hear that there are tons of demos on the floor. At the UE5 booth, you can take a look at all the things I'm going to show. We also have the Dev Hub, where I'm going to be; there's a schedule at the end of the talk. You can come by, try out these demos, ask questions about them, and so on. So please come by and say hi.

Control Rig offers a graph for building out character rigs, but it does not use the graph at runtime. This is something that's really important to understand. It means that you can work in a familiar environment, like the ones you've worked in before, while we take care of making things fast: we compile the graphs down. And by the way, I want to call out a common misconception that you can't build slow rigs with this. You absolutely can build slow characters; the system doesn't protect you from that. But we do a lot of things behind the curtain to make things faster.

In this really simple example, as a very first introduction to Control Rig, we can see a control rig that drives this mech. The mech, by the way, is available on Fab, so you can download it and try it. I'm only showing this to highlight that there's an authored graph that a user has built, which then gets compiled into what we call a register-based bytecode VM. You can see the instructions that are running on the bottom left here, lines listing all the operations that need to run, and there's a correspondence: each one of the nodes will correspond to one or more of the instructions in the bytecode. A minimal sketch of this execution style follows below.

Once you have this, you can do things like performance profiling. So I'm going to go ahead and enable the profiler. There are a couple of different tools here; I'm not going to cover them in too much depth, but there's heat map profiling that helps you understand the cost, and the microseconds are shown for each of the instructions. You can see that this rig runs at a total of roughly one millisecond, a bit above. It's not hugely complex yet, but the point is that reaching something like 120 Hz is not a problem at all for most characters in linear content creation scenarios.

Next, before we get into the hopefully visually impressive demos, I want to highlight one more thing that's really important to understand: in Control Rig, the hierarchy, meaning the bones and the controls that you have, is a separate piece of data from the computation of the rig. They're completely divorced. Think about it like this: the hierarchy flows through the graph and can be changed by the graph. Changes as in changing a transform, but also changes to things like parent relationships, space switching, and so on. If you're used to something like a DCC, you'll see that being quite different.
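To make the compiled-execution idea concrete, here is a toy register-based bytecode interpreter in plain C++. The opcodes, register layout, and program are invented for this sketch; this is not Control Rig's actual VM or instruction set. The point is simply that a flat instruction list replaces graph traversal at runtime, which is also what makes per-instruction profiling like the one shown above straightforward.

```cpp
// Toy register-based bytecode VM, invented for illustration only.
#include <cstdio>
#include <vector>

enum class Op { LoadConst, Add, Mul, Print };

struct Instr {
    Op op;
    int dst;   // destination register
    int a, b;  // source registers (or a constant index for LoadConst)
};

int main() {
    std::vector<float> constants = {2.0f, 3.0f};
    std::vector<float> regs(4, 0.0f);

    // "Compiled" program: r0 = 2, r1 = 3, r2 = r0 + r1, r3 = r2 * r0, print r3.
    // Each graph node would compile down to one or more instructions like these.
    std::vector<Instr> code = {
        {Op::LoadConst, 0, 0, 0},
        {Op::LoadConst, 1, 1, 0},
        {Op::Add,       2, 0, 1},
        {Op::Mul,       3, 2, 0},
        {Op::Print,     3, 0, 0},
    };

    // The interpreter is a flat loop over instructions: no graph traversal,
    // no per-node dispatch overhead, and easy to time instruction by instruction.
    for (const Instr& i : code) {
        switch (i.op) {
            case Op::LoadConst: regs[i.dst] = constants[i.a]; break;
            case Op::Add:       regs[i.dst] = regs[i.a] + regs[i.b]; break;
            case Op::Mul:       regs[i.dst] = regs[i.a] * regs[i.b]; break;
            case Op::Print:     std::printf("r%d = %g\n", i.dst, regs[i.dst]); break;
        }
    }
    return 0;
}
```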
In this example, we're looking at a video of one of the modules that we ship with, a small arm rig. I'm starting out really simple here on purpose. What we are doing is basically running the same rig on different characters, the idea being that as long as you can map the rig onto the character, you can then run the same logic. What I'm trying to highlight is that the hierarchy this character has, the type of bones it has, and the bone naming it has are not part of the rig. We can change the structure it is running on. I'm pointing out how many bones are in the spine and things like that as I switch through a bunch of different examples. The Bungee Man that we ship as part of the examples is quite different again; it's more of a cartoony setup. And then at the end, you'll see this running on a mech, which has a completely different set of bones.

The other thing to note, for the performance-interested folks in the room: if you have four of these characters running in your level sequence or in your game, there's really only ever one rig in memory. The way we designed it, the rig is stateless and all the data gets pushed through, so it's cheap. If you have 50 characters sharing the same rig, it's cheap to run.

In something like Maya or Blender, the dependency graph combines both of those things. You'll have the elements, like bones, as nodes, and you'll also have the operations as nodes, and the wires, I always say, are fixed. It means that you can't easily decide to run IK one frame and then FK the next; you have to build everything twice. I'm sure some of you have done this many, many times, where you build an FK arm and an IK arm and yet another one that blends between the two. That works, of course, but it limits flexibility and performance, because everything has to be there all the time. It also introduces, as I'm sure you know, lots of management, lots of Python scripting, lots of things you have to do to even get started.

Control Rig now offers many, many new ways to think about this, which is great. It's also problematic, because it's so different. You can have parts of the logic run only sometimes, without introducing cost when you don't need it. Rather than building the same logic over and over again, you can put things in a loop or a function and run over multiple things in the rig. The rig can change the hierarchy multiple times through the frame, so you can do things like a basic solve, then a full-body IK, then physics, and vice versa. You can even build rigs that only run at interaction time, when the animator touches a control and does certain things. So it's quite different in that sense and gives you lots of new areas to play in.

In this rig, and I'll cover this only briefly, I'm just trying to highlight a rig that runs IK first, then runs FK, and blends between the two. There's no duplicated hierarchy; it just computes two poses and blends the FK onto the IK afterwards. Clearly there are more elaborate ways of doing this; I tried to find a really simple example showing that, independent of the hierarchy at hand, you can run these and decide what happens. And if you have a branch or a weight on it, you can avoid the cost of either of these paths completely, as in the sketch below.
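To make that shape of logic concrete, here is a minimal stand-in in plain C++. The types and the two "solvers" are placeholders, not Control Rig API; the point is the control flow: two candidate solves over the same bones, a per-bone blend, and early-outs so a fully weighted-out branch never runs.

```cpp
// IK-then-FK blend without a duplicated hierarchy; all types are stand-ins.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Bone { float x, y, z; };
using Pose = std::vector<Bone>;

Pose SolveIK(Pose p) { for (auto& b : p) b.z += 1.0f; return p; } // placeholder solve
Pose SolveFK(Pose p) { for (auto& b : p) b.x += 1.0f; return p; } // placeholder solve

Pose EvaluateArm(const Pose& input, float fkWeight) {
    if (fkWeight <= 0.0f) return SolveIK(input); // FK branch never runs
    if (fkWeight >= 1.0f) return SolveFK(input); // IK branch never runs

    Pose ik = SolveIK(input);
    Pose fk = SolveFK(input);
    for (std::size_t i = 0; i < ik.size(); ++i) { // blend FK onto IK, per bone
        ik[i].x += (fk[i].x - ik[i].x) * fkWeight;
        ik[i].y += (fk[i].y - ik[i].y) * fkWeight;
        ik[i].z += (fk[i].z - ik[i].z) * fkWeight;
    }
    return ik;
}

int main() {
    Pose arm(3, Bone{0, 0, 0});
    Pose out = EvaluateArm(arm, 0.5f);
    std::printf("bone0 = (%g, %g, %g)\n", out[0].x, out[0].y, out[0].z);
    return 0;
}
```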
You even have things like looping, and this is where we get into data-driven rigging, where you don't have to change the rig to solve three or five fingers. It's the same rig; you just give it a slightly different list of things to operate on, and it will solve them. That is sometimes mentally challenging, because it's quite different from environments that don't have looping, and it's one of the challenges in learning the system.

So let's take a look at a couple of examples that add logic. First, let me quickly say what we mean by smarts again. We can build tooling into the rigs themselves, helping you with posing. We can build generation of motion, so automatic motion: locomotion, things like breathing, automatic idle animation, and so on. We can build physics into the rig, both for runtime simulation and for posing; for example, as a posing tool, using physics only to compute the FK pose of fingers grabbing an object, for trajectories, and many more things. And we can reuse the smarts across characters. I can already tell you we're not going to see a live demo. I was planning to do one, but we have the booth, and it's like 30 meters over there, so you can come by and see it there instead.

The first example of smarts I'd like to talk about are proxies. Proxy controls, if you haven't used them at all, are controls that drive other controls, sort of a rig for the rig. What this means is that they're not going to get keyed; they're not part of the animation data set, they're just there to help you pose. And they are free from any relationships in the rig, so they can do things that are really challenging in other systems. They also don't add any cost to the playback speed of your character.

This example is the simple Hello World I was going to start with. We have pretty straightforward finger posing, all FK, all the way up to the metacarpals. What we can do here is enable a proxy feature that gives you a spread on the fingers: as you pose one of the fingers, we automatically distribute the spread across the others. Right now I'm just posing a single metacarpal as is. Now I'm going a bit off screen; unfortunately, you can't see this, I forgot to capture it. I'm enabling the proxy feature, and now it's doing the spread. The important thing to get, though, is that there's no rig running when this gets played back. It's just a helper tool for you while posing, and when you key, you key all the fingers. This may or may not be the right tool for the job, depending on your case. The reason I bring it up is that you can offer more things that you want to pose and interact with than what you need to key, and that's an important thing to understand. A tiny sketch of the spread idea follows below.
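To illustrate the spread, here is a hypothetical sketch in plain C++. The finger list, weights, and single spread value are invented: one proxy value the animator drags fans out across the FK finger controls, and only the resulting per-finger values would ever be keyed; the proxy itself stays out of the animation data set.

```cpp
// One proxy value fanned out across several FK finger controls.
#include <cstdio>

int main() {
    const char* fingers[] = {"index", "middle", "ring", "pinky"};
    // How strongly each finger follows the spread (the middle stays put).
    const float weights[] = {1.0f, 0.0f, -0.7f, -1.4f};

    const float spread = 12.0f; // degrees; the single proxy value being dragged

    // The proxy distributes its value onto the controls you actually key.
    for (int i = 0; i < 4; ++i)
        std::printf("%s metacarpal spread: %+g deg\n", fingers[i], spread * weights[i]);
    return 0;
}
```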
The second example, and I've shown this quite a bit, I keep bringing it up, is arguably the most extreme example we have for proxies: a rolling coin animation. If you've ever done something like this, you know it's relatively hard, and you'll end up with really deep hierarchies. So this is a super high-level proxy, and I'll show you what the rig looks like. Right now there are only two controls: an FK control on the coin itself, and this green proxy that's animated. What the green proxy is doing is just driving the FK. It's also projecting a second proxy; I don't know if you can see it well, but there's a light red circle on the ground where the coin is touching, and you can use that for additional offsets.

We'll go through this again in the rig itself. I'll show the rig briefly; arguably, it's not a tiny graph. If you've built this kind of thing before, though, note that it's doing all the low-level work. It's not using functions, so all the low-level math is in this graph, and given that, it's okay for what it is. You have an FK rig here that I'm posing up; pretty straightforward, no magic, nothing, just FK. Then you have this green circle, which is the proxy. What it's able to do is compute where the coin is standing up if you touch the boundary of the circle, and it also projects that other little circle to the contact point. I can pull this around to do the animation. The way it's built, when the proxy leaves its boundary, it projects the next one, so you can do multiple flips.

So again, it's arguably a really extreme example. What's cool about it is that it's independent of how many flips you're going to do, and if you have a commercial to make, or a game where you know this is going to happen a lot, the investment in a rig like this can pay off really quickly. The key takeaway is that proxies are not hardwired. They can perform things that are really hard to do in other, dependency-graph-based systems. You can overcome limits and rethink rigging approaches with this, and build proxies for posing tasks that would otherwise require really complex and large structures. You have to ask yourself: what do I want as a capability for posing, and what do I want to keyframe? Those two things don't necessarily have to be the same set of things. And of course it's fast, so you always get real-time feedback; what you see is what you get.
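At its core, the coin proxy rests on rolling-without-slipping math: dragging the contact point a distance d rotates a coin of radius r by d / r radians. Here is a minimal sketch of that relationship with invented values; the real rig layers the contact-circle projection and flip handling on top of it.

```cpp
// Rolling without slipping: roll angle = dragged distance / coin radius.
#include <cstdio>

int main() {
    const float radius = 0.5f; // coin radius in meters
    float distance = 0.0f;     // how far the proxy has dragged the contact point
    float rollAngle = 0.0f;    // the FK rotation the proxy writes onto the coin

    for (int frame = 0; frame < 5; ++frame) {
        distance += 0.1f;              // proxy moves 10 cm this frame
        rollAngle = distance / radius; // rolling without slipping
        std::printf("frame %d: dragged %.2f m -> roll %.1f deg\n",
                    frame, distance, rollAngle * 180.0f / 3.14159265f);
    }
    return 0;
}
```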
The second example of rig smarts I want to take a look at today is a node added in 5.6. The Locomotor plugin by Kiaran Ritchie offers a new eponymous node, the Locomotor, and the node can compute footsteps for characters. It has a lot to offer, and I'd recommend you check it out. It's been experimental since 5.6 and is still experimental in 5.7, but we're closing in on beta; we fixed a couple of bugs on it, and a lot of content at this show is using it. In fact, at the keynote, I don't know if you noticed, there was a bipedal robot walking around that was also using it.

This example showcases a character from a demo we did, I think twelve years ago, called Infiltrator. The character is called the Death Blossom, which I really like, and it offers locomotion as part of its smarts, as well as flying. This scene, again, is available at the booth if you want to try it out; feel free to hit me up if you want to know more.

Let's take a look at the rig real quick, and play for a second again. As I'm zooming into the character, I'll try to find the rig so I can show you. One thing you may not be able to see here, but that I can show you at the booth, is how few keyframes this has. The rig is really simple. We have this main control, which is the body, I guess you'd call it, some locators for the weapons, and then two sliders at the top. It also uses some subtle linear integration to give it this extra little jiggle, just a bit, to make it more lively. And then there's a pelvis offset in the center that you can layer on top, as with many rigs.

The sliders at the top are high-level controls. One is the folding of the character, from its capsule shape to its unfolded state; you can also use a horizontal offset to do this folding, so it's really fast to make it go up and unfold. The other slider is the blend between flying and locomotion. So I'll just bring it to the ground. You notice these circles on the ground: they're the footstep prediction, so they show you where the feet are going to be. As I bring it in closer, they run really, really fast down there; I get it closer, and eventually we blend towards locomotion, all the legs go down, and that's it. Once it's on the ground, you can touch the pelvis control and it will walk.

The takeaway, again, is that this can of course be tuned to something you like, giving you more control over what the strides are, what the walking styles are, all these things. The way I look at it sometimes is like motion capture: it's a tool you can use to generate motion from the game side that you may then use for linear content creation or, of course, in games, depending on what you're after. As easy as that, we've now created this animation; you can see the keyframes there, luckily I've been showing them, and it's about 15 keyframes or so to do this animation. The main effort was, of course, in the rig itself.

I want to show a bit of the rig before we move on. Clearly I'm not going to explain the whole structure to you, but there are some nodes I want to focus on, first of all the Locomotor node. This is a node that takes in the body and the information about the feet and the foot sets, and then feeds directly into full-body IK. It just says: given this pelvis location and these feet locations, solve all the IK for me. That's the general idea, and I've been using these two high-level nodes to drive this character. A naive sketch of the footstep-prediction idea follows below.

You may argue that tech like locomotion is very game-centric, and what I'm trying to convey, and I hope this is obvious, is that you can absolutely rely on game technology for linear content creation and cinematics as well. Lots of cinematic work for games makes use of this, and rigs in games can provide procedural animation at runtime as well; sorry, I was making sure you're reading this. The key takeaway is that this can have the same fidelity and functionality everywhere. If you have multiple deliverables on your project, say you need to deliver cinematics and a game, or a short series and a game alongside it, you can have the same kind of auto-motion on these characters with the same fidelity. And of course, you can also convert the animation back, baking it onto hero rigs or baking it to FK; you have all these features available to you in Sequencer.
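As a rough illustration of footstep prediction, here is a deliberately naive sketch in plain C++: project the body ahead by the step duration, add each foot's resting offset, and drop the result onto the ground. This is an invented simplification, not the Locomotor node's actual algorithm; in the rig, landing spots like these would become targets handed to the full-body IK.

```cpp
// Naive footstep prediction from body position, velocity, and step time.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 PredictFootstep(Vec3 bodyPos, Vec3 bodyVel, Vec3 restOffset, float stepTime) {
    return {
        bodyPos.x + bodyVel.x * stepTime + restOffset.x,
        bodyPos.y + bodyVel.y * stepTime + restOffset.y,
        0.0f // project onto the ground plane
    };
}

int main() {
    Vec3 body  = {0, 0, 1.0f};
    Vec3 vel   = {2.0f, 0, 0};              // walking along +X at 2 m/s
    Vec3 left  = {0, 0.3f, 0};              // feet rest offsets from the body
    Vec3 right = {0, -0.3f, 0};

    Vec3 l = PredictFootstep(body, vel, left, 0.4f);
    Vec3 r = PredictFootstep(body, vel, right, 0.4f);
    std::printf("left  lands at (%.2f, %.2f)\n", l.x, l.y);
    std::printf("right lands at (%.2f, %.2f)\n", r.x, r.y);
    return 0;
}
```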
For the next example, I'd like to look at a custom motion generator I built for Unreal Fest, because I love robots; sorry, I'm going to stick to that theme for a while. Saccades, in a not very scientific description, are the micro motions our eyes make when we scan objects, look around a room, or scan a person. If you want them to look realistic, they're really hard to animate. It's really annoying, especially if the director tells you, actually, no, I didn't want it to look here, I wanted it to look there, and all your eye targets are animated in another place. So they're gnarly. And I was asking myself: what are saccades on a robot? It's an interesting topic.

For this robot, I've created a module that has a very, very basic IK, as you can see. First and foremost, it's just a look-at; I'll zoom in a bit so you can see it better. Really straightforward. What I've done then is add a micro simulation that records the input animation of the look-at and creates random offset rotations on the eye, to give it this zoomy, interested look. You can see, as I'm moving it, it records the motion and translates that into secondary motion, but not with springs in this case; it's really just a random offset that gives the impression of it being a camera. On top of that, we can then add what you would call a saccade: offsets around the target area, just to look around. So you can direct it somewhere, it will give the impression of zooming in on something, and it'll add saccades around it and look around. This, of course, can then be extended to metadata-based tagging, looking at specific objects, following things along, and all that. I haven't done that on this character, but for the Witcher demo we did, we had crowds looking at specific things; as she's going through the crowd, there's a lot of this going on in all the eyes. It's hard to see in the demo, but there's a lot of deliberate design on the eye motion of the characters.

I want to focus on these nodes real briefly. The nodes with the reddish, brownish titles are these mini simulation nodes. They're able to measure things between frames, accumulate over time, and so on. A simulation doesn't always have to be full Chaos physics; it can be something really small, like measuring the input animation for this target and then creating motion from it. And because it's a module, you can build it once and put it in multiple places on the character. I've put it on this top eye here, with slightly different behavior on the look-at; in this case, I think I've configured the look-at to drive a different bone. And then there are these two eyes and the chest area that have the same functionality.

To recap: rigs can perform massive simulations, such as Chaos physics, but you can also build really small custom things for auto motion, as in the sketch below. This could drive something like harmonics-based breathing, but also those very art-directed things, as you can see here. You can then use these to drive additional layers, like forward or full-body inverse kinematics, as you've seen. I'd also like to note, and I hope it's obvious by now, that you can add in-viewport drawing to your setups. For the look-at, I was drawing extra lines and things that help you understand what's going on in the rig. You can use that for debugging or as visual cues for the animators as well.
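Here is a toy version of the saccade idea in plain C++, assuming the setup described above: keep aiming at the look-at target, but every so often jump to a small random offset around it. It's a stand-in for the mini simulation node, which additionally records and accumulates the input animation between frames.

```cpp
// Toy saccade generator: random offsets around a look-at target over time.
#include <cstdio>
#include <cstdlib>

struct Vec2 { float x, y; };

struct SaccadeState {
    Vec2 offset{0, 0};  // current random offset around the look-at target
    float timer = 0.0f; // counts down to the next saccade
};

Vec2 Tick(SaccadeState& s, Vec2 target, float dt) {
    s.timer -= dt;
    if (s.timer <= 0.0f) {
        // Pick a new small offset and a new interval (0.15 s to 0.5 s here).
        s.offset.x = ((std::rand() / (float)RAND_MAX) - 0.5f) * 0.1f;
        s.offset.y = ((std::rand() / (float)RAND_MAX) - 0.5f) * 0.1f;
        s.timer = 0.15f + (std::rand() / (float)RAND_MAX) * 0.35f;
    }
    return {target.x + s.offset.x, target.y + s.offset.y};
}

int main() {
    SaccadeState state;
    for (int frame = 0; frame < 10; ++frame) {
        Vec2 aim = Tick(state, {1.0f, 0.5f}, 1.0f / 30.0f); // 30 fps steps
        std::printf("frame %d: aim at (%.3f, %.3f)\n", frame, aim.x, aim.y);
    }
    return 0;
}
```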
The last example I want to focus on for smarts in rigs relies on Chaos physics. For 5.6, we started to offer a plugin called Control Rig Physics, which is a low-level exposure, I'd say, of physics functionality in Control Rig. With that, you have access to physics nodes directly in Control Rig, and you can build physics bodies, constraints, physics controls, and much more. It gives you really low-level access to all this for art-directed animation.

For the final performance of this guy, I've created two more modules. One is called the Physics Eye, this one here. It's basically a single control with a single rigid body associated with it, purposefully simple. But it's not a realistic simulation; it's really art-directed. It takes in this control, and the body at the bottom tries to reach this pose, given the limits and forces it has. As you can see, as I drive the eye around, we rely on physics collision and friction, and it's fairly free in its motion right now. But then I can start to align it more: I can drive up the angular strength, and it will start trying to reach the pose I'm giving it, so I have more control over what it's supposed to look like. The way I see it, I'm increasing the strength of the eye direction versus the realistic simulation. And if you increase the linear strength even further, eventually it's going to fly towards the target, which is now completely unrealistic, against gravity. But the point I'm making is that you can control how much art direction you get; the sketch below shows the basic mechanism.

Danny was calling out that I always show all these nodes expanded, which makes them look complex, but I keep forgetting to collapse them in the graph. Clearly, they have a lot of settings, and they give you all the low-level access, but when you spawn them, they're functional as they are, something you can build on top of. Also important to note: those are the exact same primitives and settings you'll find in the Physics Control component and in the Rigid Body With Control node in the Animation Blueprint. So, at least the way I look at it, once you learn the physics terminology and functionality, it's the same everywhere, and you don't have differences there.

What I'm doing with the body is similar to what I'm doing with the eyes, just in this case the collision shape is a composition of lots of different spheres, as you can see here on this guy. And I've added some interactive functionality to fold in the legs while I'm in physics mode, so I can move this guy around the way I want and then put this module on the character later on. I'll hone in and focus on the shapes themselves: I'm just showing here that there's a bunch of shapes and spheres I've created to describe this body. One thing to note as well is that you can absolutely import physics assets into this and then change them procedurally. You could say: I'm going to start off with the physics asset I already have, and then make some changes to it programmatically with nodes, based on the case.
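The art-direction dial described above can be sketched as a damped drive toward a target, here as a plain point mass in C++ rather than Chaos or the Control Rig Physics nodes. With a low linear strength, gravity wins; raise it and the body presses toward the target pose, eventually overpowering gravity entirely.

```cpp
// PD-style drive toward a target height, fighting gravity; values invented.
#include <cstdio>

int main() {
    float pos = 0.0f, vel = 0.0f;       // body height and vertical velocity
    const float target = 2.0f;          // where the control wants the body
    const float gravity = -9.8f;
    const float linearStrength = 50.0f; // raise this to follow the control more
    const float damping = 10.0f;        // keeps the drive from ringing
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 120; ++frame) {
        float accel = gravity + linearStrength * (target - pos) - damping * vel;
        vel += accel * dt;              // semi-implicit Euler integration
        pos += vel * dt;
    }
    std::printf("settled near %.2f (target %.2f)\n", pos, target);
    return 0;
}
```

With linearStrength at 50, the body settles just below the target because gravity still pulls on it; push the strength much higher and it pins to the target, which is the "completely unrealistic against gravity" end of the dial.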
Going back to Sequencer now, I'll show you all of this in action. What I've done for this: we start in physics mode, everything's dismembered, if you will, things are lying around. Then I animate towards the assembled pose and have him walk off the stage. Let me just go ahead and play this using the cinematic camera. You can see there's art direction towards the pose in the center; the eyes fly up using the things I've been showing you, increasing the linear strengths. I animate the arms using space switching onto the body, and then use the Locomotor to have him leave the screen. It's a somewhat artificial scenario, I guess, where I'm trying to push in all the different things I've shown through this talk.

What's cool here is that you could do takes with this. You could have basic mocap connected to this, or even mouse motion capture, go through takes, bake it back down, and do hero animation on top. And yes, this requires a couple more keyframes than the example before, but it's still very little keyframing. Like I said before, this is extremely fun; it's really fun to play with the physics. You can cut down the posing time dramatically by relying on physics and get really interesting results. With the new Control Rig Physics plugin, you can even build proxy controls that use physics only for posing, such as finger collisions, as I mentioned before. You can do ragdolls and all sorts of things; it's your playground. Danny Chapman is also here, so if you want to talk to him, he'll be around at the end of this talk, and he has his own talk tomorrow where he's going to dive deep into how Control Rig Physics works and how to set it up. One thing that's cool, I guess, for the film folks in the room: this is Chaos physics running really fast, so there is no caching and waiting. As you've seen, you play in Sequencer and you see the results in real time.

Here's another example; you may have seen him in the keynote. I built this for the 5.7 feature material, using very similar techniques. There's a locomotion setup, procedural idle animation on the pelvis, and even idle animation on the Locomotor itself; I'll show that in a second. As we enable the animation controls in Sequencer, you can see that there aren't a whole lot of them. In fact, the rig is fairly straightforward: there's a Locomotor control at the bottom, a pelvis control, and then offset controls for the feet, so you can keyframe those as well if you need to. He does all sorts of idle stuff; I'll bring him into this idle pose in a second. Yep, went too far, go back. Now he's in the idle state, and I've dialed up the whole idle animation. He's doing sidestepping by himself, and you can see the antenna at the top reacting to being in alert or relaxed mode; when he's relaxed, the antenna goes up. We're getting really close to game-centric animation here. What's nice is that you can tune how much of this you want: you can use it as a tool to generate motion, and then you can take the exact same character and put him in a game. At the booth, we have a demo where we're using him as a companion; he just follows you through the level, using the exact same setup and with the same fidelity.

And of course, you can bake it. As I said before, I just want to make sure I show this at least once: this is a baked version of the same animation, which you can then iterate on within Sequencer, or in fact bring to other environments as well if you want to. The sketch below shows what baking means in principle.
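For completeness, here is a hypothetical sketch of what baking means: sample the rig's output once per frame and store plain keyframes, after which playback no longer needs the rig or its physics and locomotion logic. The evaluate function is a stand-in for running the full rig.

```cpp
// Bake procedural motion into per-frame keyframes.
#include <cmath>
#include <cstdio>
#include <vector>

struct Key { float time; float value; };

float EvaluateRig(float time) {           // stand-in for the full rig evaluation
    return std::sin(time * 2.0f) * 0.25f; // e.g. a procedural idle sway
}

int main() {
    const float fps = 30.0f;
    std::vector<Key> baked;
    for (int frame = 0; frame <= 60; ++frame) { // bake two seconds of motion
        float t = frame / fps;
        baked.push_back({t, EvaluateRig(t)});   // one key per frame
    }
    std::printf("baked %zu keys; key 30 = %.3f at t = %.2f s\n",
                baked.size(), baked[30].value, baked[30].time);
    return 0;
}
```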
Another really cool example, which I slid into the deck last minute, is from Danny Chapman's Control Rig Physics talk. In this case, physics is driving the character completely, as you can see, but the Locomotor is providing art direction, if you will, to the physics. It's saying: I want these feet to go here, and physics solves it in the end to make it coherent; I don't know if I'm butchering this, Danny, I'm sorry. That approach makes it somewhat easier to create realistic-looking motion, in this case for a sort of rhino-esque tank, I guess. You may notice there's inter-character collision as well, the turret with the legs and all these things. So again, go see Danny's talk tomorrow at 4 p.m. in room 5; he's going to cover this in depth.

All right, 30 minutes; phew, this was a lot. Let's take a step back. Clearly, and I hope it's clear by now anyway, Control Rig is quite different from other rigging systems you may have encountered in your career. The execution model is very flexible and fast, and by divorcing the hierarchy from the execution, we get a lot of benefits. You can build rigs once and deploy them everywhere, and you can offer animator-centric workflows that don't slow down your characters. It's all great. However, because Control Rig is so different, it's not that easy to pick up. The learning curve is still quite steep, and we all have to unlearn our rigging approaches and relearn them on the other side, unfortunately. We're working on better processes and on training material there. And while Control Rig is a solid framework now, there's still a lot for us to cover, particularly in the areas of debuggability, discoverability (what tools exist, what nodes are there, and so on), documentation, I'll say carefully, and training. There's just a lot there. It's a good problem to have, though: the system is being picked up, people are really interested, and now we need to make sure you can get on top of it.

So what's next? Well, there's a bunch of new things in 5.7. I'm not going to do a 5.7-new-features part of the deck; this is a light release on the rigging side, and there's a slide a bit later with a QR code to the roadmap, so you can go see what's planned. But I still wanted to let you know about a couple of things coming in future releases.

One of the things that's highly anticipated is surface interactions. We want to add an alternative to 3D-shape-based controls, to reduce visual clutter and make rigs cleaner. You may have seen this approach in proprietary systems; when you set up something like this, it makes the rig look very professional out of the box, and it improves accessibility. The point-and-click workflow is much faster and much more intuitive than having to select a 3D control first, then pick the translate mode, and then pose up the character; that takes longer. We're not trying to remove 3D controls completely, by the way. We just think this is a nice addition to the tool belt, and you can decide what you need for which parts of the character. Especially for facial setups, it's really, really nice to just click on the mesh and pull on it directly.

As I mentioned in this talk, Control Rig Physics is already available now, but we want to push forward in this space, move it to beta, and build more high-level workflows on top, which will make it even more accessible and easier to use. We're really interested in your feedback here, so if you're interested in merging your physics setup with rigging and animation, come see us. Really happy to hear about it.

Another big bread-and-butter feature, I guess, not that sexy but super important, is integration with the Rewind Debugger. What we're planning to do, and this work is already ongoing, is really deep per-frame collection of all the things going on in the character. As a high-level simplification: all the values going through all the pins of the rig, all relationship changes, anything that's going on gets recorded, and then you can go back, play it back, and debug. So for questions like, why is my upper arm flipping like that after seven seconds, you can go back and look at exactly what led to that decision in the rig, and debug on top of that; the sketch below shows the recording idea in miniature. We'll provide sophisticated tools on top, and a main goal is that the Rewind Debugger will be integrated with the asset editors, so you can scrub time directly when looking at a rig or a rig module.
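The recording idea behind a rewind-style debugger can be sketched as a per-frame snapshot into a ring buffer; scrubbing back is then just looking up a past frame. This structure is invented for illustration and is not the actual Rewind Debugger integration.

```cpp
// Per-frame snapshot recording into a fixed-size ring buffer.
#include <array>
#include <cstddef>
#include <cstdio>
#include <vector>

struct FrameSnapshot {
    int frame = -1;
    std::vector<float> pinValues; // e.g. every pin value in evaluation order
};

template <std::size_t N>
struct RingRecorder {
    std::array<FrameSnapshot, N> frames;
    std::size_t head = 0;

    void Record(int frame, std::vector<float> values) {
        frames[head] = {frame, std::move(values)};
        head = (head + 1) % N; // oldest snapshot gets overwritten
    }
    const FrameSnapshot* Find(int frame) const {
        for (const auto& f : frames)
            if (f.frame == frame) return &f;
        return nullptr; // fell out of the buffer
    }
};

int main() {
    RingRecorder<64> recorder;
    for (int f = 0; f < 100; ++f)
        recorder.Record(f, {f * 0.1f, f * 0.2f}); // pretend pin values

    if (const FrameSnapshot* s = recorder.Find(97)) // "scrub" back to frame 97
        std::printf("frame 97, pin0 = %g\n", s->pinValues[0]);
    return 0;
}
```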
Another tool that we recently added in 5.7, and that we can demo at the booth as well, is a dependency viewer. Even though Control Rig does not use a dependency graph for execution, there are still, of course, dependencies between things: what is driving the neck bone, say, or a more complex case like a full-body IK solver that may affect lots of things. To understand what is affecting something, we added a new piece of UI called the dependency viewer that allows you to follow the breadcrumbs on a running rig all the way to the node that caused a change. I'm happy to demo this; here's a quick video showing it.

This is one of the default rigs I use for debugging. It's a bit older, but it has a lot of nodes in one graph, and we use it a lot for validation. In this case, I want to find out what's going on in the neck. So I'll click on the neck bone and bring up the dependency viewer, which opens the UI with a pre-populated, auto-laid-out graph showing me there's a Set Transform node here that drives all these bones. I can click on it and get to the graph where that's happening. Then I can grow the dependency view to show me more things, up to the whole character. One of the things that's cool: if you have a rig that changes behavior over frames, those wires will change. That's a really nice illustration of how this is different from normal dependency graphs: our dependency viewer is a debugger, not what you use for building the rigs. I'm just going to go through; for some reason I scrolled a really long time here to show the whole left limb. And there are features here I can demo around filtering and finding the right thing, and so on; you can imagine these graphs can get quite big. The final UI is a bit different from what you see here, and it turned out to be a really nice, slick, sophisticated tool.

If you want to know more about what's coming, here's the QR code for you; it will lead you to the roadmap. Or, like I said, please come by; this is what Unreal Fest is for. Here are the times when I'll be available at the booth or the Dev Hub; from experience, I'll probably be there much more than that, but we'll see.

And of course, there's more. Stefan Biava has an awesome talk scheduled about the entire animation workflow in Unreal Engine; if you want to learn about animation setup from start to finish, make sure to go see that. And as I mentioned earlier, Danny Chapman's talk on Control Rig Physics is tomorrow at 4 PM, so make sure to check that out if you're into physics. That's it. Thank you.