https://m.youtube.com/watch?v=IamXWkYibZk

Howdy, folks. Before I get started, please remember to silence your cell phones. Don't worry about taking pictures of the slides; I will have a QR code to my Linktree with a link to an EDC article that explains all of this, so you can do your homework when you get home. And finally, please remember to fill out your surveys in the Cvent app. We use those to make the show better and better for you every year.

With that, welcome to A Tech Artist's Guide to Automated Performance Testing in the Unreal Engine. My name is Matt Oztalay. I am your tech artist and guide, and along with the help of a lot of amazing people at Epic Games, I have been working to lower the barrier of entry to automated performance testing in the Unreal Engine. That's what we're going to talk about today. But before I get started, I've prepared a short introduction for those of you I haven't met yet, and a quick refresher for those of you I have. So roll the clip, Vinny.

[Clip montage: "...a lot about performance." "If you were checking your performance early and often..." "...testing your performance in consistent ways..." "Performance is everyone's responsibility." "Testing early, often, and on device." "But I love talking about performance." "...how you should be testing early and often." "To the surprise of nobody, I have more to say about performance. But that's a whole separate talk."]

Welcome to the whole separate talk. So today, first, I'm going to get up on my soapbox and talk about performance. Then I want to talk about the problem space of automating our performance testing: why it's important to automate, and why we needed to lower the barrier of entry. We're going to go on a journey, the journey that I went on, and maybe you'll find some useful tidbits along the way. And then finally, I will talk about this new framework that we've been developing.

So I don't have the prop, but let's talk about performance. Performance is a feature. You can't leave it to the end of production like UI and VFX and audio. (You also shouldn't leave UI, VFX, and audio to the end of production.) We have to think about performance as a feature. If you don't check it, you don't know how bad or good it might be, and if you don't know how bad or good it is, you can't fix it. You might say, "Oh, I went in and optimized the game," but did you know what you needed to optimize? Ari is always talking about this: you have to profile first, then target your optimizations.

And I need you to be testing early, because as we go through the game, things are going to change. Somebody's going to check something in, and you want to catch that stuff on the early side. And you want to be testing often. It's not enough to do this every month, every milestone, every quarter; we have to be checking this kind of stuff every day. So, two charts. In the first, we're checking often, we caught a regression really quickly, and we can bring it back down: it's a quarter of a millisecond, a millisecond, and we get back into range. But if you wait until the end of production, there's a lot more area that you have to cover, and a lot of things that you have to cut or get rid of or work really hard to change. That is time and money that has been wasted, when we could have been checking and fixing things along the way. And we need to be testing consistently, because if I make a change and I say, "Look, this saved five milliseconds," that's great, but if nobody can reproduce that result, does it actually count? Probably not.
We have to test in ways that are comparable to each other. One of the ways I think we can do that is by testing on device. If we are developing for a specific platform, specific hardware, then if I'm always testing on that machine, I know my results are going to be comparable to the last time I tested. For example, if I'm developing a game for PC, I should probably have a min-spec and a recommended-spec PC that I'm running my tests on, because I know that my development machine is not representative of commercially available hardware. And that is a lot, right? Testing early, testing often, testing consistently, testing on device.

We also need to be setting budgets, and this is the thing I think is really important when we're talking about our performance, because it's not just "hey, I want to hit 16 milliseconds." I want to hit 16 milliseconds with motion blur and TSR and Nanite, Lumen, MegaLights, what have you. That has to be part of our production process. I like to turn on all the features that I really want at the start of production, do a beautiful corner, see how it looks, see where the numbers are, and then I can figure out what's most important to me and to the game that I'm working on. Do I need a lot of translucency? Maybe I can say my translucency budget is one millisecond. Now, when translucency goes over a millisecond, you have to optimize your translucency. It's not just "oh, we're at 17 milliseconds, I have to optimize."

A big thing with performance, and this is sort of a soapbox on a soapbox on a soapbox: the way I look at performance is not necessarily "I can have 10 skeletal meshes with a hundred joints apiece, and that is my budget." It's more like: if you want 11 skeletal meshes with 150 bones, we can budget for that. That's a negotiation, but we're going to have to give something else up along the way. We have 16 milliseconds to do this whole thing; you get to decide how to spend it.

So going back to performance as a feature: I know that it is hard in production to get the time to focus on these things, to do the work of automation and running these tests. So I have a little trick for you. We can use the language of the producers against them. Watch this. We know user stories, right? "As a player, I want to have fun using this feature at 60 frames per second." Ah, now it's part of our acceptance criteria: this feature should be fun, and it cannot consume more than one millisecond of GPU time. Ooh, now we don't get to close this Jira ticket until we've met all of our acceptance criteria. Congratulations, now it's part of your production process. So, one neat trick.

So let's talk about the problem space that we're dealing with. Ari and I were talking about profiling earlier, and that's good for identifying the specific things you may need to optimize, but the automation of it is a separate topic altogether, which is why I'm up here. So, where are my tech artists? Please raise your hand. Who here considers themselves a tech artist, or is sort of adjacent to tech art? That's, I'd say, a good portion of the room.
So who here is responsible for the performance of their project? If something goes wrong, if it's not running well, who do they go to? So that was all of the tech artists and a few more hands. Okay, now here's the fun one: who automates their performance already? One, two, three, four, five, six... okay, that was a good chunk; I can count them on all my digits. And who's doing it manually? You have a process, but you have to do it by hand; it's not automated, but you know what you can do to be consistent. Okay, so another decent chunk of the room. And I'm guessing the rest of you have nothing, right? We're just vibe-perfing. Okay, I'm not judging you. I get it.

But of course, we all like automation, right? Clap for automation. Because we don't want to do things by hand. I'm a tech artist; I am an inherently lazy person; I make Python do all the work for me. But here's the tricky thing. If we want to look at the broad strokes of one-button-press, fully automated performance testing, we need to do things like: make the build, put the build on the device, run tests on that device, get the results of the test from that device, process those results, share those results with our colleagues, and do it all on the build machine.

And tech artists, like I said, I use Python. We love using Python to automate things. It's easy, it's dynamically typed, right? But the thing about Python is, I would not necessarily call it the Epic way of automating things. We actually do have a bunch of systems to do all of these things. If I want to make the build, we have something called BuildGraph. It's based in XML; it's a script-based build automation system with graphs of building blocks common to UE. It doesn't actually have a graph interface; it's all written in XML. Then, for putting the build on the device, we have Unreal Automation Tool and Gauntlet, which is a framework to run sessions of UE projects, perform tests, validate results, and all that fun stuff. The test on the device is going to be something called a Gauntlet test controller, which is C++ driving automated tasks outside of the automation test framework, for runtime functional tests, especially with networking. Then, to get the results back off the device, we're back at Gauntlet: we have some command-line tools for processing results from both the CSV profiler and Unreal Insights. Sharing the results is kind of up to you; we'll talk about that later. And then we want to do this all on the build machine, so we're going to use Horde, and our Horde job templates are written in JSON.

Tech artists, raise your hand if C++ or C# was part of your job description, if you had to know C++ or C# to get a job as a tech artist. One, two... interesting. Two hands. Okay. So tech artists like to automate things, and we're largely responsible for performance, but the skills required to do the automation the Epic way are not part of our normal skill set. And then you're like, okay, but I don't want to do this all by hand. I want to do more of this automation, I want to get more of these tests done, but I can't; I don't know how to do it. Or I'm at a studio that doesn't have the resources; we don't have automation engineers or anything like that. And that's tough. I get it. I have been where you are. So I want to take you on that journey. Flashback to a couple of years ago: I told Jack Condon I wanted to do this talk.
I was feeling pretty good about where I had gotten on my journey of automating performance the Epic way, and I was starting to think about how I was going to present this to people to get them up to speed. And the talk started turning into: okay, so there's going to be this BuildGraph file, and you're going to find-and-replace your project name here, because in order to get to the next step you need this; and then we get to the next step and, oh yeah, you also have to make a C# automation project, and you've got to include a couple of files in there, set that up, and change your project name there; and then in those automation files, okay, find-and-replace on this just to give you a starting point. And that felt bad. I didn't want a talk that was just "go to my EDC article, copy down the template files, and find-and-replace." That felt wrong.

So let's go on the journey of what it looks like to start automating testing. I want you to know where I started, so that it will hopefully contextualize where we got to. This is what I call the full manual process. Stop me if this sounds familiar; this is actually a process I used in production on multiple games. Eight o'clock in the morning, QA shows up. They download the last known good build, they deploy it to their local dev kit, and then for each map, for each camera in each map, they type in a console variable to jump to a position in the world, record the average GPU timings at that point in the world, and write that down in a spreadsheet. Then, once they've finished it all, they email that spreadsheet to the stakeholders. Ten o'clock in the morning, tech art arrives, because he had kind of a rough night. He goes and gets his coffee, reads his email, figures out where the regressions are, downloads the last known good build, deploys that to the device, investigates the regressions, goes and gets lunch, and then figures out how to resolve those regressions. Does that sound familiar to anybody? Is anybody doing that? I did. It works, but that's at least a staff-day's worth of work right there.

So let's look at the scorecard and see how it did. I might give it partial credit for making the build, because it assumes there's a process that produces a last known good build. But you had to put the build on the device manually, run the test manually, get the results from the device manually, and process those results manually; I'm not going to count conditional formatting in Google Sheets as processing. Sharing the results: you had to manually send that email. And nothing was happening on a build machine. So that's not great.

Fast forward to 2020. I'm helping out the Quixel special projects team with the Medieval Game Environment, and they're like, hey, can you help us with performance? I'm like, yeah, that sounds great. I really didn't want to have to do the full manual process, so I started looking into ways I could do this with Blueprint. What I ended up doing was putting camera actors in the world, and I tagged them with "PerfCamera" because I thought Get All Actors Of Class was super expensive, so I didn't want to do that. And then, for each camera, Set View Target with Blend: go to that camera's point of view.
Then I would use the Blueprint nodes for Unreal Insights to resume tracing and then pause tracing, which in the Insights file would give me these kinds of chunks. I knew that the first chunk was the first camera, the second chunk was the second camera, and so on and so forth. Then I would go into the Unreal Insights interface, look at the average inclusive millisecond time for the Unaccounted call, which is what you could use to figure out how long the GPU was working, and I'd write that down in a spreadsheet. Great.

So let's take a look at that scorecard. How'd we do? Well, I still had to make the build. I still had to put the build on the device. I'll give myself partial credit for running the test on the device, but you get it: still a lot of manual work. But hey, I could run the test by typing in something like "ke * RunPerf". Ha ha, easy.

But I wanted to get a little more advanced, because as we were upgrading the Medieval Game Environment project to UE5, we wanted to test a bunch of different configurations of console variables and see how they all stacked up for performance. Which meant that instead of running one test a day, I'd run 10 or 20, and that doesn't scale. We've got to start automating that kind of stuff. So what can we do to make that a little easier? Well, I learned that you can open a map with a specific game mode: if you launch your project with the map path, you can alias the game mode and say, hey, run my performance-analysis game mode. So now I didn't have to manually boot the build; I could get the performance test running from the command line (there's a quick sketch of that below). That saves me a couple of steps.

And then the Unreal Insights team did me a really, really big favor: they added regions, which are arbitrary events that can span multiple frames. There are console commands, C++ macros, and Blueprint nodes for starting and ending regions. Regions are great, because I've got this whole chunk of time and I no longer have to pause and resume tracing. I can also stack a bunch of regions on top of each other. These things are incredible, especially when we're looking at the rainbow barf of Unreal Insights; they give us insight into what's happening as we're looking at our trace. So that's really cool. And then the Insights team did me another solid: they added a command-line interface for Unreal Insights so that I could export timer statistics from a specific region or a group of regions. So now I can do a command-line call to Unreal Insights to get a CSV of the results out, and that saves me a bunch of time.

Here's what that looks like. There's a little bit of boilerplate, then we open the trace file, and then there's this exec-on-analysis-complete command: TimingInsights.ExportTimerStatistics, the path to the CSV with a wildcard, and then you can specify which regions you want, and it'll do a wildcard search. So you could say, maybe you want the timer statistics of all of your boss fights, and you've got your boss fights tagged as "BossFight_" something; now it'll go through and get the timer statistics of all of the boss fight regions. Really handy. And then we can scope that to the GPU thread, because I'm a tech artist, I only care about the GPU; I'd never do anything wrong on the CPU, right? And that would be really cool, because that would give me the CSV, and I could go back to Python and automate the processing of that.
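To make that concrete, here is a minimal sketch of that kind of export call, with made-up trace and output paths. The overall shape follows what's described above, but treat the exact option spellings, and the region console command names mentioned in the comments, as approximations; the EDC article has the real syntax.

```bat
rem Sketch of the Unreal Insights command-line export described above. Paths are made
rem up, and option spellings are approximations of what is shown on the slide.
rem Regions themselves can be opened and closed at runtime via console commands
rem (e.g. Trace.RegionBegin / Trace.RegionEnd), C++ macros, or Blueprint nodes.
UnrealInsights.exe -OpenTraceFile="D:\Traces\PerfRun.utrace" ^
    -ExecOnAnalysisCompleteCmd="TimingInsights.ExportTimerStatistics D:\Reports\BossFights_*.csv -region=BossFight_* -threads=GPU" ^
    -AutoQuit
```

The wildcard in the CSV path and the wildcard in the region name are what let one call cover a whole family of regions, as described above.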
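And stepping back to the earlier trick of launching straight into a map with a game-mode alias: that is just a URL option on the command line. A minimal sketch, with a made-up executable name, map path, and alias:

```bat
rem Launch straight into a map with a game-mode override via the ?game= URL option.
rem Executable, map path, and alias name are placeholders; the alias is something you
rem define yourself in your project settings under Maps & Modes.
MyProjectGame.exe /Game/Maps/PerfTestMap?game=PerfAnalysisGameMode -log
```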
So let's take a look at that scorecard. I still kind of had to make the build and put the build on the device, but I could really quickly run the test on the device. I'd have to pull the results myself, go onto the dev kit and pull the Insights trace out of the Saved folder, but I could really quickly process the results. I was sharing the results on Google Sheets, and none of this was running on the build machine.

At this point, you are thinking the thought that defines tech art, which is: there has got to be a better way, right? There's still a lot of manual stuff. We can do better; I know we can do better. So you start digging and pulling on the threads. All right, how do I make the build? (Also, don't worry about reading the code on these slides. If I wanted you to read the code, I would have made it legible. That's what the EDC article is for.) You find out about this thing called BuildGraph, and that's the thing you can use to automate setting up the build. Oh, I need to make the build configuration file, I need to compile a bunch of tools before I do that, okay, great, and then I want to run tests; it's going to set all of that up for you. That's really cool. So you find out about BuildGraph, you start digging around, it wraps together a whole bunch of really powerful stuff, and you're like, oh, that's really cool.

And then, as you're digging around the BuildGraph for maybe a sample project, you find this target test list and you're like, hmm, I wonder what that is. So you AstroGrep for the target test list and you find these things called Gauntlet test nodes, sorry, Unreal test nodes, and these are going to be part of a C# automation project. And you're like, huh, what's going on over here? Then you figure out: oh, this is the thing that launches the game and sets all of that up, and now I can just pass in my map override here and say, open this map with my performance-analysis game mode. That sounds really cool. But you know that the game mode still isn't quite the right thing to do. It feels hacky, to put it mildly, because what if the thing you want to test the performance of is your game mode itself? You can't rely on using a game mode as your performance test.

And then you find the really cool stuff, which is Gauntlet test controllers. These are really powerful. They persist between level loads. They're not the game mode, but they can interact with the game mode, and all you have to do to set one of these up is pass in -gauntlet= followed by the name of the Gauntlet test controller class (there's a quick sketch of that below). Ooh, now we're getting somewhere, because instead of doing the map override, we can add controllers to the client that Gauntlet is going to launch for us, and now it's starting to come together. So now, with BuildGraph and Gauntlet and Gauntlet test controllers, I can set up a batch file to run Unreal Automation Tool and run the BuildGraph, which is going to run all of the tests for me. And in that Gauntlet test node, I'm also going to process the results: I can call the Unreal Insights command-line interface from there, and as part of the C# script it's automatically going to process those results for me. So that's really cool. And then, of course, because it's BuildGraph, we can run it on Horde with a job template that looks basically the same as that batch file I was showing earlier. So we're going to call this the hard Epic-way automation.
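To make that -gauntlet= piece concrete, here is a minimal sketch of the kind of client command line that ends up being used. The executable, map path, and controller class name are hypothetical, and in practice Gauntlet assembles this command line for you:

```bat
rem The client argument that tells Gauntlet's runtime which test controller class to
rem instantiate. Executable, map, and controller class names here are hypothetical;
rem normally Gauntlet builds this command line for you when it launches the client.
MyProjectGame.exe /Game/Maps/PerfTestMap -gauntlet=MyProjectPerfTestController -log
```

The controller then drives the test from inside the client, independent of whatever game mode the map loads, which is exactly why it avoids the "test the game mode with a game mode" problem.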
We kind of cracked that code. Now we can make the build, put the build on the device, run the test on the device, get the results from the device, and process those results. I was sharing those results up to a little MongoDB Atlas instance, because I had the CSV and could just send it up as an object to the database, and then make graphs and charts from it later. And now it's running every night. Woo-hoo!

At this point in the presentation, you're probably wondering: why is he not telling me how to set all of this stuff up? I thought that's what I was here for. Let me tell you, because it turns out this is a really common problem, and we have sample projects that need testing too. CitySample did this slightly differently. Titan, our newer, larger open-world project, did it a little lighter-weight. And Lyra. I actually copied off some of their homework to get to this point: oh, Lyra's got this little automation project, and CitySample has this Gauntlet test controller, but nobody can see it. That's fun; we can take a look at that. It turns out we also had a bunch of internal sample projects that needed it too. And it turns out that the use cases, as you talk to people about how they want to test performance, are pretty common. I loved using static cameras for GPU testing. Other people really like using sequence fly-throughs. Or, if you've got a multiplayer game, maybe you want to do replay runs, which record network traffic and play it back so you can basically emulate a played session. And that kind of covers it.

So if a bunch of our internal projects need it and the use cases are pretty common, I don't think a boilerplate fill-er-in-er is the way to go. I'm not an engineer, but if you have to copy and paste the same code, you probably need to write a function, right? Or, in this case, a framework. And it sounds like what we really need to do is define the Epic way. Now you can start taking notes. So, being the Epic guy who cares a lot about performance, I did it. I refer to it as a framework because it's not just a plugin with Gauntlet test controllers. It's a contract from when we make the build all the way to sharing the results, and on the build machine, because it turns out that if I need to get a command-line argument from BuildGraph, where I initialized the test, all the way down to the Gauntlet test controller, we have to have a couple of guarantees at the various layers in between. So that's what we did.

We have this plugin. It is first experimentally usable, or usably experimental, in 5.6, and we've added a few things in 5.7 which I'm really excited about. And you see that little thing right there? Experimental. I really want to emphasize this. The systems that this is built on, the CSV profiler, BuildGraph, Gauntlet, Gauntlet test controllers, all of those things are totally fine. But we're still trying to figure out how we need to develop this framework and fill in a couple of different pieces along the way. So I think you can start using it now; I've got a couple of people who are like, oh yeah, we found it and we started poking around. Great. But there's still some stuff we've got to work through. There are not a lot of guarantees, and there are a lot of assumptions being made about how you want to do things.
The APIs, as always with anything experimental, are subject to change without notice. The builds: it turns out we can do content projects, but you must be building from source; we still can't do it with launcher builds, and we'll talk about some of that in a second. We can also talk about it after class. Extensibility I'm still trying to figure out; I was chatting with somebody earlier about how you might be able to extend this. The documentation? Hi, you're looking at it, plus the EDC article. Examples, though, we do have: we've implemented this in Lyra, Titan, and CitySample from top to bottom. And the version: the code existed in 5.5, but I wouldn't start looking at it until 5.6, because again, usably experimental.

What comes in the box is a system that makes the build, puts the build on the device, runs the test on the device, gets the results from the device, does a little bit of processing on the results that you can then share, and does it all on Horde or whatever build machine you want to use. But here is where you come in. I need you to run a setup script, and I need you to set up your tests. You probably need to modify an XML file, maybe set some command-line arguments, run a batch file, and look at an HTML file. And you still kind of have to make the Horde job template yourself, but that part is optional.

One of the things I thought was really important as we were developing this framework was making it easy for people to set up. The early, early, early version of this was literally called "the build machine boilerplate scripts." Not great. But what we managed to add was a command to UAT, so you can call RunUAT.bat with the "add automated perf test to project" command and just pass in the path to your project (there's a rough sketch of that call below). Do make sure that everything it touches is already checked out: DefaultEngine.ini and the .uproject. Once it runs, it's going to enable the plugin, create the four boilerplate scripts that we need, which are the BuildGraph script, the Gauntlet settings XML, and the batch file and shell script so you can run things locally, and add a dummy sequence to your project settings.

So I mentioned test controllers earlier: the static camera, the sequence, and so on and so forth. There are a few different concepts I need you to keep in your head that are slightly different, and it's really easy to use the same word for all of them. Test controllers are the conditions under which you want to gather your data. If I go to the cardiologist, am I going to be lying in a bed? Am I going to be on a treadmill? Those are the different test controllers. For us, we have the static camera, where we go to a different camera in the world and soak for a fixed amount of time; this is good for GPU performance, and depending on what you're pointing the camera at it could be good for some other things, but I'll leave that up to you. There are sequences, which play a sequence in the level; really good for environment GPU performance, and depending on how you set up your sequence it could be good for CPU too. Replay runs are great if you already have a network-replicated game, because you can replay a match or a level; this is going to give you good results for gameplay as well as GPU. And then I snuck this one in there: the full-screen material test. Instead of looking at the instruction count, we just draw the material and see how long it takes to draw a single full-screen version of it, which is good for getting more information about the base pass cost of your materials.
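Coming back to that setup command for a second, here is a minimal sketch of what the call might look like. The exact command name and argument spelling are assumptions based on how it's described in the talk; the EDC article has the real spelling.

```bat
rem One-time setup: ask UAT to wire automated perf testing into a project. The command
rem and argument names here are assumptions, so check the EDC article for the real
rem spelling. Make sure DefaultEngine.ini and the .uproject are checked out first.
call Engine\Build\BatchFiles\RunUAT.bat AddAutomatedPerfTestToProject ^
    -Project="D:\Projects\MyProject\MyProject.uproject"
```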
One of the things that was really important, and again, this is for tech artists, not for the most technical people, was that I wanted to move some of the configuration into project settings so that I could keep it accessible. I didn't want you to spend a whole lot of time in text editors, so each of these test controllers has its own section in project settings.

The sequence one I kind of have to explain, because we need to alias these things. What we have is a struct called the map sequence combo, where you define the name of the combo itself, then which map you want to open and which sequence you want to play. I think that's really cool, because it means you can run the same sequence in multiple levels, or different sequences in the same level, without having to drop a bunch of Level Sequence actors into your world. My pro tip when you're making these sequences is to keep the stuff in the sequence spawnable; I don't want to have to reference the camera rig rail, I should just spawn it in. And there's a really easy sequence in the Titan project if you want to copy it over.

I did mention that you need to modify an XML file. That's the AutoPerfTests.xml, the base BuildGraph script that you're going to look at. All of your map sequence combo names are going to be right up here, should be easy to find, and you're going to add the name of each map sequence combo as you go along. Should be all right.

Then the static camera: it's just a list of maps, and what I'm going to ask you to do is place these automated perf test static camera actors in your world. Just as a heads up, they are set to Spatially Loaded = false, but what's really nice is that each camera is going to come out as its own separate CSV profile. We do have to add a little bit of an extra section to the XML for that; this is all going to be in the EDC article, so you can copy down the minimal amount of boilerplate that you might need.

And replay runs: we have to record network data and play it back. Really great for automated testing, but it can become stale, because if I'm building out a level and I'm running through, running through, running through, and then one day some level designer decides to put a wall here, the network traffic doesn't know that the wall changed, and I'm just running into the wall. Now my tests aren't consistent and comparable to each other, so you've got to go make a new replay recording. Replay runs themselves are a whole separate talk, but we have a really good example of how these are set up in Lyra, so definitely check that out. And there's a little bit of BuildGraph we have to do there, too.

Materials: we just have a list of materials. You drop them in there, it throws up the full-screen quad, kicks out the CSV, easy peasy.

And that's all well and good, you might say, but how do I actually run the darn thing? Like I said, one of the boilerplate scripts that we kick out is this run-local-tests batch file. It's going to look a little scary, but I'm going to walk us through it. First we have to go find the UAT batch file, and we're going to call the BuildGraph command.
Then we're going to send in the BuildGraph script that we want to run, which is our AutoPerfTests.xml, and the target, which is "BuildAndTest" plus the project name. Then we've got a little bit of boilerplate here: we specify our target platforms and our target configurations (I'll get to those in a second). Then we have the automated perf test options, with the automated-perf-test-sequence-tests property set to true. And then we have a few more options here, which are the data collection methods; we call those test types, and we'll talk about those in a second. And this is what it looks like. We can just double-click on this and it's going to run, with a certain number of iterations so that we can average out some of the statistical variance. And then, finally, there's this generate-local-reports option. (A rough reconstruction of that batch file follows below.)

That's all fine and good, but what if there were some kind of user interface, you might say? I thought you were trying to make this easier for people. Well, I have great news. In 5.6, we added a new Project Launcher, and in 5.7 we made that Project Launcher extensible, and we extended it for our framework. So I can go to Tools, open the new Project Launcher, create a new launch profile, and then, down in the bottom here, add automated perf tests to my launch profile. Now I can specify the test type (I should change that to say test controller), the sequence combo names I want to run, the number of iterations, whether I want to use the CSV profiler, and so on, and then I can just click Launch and it will run. Now, theoretically, you don't have to open an XML file and you don't have to run a batch file; if you want to run one of these tests, you can do it from the editor on a target platform. I'm so happy about this. I literally cried when I saw it the first time, because this is what I was working toward. I wanted to make it easier for people to run these tests, and now you don't have to leave the editor to do it. You don't have to modify anything beyond the .uproject to enable the plugin. It's great.

So, of course, we can run on a bunch of different platforms. We can run on PC, both through Project Launcher and by specifying the target platforms. We can run on consoles; the cool thing about Gauntlet is that it has a device manager built in that figures out, based on the different SDKs, which devices I can access either locally or on the network. It's really, really cool, and it's useful for rebooting the devices in between runs. And of course mobile, similarly with Gauntlet: if I want to run on Android, do I have an Android phone connected? Let me do an ADB check. And this space intentionally left blank.

The other thing you're probably going to have to think about here is the different build configurations, if you're not already aware of them. The top two are going to be Development, which has a lot of information but some additional overhead, and the Test configuration, which has slightly less information; we did change some defaults, and it's a little closer to the shipping experience because there's less overhead. There is Debug, but that's not for this purpose; there's way too much overhead. Debug is not for us. And then Shipping: you're not going to get a lot of information out of Shipping because there's no real overhead, and again, it's not really for this purpose. So we're probably going to use either Development or Test.
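Here is a rough reconstruction of that generated run-local-tests batch file. The RunUAT.bat BuildGraph -Script/-Target/-set: pattern is standard BuildGraph usage, but the specific property names below are approximations of what's shown on the slide, so the script the setup command generates for your project is the source of truth.

```bat
rem Rough reconstruction of the generated run-local-tests batch file walked through
rem above. The -set: property names are approximations of what is on the slide; read
rem the generated script in your own project for the real names and defaults.
call Engine\Build\BatchFiles\RunUAT.bat BuildGraph ^
    -Script="MyProject\Build\AutoPerfTests.xml" ^
    -Target="BuildAndTest MyProject" ^
    -set:TargetPlatforms=Win64 ^
    -set:Configurations=Test ^
    -set:WithAutomatedPerfTest=true ^
    -set:AutomatedPerfTestSequenceTests=true ^
    -set:AutomatedPerfTestType=Perf ^
    -set:AutomatedPerfTestIterations=6 ^
    -set:GenerateLocalReports=true
```

The test-type property at the end is the "data collection method" mentioned above, which is exactly what the next section walks through.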
And then those test types that I mentioned earlier: we've got a few different ones, and they're for different purposes. The perf test type is a good overall gut check. On supported platforms it will run with free dynamic resolution, so the key metrics we'd be looking at are things like missed VSync percent, the dynamic resolution percentage, and hitches per minute. The LLM test type is your low-level memory reporting type; we might run about three iterations of that, and we're looking at things like total megabytes. The Insights test type is going to do one run and give you a .utrace file, so it's good for some of your preliminary investigations, like checking whether you should be looking more at the CPU or more at the GPU. There's a little extra overhead for testing with Unreal Insights, so if we're just trying to get a general gut check of performance on our project, say whether we're running at about 30 milliseconds, maybe we don't need all that additional overhead. And then, finally, there's the GPU perf test type, which is also going to do about six runs, but this one runs at a fixed dynamic resolution percentage, so your key metric there is just going to be how many milliseconds the GPU took.

I'm going to go on a bit of a tangent about metrics, because I think this whole talk is just a bunch of tangents. Missed VSync percent is calculated from CSV profiler results using the PerfReportTool executable; it's based on your target frame rate, and it tells you what percent of the time you missed that target. Dynamic resolution, again on supported platforms, is more representative of the shipping experience, and it's kind of a stand-in for frame cost. So if you run the perf test type and you want to figure out which number you should actually care about, now we can talk in terms of percent dynamic resolution improved or regressed. Hitches per minute does what it says on the tin, and it's going to help find the outliers: missed VSync percent might be 0.2, but if there were two hitches per minute, you also want to know about that. And then, of course, the timer averages are in milliseconds; that's just what you get out of stat unit, all that kind of fun stuff.

The PerfReportTool is going to kick out a summary table HTML. It basically parses all of your CSV results and kicks them out into a spreadsheet in a template like this, as well as, that's the word I'm looking for, some more detailed reports that you can go look at. And it puts it in the Saved folder, under the performance test type, at your project root, so you don't have to go looking around for it. This is the kind of file you might share with your team for the time being. Really helpful. Oh, also, this FPS column here is target FPS, because how are we running at 60 frames per second if my RHI time was 21 milliseconds? Not great.

So let's take a look at the automated performance testing scorecard. We're making the build. We're putting the build on the device. We're running the test on the device, getting the results, and processing the results. You've still got to share the results, but now we've got something that you can share with your team. And you've still got to make your Horde job template, but I don't think that's so bad, because we can run it all on the build machine. And if you do have any questions about running stuff on Horde, go find Jack Condon.
So that's about what I've got, but I do have some next steps. I need your feedback. I want people to start using this and kicking the tires on it, and I want you to tell me what you need out of an automated performance testing framework for the Unreal Engine. Find me after class, hit me up on Bluesky, send a carrier pigeon, I don't care. Let me know. If you're on Epic pro support, I've already gotten my first UDN ticket for the automated perf testing framework, which I counted as a big milestone. But again, I want to make this easier for people, because I think if we make it easier, more people will do it. We'll keep an eye on our performance throughout development, we'll be less concerned, and we won't be stressing out six weeks before we ship. So with that, thank you all so much. I really appreciate y'all coming out. I managed to break the build because of one of the Gauntlet test controllers in...