https://www.youtube.com/watch?v=TbNZ4GKaTow

Welcome to PCG Introduction, Use Cases, and Production Best Practices. My name is Matt Ostile. I will be your substitute teacher today. Now that PCG is ready in 5.7 (spoiler alert), we wanted to get a talk out there that was both an overview for newcomers as well as a way of getting all of our early adopters up to speed on all the new stuff in 5.7. And before I go on to the next slide, just a quick show of hands. Who here is using PCG right now? Okay, that's, you know, not quite as many as I would hope, but we're going to change that today. So I reviewed the lesson plan that your professor provided to me, and he wanted me to first give you a high-level overview of PCG. We'll go into a little history lesson about it. Then I'm going to lay a foundation of what PCG should look like so that we can talk about some advanced concepts and production best practices, and then I'm going to show you some cool use cases at the end. And the thing to keep in mind as we're going through today is that this is not going to be the step-by-step YouTube tutorial that you're going to follow along with later. What I'm trying to do today is cram as much knowledge about PCG into your heads so that when you go watch those YouTube tutorials and start digging into the framework a little bit more, you've got a bunch of little threads that you can start pulling on. So what is PCG? PCG for us is not the broad concept of procedurally generated content. It is very specifically the procedural content generation framework that we've shipped in the engine. It interoperates with a bunch of other engine systems. And the thing to keep in mind about PCG is that it is a tool to make tools to build worlds. And like I said, it is production ready in 5.7. We've got a lot of integrations with other engine systems: instanced actors, geometry script, things like that. And it's really meant to shorten your iteration times.
As we're building bigger and bigger worlds, we can't place every blade of grass by hand anymore. We need something to help us get all of this content into the world. Of course, it's not just about placing things in the world; it's a pretty generalized framework. It's also offline, so we can do it at edit time, we can also do it at runtime, or a mix of both. And the thing to keep in mind is that it's actually designed from the ground up to be used at runtime, so the systems are parallelized and multi-threaded and all that fun stuff. And of course, it's not limited to games. I don't know if anybody went to my colleague Simon's talk earlier; he was showing some non-game uses of the PCG framework as well. It was really cool, so check that out when it gets on YouTube. And the PCG framework is built to support things like determinism. There's actually a determinism panel in the editor. And like I said, it's meant to be performant so you can do it at runtime: caching, multi-threading, parallelism, asynchronous execution. And there are a lot of really useful ways that you can customize it to build your tools. So for example, the new vegetation editor is actually itself a PCG tool, which is really cool. Like I said, efficient runtime and offline execution and hierarchical design, which we'll get into in a second, as well as debugging and profiling. And all of this is really important because PCG is going to be in everything. The more I start to think about PCG and what we can do with it, I'm starting to come to the conclusion that PCG is probably going to be as fundamental to our workflows as materials are right now, okay? It's kind of broad, but trust me. So let's go on a little history lesson. Let's walk through PCG here so that we know where we started, so we know where we're going. So back at GDC 2023, we released Unreal Engine 5.2 and the then-experimental procedural content generation framework.
That came along with the Electric Dreams sample that you can see here, where we're just dragging stuff through the world and the whole world is regenerating around it. This was a really fun GIF. And this is where we started to look into assembly workflows, which we'll talk about in a little bit. Then we get to 5.4. PCG is now beta. We're starting to get the foundation laid, really build it up from scratch. We've got a larger library of operations and types. Now graphs can be dynamically tracked: oh, that input changed, I should automatically regenerate. Then we added runtime hierarchical generation, which is really, really important, as well as a native PCG data asset exporter. That's what we're using for the assembly workflows, building on that, as well as adding in builders and tools that you can use to customize the build process of your PCG, as well as the BiomeCore V1 plugin that I'm showing here, which allows you to make natural environments more easily with a set of rules and assets that are really easy to define. Great example of PCG. Then 5.5 comes out. We've got the Cassini sample here. This added a bunch of GPU capabilities to PCG, so now we can execute some of our nodes on the GPU. This added shape grammar, which is an incredibly powerful tool for PCG that I'll show off in a second, as well as geometry processing, raycasting, pathfinding, and a bunch of performance improvements, so we could do things like an entire asteroid field. Then we get to 5.6, which was only three months ago. This added a bunch of UX, stability, and performance improvements so that we could do GPU processing and instancing for our Witcher 4 UE5 tech demo. A bunch of memory and performance and debugging improvements, a lot of usability for parameters and constants. This added templates, which I didn't even know about until recently. The templates for PCG are great. And a viewport to the PCG editor, because again, it's a tool for building tools.
And one of the nice things is the viewport, as well as BiomeCore V2, which got a lot of upgrades, shored up some of the shortcomings of the original plugin, and added more local biome actors and functionality for overlapping. And now we're in 5.7, right? PCG is now production ready, which means it is actively supported. It works with a bunch of engine systems. If we need to change anything, we will go through the deprecation process, and we're going to try to not break anything. This also added a bunch of new functionality for GPU, so now we can do scene captures, we can run nodes natively on the GPU, we can sample textures as well as fast geometry. And we've added custom type support now. This is really cool for plugin developers who want to get more deeply integrated into PCG: now there is a way for you to define your own custom type for PCG. Very helpful. We've also added the Polygon 2D type and a bunch of operations for that, as well as some new asset workflows. And late-breaking news: Nanite assemblies can now be created with PCG. That part's experimental. We'll get to it in a second. And pretty much every project at Epic is using PCG in some way. Fortnite BR does it. Lego Odyssey does it. We've got a bunch of licensees who are using it now, and we love talking to them about it. And we love to hear what people are doing with their own PCG. So let's actually lay a foundation so that you know what I'm talking about. A typical PCG workflow is going to look something like this. You're going to create a PCG graph. Keep in mind, PCG graphs are not execution flow graphs. Then we're going to use that graph in a PCG component, and we're going to generate data off of that PCG component. And we're probably just going to take that PCG graph from the content browser and drag it into the world, and it'll make a PCG volume. Really easy to get up and running. And typically in the graph, what we're going to do is get, create, or load some kind of data.
And then we are going to process that data into a data set. So we'll sample something, we'll filter it, something like that. And then we're going to perform metadata operations on our created data set: transforms, bounds, things like that. And then we will generate artifacts, typically like spawning static meshes or things. And I'm being kind of intentionally vague about it, mostly because this is not just about the first graph; this framework here is how I want you to be thinking about PCG. But what this might look like in your first graph, for example, would be something like this. We get landscape data, and then we will sample that surface to create a bunch of points, which we will then filter based on some of their attributes, and then we will transform those points: we'll jitter the scale in XY, change the rotation, and then we'll spawn static meshes. And so this might be what it looks like for me to spawn tree meshes on the parts of the landscape that are grass. I was able to do that because the points that we create when we sample the landscape give us some interesting attributes. So if I click on a node and I hit A, that's going to bring up the attributes panel. Don't worry if you can't read this, it's totally fine. But what this shows me is all of the attributes of the data set on the node that I currently have selected. So I can see that a point is position, rotation, scale, and bounds. And points have attributes, things like color and density. And you'll see (you might be able to see it, but you guys probably can't) that there's a dollar sign in front of all of these attributes. Those are the ones that are built into the data set. Now, what's really cool is if we go onto this filter node and click on that. Yeah, there we go.
When you start working with attributes, most of the time the target attribute is going to be this @Last thing, which I'll talk about in a second. But if you want to get at any of the built-in attributes of whatever data set you're working with, we can just click that plus icon there, which pops out that window, and then I can select all of the different attributes that I might want. But for my use case here, where I'm spawning stuff on the landscape, I actually want to get a different attribute. So if we pull up that attribute panel again... oh right, I was going to talk about @Last. So @Last refers to the last modified attribute on the data set, which you can see in the attribute panel. And the attributes that I want to work with are actually over here. They don't have dollar signs in front of them. When you sample a landscape, those points get the layer weights of the landscape as attributes. So I can filter on things like how much grass is here, how much sand is here, and so on and so forth. So we can manually type that in. When you are debugging nodes, if you want to see, like, hey, what does this look like? We can just click on a node and hit D. And what that's going to do is enable the debug checkbox. You can also come here and do that. And it's going to spawn static meshes at those points. We can change the scale method; this one, for example, is going to make the boxes the size of the bounds of the points. You can also change what that static mesh is if you need to. Really, really useful tool when you're working with PCG. And if you ever need to do anything explicitly, you can just plug your data set into the debug node.
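To make that sample-filter-transform-spawn flow concrete outside the editor, here's a minimal Python sketch of what such a first graph is doing with its point data. Everything here (the dict-based point layout, the attribute names like "Grass") is a made-up stand-in for illustration, not PCG's actual point format or API.

```python
import random

def filter_by_attribute(points, attr, threshold):
    """Keep points whose named attribute meets the threshold,
    mirroring the attribute-filter step in the graph."""
    return [p for p in points if p["attrs"].get(attr, 0.0) >= threshold]

def jitter(points, scale_range, seed=0):
    """Randomize XY scale and yaw per point, like a transform-points step."""
    rng = random.Random(seed)
    for p in points:
        s = rng.uniform(*scale_range)
        p["scale"] = (s, s, 1.0)
        p["yaw"] = rng.uniform(0.0, 360.0)
    return points

# Points roughly as a landscape sampler might emit them: a transform
# plus layer weights ("Grass", "Sand", ...) copied in as attributes.
pts = [
    {"pos": (0, 0, 0), "attrs": {"Grass": 0.9, "Sand": 0.1}},
    {"pos": (1, 0, 0), "attrs": {"Grass": 0.2, "Sand": 0.8}},
    {"pos": (2, 0, 0), "attrs": {"Grass": 0.7, "Sand": 0.3}},
]
# "Spawn trees only where it's grassy, with a little variation."
trees = jitter(filter_by_attribute(pts, "Grass", 0.5), (0.8, 1.2))
```

The last line is the whole graph in miniature: sample (the hard-coded list), filter (by layer weight), transform (jitter), and whatever spawns from `trees` is the artifact.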
What I love about PCG, speaking about PCG as the tool to make tools, in the sense that it's kind of like the material system, is that we can expose parameters. So maybe on my first system I want to expose the minimum and maximum scale that I'm going to transform my trees to. I can open up that node, open up the parameters panel, add a couple of floats, drag those in, and connect those up. Which is really cool, because then when I've got that PCG on a component in the world, I can go down into the PCG component and override those values directly on the PCG component, which is quite handy. I can also create an instance of that graph, just like materials. So I go into the content browser, right-click, create PCG graph instance. And then similarly, I've got the parameters overridden here, right? So if I'm a content creator who's working with this, I'm probably not going to be making my own graphs every time, in the way that you don't want to create a new material graph every time you want to change a texture. As the person who made the graph, I might expose a bunch of different parameters that my users can now change; they create instances, and so on and so forth. So we're building on that core foundation: create a graph, use the graph on a component, click generate; get, create, or load data; filter it, sample it; metadata operations; and generating artifacts. It's cool. I want you to keep that in the back of your head. So one of the major things that has come with 5.7 is the PCG editor mode. Now if I'm a content creator, I don't have to manually do anything. If I want to draw a road spline, I just go into the tool, click the graph, draw the spline. And one of the things that I'm going to do in that graph is sample the spline data, for example.
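Stepping back to the parameter story for a second: the graph-versus-instance relationship works a lot like materials and material instances, and you can model it in a few lines. This is purely a conceptual sketch; the class and parameter names are invented, not PCG types.

```python
class PCGGraphSketch:
    """Toy stand-in for a PCG graph with user-exposed parameters."""
    def __init__(self, **defaults):
        self.defaults = defaults

class GraphInstanceSketch:
    """Like a material instance: overrides some parameters,
    inherits the rest from the parent graph."""
    def __init__(self, graph, **overrides):
        self.graph = graph
        self.overrides = overrides

    def resolve(self, name):
        # An override wins; otherwise fall back to the graph's default.
        return self.overrides.get(name, self.graph.defaults[name])

# The graph author exposes min/max scale; a content creator makes an
# instance and only changes what they care about.
trees = PCGGraphSketch(min_scale=0.8, max_scale=1.2)
big_trees = GraphInstanceSketch(trees, max_scale=2.0)
```

The point of the design is the same as with materials: users never open the graph, they just tweak the surface the author chose to expose.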
So now it's really easy for me to create and express myself in the world without having to go click on a blueprint and manually edit spline points. I can just do all of that in the editor mode now, as well as changing a bunch of the instance parameters, things like that. I want to spawn the road. I want to spawn trees. I want to spawn... What am I doing here? Oh, right. I'm spawning power lines, things like that. It's really easy now; it makes things just so much easier as a user to get into these. And we talked earlier about the assembly workflow, and this is one of the ways that we can give our content creators the option to figure out what to spawn. So instead of just spawning a static mesh, what I might do is cluster a bunch of things together and turn that into a level instance. So, right: level instance, make a change, do an update, get on with it. And then what I'm going to do is create a PCG data asset from this level instance. And what that does is put all of the static mesh actor transforms in as points, with attributes like the static mesh that should be spawned there. So then in the graph I can load that PCG data asset, and now instead of spawning individual static mesh trees, I'm copying the points of that level instance to all of the places that I want to spawn trees, right? So now the artist can build these really nice level instances with a bunch of different parts, and it's all going to go into PCG and get spawned as instanced static meshes all together. So we can reduce things like the number of components, right? That's really handy, but we can take this to the next level by tagging these assets. So if I go into the tree level instance, I can add an actor tag, because actor tags become attributes on those points. So in this example, we're tagging the clutter meshes.
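Since actor tags on an assembly end up as attributes on the generated points, it can help to picture the conversion as one tiny function. A sketch covering both the bare-tag case and the colon convention for tagging a value (covered in a moment); the function name is hypothetical, not an engine API.

```python
def tag_to_attribute(tag):
    """Turn an actor tag into a (name, value) attribute pair.

    A bare tag like "clutter" becomes a Boolean True attribute;
    a "name:number" tag becomes a numeric attribute instead.
    """
    if ":" in tag:
        name, raw = tag.split(":", 1)
        return name, float(raw)
    return tag, True

# A downstream graph can then filter on these attributes, e.g. keep
# only the points whose source actor was tagged as clutter.
attrs = dict(tag_to_attribute(t) for t in ["clutter", "intensity:5"])
```

So `"clutter"` resolves to `("clutter", True)`, and a valued tag becomes a numeric attribute the graph can branch on.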
And what we'll do when we grab the attributes off of those points is filter and say, hey, only spawn, like, 50% of the clutter, so that every time we place this level instance, we're going to place slightly different variations of it. And you can do a lot of really wild stuff with actor tags because they are attributes. Oh yeah, here we go. Yeah, now we're dragging into the world. Cool, cool, cool, cool, cool. So one of the other things you can do with actor tags, because those get turned into attributes on the points, is you can give them values. By default, it's going to be a Boolean, right? It's either clutter or not clutter. But maybe I want to set, like, intensity or something. So I can do intensity, colon, some number, and now the value that gets put in that attribute is whatever number I put in there. Really, really powerful stuff. Cool. So taking a break from that, I want to talk about some of these plugins. You know, now we see that the PCG framework has nothing next to it, because it's version 1.0. Hooray, we're production ready. But you notice that most of these other interop plugins are either experimental or beta. And that's because we wanted you to be able to opt into some of these. So, for example, instanced actors and fast geometry: those engine features are themselves experimental, so we didn't want to have to bake them into PCG itself. We're going to let you opt into something like that. I will say I do think geometry script should probably be mandatory, but I leave that up to you. And I'm going to talk about BiomeCore later. Another incredibly powerful tool of PCG is shape grammar. So for example, in this example that I've got, I want to spawn the end of the fence, then do a bunch of stuff in the middle of the fence, and then spawn another end mesh at the very, very end. And what I'll be able to do with shape grammar is I have different modules that I associate with different static meshes, right?
And that's the name (should have zoomed in on that), the names of the static meshes, right? So I know that index one is the end mesh that I want to use. And then we're going to subdivide the spline based on the bounds of those. Yes, we all know how to do that. And what we've got in that shape grammar node there is the sentence: end, then do a bunch of stuff in the middle, then another end. And that's how we're going to subdivide that spline and spawn each of those different static meshes. So at the end, we're going to do this, and in the middle, we'll spawn five or six different kinds of meshes. This is kind of the beginner example of what you can do with shape grammar. Chris Murphy has a video on this that goes into a whole lot more depth on it, which is really cool. The other thing that we can do with splines, and now can do in PCG, is pathfinding. So I can say, hey, from the door of this house, pathfind to the road. And then I can use the information of that spline to carve out a road, you know, a little pathway from the door of my house to the road, right? And we can get a little bit more advanced with that, because we can loop and we can do subgraphs. Much like in material functions, there are subgraphs that we can use. So instead of having all of those houses path to the same point, when we make a path to the target point, we add the path that we're on to the list of places that we could pathfind to. So I have my goal positions, which are the points of the spline. I pathfind to one. And then whatever path I have found, I add back to the goal positions. So now the next house can go to the road that I'm currently on, or the road that I had just created, which (yep, connect that up) is going to let us draw sort of more deeply nested graphs, right? I'm pathing from one house to the nearest road. This gives us some really interesting possibilities.
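The accumulating-goal trick can be sketched in a few lines. This is a toy model that assumes straight-line "paths" and plain tuples for positions; real PCG pathfinding works on point data and splines, so every name here is illustrative.

```python
import math

def nearest_goal(start, goals):
    """Pick the closest goal position to path toward."""
    return min(goals, key=lambda g: math.dist(start, g))

def connect_houses(houses, road_points):
    """Path each house to its nearest goal, then feed the new path's
    points back into the goal set so later houses can join it."""
    goals = list(road_points)
    paths = []
    for house in houses:
        target = nearest_goal(house, goals)
        path = [house, target]   # straight line stands in for real pathfinding
        paths.append(path)
        goals.extend(path)       # the new road becomes a future goal
    return paths

# One main road point far away; the second house is right next to the
# first, so it should join the first house's driveway instead.
houses = [(0, 0), (0.5, 0.2)]
paths = connect_houses(houses, road_points=[(10, 0)])
```

Because each found path is appended to the goal list, the second house connects to the first house's road rather than cutting its own path all the way to the main road, which is exactly what produces the nested, organic-looking networks.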
And then, like I said, this one's kind of late breaking. I had to add this slide at the very last minute, but there is experimental support for creating Nanite assemblies with PCG, which is incredibly powerful. There is probably a talk coming from me later next year about really in-depth greebles with PCG. I'm going to be really excited for it. And of course, profiling, right? We have to profile, because one of the things that's really, really important with PCG is reactivity. I don't want my artists to click a button and wait for 10 minutes. We have a profiling panel that's going to show you which nodes take the most time and what we can do about it. If you've ever been to any of my talks, you know I love profiling. I'm doing two more talks about profiling at this Fest. And it's really important because PCG can generate a lot of points. You can very quickly spawn way too many points, and we don't want to have to deal with all of them everywhere. So, what PCG can do: the key takeaway from this example is the Is Partitioned checkbox. By default, PCG will spawn instanced static mesh components on the PCG volume that you've dragged into the world. Not necessarily great when we've got a 16 kilometer by 16 kilometer open world. So if you tell that PCG graph it is partitioned, it will instead spawn those instances on instanced static mesh components of the PCG world actor, which is world partitioned. It's really handy. I think this one is probably going to be pretty mandatory for most people. The other nice thing that we can do is hierarchical generation. So we can perform one operation at a really, really high level that might be complex or complicated, and then pass that down to increasingly smaller grid sizes. So maybe I spawn the trees at a really high grid size, you know, 256 meters, and then the rocks I only spawn at 10 meters. And we can pair that with runtime generation.
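One way to picture hierarchical generation is that each grid size just buckets the world into cells, and each operation runs per cell at its own granularity. A hand-rolled 2D sketch with arbitrary units, not the engine's partitioning code.

```python
import math

def grid_cell(pos, cell_size):
    """Which partition cell a 2D position falls into at a given grid size."""
    return tuple(math.floor(c / cell_size) for c in pos)

# The same spot belongs to a coarse cell for the large-scale pass
# (trees at a 256 m grid) and a fine cell for the local pass
# (rocks at a 10 m grid), so far-away areas can skip the fine work.
tree_cell = grid_cell((300.0, 40.0), 256.0)
rock_cell = grid_cell((300.0, 40.0), 10.0)
```

The coarse pass produces far fewer cells, so the expensive high-level operation runs rarely, while the cheap fine-grained pass only runs for cells near the generation source.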
So when you combine partitioned, hierarchical, runtime generation, you can completely replace landscape grass types. You should probably completely replace landscape grass types. The trick with this one is that on the PCG world actor, you need to select "use editor viewport as generation source"; otherwise, this stuff's not going to show up. And what's the other... Oh, totally forgot about this. The other really cool thing is that because we can run PCG nodes on the GPU, we can also spawn GPU-only instances. So all of the math to get your grass only runs on the GPU; it never hits your CPU. So, like I said, a couple of things to keep in mind with runtime generation. You have to trigger it somehow. The nice thing when you're working in the editor is that if I change a parameter on my PCG graph, it's going to auto-generate; that's the default behavior. But if you're trying to do something like, oh, I need to spawn a new POI in the middle of my world at runtime, you do have to manually trigger that. Just a thing that I need you to know. But the other nice thing about runtime generation is that we don't have to store those instance transforms on disk. So maybe you've got, again, every blade of grass; that might be a lot of instances that you'd have to store offline. Now we can just offload that work to the GPU. There is some other cool stuff we can do with the GPU, right? We can sample textures. We can actually debug and see the HLSL that gets generated for our GPU nodes. Spawning GPU-only instances is incredibly powerful. Do keep in mind they don't show up in the Lumen scene. So it's really great for small stuff, not necessarily great for big stuff.
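The triggering rules above boil down to one asymmetry: in the editor, tracked parameter changes regenerate for you; at runtime, nothing happens until you ask. Here's a toy model of that behavior; the class and its methods are invented for illustration and are not the PCG component API.

```python
class RuntimePCGSketch:
    """Toy model of PCG trigger rules: parameter edits auto-regenerate
    in the editor, but at runtime generate() must be called explicitly."""
    def __init__(self, in_editor):
        self.in_editor = in_editor
        self.generation_count = 0

    def set_parameter(self, name, value):
        setattr(self, name, value)
        if self.in_editor:
            # Editor path: dynamic tracking notices the change.
            self.generate()

    def generate(self):
        self.generation_count += 1

editor = RuntimePCGSketch(in_editor=True)
editor.set_parameter("density", 0.5)        # regenerates automatically

game = RuntimePCGSketch(in_editor=False)
game.set_parameter("density", 0.5)          # no regeneration yet...
count_before_trigger = game.generation_count
game.generate()                              # ...until triggered explicitly
```

This is the "spawn a new POI at runtime" situation: the parameter change alone does nothing until your gameplay code issues the generate call.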
And then some advanced generation concepts. This is going to get a little squiggly, because again, I just want you to know that this stuff exists, so that when you go back and start playing around with PCG, you know that by default PCG is going to generate on load. But load does not mean level load; it means when we cook the build, which is really handy, as opposed to something like construction scripts, which we'll talk about in a second. We can also generate on demand. So, like, maybe I don't want my PCG system to execute every single time I change every single parameter; maybe I only want it to generate when I click the generate button. I can do that. And of course at runtime. The other thing (there are so many other things) is we can make our PCG components editor-only, because the graph is going to run and generate the instanced static mesh components, but then we don't really need the PCG component to stick around. So you can say, hey, this component is editor-only, and it'll cook itself out, which is nice. Another thing that I need you to be thinking about is PCG builders. These are assets that control how we generate PCG through a commandlet, so that generating the PCG becomes part of our build process. And what will end up happening is our users are going to set the settings on the PCG component, but they're not going to check in all of the PCG components in the world, because that might create a lot of source control contention. So instead, we're going to offload that to our CI/CD. We're going to make it run on Horde, or have it run locally, with these PCG builder assets. And I want to, not briefly, talk about some production best practices, things that I want you to be thinking about, not necessarily at a technical level, but at a design level, when we're sitting down and figuring out how to actually leverage the system.
So I need you to think about the needs of your PCG system. You don't have to go full PCG, but it's really important to know at what level you want to give artistic control, and at what level you want to have people placing things manually. Maybe you just do rocks, and that's totally fine. Or maybe you do an entire city. That's totally fine. But you've got to think about that. Is it going to be done at runtime? Is it going to be done at edit time? Is it going to be this huge, big, systemic thing where a bunch of graphs are passing points to other graphs that then get picked up in other graphs that spawn worlds? Or do I just want the PCG graph to spawn static meshes? It's entirely up to you. Another thing to think about with PCG is who is going to be reading and editing the code. Maybe you need to get an engineer in there to look at your PCG graph to figure out what is causing performance issues or something like that. We do want to have readable graphs. Yep. Cool. Again, I'm going to keep harping on performance; it's kind of my job. Because you want to have live feedback. You want editing to be an easy thing, where an artist can just paint things into the world and move things around and see the world change as they're doing it. So that's why I think it's really important to keep an eye on your performance: having debug options, figuring out what you can pre-calculate, what you can store locally, what you want to run from disk. Partition everything. Have stuff opt-outable if you need to, so that we can provide these optimal paths to our artists. And again, going back to PCG builders, we can have basically a preview mode, so that when somebody's changing something in a graph in a world, yes, it's going to update, but the build machine is going to be the thing that handles the big generation, because then we can pass that off to HLOD generation as well. You can also use editor utility widgets to chain which PCG graphs need to execute in which order, which I think is really helpful.
Speaking of editor utilities, with assembly workflows this is really, really powerful, right? I have a thing that I can edit; I can tag those things. Those tags that you set on the assemblies should probably be something you set with an editor utility widget, not by making your artists go into every single actor and type out the actor tag that they need. Because anytime somebody has to type something out, there's room for error, right? But this lets us define a bunch of really interesting procedural rules. We can build assemblies procedurally as well if we want to. And we do want to think about how we're configuring our PCG. The default settings are there because we think they'll be good general defaults, but they're not set in stone. And you should probably start looking into cache sizes, editor and runtime frame times, thread counts, max loop counts, things like this, based on the needs of the PCG systems that you're developing for yourself. And then, of course, I do need to mention again: tracking only works for editor workflows. It doesn't work at runtime. And this is kind of my hot take challenge to you, which is: instead of using blueprints, try to use PCG. If you've got a construction script, make that PCG instead, because what's really nice about it is that if you run it in PCG, those instances exist. The instanced static meshes are just going to exist in the world until they need to be changed. But a construction script runs when you open the level and when you load into the level at runtime, which can be kind of expensive. If, for example, you are building a library generator in a construction script and you don't turn off the collision on all of the instances of all of the books that you are spawning, you are going to create 170,000 physics bodies, and it'll take you seven minutes to open the map. Not great. But if we did it with PCG, those instances are just going to be there and not have to get spawned dynamically.
The other place where we can get Blueprint into our PCG process is, maybe instead of just dragging a graph into the world, I have a blueprint with a PCG component and a box collision or a spline on it, and that's how I get the bounds of the PCG graph into it. And then at the blueprint level, I can expose these parameters into the PCG component without having to run through all of the really expensive stuff that could happen on construct. And have rules. Have rules for how you're building your PCG, and get the taxonomy right from the get-go. Get those tags that your artists are going to put on their assemblies right. Get your attribute naming scheme right, because you can add your own attributes; you can make your own attributes on points and all this other data. And of course, we have to tell our users when data is missing, when something's not working right. And again, I really want you to think about what should be done manually. Where do we want to expose the inputs to our users? Maybe it's good enough to have someone place the road, rather than building a PCG system that defines where the road should be, right? Think about those different levels. Cool. And I want to show you a few examples as we're wrapping up. I promise there's going to be a lot less text from here on out. So I do want to talk about the Electric Dreams sample. It is still definitely relevant. It's a great example of hierarchical generation: at what level are we processing data and exposing it to downstream systems? It's also a good example of using Blueprint properties. And I think we converted this to use PCG data assets for the assembly workflows. So it's still a totally valid sample to check out. BiomeCore V2. This is a big one. For one, it's a very big plugin. It is a fixed pipeline. It is an excellent example of cross-graph communication.
So the biome core actor and the local biome volumes are passing data out of the PCG component on the component itself, which is also serialized, and that is then picked up by other systems to work with. So if I wanted to, for example, mash two biomes together, I can do that. It is an example of making something really complex with PCG and exposing simple things to our users. What I do want to let you know, and what I really need you to lock in on here, is that there are productions that are using something like BiomeCore. They looked to BiomeCore for inspiration, and they're using BiomeCore-like concepts, but they're not necessarily using BiomeCore itself in production. And I wouldn't actually recommend it. What I would recommend, if you want to start using it and start looking at it: go into the engine folder and copy the plugin into your project, because if we ever need to make changes to BiomeCore, it's a little harder for us to deprecate than PCG itself. So copy BiomeCore. Yes, cool. And then the Cassini sample is another one that I want you to check out. This has a lot of good examples of how we can use shape grammar to define our... I thought that video would play. For using shape grammar, things like that, we just write a phrase out, and then it's going to spawn along the spline. Good example of runtime generation. Good example of... oh, I think they even do some distance field building stuff in this, which is really, really cool. And also spline meshes with PCG. And then this one, you don't get to check out under the hood, but I do get to show you. So Lego Odyssey, our Lego experience in Fortnite, uses PCG a lot. Right here is their editor utility widget for their assembly workflows. So you can see they have a bunch of tags that they can use. And one of my favorites is STL, which stands for snap to landscape.
So I might have a kind of more complicated level instance, and I don't want to have to deform the landscape to fit the level instance. Instead, I can snap all of those points down to wherever they need to be on the landscape. And I know that this actor is a thing that needs to snap to the landscape, but that one isn't. Again, a really powerful use of that. Got that, got that, got that. Oh, and the POIs are built out of point clouds, again, all with assemblies. So there's a bunch of other stuff you can do with PCG. We can do dungeon pools. We can actually spawn actual levels or level instances with PCG. So your designers can build out a room, and then you as the PCG person can define the rules for how all of these rooms are going to get snapped together. Or you could do them in subgraphs. It can get really wild. There are a bunch of possibilities for tile-based games. Lego Odyssey, of course, is tile-based. We have seen examples of hex tiling with this as well. Scattering debris throughout your world. Our tech audio folks use PCG to place ambient audio based on the biome and other stuff in the world. A lot of people are looking into machine learning-based applications, both using machine learning to inform what gets generated with PCG, as well as using PCG to train machine learning models. There are opportunities for things like voxelizing clouds, and that's really wild. And there's so much more we can do with PCG. I just want you to start using it. So in summary: good news, PCG is production ready. You can definitely start using it now. I really hope you do start using it to speed up your workflows. I want you to think about all the different things we can do in editor and at runtime. And as you're starting out with PCG, start small, build on that foundation to get bigger and bigger and bigger, and think about how we can connect all of these systems together. And here at Unreal Fest, there is more to see about PCG.
So coming up next in room eight, my colleague Hugh is going to talk about the runtime PCG we used in the Witcher 4 Unreal Engine tech demo. My buddy Tomislav has some PCG stuff going on in his technical content grimoire. And sort of semi-related to PCG, Simon is going to talk about the future of Nanite foliage in room 6 at 2:30. And of course, we all missed Simon's talk about PCG earlier, but if you have questions about PCG, definitely go talk to Simon. And finally, a big thank you to my colleagues, Camille Kay, Chris Murphy, and Jean-Claude Sebastien Kwai, for their incredible and invaluable help in putting together this presentation. And a big thank you to all of you who are using PCG. The team at Epic loves seeing everything that you're building with it, and they find it really inspiring. So please, please, please keep up the hard work. And with that, this is a link to my Linktree, and I've got a section down there of cool PCG resources and talks that I definitely think you should check out. And that's it, that's my time. Thank you all so much, I really appreciate it.