https://m.youtube.com/watch?v=DRlhpzc7ImA&pp=0gcJCSMKAYcqIYzv

Hello and welcome to the final video in our series on the development of Arc Raiders. In the previous two episodes we detailed how Arc Raiders changed from the fast-paced co-op giant machine hunting game to a more intimate and tactical PvPvE extraction shooter. But the dream of taking down large, intelligent machines never died. In fact, it may be the defining technology pillar of this entire project. I say "may be" because for years Embark has struggled to wrestle results out of this new tech.

You see, early in Arc Raiders' development, the team at Embark decided to gamble on an emerging technology to make machines that would seem to move with intelligence. It's not AI in either the traditional gaming sense or in the new generative sense, but rather an adoption of breakthrough machine learning research from the field of real-world robotics: a way to allow the machines in this game, both large and small, to move realistically, to cross complex terrain, to react, fall and stumble when under enemy fire, and to walk, run and jump even when one of their limbs has been blown off. I spent a lot of time talking to a lot of smart people to understand how exactly this technology works, the groundbreaking new game design language it opens up, and frankly how difficult it is to get any of this to work effectively.

But before we get to any of that, let's first quickly remind ourselves about the machines that populate this world, the instruments of the game's antagonist, ARC. We filmed these interviews in 2024, around a year before the game's release date, so much of what the interviewees talk about has itself changed in the 12 months since. What work is left to do, and how is Arc Raiders likely to evolve before it's released?

Internally we broadly divide the enemies into aerial enemies and ground enemies, just because of how we have to build them, and how they behave tends to most cleanly fall into those two categories. And then within those categories it tends to be lighter, fodder-style enemies that, if ignored, will hurt you but aren't a tremendous deal to take down, up to much heavier enemies that are a significant threat to you and take a lot of effort. So on the low end of our aerial enemies we have the Wasp, which is just a pretty light aerial drone that is somewhat erratic in its movement. It has a light machine gun and it will pepper you with bullets. And then its sort of big brother version is the Hornet, which has significant armor plating, which means the player either has to use maybe a type of grenade to take it out, or flank around behind it and take out its thrusters, which are unarmored.

The Rocketeer is an aerial enemy, and it's kind of what it sounds like: a very large, fairly heavily armored aerial drone that will saturate areas of threat with rockets from a distance, so it kind of flushes players out. If you want to just shoot for the center, maybe it takes a few more hits, but it's easier to hit, and each hit will knock it back, so if it was getting ready to shoot at you, you can throw its aim off just by knocking it a little bit. And that has always been the case; there's always been this concept of a popcorn enemy, where you get a lot of them and they don't take much damage to destroy, but it's like pop, pop, pop, it feels very satisfying.
And then our ground enemies are primarily our answer to interior spaces, which again in Pile 1 weren't really a factor. It was a building you would run through at 50 kilometers an hour, because our move speed was so high then it didn't matter. Whereas now you are slowed down and are interacting with containers, and we need a way to present threats to players in interior spaces. So we've gone through a lot of types in that area, but today we have what we just charmingly call the Fireball, which is basically this metallic sphere that can roll quite easily in interiors, and then it opens up and shoots fire at you.

The initial need was to have an enemy that was really fast and that could really catch up to players, right? We needed something that would be able to traverse our world in the game. It's beautiful, but it has quite a lot of complex geometry, and there's all of this verticality and all of these slopes, and it's quite difficult for many ground-moving enemies to navigate. So we were like, okay, we need something that can be on the ground but that can really go after players when they're swimming around and jumping and bolting and going with their snap hooks and stuff. And then we were like, oh, wait a second. A ball. You don't need legs, you don't need wheels, you don't need tracks. Just roll. It just goes to where it needs to be, right?

We have another one called the Tick, which is kind of a creepy little spider-ish robot. It can cling to walls and hop off of them, and it will summon other enemies if you get close to it.

So the loot from the AI is the nicer stuff. You find a core from one of the enemies, and it's a grenade. There's an enemy that calls other enemies, and the piece you loot from it does the same, so you can trap another player: just throw the thing and enemies will drop on them. We are trying to make you use the loot that way.

The first step on the journey to realistic machines is creating a world in which physics are accurate. Most games will animate their characters irrespective of the laws of physics. But if Embark was going to create a world in which intelligent robots could walk around as they saw fit, they had to ensure that mass and gravity were consistent, and that the world of Arc Raiders was capable of creating emergent moments between player, machine and environment.

Our enemies are fully physicalized. All parts, everything, they use physics, to an extent that sometimes it's even a challenge. Because if we really want a drone to do a barrel roll, it can't, because of its weight and its thrusters; all of that is super realistic in the game, right? So it's not easy to justify it doing a barrel roll. And all the parts of the drones can be peeled off as well, and the drone will actually react to that. If it has one thruster left, it's going to try to shoot, and that is going to impact its trajectory and all of that.
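Embark hasn't published any of this code, but as a rough illustration of why a fully physicalized drone behaves this way, here is a minimal sketch, with invented names, numbers and thruster layout, of how the thrust from each surviving thruster sums into a net force and torque on a rigid body. Remove one thruster and the torque no longer cancels, so the machine lists and its trajectory drifts rather than playing a canned animation.

```python
import numpy as np

# Hypothetical illustration: net force and torque on a drone body from its thrusters.
# Offsets are measured from the centre of mass (metres); thrust is in newtons.
# Losing a thruster removes its force AND unbalances the torque, which is why a
# physicalized drone visibly tips and drifts when a part is shot off.

def net_force_and_torque(thruster_offsets, thrusts, up=np.array([0.0, 0.0, 1.0])):
    force = np.zeros(3)
    torque = np.zeros(3)
    for offset, thrust in zip(thruster_offsets, thrusts):
        f = thrust * up                 # every thruster pushes along the shared up axis
        force += f
        torque += np.cross(offset, f)   # off-centre thrust produces torque
    return force, torque

# Four thrusters at the corners, each producing 250 N (made-up values).
offsets = [np.array(p) for p in [(1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0)]]
healthy = [250.0, 250.0, 250.0, 250.0]
damaged = [250.0, 250.0, 250.0, 0.0]    # one thruster shot off

print(net_force_and_torque(offsets, healthy))  # balanced: zero torque
print(net_force_and_torque(offsets, damaged))  # unbalanced: the body starts to tip
```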
I think it's a choice between control versus emergence, really. Look at a game like Dark Souls or Bloodborne, where they have these enemies that have really specific patterns, and it's like they want the player to learn: okay, after three seconds it's going to move to the left, and then it does this move which takes X amount of time, and then something else happens. If you want that kind of control, you cannot use physics. A lot of design philosophies, especially with enemy design, are really difficult to pull off when you have these kinds of systems, because it's so much about pattern recognition, right? You have an enemy moving in a certain way and you think, oh, that's the opening for the attack, I can go in. Or you as a player know that if I do this, I'm going to get this reaction from it, and that's going to happen every time. That's what games like Dark Souls and Monster Hunter rely upon.

And we had this ambition to have boss fights in that same arena as Dark Souls and Monster Hunter, having all of that compelling pattern recognition. But then we also realized that, well, if you knock it in a certain way and the slope is a little bit angled and, you know, maybe someone else is standing somewhere else and firing off a grenade, then that is not going to work. You're not going to have the pattern that you thought. Like if you've got a big enemy flying in the middle of a pack of smaller drones and you throw a grenade and it explodes, and then maybe one of the engines flies off and hits this other one, which then crashes to the side. You get all of this stuff, these kind of cinematic moments that are really memorable. And that's not something anyone's designed as such, it's just something that's happened naturally, which is really cool. And we thought it was worth that trade: maybe we don't get to do exactly what we want all the time, but we get these cool, memorable moments that can just occur without anyone having planned them.

Getting physics to work for barrels or grenades is one thing. Getting physics to work for drones with parts that can be blown off is a whole other challenge. But getting physics to work for machines the size of buildings is an entirely different story. This is due to the nature of most game engines and physics simulations. Games are not used to calculating mass at that scale, and having enemies weighing thousands of tons creates challenges in how they move and interact within the world. Because sand in video games isn't sand, it's geometry, textures and shaders. Because rock and dirt don't crumble under the weight of heavy things in most games.

Inside the simulation there are just numbers at work here. So the weights of things are there: you set, you know, this leg weighs six tons, or the center of mass of this robot should be 10,000 kilos, or something like that. But to get it to support its own weight, you have to set values on how much strength each limb has. And when you start dealing with very, very insane numbers to try and support the thing, you get behaviors which are sort of uncanny: very, very stiff legs that have a lot of energy but do these small little inputs against the ground, and you get this rigid bounciness happening. And if you want to dial that down, then you get something that can't support itself, and it starts to trip or fall over and this sort of thing. So there's this middle ground, which was really hard to find, to make it look natural but also give it the power to actually locomote.
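To make that tuning window concrete, here is a minimal sketch, with invented numbers and a deliberately crude spring-damper model of a single leg drive, of the trade-off being described: too little strength and the multi-ton body sags under its own weight, too much and it holds itself up but answers every disturbance with fast, twitchy bouncing.

```python
import math

# Hypothetical sketch of the tuning problem described above: a multi-ton body held up by
# one leg modelled as a spring-damper joint drive. All numbers are invented. Too little
# stiffness and the body can't support itself; crank the stiffness up and it holds, but
# any disturbance comes back as high-frequency "rigid bounciness".

MASS = 10_000.0        # kg, roughly the scale mentioned above
GRAVITY = 9.81
REST_HEIGHT = 4.0      # metres, where the hip should sit
DT = 1.0 / 60.0        # one physics tick

def settle(stiffness: float, damping: float, seconds: float = 5.0) -> float:
    """Semi-implicit Euler integration of hip height under a spring-damper leg drive."""
    height, velocity = REST_HEIGHT, 0.0
    for _ in range(int(seconds / DT)):
        drive = stiffness * (REST_HEIGHT - height) - damping * velocity
        velocity += (drive - MASS * GRAVITY) / MASS * DT
        height += velocity * DT
    return height

for k in (5e4, 5e5, 5e6):                          # weak, "middle ground", very stiff
    damping = 0.3 * 2.0 * math.sqrt(k * MASS)      # keep the damping ratio fixed
    sag = REST_HEIGHT - settle(k, damping)
    bounce_hz = math.sqrt(k / MASS) / (2.0 * math.pi)
    print(f"stiffness {k:9.0f} N/m: sags {sag:5.2f} m, responds at ~{bounce_hz:3.1f} Hz")
```

With many such joints interacting on one body, that already narrow window gets even harder to hit, which is the middle ground described above.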
It is a classic game issue that when you have physics, things are not penetrating each other, right? You have things resting on top of something; you don't get them buried deep beneath. And if we're talking about these big bosses, they would bury themselves into concrete or anything, right? Not just sand. But we can't really do that in games without a lot of smoke and mirrors.

Yeah, it really breaks the illusion when you want to see an AT-AT foot come towards Luke's speeder and go, and that doesn't happen. What happens is it kind of goes like this, and you're like, huh, I thought this thing was supposed to be huge and big, right? It just doesn't sell it.

Early in the development of Arc Raiders, Embark decided to establish a team to explore the possibility of using groundbreaking research from the field of robotics to enable the enemies to locomote independently of any preset animations. Decision making would still be handled by game logic, but how a machine got from A to B would be decided by a brain generated using machine learning. You know those Boston Dynamics videos where they try to teach a bipedal robot how to jump up on a box? That's kind of what Arc Raiders is trying to do, but their robots are way bigger and they have way more legs. And crucially, they have to be able to do something that traditional games can't: when you blow off one of those legs, these robots have to be able to adapt.

I think, as I've been told, their objectives have been much closer to the objectives Boston Dynamics has than to game design problems. They're literally figuring out how these machines will navigate with their own vision. They had to be physically accurate; also where you put the engines, and the downwash, all of that has to be accurate. They have to be in accurate locations for it to work.

What was exciting, and what's still exciting, about using this technology for our locomotion system is the losing of limbs, and that it still wants to stand up. It just opens that door to the imagination of what self-preservation is, and how that speaks back to the players. To us, when we saw that at least, that is an exciting prospect of a new type of experience that players haven't had before, I think. They're literally playing against a machine.

The ML only did, at that point, locomotion. So that was responsible for making it go from point A to B, and that's it. No ML in decision making whatsoever.

It depends a little bit on the model. Some models consume a lot of memory, and that is maybe the biggest consideration; some language models and such consume gigabytes of memory, but this locomotion does not consume that much. So I'd say the expensive part of ML is what we call observations, and that is actually not really ML per se, that is just how the agents get information about the world. They don't see the world the way a player sees it, of course. If they want to see a wall, you have to do a line trace or a shape cast to detect it, and we need to do a lot of those, maybe thousands, so they get an idea of what kind of environment they are in. At least this is the case for locomotion.
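As a rough sketch of what that observation step might look like, here is a minimal example, with an invented engine hook and ray layout rather than Embark's actual setup, that samples the ground around an agent with a fan of downward traces and packs the results into the flat vector a locomotion policy would consume.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of the "observations" step described above: the agent can't see
# the rendered world, so we sample it with ray casts and pack the results into a flat
# vector for the locomotion policy. The engine hook `raycast_down` and the ring layout
# are invented for illustration; a real setup fires thousands of traces per agent.

@dataclass
class Hit:
    distance: float   # metres from the ray origin to whatever it struck
    hit: bool

def raycast_down(x: float, y: float, max_range: float = 20.0) -> Hit:
    """Stand-in for an engine line trace straight down at world position (x, y)."""
    return Hit(distance=max_range, hit=False)    # flat, empty test world

def terrain_observations(agent_x: float, agent_y: float,
                         rings: int = 4, rays_per_ring: int = 16) -> list[float]:
    """Sample ground distance on concentric rings around the agent."""
    obs = []
    for ring in range(1, rings + 1):
        radius = ring * 1.5                      # metres between rings (made up)
        for i in range(rays_per_ring):
            angle = 2 * math.pi * i / rays_per_ring
            hit = raycast_down(agent_x + radius * math.cos(angle),
                               agent_y + radius * math.sin(angle))
            # Normalise so the policy sees values in a stable range.
            obs.append(hit.distance / 20.0 if hit.hit else 1.0)
    return obs                                   # 4 * 16 = 64 floats in this sketch

print(len(terrain_observations(0.0, 0.0)))       # 64
```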
The road to getting this locomotion system to work has been long and difficult, and it has involved several different stakeholders from within the team, from animators to technical animators, artists and gameplay engineers. It also required the studio to look outside, into the world of research.

I did my PhD in mathematics and thought pure mathematics was a bit too dry, so I looked for applications, and the application I found was applying machine learning to biology. I worked on hereditary diseases like diabetes and Crohn's disease and tried to identify the target genes involved through machine learning. It's completely revolutionized the field. I think that very soon we will see a Nobel Prize go to Google DeepMind for their work on biology applications of machine learning. I'm certain of it.

Really early on, there was a test where, even though the gait, the way it walked, was kind of meh, we threw boxes at it. We were just throwing these, I think they were 100 kilogram boxes, and it just reacted, and you know, that's when you see: holy shit, there's magic here.

From an animation perspective, I'm used to thinking, okay, how would I break this down into, let's say, a sequence of animations? In this case it would be: we can throw a box at this from any angle, in any direction, at any weight or whatever, and I'm thinking of the animation complexity. Well, how do I handle arbitrary complexity? Then I need an infinite amount of animations or blending, and that's just a nightmare, which is why you often see things like, you know, if I punch you in the shoulder in a video game, you just do this "oh" cave-in animation, the canned sequence. And the funny thing with this tech, I think, is that it's kind of partially ruined animations in other games for me a little bit. So now when I look at other games, I'm like, wait, the center of gravity is off here. This doesn't feel right anymore. And it really annoys me.

A connoisseur of animation, I hear.

But everything else they do looks so good. When they walk on flat ground, that's an authored animation cycle that looks really good, and an animator's been like, I want to convey this sense of weight, this sense of menace in this gait, or this sense of joy or whatever. And we just take whatever the robots give us, and it frequently looks kind of shitty in comparison to these Disney walks, right? But on the other hand, when you throw something at it, or when it slips, or when the ground is not even, our robots do the right stuff. And their robots are not robots. They don't do the right stuff. They just apply one out of the 20 animations that they have, and it doesn't fit.

The process of training these brains is complicated. It essentially requires somebody to give the brain certain conditions, have it train using those conditions, and then test the brain in a machine to see what it does right and what it does wrong. An example of a condition could be: if an enemy is far away, run fast to get to them; or if a human enemy is firing at you, attempt to jump over them. The core problem with this work is that once one of the brains is baked, you can't simply tweak its behavior. You have to go back to the original conditions, retrain the entire brain, and then test to see if the new brain works better. And this causes a problem, because the solutions that a human might have can be very different to the solutions these robot brains come up with.

We found some fun playing against the big ML enemies, right? And the problem was to replicate that fun. It was very difficult because it's not predictable. It's very hard to just say, do this. It doesn't follow orders that way. You have to train it and teach it. And sometimes it finds weird ways to do things more efficiently, like lifting all of its legs but two off the ground, because that's faster for it. But then it looks silly. It doesn't become the menacing enemy that we wanted, right? So we always had that dilemma dealing with ML: the predictability part of it. The damn agent learned that if I just shove my foot into the ground really hard, faster than the server can keep up, I can use the penetration to bounce myself, right?

A typical example is that we give it a cookie when it moves towards a flag, right? And when it gets to the flag, we move the flag to a new place. At some point in time, we tried to train it so that the closer it was to the flag, the bigger the reward it got, the better the cookie. And what it learned, of course, was that it runs up to the flag and then stops with its nose this close to the damn thing. And it's like, I know if I touch it, it goes away and stuff starts hurting. That's much worse.
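That flag story is a textbook reward-hacking case, and a minimal sketch, with made-up numbers rather than anything from Embark's training setup, shows why: a reward paid for simply being near the flag can be farmed by parking next to it, while a reward paid only for progress made each tick leaves nothing to earn by standing still.

```python
# Hypothetical sketch of the reward-hacking story above, not Embark's training code.
# Reward A pays the agent simply for being close to the flag, so parking with its
# nose against the flag earns a big cookie every tick without ever touching it.
# Reward B pays only for the distance closed this tick (progress/potential-style
# shaping), so standing still earns nothing and touching the flag is still worth it.

def reward_proximity(dist_to_flag: float) -> float:
    return 1.0 / (1.0 + dist_to_flag)             # bigger cookie the closer you are

def reward_progress(prev_dist: float, dist: float, reached: bool) -> float:
    return (prev_dist - dist) + (10.0 if reached else 0.0)

# The exploit: hover 0.1 m from the flag for 100 ticks instead of touching it.
hover = sum(reward_proximity(0.1) for _ in range(100))
touch = reward_proximity(0.0)                     # flag then moves away
print(f"proximity reward:  hover={hover:.1f}  touch-once={touch:.1f}")   # hovering wins

hover = sum(reward_progress(0.1, 0.1, reached=False) for _ in range(100))
touch = reward_progress(0.1, 0.0, reached=True)
print(f"progress reward:   hover={hover:.1f}  touch-once={touch:.1f}")   # touching wins
```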
But you're touching on something that I think has been a huge problem, which is the interaction between game designers and the ML loop. That's been really hard. For a long time, I felt like the game designers would ask us for something, and we'd think, that's possible, let us get back to you. And then three months later, we'd be like, yeah, now we have the stomp attack you requested. And they're like, oh, we worked around that now.

The team at Embark has tackled these issues with a number of solutions. One is training the brains on actual animation data to help guide their movement, almost like giving the brain a template to work from. Another has been, instead of baking one brain for each machine, giving them multiple brains that focus on specific tasks. This in particular has proved helpful both for getting the machines to work properly and for empowering developers in how they tweak machine behavior.

And I think it's only fairly recently that we've managed to get the locomotion components we've been working on predictable and diverse enough that they feel like building blocks the designers can use by themselves. Instead of involving us in every single step, they can be like, okay, I know I have the stop behavior, I have the patrol behavior, I have the assault behavior, and I have the melee behavior. Now I can combine these as I want to and create good game design with it.

As you've probably talked about, it's very tricky to get ML to do things with extreme intent. And we've tended to ask it to do jobs it's bad at, which is to telegraph and attack with precision. So we're trying to work through those problems: we've got this thing that walks very competently across complex terrain, how do we turn that into a game that feels satisfying?

Our reinforcement learning and machine learning efforts in locomotion are something that is being used today to some extent, maybe not to the full extent that we were hoping for when we started, and it proved to be far more difficult than we expected. But we are using it today, it's a part of the game, and it makes the game feel very different from any other game, and that technology will evolve over time. We were just a little bit too early with it, and maybe didn't have enough resources and knowledge to figure it out at the time. I think we are getting to a point now where we are getting more and more comfortable with it. So that's why we want to keep it. It's still an ambition; that's why we always have at least one enemy in the game with it. But it is much more complicated. We just did a new prototype of one big enemy like those without machine learning, with procedural animation. And it got the results: as I said, I can just say, stop there, put the leg here, and do things like that. It was much simpler to produce. But we had a couple of animators full time on it for six months. So that's the trade-off there.
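The building blocks described above suggest a layering where each behavior is its own trained locomotion policy and ordinary, hand-authored game logic picks which one drives the machine each tick. Here is a minimal sketch of that idea; the behavior names come from the quote, but the selection rules, thresholds and policy stubs are my own invention, not Embark's architecture.

```python
from enum import Enum, auto

# Hypothetical sketch of the "building blocks" idea described above: each behaviour is
# its own trained locomotion policy (stubbed here), and plain game logic, not ML,
# decides which one drives the machine each tick. Names and thresholds are invented.

class Behavior(Enum):
    STOP = auto()
    PATROL = auto()
    ASSAULT = auto()
    MELEE = auto()

def choose_behavior(dist_to_player: float, player_visible: bool, health: float) -> Behavior:
    """Ordinary, hand-authored decision logic layered on top of the ML locomotion."""
    if health <= 0.0:
        return Behavior.STOP
    if not player_visible:
        return Behavior.PATROL
    if dist_to_player < 4.0:
        return Behavior.MELEE
    return Behavior.ASSAULT

# Each entry would be a separately trained policy; here they are just stubs that
# return joint targets for the physics-driven body.
POLICIES = {
    Behavior.STOP:    lambda obs: [0.0] * 12,
    Behavior.PATROL:  lambda obs: [0.1] * 12,
    Behavior.ASSAULT: lambda obs: [0.3] * 12,
    Behavior.MELEE:   lambda obs: [0.5] * 12,
}

def tick(obs, dist_to_player, player_visible, health):
    behavior = choose_behavior(dist_to_player, player_visible, health)
    return behavior, POLICIES[behavior](obs)

print(tick(obs=[0.0] * 64, dist_to_player=12.0, player_visible=True, health=1.0)[0])
# Behavior.ASSAULT
```

Part of the appeal, going by the retraining problem described earlier, would be that a tweak to one behavior only sends that one policy back through training rather than the whole brain.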
Arc Raiders is launching into one of the least predictable video game markets ever, where players demand exciting new experiences that they can play with their friends for days, weeks, and months at a time. But this reality hasn't stopped Embark from taking bold risks when it comes to gameplay or technology. In both Arc Raiders and The Finals, it's clear to see how much of a thirst the studio has for evolving any given genre, even if it takes years of experimenting. And while I've been embedded in the development of Arc Raiders, it's also been clear to me that they're not scared to tear up lukewarm ideas that don't reach their lofty goals. Time will tell what type of audience Arc Raiders ultimately receives; we produced these videos long before the game's release. But one thing is for sure: no matter how Arc Raiders launches, in Embark's hands this game is going to continue to evolve, to take risks, and to push technology, and more importantly gameplay, forward.