Frame at 6.00s

Hi everyone, my name is Ivan Cerevko. I'm a principal character artist at Hangar 13, and I worked on the hair and head creation pipeline for Mafia: The Old Country.

Frame at 15.50s

I'm truly honored to be here today, and I would like to share with you some aspects of our MetaHuman journey.

Frame at 25.00s

Here's what I'm going to cover today. First, I will explain why MetaHuman and how it fits into our project vision. Then I will show you an example of MetaHuman customization and how we achieved production flexibility. Next, to groom or not to groom: in this part I will compare card- and strand-based haircuts and share the approach that worked best for our project. And finally, I will walk you through some aspects of our hair creation workflow, including how tools for high-poly grooms can also be used for in-game asset creation.

Frame at 60.20s

But before we dive in, I would like to start with a short cutscene from our game. All of the men in this room are bound by blood. This is a family. And in this family we follow a code of honor. As a way of life. In brotherhood. You are here because of your deeds and the assertions of those who stand at your side. This life is one of secrecy. If you make a friend, meet a woman, they must not know about our thing. It is binding. It is not forgiving. To betray one is to betray all. Do you wish to commit yourself, your life, to this family? Yes, I do. Alright. Thank you.

Frame at 156.60s

So, why did we decide to use MetaHuman?

Frame at 160.10s

The first reason is high-quality realism. Mafia: The Old Country is a story-driven game with realism-oriented art direction, and it was important for us to push the quality of the story characters to the next level. The MetaHuman rig helped us achieve high-quality animations which can express even subtle emotions. The second reason is efficiency of the workflow. Because of our time and budget limits, scanning of NPC assets just wasn't an option for us. Thanks to MetaHuman, within roughly three months we were able to produce more than 100 NPC assets with a quality very close to the main characters.

Frame at 203.25s

For asset processing, we used DNA Calibrator together with our custom tool, which gave us flexibility to update or regenerate a full rig setup in just a couple of minutes.
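The talk doesn't show the custom tool itself, but Epic's open-source MetaHuman DNA Calibration library exposes Python bindings that this kind of regeneration script can be built on. Below is a rough sketch based on that library's public examples; exact signatures vary between releases, and `new_positions` / `masks` are hypothetical inputs coming from the edited meshes:

```python
# Rough sketch based on the public MetaHuman DNA Calibration examples
# (github.com/EpicGames/MetaHuman-DNA-Calibration); signatures may
# differ between library versions.
import dna
from dnacalib import (DNACalibDNAReader, SetVertexPositionsCommand,
                      VectorOperation_Interpolate)

def load_dna(path):
    stream = dna.FileStream(path, dna.FileStream.AccessMode_Read,
                            dna.FileStream.OpenMode_Binary)
    reader = dna.BinaryStreamReader(stream, dna.DataLayer_All)
    reader.read()
    return reader

def save_dna(calibrated, path):
    stream = dna.FileStream(path, dna.FileStream.AccessMode_Write,
                            dna.FileStream.OpenMode_Binary)
    writer = dna.BinaryStreamWriter(stream)
    writer.setFrom(calibrated)
    writer.write()

HEAD_MESH_INDEX = 0  # head is the first mesh in the stock MetaHuman DNA

calibrated = DNACalibDNAReader(load_dna("character.dna"))
new_positions = []   # [(x, y, z), ...] exported from the edited head mesh
masks = []           # optional per-vertex weights for the interpolation
# Joints have analogous commands, e.g. SetNeutralJointTranslationsCommand.
cmd = SetVertexPositionsCommand(HEAD_MESH_INDEX, new_positions, masks,
                                VectorOperation_Interpolate)
cmd.run(calibrated)
save_dna(calibrated, "character_calibrated.dna")
```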

Frame at 215.06s

And lastly, flexibility and Unreal 5 integration. One of the strongest aspects of MetaHuman technology is that you can still iterate on the assets, even in a late production phase, without the risk of breaking the rig or animations. In fact, it was quite common for us to create animations on the base heads and then later apply those animations to specific assets once the assets were ready.

Frame at 245.97s

Development of Mafia: The Old Country started on Unreal Engine 5.1, and we were really lucky that just a few months later

Frame at 255.27s

the DNA Calibrator tool was released, which gave us much more flexibility over asset scalability. On one side, fidelity of the main characters

Frame at 266.30s

was crucial, and on the other side, optimization of NPC characters was

Frame at 272.62s

absolutely essential. And here is a small diagram of our workflow. For most of the main characters we used scans together with the Mesh to MetaHuman plugin, but before sending scans into Mesh to MetaHuman, we always did a first cleanup pass just to make sure the input matched MetaHuman measurements more closely. Because of our project time constraints, we only processed neutral expressions, which means all of the main characters share the general MetaHuman expressions pool. For NPC characters, the approach was different. They were created directly in MetaHuman Creator based on their story or concept. The Unreal Engine implementation also differs between the two categories. NPCs received an extra round of optimization to keep them performance-friendly in crowd scenes.

Frame at 331.18s

Now I would like to share an example of the workflow on one of the main characters. It's a pretty common situation that during a scanning session the actor is tilting or leaning to the side.

Frame at 343.68s

From our experience, it's always better to fix those issues before putting the scan into MetaHuman Creator. If at some point you need to blend facial features from the MetaHuman results with your scan, you have to make sure both inputs are aligned properly. In our case, we always did a 100% blend for the mouth cavity. That helped us to maintain a nice clean sliding deformation of the lips over the teeth.

Frame at 372.82s

Additionally, for some characters, we used about a 40% blend in the eyelid area if there were mesh penetration issues.
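As an illustration of that blending step, here is a minimal numpy sketch; the vertex counts and selections are hypothetical, the talk only gives the 100% mouth cavity and roughly 40% eyelid values:

```python
import numpy as np

def blend_regions(wrapped_scan, metahuman_result, weights):
    """Per-vertex blend of two meshes with identical topology.

    wrapped_scan     : (N, 3) vertex positions wrapped over the scan
    metahuman_result : (N, 3) vertex positions from the MetaHuman output
    weights          : (N,) blend factor, 0 = keep scan, 1 = take MetaHuman
    """
    w = weights[:, None]
    return wrapped_scan * (1.0 - w) + metahuman_result * w

n_verts = 24000                            # illustrative vertex count
weights = np.zeros(n_verts)
mouth_cavity_ids = np.arange(0, 500)       # placeholder vertex selections
eyelid_ids = np.arange(500, 900)
weights[mouth_cavity_ids] = 1.0            # 100% blend: clean lip sliding
weights[eyelid_ids] = 0.4                  # ~40% to fix mesh penetrations

# stand-in meshes just to show the call
blended = blend_regions(np.zeros((n_verts, 3)), np.ones((n_verts, 3)), weights)
```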

Frame at 381.72s

To fit each scan into MetaHuman measurements, we used the base head as the main alignment point.

Frame at 388.72s

We had four types of base heads: male, female, brood and child. All of these heads were created in MetaHuman Creator with scale adjustments to match our body proportions. On the base heads, we kept the face area completely unchanged,

Frame at 404.41s

exactly as it comes from MetaHuman Creator; the rest of the features were adjusted according to the project needs. Once the meshes were aligned, we wrapped the base head topology over the scan, and as part of the first cleanup pass we also

Frame at 421.09s

blended the skull and neck areas from the base head. As a result of this

Frame at 427.34s

pre-processing stage, we ended up with a scan with minimal artifacts that fits into the MetaHuman matrix and is ready to go into Mesh to MetaHuman.
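The talk doesn't spell out the alignment math, but a standard way to fix a tilted or leaning scan against the base head is a least-squares similarity transform (Kabsch/Umeyama) over a handful of corresponding landmarks. A minimal numpy sketch, under that assumption:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping landmark points src -> dst. Both are (N, 3) arrays of
    corresponding landmarks (e.g. eye corners, nose tip, chin)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)       # cross-covariance SVD
    sign = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, sign])           # guard against reflections
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / (s ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# apply to every scan vertex: v_aligned = scale * R @ v + t
```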

Frame at 439.50s

So, as I mentioned earlier, we started production on Unreal Engine 5.1, and the game was eventually released on Unreal 5.4.4. Because of that, we were working with legacy versions of

Frame at 453.05s

the Mesh to MetaHuman plugin and MetaHuman Creator. For Mesh to MetaHuman tracking, our

Frame at 459.70s

standard setup was to use two or three camera positions. In some cases we added extra trackers, for example on the edges of the eyelids. This was one of those areas where it was necessary to push precision a bit further. As you can see

Frame at 476.38s

on the image, we also painted additional landmarks over the facial features. The reason for that was mainly the smoothness of the

Frame at 485.75s

features on younger faces, where it can be difficult to see exactly where trackers need to be adjusted. Almost every Mesh to MetaHuman result required

Frame at 498.57s

some adjustments in MetaHuman Creator. Most of the time these modifications were around the mouth area. In many cases the default teeth position caused a strong overbite, or sometimes the teeth were placed too deep in the mouth cavity. In many

Frame at 514.73s

cases this was causing unpredictable animation results. For hero assets we went a step further. We also tried to match the teeth shape to the actor's reference photos as closely as possible, and we made those changes in MetaHuman Creator, because it's not only about the teeth themselves; the contact point

Frame at 533.72s

between the lips and teeth also shifts when you're making those changes.

Frame at 540.50s

Mesh to MetaHuman usually gave us a pretty solid result, and for non-story-important characters we could often use the output as it came. And thanks to all the earlier preparation, the MetaHuman result already fit very well with our data, both in scale and position. But for the main characters, likeness was critical.

Frame at 563.08s

That's why we added an extra step of wrapping the MetaHuman output over the scan. At this stage, it was important that all mesh loops were wrapped onto the corresponding features, otherwise the distortion would carry over into the facial expressions. So after wrapping and before moving to the final adjustments in ZBrush,

Frame at 581.74s

we usually ran a few tests to check if the wrapping was successful

Frame at 585.24s

and if the expressions work without issues.
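The talk doesn't say what those tests looked like; one simple automated check is measuring how far each wrapped vertex drifted off the scan surface. A sketch using scipy, with the tolerance value as an assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def wrap_report(wrapped_verts, scan_points, tolerance=0.05):
    """Flag wrapped vertices that sit too far from the scan surface.

    wrapped_verts : (N, 3) vertices of the wrapped MetaHuman topology
    scan_points   : (M, 3) dense point sample of the raw scan
    tolerance     : max allowed distance in scene units (assumed cm here)
    """
    dist, _ = cKDTree(scan_points).query(wrapped_verts)
    bad = np.flatnonzero(dist > tolerance)
    print(f"max deviation: {dist.max():.3f}, "
          f"{len(bad)} / {len(wrapped_verts)} vertices above tolerance")
    return bad  # vertex ids to inspect before moving to ZBrush
```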

Frame at 591.74s

During the ZBrush pass, we reprojected details from the scan onto the final topology and added extra artistic adjustments on top where required. Since we weren't using custom expressions, it was necessary to compensate for some breaking expressions by adjusting features on the neutral pose.

Frame at 610.82s

It was a common situation that we added an extra puff on the mouth corners to avoid sharp dimples propagated from the neutral pose.

Frame at 623.54s

As the very last tweaks, we applied small modifications to head features based on feedback from the art director or the animation team. Usually this meant emphasizing certain features or increasing the intensity of the pores and wrinkles to compensate for the limitations of in-game lighting scenarios.

Frame at 644.83s

For the additional tertiary details, we used high-quality data from Texturing.XYZ in combination with

Frame at 653.13s

displacement maps extracted from our diffuse textures. This gave us an extra layer of realism beyond what we could get directly from our scan geometry.

Frame at 674.58s

To propagate our custom changes over the MetaHuman result, we developed a small in-house tool built on top of the DNA Calibrator. With this tool, we could reposition joints and vertices according to our head, teeth, eyes and other meshes. As part of the initial processing of the MetaHuman sources, we also reassembled the LODs to fit our project needs. In the table you can see that we used only four LODs instead of the original seven. For the head geometry we completely removed MetaHuman LOD0 and all morphs, even for the main characters.

Frame at 713.12s

LOD1, which is our LOD0, was kept for cinematics where high quality mattered. LOD2 was for close-up in-game scenarios, where we removed mesh elements that were only important for cinematics.

Frame at 728.46s

You may notice that for LOD0 we had an extra mesh called eye blur, which isn't part of MetaHuman by default. I will explain it in a few slides. And for the last LODs we applied really aggressive optimization, because this was all about performance for characters at a distance.

Frame at 747.46s

For body meshes we used all four MetaHuman LODs.

Frame at 752.95s

The only topology change we did was around the feet area for LOD0, where we did a blend between LOD0 and LOD1. Because we completely skipped LOD0, it also meant that we skipped all of the corrective blend shapes. In this example you can see that for some expressions the quality loss is noticeable. For NPC assets this wasn't such a big deal, and it was perfectly fine for in-game situations, but for main characters in cutscenes, faces start to feel more unnatural, with almost rubbery

Frame at 792.49s

animations. To partially compensate for the quality loss in expressions, we decided to rebake the mesh differences between LOD0 and LOD1 into our custom wrinkle maps. If the wrinkle maps required extra sculpting passes, we did the bake from the high-poly mesh.

Frame at 810.10s

That was the slower but quality-oriented option. The second, faster option was to use Substance Designer. There we could isolate topology differences between LOD0 and LOD1

Frame at 820.62s

and then combine the difference with the existing wrinkle map. Substance Designer was an important part of our pipeline. We used it for baking and also for processing of the MetaHuman sources.
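One way to picture that difference-and-combine step outside of Substance Designer is the numpy sketch below: it takes normal maps baked from the LOD0 and LOD1 geometry, isolates their difference, and adds it onto the existing wrinkle normal with a whiteout-style blend. The decoding convention and the blend are assumptions, not the studio's exact graph:

```python
import numpy as np

def decode(img):                 # 8-bit RGB -> normals in [-1, 1]
    return img.astype(np.float32) / 127.5 - 1.0

def encode(n):
    return np.clip((n + 1.0) * 127.5, 0, 255).astype(np.uint8)

def add_lod_delta(wrinkle_rgb, lod0_rgb, lod1_rgb):
    """Add what LOD1 geometry lost (vs LOD0) on top of a wrinkle map."""
    w, n0, n1 = decode(wrinkle_rgb), decode(lod0_rgb), decode(lod1_rgb)
    delta_xy = n0[..., :2] - n1[..., :2]   # isolated topology difference
    out = w.copy()
    out[..., :2] += delta_xy               # whiteout-style detail add
    out /= np.linalg.norm(out, axis=-1, keepdims=True)
    return encode(out)
```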

Frame at 834.63s

For WrinkleMap1, since it's a combination of multiple overlapping expressions, it was necessary to bake three separate sets. Those bake results were then automatically combined into a single texture inside Substance Designer. For WrinkleMap2 and 3, the process was straightforward.

Frame at 853.87s

In those cases, there was no need for multiple sets.

Frame at 859.03s

As I already mentioned, Substance Designer is an important part of our texturing workflow, and there are a few key reasons why we made this decision. The first reason is of course automation. For the more technical parts, we wanted to avoid manual work as much as possible, because of the limited amount of time; with a large number of assets, manual steps can easily introduce bugs. The second reason is flexibility and the ability to do batch updates. A big part of the functionality evolved during production. New parameters were added frequently, especially for NPCs and optimization. These changes required reprocessing assets multiple times during production. And the third reason: handling large texture resources. For main character assets we had to import a full set of MetaHuman sources.

Frame at 910.78s

Some of those came in 8K resolution, and most of the graphs also use many texture masks. So Substance Designer really helped us to keep all of that manageable,

Frame at 925.74s

but the setup that we had at the very beginning, as you can see, was maybe a bit overcomplicated. For regular artists who weren't familiar with Substance Designer, it was really tough to follow. So, for this reason, later on all of our processing tools were combined into one node, with all the small graphs exposed as simple parameters.
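To give a feel for the batch-update side, here is a hypothetical driver around sbsrender from the Substance Automation Toolkit. The graph name, parameter identifiers and folder layout are all illustrative; only the general `sbsrender render` / `--set-entry` / `--set-value` usage follows the SAT command line:

```python
import subprocess
from pathlib import Path

SBSAR = "head_processing.sbsar"   # the "one node" graph with exposed params

def process_head(src: Path, out_dir: Path, is_npc: bool):
    # Render all outputs of the graph for one asset; identifiers
    # (color_input, npc_optimization, ...) are invented for this sketch.
    subprocess.run([
        "sbsrender", "render",
        "--input", SBSAR,
        "--set-entry", f"color_input@{src / 'color.png'}",
        "--set-entry", f"normal_input@{src / 'normal.png'}",
        "--set-value", f"npc_optimization@{int(is_npc)}",
        "--output-path", str(out_dir),
    ], check=True)

for asset in Path("assets").iterdir():    # batch update across all assets
    process_head(asset, Path("baked") / asset.name,
                 is_npc=asset.name.startswith("npc"))
```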

Frame at 951.64s

Here you can see four main texture outputs that we baked for scan-based assets. From this stage onward, we were already using MetaHuman LOD1 as our LOD0.

Frame at 963.45s

The first texture on the top is the AO map. This texture comes from a blend between a non-directional AO and the green channel of the world-space normal map. After multiple experiments in full ambient lighting scenarios, we found that this combination gave us the best result. The second texture is a cavity map. This one we

Frame at 984.81s

extracted from the final output of the normal map directly in Substance Designer. It turned out to be one of the most important passes for believable specular

Frame at 995.18s

breakups. As a good practice, we applied a slight inflate deformation in ZBrush over the layer with fine details to give the cavity map a more fleshy, organic look.
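A small numpy sketch of both passes follows. The blend weight and the cavity contrast are guesses; the talk confirms only the ingredients (non-directional AO plus the world-space normal's green channel, and a cavity map derived from the final normal map):

```python
import numpy as np

def build_ao(ao_nondir, ws_normal_rgb, blend=0.5):
    """Blend a non-directional AO bake with the green channel of a
    world-space normal map (acts like soft top-down sky occlusion).
    The 50/50 blend weight is an assumption."""
    green = ws_normal_rgb[..., 1].astype(np.float32) / 255.0
    ao = ao_nondir.astype(np.float32) / 255.0
    return ao * (1.0 - blend) + green * blend

def cavity_from_normal(normal_rgb):
    """Cheap cavity pass from a tangent-space normal map: the divergence
    of the XY components darkens crevices. Contrast factor is arbitrary."""
    n = normal_rgb.astype(np.float32) / 127.5 - 1.0
    div = np.gradient(n[..., 0], axis=1) + np.gradient(n[..., 1], axis=0)
    return np.clip(0.5 - div * 4.0, 0.0, 1.0)
```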

Frame at 1006.17s

And the third texture is the processed color map, and the last one is of course the normal map. In this comparison you can notice the difference in the

Frame at 1018.73s

amount of secondary shapes baked into the texture. The MetaHuman textures look a bit flatter, since a lot of the heavy lifting comes from the LOD0 geometry. Here I put a comparison between the default MetaHuman color maps and our custom results. Our custom color maps were created by combining the original MetaHuman textures with additional information extracted from the baked wrinkle maps. For NPC characters, instead of using a full set of individual textures, we modified the shader to use a cheaper blending method.

Frame at 1056.01s

We created a shared pool of generic wrinkle and color maps, and packed three color maps into a single RGB texture. But this setup was specifically for the case where some NPC would need to load cinematic LODs; for in-game scenarios, NPCs don't use wrinkle or color maps at all.
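The channel packing itself is simple; a sketch with PIL, with illustrative file names:

```python
import numpy as np
from PIL import Image

def pack_rgb(map_paths, out_path):
    """Pack three grayscale maps into one RGB texture, one map per
    channel, as in the shared NPC color-map pool."""
    channels = [np.array(Image.open(p).convert("L")) for p in map_paths]
    assert len({c.shape for c in channels}) == 1, "maps must match in size"
    Image.fromarray(np.dstack(channels), mode="RGB").save(out_path)

pack_rgb(["npc_color_1.png", "npc_color_2.png", "npc_color_3.png"],
         "npc_colors_packed.png")   # file names are illustrative
```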

Frame at 1078.76s

On the shader side, we did an extra optimization pass on all materials and also introduced some additional functionality that was required for our project. And I would like to share with you a couple of changes that we did. During the story progression, our character goes through some pretty tough scenarios. To support the impact and to show progression, we introduced cinematic masks for dirt, sweat and blood. In this example, you can see how effects from early events remain visible and continue to overlap with effects from the new events. All of the masks were packed into texture arrays and could be activated with a simple selector slider.
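Here is a small CPU model of that mask logic, assuming one mask per story event stacked in a texture array and a float selector; the max-accumulation is my reading of "earlier effects remain visible", not the actual shader:

```python
import numpy as np

def accumulate_story_masks(mask_array, selector):
    """CPU model of the cinematic dirt/sweat/blood mask selection.

    mask_array : (E, H, W) texture array, one mask per story event
    selector   : float slider; events with index <= selector are enabled
    """
    active = mask_array[: int(selector) + 1]
    return active.max(axis=0)   # overlapping masks keep earlier effects

# e.g. selector=2.0 keeps the masks of events 0..2 visible at once
masks = np.random.rand(5, 64, 64)
combined = accumulate_story_masks(masks, selector=2.0)
```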

Frame at 1117.24s

To better emphasize each character's in-game role,

Frame at 1126.37s

we also introduced multiple presets of teeth variations. For teeth color variations, we used simple one-channel masks. And for random scale or damage, we used world position offset in combination with a mask. This is one of those tiny details that only a small number

Frame at 1141.38s

of players will notice, but when they do, it leaves a good impression.

Frame at 1149.12s

And the next adjustment we did is an improvement to the eye specular. It's quite common that, because of the limitations of in-game lighting,

Frame at 1157.12s

it's hard to achieve a believable look for the eyes. And this becomes even more complicated under ambient lighting conditions. For that reason, we introduced a simple fake specular for the eye shader. It's basically a camera-aligned reflection vector combined with a texture, and we kept the option to adjust or even replace it for cinematic scenarios. The second challenge that we had with eyes is related to fog and the occlusion mesh. This mesh is used for additional ambient occlusion and for smoothing the transition between the eyelid and the eyeball. Both effects were originally handled by a single shader with the translucent blending method, but because of the complexity of the implementation, in this combination we started to see some unpredictable shading behavior, for example glowing eyes in foggy scenes.

Frame at 1212.72s

After multiple experiments, our solution was to split the eyelid transition and the eye occlusion effects into two separate shaders and meshes. For the eyelid transition we kept the translucent blending method with Apply Fogging activated. This is the new mesh that we named eye blur. The occlusion shader we switched to the Modulate blending method instead of Translucent,

Frame at 1237.72s

and kept Apply Fogging active. So as you can see, the result was pretty successful.
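For the fake specular, a rough CPU model of the idea described above; the projection and sampling conventions are illustrative, not the actual shader:

```python
import numpy as np

def fake_eye_specular(normal, view_dir, highlight_tex, intensity=1.0):
    """Model of the fake eye specular: reflect the view vector off the
    eye normal, project its XY into UV space and sample a small
    highlight texture that artists can tune or replace per shot."""
    v = view_dir / np.linalg.norm(view_dir)
    n = normal / np.linalg.norm(normal)
    r = v - 2.0 * np.dot(v, n) * n             # reflection vector
    u = np.clip(r[0] * 0.5 + 0.5, 0.0, 1.0)    # camera-aligned projection
    w = np.clip(r[1] * 0.5 + 0.5, 0.0, 1.0)
    h, wdt = highlight_tex.shape[:2]
    texel = highlight_tex[int(w * (h - 1)), int(u * (wdt - 1))]
    return texel * intensity
```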

Frame at 1248.66s

The mentioned tweaks are only a few of the many that we did during production. Here's a small comparison of how a character evolves with all the changes applied. And here is another example.

Frame at 1268.22s

I also want to show a small comparison between a story character and an NPC. The one on the left has custom color and wrinkle maps with all the bells and whistles, and the one on the right is an NPC using wrinkle and color maps from the generic pool. Of course there are some visible quality differences, but even if this NPC appears in a cutscene together with main characters,

Frame at 1289.76s

it still holds up pretty well, I think. Our main characters got the full high poly treatment,

Frame at 1298.96s

while NPCs were processed only at the level of the low-poly mesh. Because of that, we saw quality loss between the LODs not only in the expressions, but also in the neutral normal map. So instead of baking from high poly,

Frame at 1312.90s

we baked information directly from the LOD0 mesh and then combined it with the normal map that comes with the MetaHuman sources.

Frame at 1322.54s

To get smoother transitions and better shading on faces at a distance, we additionally baked information from LOD0 into the last LOD. For this purpose we used just a small 256×256 texture. And now, a couple of words about the MetaHuman optimizations that we did.

Frame at 1345.76s

And the first one: yeah, we used only four LODs in total. MetaHuman LOD1 became our default LOD0, and LOD6 was used as the lowest LOD, with joints removed in the further stages. Next, we implemented an additional LOD system which dynamically shifted LOD distances based on the number of characters on the screen per specific mission. Third, toggling post-process and aggressively switching to leader pose on the lower LODs. We also introduced our custom tick batching on the game thread. And the last one: we used half-res subsurface scattering for in-game scenarios. There was a slight quality drop, but for consoles it was essential to save some performance.
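A toy model of the dynamic LOD shifting could look like this; the budget and falloff numbers are invented, and in-engine this would drive something like a skeletal mesh LOD distance scale rather than run in Python:

```python
def lod_distance_scale(visible_characters, per_mission_budget=8):
    """The more characters on screen, the earlier each one drops to a
    cheaper LOD. Budget and falloff are illustrative, tuned per mission
    in the actual game."""
    over = max(0, visible_characters - per_mission_budget)
    return max(0.4, 1.0 - 0.05 * over)   # 1.0 = default switch distances

# scale < 1.0 shrinks every LOD switch distance for crowd-heavy missions
print(lod_distance_scale(20))   # -> 0.4
```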

Frame at 1411.35s

And here are a couple more beauty shots of our main characters. So this is our hero, Enzo Favara. Don Torrisi. Isabella Torrisi. Don Galante. And Luca Trapani, friend and mentor of our main hero.

Frame at 1426.28s

Here are some close-up shots from the engine. And now we get to the essential question.

Frame at 1444.78s

To groom or not to groom? It was one of the first questions we faced at the beginning of the project,

Frame at 1453.28s

and it's also one of the questions I hear quite often from other game developers. So let's start with a quick comparison between a strand-based system and a card-based system. I will start with the pros of the strand-based system.

Frame at 1468.28s

So the first one is flexible authoring. Strands are more iteration- and artist-friendly. It's much easier to prevent situations where, during iterations, the artist spends more time on technical fixes instead of focusing on the

Frame at 1484.66s

creative side. The second pro is of course higher visual fidelity. Strand-based hair gives us more realistic hair specular, strand thickness, shadowing, light transmission, etc. The third pro is easy transferring and runtime binding in Unreal Engine. Grooms, thanks to binding assets, are much easier to adapt across multiple assets,

Frame at 1507.87s

even when the geometry varies a lot. But of course, there are also some cons, so let's take a look at them. The first one: performance and memory cost.

Frame at 1519.22s

Strand-based hair is expensive to render. It requires advanced shading, and that can drop frame rates significantly,

Frame at 1527.30s

especially on low-end hardware. Also, memory consumption from complex haircuts can grow exponentially, and on top of that we have binding asset complexity, which is also connected to groom assets.

Frame at 1544.61s

The second one is LOD maintenance. Groom assets have strong tools for

Frame at 1550.23s

controlling LODs, but in practice, for in-game scenarios, especially on consoles and low-end PCs, we still need a card-based version of the haircut, and the tricky part is to prevent a visible quality drop between strands and cards. And

Frame at 1564.01s

in many cases that almost means building the haircut twice: once with strands, once with cards, which can extend production time a lot. So now a quick overview of the card system, and I will start with the pros. The first one is, of

Frame at 1586.01s

course, consistent scalability. Instead of automated strand decimation across the whole asset, as it is with strands,

Frame at 1592.65s

artists can choose exactly which clumps remain at a distance to preserve the silhouette. Stronger optimization can also be applied on individual pieces, like, for example, the inner layers of the haircut. The second pro is modularity. Swapping modular hats or outfits is much smoother with cards, since they are rigid meshes.

Frame at 1615.12s

And when it comes to clips, ribbons or braids, this also works better with rigid clumps.

Frame at 1620.51s

And the third pro is of course memory usage. Card-based meshes have a much more predictable memory footprint. And here are the cons. The first one, and probably the worst one, is long iteration time. Cards still require a lot of manual work.

Frame at 1639.83s

Not that long ago we were using maybe 10 to 20 thousand triangles for in-game hair. Now it's common for hero hair to go up to around 100 thousand triangles. If the pipeline isn't flexible and we try to build the hair the old-school way,

Frame at 1655.03s

every feedback loop, every iteration can turn into a never-ending nightmare for the artist, when five minutes of visual tweaks can introduce 30 minutes of technical fixes.

Frame at 1665.03s

And the second one is rendering issues. It's much harder to achieve realistic shading with cards, and on top of that we still face problems with alpha dithering and unpredictable hair shadowing.

Frame at 1680.78s

For our project we decided to take some aspects from high-poly hair creation so that we could stay flexible with higher poly counts. And we also used groom assets in Unreal Engine to take advantage of automated binding, which allowed us to create a wider variety of characters.

Frame at 1699.16s

And here is a brief overview of our process. The first stage is the ZBrush block out. Here we try to solve most of the artistic challenges early on, to prevent as much guesswork as possible in the later stages.

Next is the hair guides placement. We used Ornatrix for this stage. For main characters the entire haircut was done inside Ornatrix, but for NPCs this step was more of an in-between stage, just to speed up the initial card placement.

And a few words about the card geometry. Here we see two options: proxy cards and bound cards. For hero haircuts the poly counts are much higher, which means the cards are thinner and we can treat them almost like a high-poly groom and take advantage of all the procedural modifiers. For NPCs, with their lower poly counts, we need more detailed control over the card placement and rotation. That kind of control could be achieved, for example, with the GS CurveTools plugin inside Maya.

And the last step is the Unreal Engine implementation. For both NPCs and hero assets, the implementation was the same. The only variable that affected our choice was physics: haircuts that required physics were implemented as skeletal meshes, because groom asset physics was simply too expensive for our scenarios.

And here is a bit of the practical part. I will start with the ZBrush block out. Here we mainly focused on solving any missing pieces of the puzzle, because usually it's hard to get a photo or concept of the haircut from every possible angle. That's also where we establish the hair layering, volume and the silhouette. Usually this stage took us a couple of days.

Frame at 1816.53s

The more we solved in the block out, the less guesswork we had later. Now a bit about planting guides.

Frame at 1824.38s

Here is a small demonstration on a super simplified example. For this stage we used Ornatrix, as I mentioned, but any other solution for high-poly hair creation could be used as well. We started the placement by manually tracing the general shape, and the rest was filled using the Ornatrix strand multiplier. And here is an example of a complete hero haircut. To make the whole process more iteration-friendly, we tried to work with multiple grooms, and additionally we used Ornatrix strand groups to get more control over individual clumps.

Frame at 1861.33s

From this point, depending on the complexity of the haircut, we had two options: either use proxy meshes directly in Ornatrix, or extract curves and bind cards over them.

Frame at 1874.33s

In the case of the hero haircut, we went with the proxy mesh option,

Frame at 1880.83s

because the haircut had a higher poly count, and it was possible to use procedural modifiers in a similar way to how it works for high-poly hair. And here is a small comparison between the complexity of the hero and NPC haircuts.

Frame at 1897.33s

For NPCs, we usually spent less time in Ornatrix, because we already knew the poly count would be almost twice as low compared to a hero asset. That meant we had to place each card more carefully.

Frame at 1907.83s

For this reason, the final adjustments on the NPC haircuts were done manually or with the GS CurveTools plugin in Maya, where we bound the cards over curves which were extracted

Frame at 1917.33s

from Ornatrix.
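For the curve-to-card binding, here is a hypothetical Maya Python sketch of the basic operation that tools like GS CurveTools wrap in a friendlier UI: sweeping a simple two-point profile along an exported groom curve. The naming pattern and card width are illustrative:

```python
# Run inside Maya's Python interpreter.
import maya.cmds as cmds

def card_from_curve(path_curve, width=0.6):
    """Sweep a straight two-point profile along a hair curve to get a card."""
    start = cmds.pointOnCurve(path_curve, parameter=0.0, position=True)
    profile = cmds.curve(degree=1,
                         point=[(start[0] - width / 2, start[1], start[2]),
                                (start[0] + width / 2, start[1], start[2])])
    card = cmds.extrude(profile, path_curve,
                        extrudeType=2,          # tube: sweep along the path
                        fixedPath=True,
                        useComponentPivot=1,
                        useProfileNormal=True,
                        polygon=1)[0]           # output polygons, not NURBS
    cmds.delete(card, constructionHistory=True) # freeze the result
    cmds.delete(profile)
    return card

# hypothetical naming convention for curves exported from Ornatrix
for curve in cmds.ls("hair_curve_*", type="transform"):
    card_from_curve(curve)
```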

Frame at 1923.33s

And here's a small comparison between hero and NPC haircuts. For short hair, facial hair and fur, we tried to keep the creation process as procedural as possible. The main reason was that for these types of hair it was more important

Frame at 1942.23s

for us to control the overall look rather than focusing on small details. There was no need for a detailed block out or manual guide placement here. The approach was basically the same as for high-poly hair creation, but instead of

Frame at 1958.43s

strands, the output was proxy meshes. Here I would like to demonstrate how

Frame at 1964.09s

we applied proxy meshes over a high-poly groom. In this case, the hairstyle only required a few different cards, because a lot of the variety came from the modifiers. Instead of assigning one card per guide, as we did for longer hair, here the number of cards was controlled by the amount of strands. This approach allowed us to use procedural modifiers to introduce natural breakups and variety. Of course, the effect from the modifiers depends on the card density and the card size. Also, for short hair, a big part of the final look came from the hair shell texture, which we baked from the groom that was later converted into

Frame at 2009.95s

the cards. Okay, let's switch to the Unreal part. So for groom export we used Ornatrix

Frame at 2020.13s

Alembic, and the card geometry was exported as a simple static mesh. As I mentioned earlier, we didn't use strands for our characters, not even for cinematics. With a few engine modifications, we made it possible to display cards instead of strands already at LOD0. For hair we worked with four LODs. Each LOD was reduced by 50% or more compared to the previous one. And for the last LOD we switched to a solid shell

Frame at 2048.33s

mesh with a super optimized shader version. But for eyebrows, for example, we followed a slightly different approach. On NPC characters we kept cards only at

Frame at 2060.39s

LOD0, and all other LODs relied completely on the eyebrow shell texture.

Frame at 2066.85s

For facial hair there was no need to export a new Alembic setup for every variation. After multiple experiments, we ended up with a standardized setup, which you can see in this picture, and it was used for 90% of our

Frame at 2083.50s

scenarios. Haircuts, however, were more complex; for those it was necessary to create a unique Alembic setup for each variation. To stay more flexible with

Frame at 2101.17s

texture usage and texture packing, we decided not to use texture inputs inside the groom asset. Instead, we did shader adjustments: we used the Hair Attributes node to pass information from

Frame at 2109.63s

the groom into the material, and that allowed us to override attributes directly from the material instance. To create a more natural-looking hair-to-skin

Frame at 2121.55s

transition, we used hair shell textures. As long as the character's haircut doesn't

Frame at 2127.44s

change, the shell can be painted directly into the head texture. If the haircut changes, the usual way is to add extra shell topology over the head mesh. But in our case, extra topology wasn't an optimal solution: even at MetaHuman LOD1,

Frame at 2144.60s

the head topology is already quite dense; covering all necessary areas with shells would almost double the poly count of the head asset for each LOD.

Frame at 2153.46s

On top of that, a shell mesh would introduce at least three extra draw calls.

Frame at 2158.46s

And one more thing to consider: Unreal Engine uses masked alpha for hair, which is very difficult to use for tiny strands like eyebrows or baby hairs.

Frame at 2167.46s

For this reason, we went with a different solution. We used a second UV set on the head mesh instead of extra topology.

Frame at 2176.46s

With this setup, all hair shells shared a single draw call together with the head, and we could paint small hairs, since it's a direct blend with the skin texture instead of a masked mesh. Now, a couple of words about groom optimizations.

Frame at 2194.96s

So yeah: we used cards, no strands. Next, on the lower LODs we switched to rigid mode instead of skinned, and cleaned up the per-instance buffers. Third, we used parent bounds and disabled ticking whenever it was possible. For the last LODs, we used a simple static mesh shell as a fallback. We also switched to an optimized shader version for the last LOD. And the most important optimization is the Sicilian sun: people in our game are mostly wearing hats. So yeah. And now I would like to share a couple of examples of our haircut

Frame at 2250.55s

renders. So this is an NPC haircut. This is also an NPC. Here is an example of a hero haircut. Another hero, and a short hero haircut. And some of you may know that in the

Frame at 2268.42s

original Mafia game, there was one little Easter egg. Under specific conditions, the

Frame at 2273.52s

player could equip a dog's head. Later on, this became known as the Baskerville outfit. So yeah, we are keeping the tradition alive. So this asset was also created with

Frame at 2286.82s

MetaHuman. I mean, it was not necessary, but you know, we decided to have fun. And the fur was groomed; it's actually cards, and they were groomed inside Ornatrix. And here is a small cutscene with the speaking dog.

Frame at 2303.96s

I was not sure you could read. You took the chance.

Frame at 2312.08s

I did.

Frame at 2316.60s

Why did you write to me?

Frame at 2318.43s

Why did you come?

Frame at 2321.81s

I wanted to say I'm sorry. About the fight. It should never have happened.

Frame at 2329.11s

You are right.

Frame at 2330.63s

Si, si. It's just... What is it? Maybe it's not a good idea. A meeting like that.

Frame at 2342.31s

Like this. People might talk.

Frame at 2346.52s

And what would they say?

Frame at 2350.25s

I have to be careful and work hard. Your father is who he is.

Frame at 2356.75s

And who are you, Enzo?

Frame at 2359.75s

Does my father already control your life?

Frame at 2364.52s

So yeah, you can go crazy with MetaHuman. And finally, I would like to say a big thank you to all the teams on the screen for all the hard work they did and for the huge support during our MetaHuman journey. And thank you all for your attention.