Rendering Wounds on Characters

Earlier this week I tweeted about hit-masking characters to show dynamic blood and wounds. Today I’d like to talk about the effect, how it came to be, some of the technical details, and a few alternatives. The effect is a proof of concept aimed at finding a cheaper alternative to texture splatting with render targets. So let’s get going!

We have a few decal types in our game, like bullet impacts and the blood we splat on walls behind damaged characters. These decals are attached to the component they hit, but if you attempt this on an animated mesh you’ll notice the decal sliding over the surface, which doesn’t look great. I noticed this kind of sliding in PlayerUnknown’s Battlegrounds the other day, where they use traditional decals on characters. That does the trick for small decals, where the problem isn’t as noticeable, but a more stable solution is desirable, especially for third-person games where you constantly see your own character’s body. Here is an exaggerated example of decal sliding:

I wanted to find a solution to this problem for our characters, and I was inspired by Ryan Brucks’ GDC demos, which use the Render Material to RenderTarget technique to splat spheres onto a character via a render target that can then be used to mask wounds in the shader. Here is Ryan’s render target based damage implementation:

The effect is a lot more expensive than we were looking to budget, however. It requires two render targets (for each character in the scene!), a mesh with unique UVs (UE4’s mannequin is not uniquely UV’ed, for example, and requires modification outside of the engine to work), and it has spiky performance costs at runtime: every hit on a character triggers TWO draws into render targets, which is an expensive operation (several milliseconds’ worth of spikiness). If you’re wondering why it requires two calls, let me explain.

The first call is straightforward: we render a splat into the character’s damage render target. The material we draw into this RT uses a SphereMask to find the pixels we “hit”, but that material has no idea how the pixel positions of the character relate to the “hit” location. So Ryan encodes the world position of each pixel of the animated character into a second render target, which can be sampled while doing the sphere mask operation. The problem is that the world positions of the pixels change every frame, especially on an animated mesh. This means that on each new hit, we must first re-render this secondary render target to update the world positions before we can splat into the final render target. That wasn’t cost-efficient enough for a purely visual gimmick in our case, and I needed to find a cheaper solution.
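For reference, each hit in this approach boils down to two DrawMaterialToRenderTarget calls, roughly like the sketch below (the class, render targets and unwrap/splat materials are placeholder names for your own setup, not code from Ryan’s demo):

    #include "Kismet/KismetRenderingLibrary.h"

    // Sketch: the two draws performed per hit in the render target approach.
    // PositionRT, DamageRT, UnwrapPositionMaterial and SplatMaterial are assumed members.
    void AMyDamagedCharacter::SplatHit()
    {
        // 1) Re-bake the current world position of every pixel of the skinned mesh,
        //    since these change each frame as the mesh animates.
        UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, PositionRT, UnwrapPositionMaterial);

        // 2) Sphere-mask the hit location against those positions and accumulate
        //    the splat into the persistent damage render target.
        UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, DamageRT, SplatMaterial);
    }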

Optimizing the render target approach

There is a way to optimize the technique, by using the fairly recently added pre-skinned local position node. This replaces the world position we bake into the render target with the pre-skinned local position. By doing this, we only require a single capture, which we can do offline (since this position never changes at runtime). To capture the reference pose locations I made a quick Blueprint and a modified version of Ryan’s unwrap material, captured the scene, and turned the RT into a static texture (you can see an animated version of what this looks like below). This eliminates the need for the second render target at run-time entirely. On each hit, we transform the hit location from world-space into the pre-skinned local-space of the mesh before applying it to the character. In the next section I’ll explain how the transform from world space to pre-skinned space works.

This optimization eliminates some of the cost, but I still wasn’t happy about having a unique render target created per character (which can really add up if you’re making a horde shooter, for example) and using the costly DrawMaterialToRenderTarget operation on every hit. I measured performance hits of 1.6-4.5ms on my GTX 850M (notebook), which is huge, especially when it’s not in your control how many hits might happen in a single frame. Remember that this effect is purely a cosmetic gimmick and shouldn’t be a major cost in our rendering budget.

Finding an alternative

So I ditched render targets entirely and tried doing the effect with just SphereMasks. This puts a limit on the number of hits, since each sphere mask you add increases the constant cost of the shader. There are optimizations to be made here, like using branching to cut out the sphere mask cost while the shader hasn’t received any hits yet, or swapping out the shader until the first hit is received. For now this isn’t necessary, as the shader is still well within our “budget”. I figured anything between 3-5 hits should be fine, as any additional hits would surely have killed the enemy already (unless you’re talking about some kind of boss character). Regardless, this was just a stepping stone to have a starting point.
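To see why each hit adds a fixed cost, here is roughly what a single SphereMask evaluates, written as a CPU-side C++ sketch for illustration (the real node runs per pixel in the material; this is an approximation, not engine code):

    // Approximate SphereMask: 1 at the center, falling to 0 at the radius.
    // Position is the pixel's (pre-skinned) position, Center the stored hit location.
    float SphereMask(const FVector& Position, const FVector& Center, float Radius, float Hardness /* 0..1 */)
    {
        const float Distance = FVector::Dist(Position, Center);
        // Hardness sharpens the falloff edge; guard against division by zero.
        return FMath::Clamp((1.0f - Distance / Radius) / FMath::Max(1.0f - Hardness, KINDA_SMALL_NUMBER), 0.0f, 1.0f);
    }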

Like the original RT effect, to have the sphere masks work consistently with an animated mesh we need to place them using the reference pose. When the character is hit, we transform the world position of the hit into its reference pose position (using the BoneName info we get from point damage events). You can do this by first inverse transforming the hit location by the current transform of the bone we hit, and then transforming that location by the reference pose transform of that same bone.
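In code, that two-step transform could look something like this sketch (my own naming, not a verbatim implementation; note that FReferenceSkeleton::GetRefBonePose() returns parent-relative transforms, so we accumulate them up to the root to build the bone’s component-space reference transform):

    // Hypothetical helper: component-space reference pose transform of a bone,
    // accumulated from the parent-relative transforms in GetRefBonePose().
    static FTransform GetRefPoseBoneTransform(USkeletalMeshComponent* Mesh, FName BoneName)
    {
        const FReferenceSkeleton& RefSkel = Mesh->SkeletalMesh->RefSkeleton;
        const TArray<FTransform>& RefPose = RefSkel.GetRefBonePose();

        FTransform BoneTM = FTransform::Identity;
        for (int32 BoneIndex = RefSkel.FindBoneIndex(BoneName); BoneIndex != INDEX_NONE;
             BoneIndex = RefSkel.GetParentIndex(BoneIndex))
        {
            // In UE4, A * B applies A first, then B: bone-local first, root last.
            BoneTM = BoneTM * RefPose[BoneIndex];
        }
        return BoneTM;
    }

    // Transform a world-space hit into the stable reference pose space of the mesh.
    static FVector TransformHitToRefPose(USkeletalMeshComponent* Mesh, FName BoneName, const FVector& WorldHit)
    {
        // Undo the current (animated) transform of the bone we hit...
        const FTransform CurrentBoneTM = Mesh->GetSocketTransform(BoneName, RTS_World);
        const FVector BoneLocalHit = CurrentBoneTM.InverseTransformPosition(WorldHit);

        // ...then re-apply the reference pose transform of that same bone.
        return GetRefPoseBoneTransform(Mesh, BoneName).TransformPosition(BoneLocalHit);
    }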

Here you can see a visualization of the transform applied to the hit location: in green, the current pose transforms and the bone that was hit; in blue, the matching reference pose transformation. The purple line helps indicate the difference.

In the reference pose example (the bottom of the two images), the hit location is already in the correct space, so the blue and green lines overlap (z-fighting, even, because they are at exactly the same offsets), and there is no purple line because there is no difference in transforms to visualize.

In the top image you can see how I transform the original hit location into our reference pose location. Now that we have a constant position that won’t animate, we push this location into the shader, which uses the pre-skinned position to sphere mask against. To support multiple hits, we increment the parameter name each time a new hit is applied, e.g. HitLocation_1, HitLocation_2 (these must exist in the material beforehand).
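Pushing the location into the shader could then look like this (a sketch; WoundMID, NumHits and MaxHits are assumed members, and the HitLocation_N vector parameters must exist in the material as noted above):

    // Store the reference pose hit location in the next HitLocation_N parameter.
    void AMyDamagedCharacter::PushHitToMaterial(const FVector& RefPoseHit)
    {
        const int32 Slot = (NumHits++ % MaxHits) + 1; // recycle the oldest slot past the cap

        const FName ParamName = *FString::Printf(TEXT("HitLocation_%d"), Slot);
        // Vector parameters are set as linear colors; XYZ carries the position.
        WoundMID->SetVectorParameterValue(ParamName, FLinearColor(RefPoseHit));
    }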

Animating the blood

The great thing about using sphere masks this way, as opposed to a render target, is that we can keep modifying the data as time passes without requiring continuous drawing into an RT, which could add even more significant cost. For example, it’s super easy to scale the sphere mask over time to make it appear as if blood is spreading through the clothing of the character.
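As a sketch of the idea (the HitRadius_1 scalar parameter and the members below are my own names, not the original implementation), spreading the blood is just one more parameter update per frame:

    // Assumed members: WoundMID, CurrentRadius, MaxRadius, SpreadSpeed (cm/sec).
    // Grows the mask radius over time so blood appears to soak into the clothing.
    void AMyDamagedCharacter::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);

        if (CurrentRadius < MaxRadius)
        {
            CurrentRadius = FMath::Min(CurrentRadius + SpreadSpeed * DeltaSeconds, MaxRadius);
            WoundMID->SetScalarParameterValue(TEXT("HitRadius_1"), CurrentRadius);
        }
    }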

Other cool things to add are scaling the mask based on damage received, drying out the blood over time, or fading out the wound when enough time has passed to signify it has “healed”.

Conclusion

There is a lot of room for visual improvement, but the basic technique is solid. In the examples above I added a HeightLerp to get a more interesting-looking falloff compared to the simple spherical shape we get from the SphereMask itself.

This technique has a much more consistent cost compared to the original, with some restrictions on the number of hits that can be registered. We don’t need a render target per enemy, and we don’t have spiky costs, as we only set a few material parameters to drive the effect. Using render targets is still a viable approach, and if the run-time cost isn’t an issue it’s a great way to dynamically change your character’s state during gameplay in extreme ways. I feel like there is a lot more to talk about, like exit wounds, mesh deformation, etc.! But perhaps we’ll get to that another time…

I hope you found this post insightful! Leave a comment below with any questions or follow me on Twitter!

34 Responses

  1. Hi, this post is great! I’ve been looking for a system like this for a long time, but I can’t seem to get it working! Is it possible for you to post an example project so I can learn from it? Thanks!

    • I still have plans to release this at some point, although I can’t currently say when that might happen (my current projects don’t use this tech so it’s fridged/archived somewhere)

  2. Very cool Tom. This is extremely similar to the technique we used on Call of Duty: WWII for blood, wetness, dirt, mud, flames, and ash on characters and weapons. I prototyped that technique in Unreal before we implemented it fully in our proprietary engine, so it’s cool to see someone taking it to completion in the Unreal engine.

  3. This is great! Thanks Tom! But I can’t find the ‘next section’ where you explain ‘how the transform from world space to pre-skinned space works.’ Could you please explain? Thanks! :)

  4. I have looked around for the typical ways used in Unreal to generate wounds/bullet damage.

    Note that I learned how to make games in the Unity engine first, which explains why what I’m writing might sound a bit alien when it comes to Unreal. I only started to learn how to use Unreal half a year ago. Excuse me if the terms I’m using are wrong or unknown.

    When developing in Unity, I learned a lot about how shaders work behind what’s rendered. I started by making a simple shader myself a few years ago, then I tried to make a toon shader about 2 years ago. I would say that making a toon shader with cel shading (black outline) from scratch is actually the BEST introduction to techniques that can easily revolutionize game visual effects.
    Here’s what is done in a Toon shader:
    1) Modification of the lighting on a model in real time from within the Engine.
    2) Modification of the rendered vertex position in real time from a float variable.
    3) Extensive uses of maps/textures for both visual details and their actual RGBA values.

    Considering how shaders are fairly universal, but change based on what “part” of the engine is public or hidden, I think the principle I have in mind could be used as much in Unity as in Unreal.

    I’m pretty sure it’s 100% possible to make a wounds system that works in a single call.
    Within a single call, on the shader, by having 2 sets of textures for 1 single material plus a mask, the principle is pretty straightforward: on each frame, make the shader pick each pixel’s color from one of the 2 texture sets based on the mask.

    Now, this is pretty similar to what has been mentioned in this post, but here’s the trick that could make a huge difference. What if you were to use the position-map baked onto the model (which is super easy to produce in any 3D software) directly in the shader?
    To put it bluntly, a position-map is a map where the position within a bounding volume fitted around the model is encoded in the 3 light primary colors: Red, Green and Blue. (I might be wrong, but I think the mapping is literally R(X), G(Y), B(Z), or I might remember that sometimes it’s R(X), G(Z), B(Y).) This means that, unless parts of the model are overlapping, 2 parts will never have the “same” color on that map.

    So, you can record ANY specific part of the model as a simple RGB value if the model has a pre-baked Position-map available to its shader. From there, there’s no need to even use a spheremask at all!

    Here, in theory, how it’s done:
    When a wound is detected, ray-trace the hit detection onto the model. From this, you can usually get the position, on the model’s UVs, of any color data from any map used on the material of the model. In this case, you want the color value of the Position-Map that you pre-baked in the 3D software. From that texture, you get the color of the pixel located where the ray-trace hit, and that color is an RGB value (or RGBA if the material reads the Alpha channel, even if it’s unused). As I previously stated, the RGB values of the position map are based on 3D space. You can do a simple addition/subtraction on that value and you would “move” along the model’s surface. You’ve got your simulated spheremask without an actual spheremask right there, and it’s all simplified mathematics that only uses physics at the point of getting the ray-trace hit (which is necessary anyway).

    This method can even be mixed with the actual real-time position map that is already generated behind the screen by the engine. When someone gets hit, you generate the masking effect via the baked RGB position map. If you want to save on memory/GPU, you might even want to store the mask in the Alpha channel of the position map! Anyway, once the “round spot” is added, nothing stops you from adding slightly more “blood” toward the down-direction of the wound (blood runs down with gravity, even within cloth) by using the real-time position map that the engine itself generates for all models in a scene. There’s no rule forcing you to use only 1 position map (pre-baked or real-time). All of that can be done in 1 single call if it’s done right through the shader itself.

  5. Hey, Tom, thank you so much, this is really great info and it helps me a lot. But Tom, can you explain how we can convert the hit’s world location to reference pose bone space? I use

    auto Mesh = GetMesh();
    const FReferenceSkeleton& Sk = Mesh->SkeletalMesh->RefSkeleton; // reference, to avoid copying the skeleton
    TArray<FTransform> Transforms = Sk.GetRefBonePose(); // cache

    but the list returned by GetRefBonePose() contains strange values for me. Are these poses relative to the parent bones, or what? Thank you!

  6. Hey Tom. Thank you for a great post!

    I was wondering if you’d be willing to document your approach step by step?

    b.r.

  7. Thanks Tom, for taking the time to answer. I’m just trying to find a more efficient way of making wounds on characters for a (NAH) Neural Active Hit response project we’re currently working on. For our game we’re working on something similar to NaturalMotion’s Morpheme, but it’s still premature. Anyway, your findings seem great for what I was looking for, but I guess I’ll just wait till you have some time to look into it. Thanks again & Cheers!

  8. Hi Tom, BTW you’re brilliant. I was hoping you could help me out: I was wondering if, instead of the sphere, I could add a wound mask. I’ve been trying but with no success; really, any help would be greatly appreciated. Oh, and I see in your Shaderbits GDC package that in the HitMask the character is an Actor class and the Physics Asset uses per-poly collision. Would this work with the Character class and simulated physics? Because when I changed the class to Character it stopped working. As for the above question, if adding a wound mask image is possible, how do I do that? I thank you for your time.

    • Thanks mate ;) That’s a good question, and a tricky one. It should be possible to add a direction vector to the impact so that you could basically project a texture onto the wound instead of using the sphere with a tiling texture as I did. It requires a bit more math that I’d have to dig into myself before I’d have an answer on its viability, but I think it is possible. BTW Shaderbits’ code should work fine on a Character class I think; after changing the class it may just need some fixing up inside (it’s been a while since I checked how that class was set up)

      I hope to some day come back to this and try out actual wound masking, but no idea when ^^

      Cheers!

      Tom

  9. Hi! Very interesting article here! I’m currently using the Render Target technique in my game and I don’t fully understand exactly how it works. The technique you described seems more understandable, but would it be possible to have an example project so we can dig into it and see exactly how to create it? We would all be very thankful!
    Thanks,
    Max.

    • This can work with multiple materials w/o a problem. The underlying component I coded for this already supports multiple material IDs per mesh.

      • Okay, thanks for the reply. How did you get the reference pose transform of the hit bone? I’m having difficulty figuring out how to do this in blueprints.

          • Okay, I made a plugin that makes it accessible in blueprints, but I’m having an issue with transforming the world position of the hit into the reference pose position. Any chance I could get a snippet of your code that does this?

          • Here is what I have. The final transformed location returns the correct location if no animation is playing.

            https://imgur.com/a/h5AK2

            Here is a code snippet of how I get the reference pose transform of the hit bone.

            FTransform UBlueprintUtilityBPLibrary::GetRefPoseBoneTransforms(USkeletalMeshComponent* SkelMesh, FName BoneName)
            {
                // Use a const reference to avoid copying the whole reference skeleton.
                const FReferenceSkeleton& RefSkel = SkelMesh->SkeletalMesh->RefSkeleton;

                // Note: GetRefBonePose() returns transforms relative to each bone's parent,
                // not in component/mesh space.
                return RefSkel.GetRefBonePose()[RefSkel.FindBoneIndex(BoneName)];
            }

      • I’ve tried to transform the world position of the hit location into the reference pose position in blueprints, but I’ve been unsuccessful. Here are some debugging images. In the No Anim image, the green and red debug points are Z-fighting. https://imgur.com/a/LqrCQ

        Red=Hit Location

        Blue=The Hit Location inverse transformed by the Hit Bone’s current transform in RTS Parent Bone Space

        Green=blue’s location transformed by the reference pose transform of the Hit Bone

        Purple=the Hit bone’s world location

        • Hi Daniel. Did you ever get this working? I’m walking down the same road and running into the same problems you outline here.

        • As of 4.19 there’s no easy way to accomplish this in blueprints, because you can’t easily get the bone transform in mesh space (the only option is bone space, so you’d probably have to undo all those transforms all the way up to the root, which could be pretty expensive). If you have a willing programmer on your team, it should be pretty trivial for them to create a function to get the bone transform in mesh space.

  10. I have a silly question – how do I open ShaderBits’ GDC Demos? :P

    When I add the files to a newly created project and open any map, everything is without materials :(

  11. This is great for point wounds from ranged weapons. I am currently exploring approaches to melee, Hellblade-like slash marks. A capsule mask and a quad mask (to fill the gap between capsules across frames of animation) worked pretty well with a Render Target, but your approach seems to save a lot on performance. Thanks for sharing!
