
Native Rendering Plugin in Unity


Overview:

One of the great things about Unity is that it welcomes different kinds of developers with different skills and abilities.  For example, perhaps you are an artist with no computer science degree.  No problem!  You can make stuff with Unity!  Perhaps you are a tech artist and you are comfortable with C# and shaders.  Welcome aboard the Unity boat!  Perhaps you are a low-level GPU coder who speaks DirectX/OpenGL/Vulkan/etc.  Unity will accept you and your ilk as well.  =)  Unity offers development experiences for various skill levels.  This article is aimed at the low-level graphics developer.  The goal of this article is to demonstrate how you can extend Unity by writing a custom native rendering plugin.  With a rendering plugin, you can use the same graphics device/context that Unity uses to render whatever you want, however you want.

What Are We Gonna Make?

Great Question!  I don’t know yet.  j/k!
One nice aspect of Unity is that it supports a wide variety of asset types.  For example, Unity supports 3D model formats like obj, fbx, blend, Maya scene files (if you have Maya installed), etc.  As for textures, Unity supports png, jpg, bmp, psd, etc.  When Unity loads your project, it imports these asset files into an Asset Database in an internal format that the Unity engine uses.  At the end of the day, when Unity attempts to render the assets, it must use the native graphics API to do so.  For this article, I have chosen to focus on the DirectX11 graphics API.

However, the principles of this article apply to all supported graphics APIs.  In the case of DirectX, to render a model we need an index buffer, a vertex buffer, and a texture.  Unity provides access to each of these through the scripting API.  The following are the Unity scripting APIs you would use to get each of the native graphics API resource pointers.
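As a quick sketch, assuming you already have a `Mesh` and a `Texture2D` assigned (say, via the Inspector), the relevant calls look like this:

```csharp
// Sketch: grabbing the native graphics resource pointers from Unity.
// On D3D11 these IntPtrs wrap ID3D11Buffer* / ID3D11Resource* pointers.
IntPtr indexBuffer  = mesh.GetNativeIndexBufferPtr();    // index buffer
IntPtr vertexBuffer = mesh.GetNativeVertexBufferPtr(0);  // vertex stream 0
IntPtr texturePtr   = texture.GetNativeTexturePtr();     // texture resource
```

These pointers are exactly what we will hand over to the native plugin.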

If you have access to the index/vertex buffers and texture resource for a 3D model, you can render it yourself in a native plugin!  So that is what we are going to do!  In general, you would not do this.  You should typically let Unity render your 3D models for you.  However, I think this is a good example to demonstrate how to pass all the native graphics resources from Unity to a native plugin.  We are also going to pass world/view/projection matrices from Unity to the native plugin so we don’t have to calculate them on our own in the native plugin.  Doing so will also give us the ability to control how the 3D model is rendered from a transform component!

Code from a Bird’s Eye View

We will be using two separate projects for this example.  The first project will be the Unity project which will include the 3d model and texture assets along with our scripts which will make use of the native plugin.  The second project will be our native rendering plugin built with C++ and DirectX11.  I have tried to comment the code in both projects to explain what I am doing.  Therefore in this article, I am mostly going to try to cover the gotchas.

Unity Project

To use a native plugin in Unity, you need to place it in your project’s Assets/Plugins directory like so.

From there, you can configure which platforms and architectures should be able to use your plugin.  Your native plugin will only be loaded if a script in your project actually invokes one of the methods in the plugin.  The plugin will not be unloaded until the Unity Editor/Player shuts down.  This means that if you want to make a modification to the plugin, you will need to restart the Unity Editor/Player before you can use the modified plugin.
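Invoking the plugin from a script is done with P/Invoke.  As a sketch (assuming the plugin DLL is named “CustomRenderPlugin” — the function names here are illustrative, not a fixed API):

```csharp
// Sketch: binding native plugin exports from a Unity script.
// The DLL name must match the plugin file in Assets/Plugins (no extension).
using System;
using System.Runtime.InteropServices;

public static class CustomRenderPluginNative
{
    // Hypothetical export: returns the render-event callback pointer.
    [DllImport("CustomRenderPlugin")]
    public static extern IntPtr GetRenderEventAndDataFunc();

    // Hypothetical export: hands the native mesh/texture pointers to the plugin.
    [DllImport("CustomRenderPlugin")]
    public static extern void SetMeshResources(
        IntPtr vertexBuffer, IntPtr indexBuffer, IntPtr texture);
}
```

The first call into any of these methods is what triggers Unity to load the DLL.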

We will be using the “CustomRenderPluginExample” component to interface between Unity and our native plugin.  You will find it on the main camera game object like so,

Below, you will find the source code for the component and comments with regards to how it is implemented.

CustomRenderPluginExample.cs

For this example, I want my native plugin to render the 3D model during the forward opaque render pass.  This way, my 3D model will depth-test correctly against the objects rendered by Unity.  In this example, my Tekkaman model will be rendered natively while the moving blocks are rendered by Unity.
The image below is a RenderDoc frame capture of my Unity Scene.
The EventBrowser shows the order in which Unity renders the scene including the low-level graphics API calls.  For more information on the Unity/RenderDoc integration, click here.

The following is a video clip showing the draw calls as they are executed.  The last executed draw call is from the native rendering plugin.

This is achieved via Unity’s CommandBuffer system.  Command Buffers are kind of like Unity’s version of an ID3D11DeviceContext.  You fill the command buffer with commands like SetViewport, EnableScissorRect, etc., and then at some later point Unity will execute all the commands in your command buffer.  When Unity runs the single-threaded renderer, the commands are executed immediately.  When Unity uses the multi-threaded renderer, all the commands are executed on the render thread.  In our case, we just want Unity to give our native rendering plugin a heads-up about when it is OK to start rendering our 3D model, which is after the forward opaque objects have been rendered.  Our 3D mesh is rendered into Unity’s depth buffer, so the exact render order won’t matter.  The command we use to issue our plugin event is CommandBuffer.IssuePluginEventAndData.  This command allows us to pass a custom event id to our plugin along with custom data.  The event id in our case isn’t too important.  However, the custom data will contain the world/view/projection matrices for our 3D model to use.
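The hookup itself is only a few lines.  A sketch (GetRenderEventAndDataFunc and matrixDataPtr are assumptions about the plugin’s exports and data layout, not documented Unity APIs):

```csharp
// Sketch: schedule the native plugin callback after the forward opaque pass.
// Requires: using UnityEngine; using UnityEngine.Rendering;
CommandBuffer cb = new CommandBuffer { name = "NativeRenderPlugin" };

// eventId = 1 is arbitrary here; matrixDataPtr points at an unmanaged buffer
// holding the world/view/projection matrices for the plugin to read.
cb.IssuePluginEventAndData(
    CustomRenderPluginNative.GetRenderEventAndDataFunc(), 1, matrixDataPtr);

GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cb);
```

Because the callback may run on the render thread, the data buffer must stay valid (pinned or unmanaged) until the event has executed.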

As I pointed out in the comments, it is important to use GL.GetGPUProjectionMatrix to convert your projection matrix into one that is correct for the graphics API you are using.  Otherwise, you may get results like the following.


GL.GetGPUProjectionMatrix calculates the GPU-ready version of the projection matrix, which lets the projection matrix play nicely with reverse-Z depth testing and the flipped render-target convention when going from OpenGL to D3D11.  The projection matrix is also modified so that the normalized device coordinates for the z component are converted from -1..1 (OpenGL) to 0..1 (DX11).
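A minimal sketch of building the GPU-ready model/view/projection matrices to hand to the plugin (the `false` argument assumes we are rendering to the backbuffer rather than into a render texture):

```csharp
// Sketch: compute a GPU-ready MVP on the C# side.
Camera cam = Camera.main;

Matrix4x4 model = transform.localToWorldMatrix;
Matrix4x4 view  = cam.worldToCameraMatrix;

// Second argument is renderIntoTexture; pass true when the target is a
// render texture, since the UV/clip-space conventions differ.
Matrix4x4 proj  = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);

Matrix4x4 mvp   = proj * view * model;  // column-vector convention
```

These three matrices (or the combined MVP) are what get written into the buffer passed through IssuePluginEventAndData.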

Native C++ Render Plugin

I have documented the native plugin code as well.  However, there are a few points I would like to call out.  When you build a shader in Unity, you use the Unity ShaderLab language, and the Unity Shader Compiler automagically translates it for whatever platform you are targeting.  However, if you use a native plugin, you will need shaders built by hand for that platform.  DirectX takes compiled HLSL bytecode as input to the graphics pipeline, whereas OpenGL takes GLSL.  Since our plugin uses the DirectX API, we will need to compile our HLSL by hand.  I wrote a batch file located in

compile_all_shader.bat will be run as a custom pre-build step when building the plugin in Visual Studio.  The batch file will compile the shaders as release or debug based on your target.  If the shaders are compiled as debug, the shader symbols will be included so you can see human-readable shader uniforms and source code in a frame debugger.  The batch file compiles the shaders using Microsoft’s fxc HLSL compiler.  It also invokes my Python script called “toheaderfile.py”, which converts the VertexShader.fxc and PixelShader.fxc output from the DirectX shader compiler into header files that can be included and statically linked into the native render plugin.  This is nice because you can hide your shader source code and you don’t need to worry about loading the files manually.
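The core of the batch file looks roughly like this (a sketch, not the exact file: the shader file names, the “main” entry points, and using %1 to select the configuration are assumptions):

```bat
:: Sketch of compile_all_shader.bat.
:: Debug builds keep shader symbols (/Zi) and disable optimization (/Od).
if /I "%1"=="Debug" (set FLAGS=/Zi /Od) else (set FLAGS=/O2)

fxc /T vs_5_0 /E main %FLAGS% /Fo VertexShader.fxc VertexShader.hlsl
fxc /T ps_5_0 /E main %FLAGS% /Fo PixelShader.fxc PixelShader.hlsl

:: Convert the bytecode blobs into C headers for static linking.
python toheaderfile.py VertexShader.fxc
python toheaderfile.py PixelShader.fxc
```

/T selects the shader model target and /E the entry point; /Fo writes the compiled bytecode blob that the Python script then wraps in a header.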

One last note is that you need to ensure that functions which are meant to be exposed to Unity are wrapped in extern “C”{} like so,

Unlike C, C++ allows for method overloading.  In other words, C++ allows you to define a single method multiple times with different input arguments.  To enable this, the C++ compiler must “mangle” function names, encoding identifier info for the various arguments into each symbol so that the linker can resolve the correct overload when executed in assembly.  If you don’t wrap your exported methods in extern “C”, the compiler will mangle their names, and Unity won’t be able to find the exact method names you specified in your script after the DllImport declaration.

With extern “C”

Without extern “C”
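A minimal sketch of the export pattern (the function here is illustrative, not one of the plugin’s real entry points):

```cpp
// On Windows, exported plugin functions also need __declspec(dllexport);
// on other platforms the attribute is a no-op in this sketch.
#if defined(_WIN32)
  #define EXPORT_API __declspec(dllexport)
#else
  #define EXPORT_API
#endif

// extern "C" disables C++ name mangling, so the exported symbol is exactly
// "GetPluginVersion" -- the name the C# [DllImport] attribute looks up.
extern "C" {

EXPORT_API int GetPluginVersion()
{
    return 1;
}

}  // extern "C"
```

The trade-off is that extern “C” functions cannot be overloaded, which is fine for a plugin boundary.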

I have divided the native plugin into two sections.  The first is the Unity plugin boilerplate code.  The second is the DirectX rendering logic.  Unity ships with C header files that you can include in your plugin to interface with the Unity Engine.  You can find them in your Unity installation directory like so,

Unity\Editor\Data\PluginAPI

UnityRenderPlugin.cpp
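The full file is in the sample project; the boilerplate portion boils down to something like this (a sketch, assuming Unity’s plugin headers from the PluginAPI directory are on the include path):

```cpp
// Sketch of the Unity native plugin boilerplate.
#include "IUnityInterface.h"
#include "IUnityGraphics.h"

static IUnityInterfaces* s_UnityInterfaces = nullptr;
static IUnityGraphics*   s_Graphics       = nullptr;

// Runs on Unity's render thread when IssuePluginEventAndData fires.
// `data` points at the world/view/projection matrices passed from C#.
static void UNITY_INTERFACE_API OnRenderEventAndData(int eventId, void* data)
{
    // DirectX rendering logic goes here (see UnityRenderLogic.cpp).
}

// Called by Unity when the plugin is loaded.
extern "C" void UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API
UnityPluginLoad(IUnityInterfaces* unityInterfaces)
{
    s_UnityInterfaces = unityInterfaces;
    s_Graphics = unityInterfaces->Get<IUnityGraphics>();
}

// Called by Unity when the plugin is about to be unloaded.
extern "C" void UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API UnityPluginUnload()
{
    s_Graphics = nullptr;
}

// Handed back to C# so it can be passed to CommandBuffer.IssuePluginEventAndData.
extern "C" UnityRenderingEventAndData UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API
GetRenderEventAndDataFunc()
{
    return OnRenderEventAndData;
}
```

The IUnityGraphics interface is also where you would query the active graphics device type and register for device lifetime events.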

UnityRenderLogic.cpp

To figure out some of the graphics pipeline states that Unity uses, like the index buffer format and vertex attributes, I just dragged my model into my Unity project and had Unity render it.  Then I took a frame capture and noted all the state information.  RenderDoc has a pretty useful feature that allows you to see the resource creation parameters for any resource referenced in a graphics API call.  You do need to understand how the various low-level graphics API commands work in order to find their creation state.  I needed to know which vertex attributes and corresponding formats Unity expected in the vertex input layout.  For DirectX11, you must call ID3D11Device::CreateInputLayout to specify the vertex attributes that will be used in the vertex buffer along with their corresponding formats.  Therefore, I knew Unity must be making a call to CreateInputLayout before rendering my model.  The following demonstrates how I went about finding the vertex attributes.
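For a typical position/normal/uv mesh, the layout recovered from the capture ends up looking roughly like this (a sketch: the exact semantics, formats, and offsets depend on the mesh, so verify them against your own RenderDoc capture):

```cpp
// Sketch: recreating the input layout Unity used for the captured mesh.
#include <d3d11.h>

static const D3D11_INPUT_ELEMENT_DESC kLayout[] = {
    // Semantic    idx  format                       slot  offset  class                        step
    { "POSITION",  0,   DXGI_FORMAT_R32G32B32_FLOAT, 0,    0,      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",    0,   DXGI_FORMAT_R32G32B32_FLOAT, 0,    12,     D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD",  0,   DXGI_FORMAT_R32G32_FLOAT,    0,    24,     D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// device->CreateInputLayout(kLayout, 3, vsBytecode, vsBytecodeSize, &inputLayout);
```

The input layout must also be validated against the vertex shader bytecode, which is why the compiled shader blob is passed to CreateInputLayout.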

If you want to know more about writing native rendering plugins for Unity, take a look at the official Unity documentation on the subject below.

Well that is it folks.  Download/Enjoy and then Go Create!

Download the Sample Project