Use scripting languages in gamedev? It depends

What do a game and its developers gain (or lose) from having a scripting language in the game?

Some games use a scripting language for gameplay code, and some do not.

By scripting language I mean dynamically loaded bytecode, or code that is compiled at runtime by an interpreter or JIT engine. Below I go through some of the pros that first came to mind.

Fast gameplay code iteration. It’s quite easy to implement script reloading to enable a much faster turnaround for code changes. This can even be done remotely, for example against a game running on a phone.

Modding and sandboxing. Giving end users the ability to easily change gameplay code can grow a strong community around the game. A scripting language also makes it possible to sandbox the execution context, improving security and preventing crashes. Roblox, for example, combines these two and sends user-created mods through the netcode.

Coding is easier. Scripting languages such as Lua often handle memory management through garbage collection. Combined with the fact that a scripting language is simpler than C++, this can make gameplay coding more pleasant and productive.

What about the downsides of a scripting language?

Need to implement a bridge between engine and gameplay code. That also means thinking about how to send and receive messages between the layers, and deciding how much of the game is implemented in the scripting language.

Another language to think about. A context switch has to be made when jumping between engine and gameplay code.

Poor debugging support and development experience. All big languages have lots of tools, like IDEs and debuggers, built around them. Those are probably lacking for the scripting language.

Low performance. Scripting languages are slow, and on consoles and iOS JIT compilation is not allowed, so the code has to be interpreted there. Performance can easily be an order of magnitude slower than native code.

Thinking about my own gamedev projects, I would only bring in a scripting language when my main language doesn’t support hot reload at runtime, or when I want the game to be as moddable as possible. And even then, I would probably expose only the small parts of the code that I want to be dynamically modified, instead of forcing a generic scripting layer on top of the game engine.

DirectX 12 as a cross-platform graphics API?

I think it’s best to use a graphics API directly instead of going through wrappers like bgfx, sokol, Diligent Engine, and so on. The best performance and latency come from fine-tuning the rendering flow (especially with the modern APIs) against the graphics API itself, and on top of that you have a clear specification to work from. But those wrappers provide one big benefit: cross-platform support. One big downside of the modern APIs is that you really need to know what you are doing to get any benefit over DirectX 11 or OpenGL.

So Vulkan is known to be cross-platform, but what about DX12? With some help it seems it can support every popular modern platform, apart from PS4/PS5 with their proprietary APIs.

Thanks to the rising popularity of Linux gaming, the quality of the DX12-to-Vulkan translation layer VKD3D-Proton is very high, so I would not hesitate to use it.

  • Windows 10+ and Xbox directly through DX12. Windows can also be served by a Vulkan backend through VKD3D-Proton.
  • Linux, Nintendo Switch, Steam Deck, and Android through Vulkan using VKD3D-Proton.
  • macOS and iOS through Metal using VKD3D-Proton on top of MoltenVK.

It would be interesting to test this in practice. macOS and iOS might be a bit of a stretch to get working and to debug through two translation layers.

Thoughts on efficiently solo-developing a game without a game engine

I started prototyping a 2D game in Unity and soon ran into performance and scalability issues with the built-in systems (Tilemap etc.). To fix those issues I would have needed to build custom systems anyway, so I decided to move back to plain C++. The game world is procedurally generated, so luckily I don’t need a dedicated editor. As the project began, I tried to focus on building a game instead of building a game framework (entity systems, filesystems, other abstractions, etc.).

To reduce boilerplate as much as possible and to make refactoring fast and easy, I don’t use C++ headers for my own code. I have one .cpp file that is compiled; that file includes all my other .cpp dependencies. Much of the game code resides in a single .cpp file, which is now around 5000 lines long. I have a build.bat that just invokes the MSVC compiler, and I only use Visual Studio for debugging.

In my code I don’t use classes at all, just plain structs and functions. I do use templated structs and functions for arrays. I don’t use the STL at the moment; basically the only containers I need are dynamic and fixed arrays, so I wrote those myself. I avoid abstraction as much as possible and only add it when it clearly improves readability and understanding. I do use third-party libraries: SDL for windowing and input, stb_image for image loading, stb_truetype and fontstash for ASCII text rendering, and bgfx for graphics.

This way of working has served me well, keeping iteration time low and motivation up. Results come fast, and I don’t spend time thinking about code structure, abstractions, system refactoring, or whether the new class name should be XYZManager. At the moment, compiling and running the game is much faster than a script reload and play in current Unity!

Lightmap baking on CPU using Embree and OpenImageDenoise

I wanted a portable lightmapper as part of a custom map editor I was building in C++, so I chose to build a CPU-powered lightmapper.

And to my amazement, it is really fast: OpenImageDenoise works like magic, and the noisy CPU path-tracer output gets smoothed out. The lightmapper is based on the design described by the creator of the Bakery add-on for Unity.

First, it creates the lightmap UV-space GBuffer with multipass rendering to emulate conservative rasterization. Second, it offsets each sample position by raytracing from it: if the sample position resides inside geometry, it is pushed out to reduce shadow leaks. Third, it path traces from the sample positions using Embree and builds the lightmap. Finally, the lightmap is filtered with OpenImageDenoise.

The only downside is that OpenImageDenoise is huge: the .dll takes around 45 megabytes (compared to my ~700 KB exe). Still, I think it is worth having, as denoising lets me get away with far fewer path-tracing samples, which speeds up baking significantly. Here is a screenshot of a preliminary result.

Denoised lightmap result in 20 seconds on Intel i7-5820k

Developing my own custom game engine from scratch in 2023

With the freshly released Unreal Engine 5 and its groundbreaking features, Unity’s free dark mode, and Godot 4’s ambitious plans to dominate the indie gamedev community, indie developers have a plethora of options to choose from. In this blog post, I’ll discuss the decision-making process behind selecting a game engine like Unity, Unreal, or Godot, versus building a custom game engine. I’ll also share the reasons why some developers, including myself, have chosen to build custom game engines for their game projects.

When to Choose Unity, Unreal, or Godot?

A cross-platform game

Developing cross-platform games can be challenging due to the many moving parts and potential undefined behaviors. Popular game engines like Unity, Unreal, and Godot have been extensively tested and provide built-in workarounds for many issues, making them ideal for cross-platform development.

Proper asset content pipeline

For games that rely heavily on imported 3D models, animations, collision, and materials, having a reliable content pipeline and asset authoring system is crucial. Off-the-shelf game engines can handle a wide variety of 3D model formats and offer effective texture compression, making them an excellent choice for such projects.

Realistic or next-gen graphics

Achieving cutting-edge graphics as an indie developer can be challenging. Choosing an off-the-shelf game engine ensures that your game stays competitive with current industry standards.

Build in the shortest time possible

Custom game engines require significant development time, making it difficult to beat the time-to-market offered by off-the-shelf game engines.

Find people with existing skills to use the engine

When bringing new team members on board, it’s advantageous to use a game engine with which they already have experience.

Mobile game that ships for Android

Developing games with OpenGL ES for Android can be challenging due to the wide variety of devices and drivers with inconsistent behavior. For mobile games, choosing a proven game engine like Unity or Defold is advisable.

No specific or strong reason for building a custom engine

If there are no significant business or game design motivations to build a custom game engine, it’s best to opt for an off-the-shelf solution.

When is it viable to build a custom game engine?

Fine-tuning the technical aspects

A custom game engine allows developers to optimize and customize the entire stack, from raw input to rendering. This is particularly advantageous when seeking minimal distribution size and maximum performance.

Needs are well known before building the game

Having a clear understanding of the features required for your game engine ensures accurate estimations of development time and resources.

Content pipeline is simple and limited

If your game design allows for limited 3D model formats and excludes textures with alpha, building a custom engine might be more suitable.

No next-gen graphics

Sometimes, less is more. For games that don’t require cutting-edge visuals, a custom game engine can be a viable choice.

Long-term development or support plans

Upgrading a game built with off-the-shelf engines can result in game-breaking changes, requiring rework of levels and mechanics. For projects with long development cycles or extended post-launch support, a custom game engine offers greater control and stability.

Why I chose to build a custom game engine for my game projects

Achieving the lowest possible network and input lag

To ensure a smooth and responsive gaming experience, it’s essential to minimize input and network latency. With a custom game engine, I have full control over the rendering loop, swapchain presentation, and input polling. This allows me to fine-tune these aspects, resulting in a more enjoyable and lag-free experience for the players.

Ultra-light and ultra-fast

One of my top priorities is to create games that are small in distribution size, boast ultra-fast startup times, and have minimal loading times. By building a custom game engine, I can optimize performance for low-end machines, making the game non-intrusive, lightweight, and accessible to a wider range of players.

Simple visuals

While visuals are undeniably important, I believe that overly realistic or flashy graphics can detract from the core gameplay experience. By opting for simple visuals, I can create a streamlined content pipeline and rendering engine for my custom game engine. This design choice also allows me to focus on gameplay mechanics and ensure a fun, immersive experience for players.

Provide a map editor for the community

Yes, you can also build a map editor with Unreal / Unity / Godot. But building a custom game engine allows me to create a map editor tailored to my game’s specific needs. While it does require some additional effort to polish the editor for external users, I believe it’s worth it. Providing the community with a user-friendly map editor will enable players to create their own content and further engage with the game.

Long term project!

By having control over the entire software stack, I can more easily support future Windows versions and address issues related to new hardware. Using an existing engine like Unity or Unreal could result in being locked into a specific version, making it difficult to adapt to changes in core engine components. With my custom engine, I can ensure that my game stays up-to-date and true to its original vision.

PC-only game

By concentrating on a single platform, I can avoid the complexities of cross-platform development and tailor the game engine specifically for PC users. This focused approach allows me to optimize the game for the PC platform and create a more polished and enjoyable experience.

Software engineer first, game designer second

Lastly, I simply enjoy the process of tinkering, learning, and developing new techniques and possibilities. As a software engineer first and a game designer second, building a custom game engine aligns with my interests and allows me to challenge myself in ways that using an off-the-shelf engine might not.

Note: This post was originally published in 2021 but I updated it for 2023. Not much has changed.

Sound and audio in competitive FPS game

Lately, I have been thinking about the sound design of my multiplayer FPS game prototype.

Audio is very important in games. Sound gives cues about the current game situation and the environment the listener is in. Both silence and loud sounds have their place, but it is important that the volume levels are right.

In a competitive FPS game, 3D positional audio plays an important role in locating opponents. Headphones are a must-have to be competitive in a multiplayer FPS.

When working on the audio and sound design for my multiplayer FPS game, I take into account the importance of audio occlusion, cues, directivity, effects, and more.

Audio occlusion

Occlusion happens when there is a physical obstacle (a medium) between the listener and the audio source. The sound usually doesn’t get blocked completely: it reflects off nearby walls, which bounce the sound waves to the listener, and lower-frequency sound waves are good at traveling through solid materials.

In games, the audio occlusion test is often implemented with raycasting. A good estimate of occlusion can be made by raycasting multiple lines from the listener to points near the audio source. Path tracing can also be used to compute better reflections and more realistic audio directivity.

In multiplayer FPS games, I argue that fully realistic audio directivity would confuse the player. For example, gunshots or footsteps should seem to come directly from the true audio source, not reflected off walls. Although that isn’t realistic, it matters from a gameplay perspective to avoid confusing the player.

In my FPS game prototype, the audio occlusion implementation is still a work in progress. Currently I use multiple raycast tests to determine occlusion; the audio is simply low-pass filtered and reduced in volume depending on the size of the obstacle in between.

Improving audio directivity with HRTF

For audio, I use the OpenAL Soft library. It comes with an excellent HRTF (Head-Related Transfer Function) implementation. In short, HRTF simulates how the ears hear sound coming from a given point in the 3D world. Headphones have to be used when HRTF is enabled. With HRTF, the player gets a much better sense of the sound’s direction, especially on the up/down axis.

Minimizing input latency for games

I feel that input latency should be as low as possible. It affects the game feel and is especially important when the game requires fast reactions, such as a fast-paced multiplayer FPS or a racing sim.

I’m currently working on a project with a custom game engine where low input latency is important, so I take it seriously and do my best to minimize it. One part of minimizing input latency is making frame rendering as fast as possible.

How to minimize input latency

To minimize input latency, the game needs to postpone the frame’s input processing until as late as possible before the present. The game should also use single-buffered presentation.

By default, games naively use double or triple buffering. Triple buffering is the best choice when smoothness and a larger frametime budget are required: the CPU and GPU each get a full 16.6 ms (at 60 Hz) to process a frame in parallel, giving the best throughput at the cost of additional latency. Single-buffered presentation removes that pipelining: the CPU has to wait for the GPU to complete the present, and the GPU then has to wait for the CPU to start pushing rendering commands, so with a 60 Hz screen the CPU and GPU together must finish the frame in under 16.6 ms to be ready for the next vblank.

In my current project, I use single-buffered rendering: the game waits for the GPU to complete the present before processing input for the next frame. This achieves a theoretical input latency of 16.6–33.2 ms at a 60 Hz refresh rate.