Thursday, November 28, 2013

Bolts

Last week I stumbled upon this blog post about generating and rendering Lightning Bolts. Really cool stuff. I was searching for such an effect for my missile interception weapon (see video below), so I implemented my own lightning effect generator. Have a look:


Missile Shield on Vimeo.

At first I wanted to implement the lightning effect in a straightforward way and simply use a SpriteBatch for rendering. But then it occurred to me that it would be really nice to integrate the lightning effect into the Mercury Particle Engine, which I already use in my game. This makes it possible to customize the bolt effect by adding all kinds of Controllers and Modifiers without changing any source code. Mercury also has a really good particle editor, and the effects can be saved to and loaded from XML. Some useful modifiers are, for example:
  • OpacityInterpolator: Adds a fading out effect.
  • ColourInterpolator: Bolt can change color depending on its age.
  • ScaleInterpolator: Bolt can change size.
It is also easy to add your own modifier. I created a simple jitter modifier, which you can see in the video above. The modifier moves all emitted particles randomly along their normal in the XZ plane. Implementing it took me only a few minutes.
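The jitter modifier can be sketched roughly like this. Note that this is a reconstruction, not my actual code: the exact Mercury modifier base class and particle fields differ between engine versions, so the `Modifier` signature and the `Normal` field are assumptions.

```csharp
using System;
using Microsoft.Xna.Framework;

/// <summary>
/// Moves each particle randomly along its normal, projected into the XZ plane.
/// Sketch only: base class and particle layout are approximated.
/// </summary>
public class JitterModifier : Modifier
{
    private static readonly Random Random = new Random();

    /// <summary>Maximum displacement per second (configurable from XML).</summary>
    public float Strength;

    public override void Process(float deltaSeconds, ref Particle particle)
    {
        // Random offset in [-Strength, Strength], scaled by the frame time.
        float offset = ((float)Random.NextDouble() * 2f - 1f) * Strength * deltaSeconds;

        // Project the particle's normal into the XZ plane (Y discarded).
        Vector3 normal = new Vector3(particle.Normal.X, 0f, particle.Normal.Z);
        if (normal.LengthSquared() > 0f)
            normal.Normalize();

        particle.Position += normal * offset;
    }
}
```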

Needed Changes in Mercury
Now let's talk about the necessary changes to integrate the lightning effects into Mercury. Mercury provides Controllers and Modifiers to customize your particle effects.
Modifiers: They iterate over all active particles of an emitter and modify particle data like position, scale, color, velocity and so on.
Controllers: They add logic which is executed before the particle trigger event. They receive a TriggerContext, which allows them, for example, to cancel a trigger event or change the position where the particles are released.
The TriggerContext looks like this:
public struct TriggerContext
{
    /// <summary>
    /// True if the trigger should be cancelled.
    /// </summary>
    public Boolean Cancelled;

    /// <summary>
    /// Gets or sets the position of the trigger.
    /// </summary>
    public Vector3 Position;

    ...

    /// <summary>
    /// ADDED.
    /// The texture the emitter should use.
    /// </summary>
    public Texture2D ParticleTexture;
}

I added a field for the particle texture to the TriggerContext. This allows me to create controllers which can change the emitter's particle texture dynamically. If a controller sets the ParticleTexture field, the emitter's old texture reference is overwritten by the one provided by the controller (this is done in ParticleEffect.Trigger).
I created an abstract dynamic-texture-creation controller which can be used as a base class for all texture-changing controllers. All derived classes have to provide a DrawTexture method, which gets called periodically (the period, TextureRecalculatePeriod, can be defined in the XML description of the controller).
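The shape of that abstract controller could look something like the sketch below. DrawTexture and TextureRecalculatePeriod are the names from the post; the class name, the Update signature and the timing logic are illustrative assumptions.

```csharp
using Microsoft.Xna.Framework.Graphics;

/// <summary>
/// Hypothetical sketch of the abstract base controller for
/// texture-changing controllers.
/// </summary>
public abstract class DynamicTextureCreationController : Controller
{
    /// <summary>Seconds between two texture recalculations (set via XML).</summary>
    public float TextureRecalculatePeriod;

    private float timeSinceRecalculation;
    private Texture2D texture;

    public override void Update(float deltaSeconds, ref TriggerContext context)
    {
        this.timeSinceRecalculation += deltaSeconds;

        if (this.texture == null || this.timeSinceRecalculation >= this.TextureRecalculatePeriod)
        {
            this.texture = this.DrawTexture();   // implemented by derived classes
            this.timeSinceRecalculation = 0f;
        }

        // Hand the (possibly new) texture to the emitter via the trigger context.
        context.ParticleTexture = this.texture;
    }

    /// <summary>Derived classes render a new particle texture here.</summary>
    protected abstract Texture2D DrawTexture();
}
```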

Bolt generation controller
Bolt generation and rendering is done in the BoltParticleTextureController, which is currently the only class derived from the abstract controller. Right now there are only a few parameters that tweak the look of the particle texture:
  • Two parameters to control the random placement of the start and end point of the bolt
  • GenerationCount: Number of times the line between start and end will be halved
  • BoltCount: Number of bolts to render into the particle texture
In the future I want to add more parameters to customize the created glow textures. Here are two example textures created by the controller:
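The GenerationCount parameter corresponds to the classic midpoint-displacement scheme from the linked blog post: each generation halves every segment and pushes the new midpoint sideways. A minimal sketch of the idea (names are illustrative, not the actual BoltParticleTextureController code):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

public static class BoltGenerator
{
    /// <summary>
    /// Builds a jagged polyline from start to end by repeatedly halving
    /// every segment and offsetting the midpoint perpendicular to it.
    /// </summary>
    public static List<Vector2> GenerateBolt(Vector2 start, Vector2 end,
                                             int generationCount,
                                             float offsetAmount, Random random)
    {
        var points = new List<Vector2> { start, end };

        for (int generation = 0; generation < generationCount; generation++)
        {
            // Iterate backwards so inserting midpoints doesn't shift
            // the segments we still have to visit.
            for (int i = points.Count - 2; i >= 0; i--)
            {
                Vector2 a = points[i];
                Vector2 b = points[i + 1];
                Vector2 mid = (a + b) * 0.5f;

                // Offset perpendicular to the segment direction.
                Vector2 dir = b - a;
                Vector2 perpendicular = Vector2.Normalize(new Vector2(-dir.Y, dir.X));
                mid += perpendicular * ((float)random.NextDouble() * 2f - 1f) * offsetAmount;

                points.Insert(i + 1, mid);
            }

            // Halve the offset so finer segments jitter less.
            offsetAmount *= 0.5f;
        }

        return points;
    }
}
```

With GenerationCount = n, the bolt ends up with 2^n segments; rendering BoltCount such polylines into one render target yields the particle texture.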

In the last texture, three bolts were rendered. But with more than one bolt there are some artifact problems I could not fix yet. Write a comment if you have an idea :-)

Edit: In the last few days I improved the bolt effect a little bit. The new effect can be seen in the video at the top. The old videos are still here:

Red Bolt on Vimeo.


Missile Shield on Vimeo.

Wednesday, November 13, 2013

Game Architecture and Networking, Part 2

In this post I will talk in more detail about my game architecture and how the networking is realized in it. For the basics of my architecture, see these two old blog posts from last year:
Parallel Game Architecture, Part 1
Parallel Game Architecture, Part 2
In short, my architecture consists of six subsystems which all work on their own copy of a subset of the game data. Because they work on copies, they can be updated concurrently. If a subsystem changes its copy of the game data, it has to queue the change in the change manager. After all subsystems have finished updating, the change manager distributes the changes to the other subsystems. Right now my engine has the following six subsystems:
  • Input: responsible for reading input from the gamepad or keyboard
  • Physics: collision detection and physical simulation of the world (uses the Jitter Physics Engine)
  • Graphics and Audio: uses the SunBurn Engine and the Mercury Particle Engine
  • LevelLogic: responsible for checking winning/losing conditions, creating/destroying objects, and so on
  • Networking: communicates with other peers over the network
  • AI: calculates the InputActions for non-human-controlled game entities (allies, enemies, homing missiles, ...)
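The queue-then-distribute cycle described above can be sketched as follows. This is my own minimal reconstruction of the pattern, not the engine's actual ChangeManager API; all type names are assumptions.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

public interface ISubsystem
{
    void ApplyChange(Change change);
}

public struct Change
{
    public ISubsystem Source;  // the subsystem that produced the change
    public int EntityId;
    public object Data;        // e.g. a new position or velocity
}

public sealed class ChangeManager
{
    // Thread-safe queue, because subsystems update concurrently.
    private readonly ConcurrentQueue<Change> pendingChanges = new ConcurrentQueue<Change>();

    /// <summary>Called by a subsystem while it updates its own data copy.</summary>
    public void QueueChange(Change change)
    {
        this.pendingChanges.Enqueue(change);
    }

    /// <summary>Called once per frame, after all subsystems have finished updating.</summary>
    public void DistributeChanges(IEnumerable<ISubsystem> subsystems)
    {
        Change change;
        while (this.pendingChanges.TryDequeue(out change))
        {
            foreach (ISubsystem subsystem in subsystems)
            {
                if (subsystem != change.Source)
                    subsystem.ApplyChange(change);  // update the other copies
            }
        }
    }
}
```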
In the picture below you can see how a player-controllable game entity (for example the player's space ship) is represented in the game architecture. Every box corresponds to an object in the specific subsystem.
Representation of a player controllable entity on the host and a client system. The numbers in the white circles show the latency in number of frames.
In the input system the pressed buttons are read and translated into InputActions (Accelerate, Rotate, Fire Weapon, ...). Those InputActions are interpreted by the physics system, which is the central subsystem of the engine. It informs the other systems about the physical state of all game entities, such as the current position, current acceleration, collisions and so on. This information is needed in the graphics and audio system to render the game entities' models at the right position, play a sound and/or trigger a particle effect.
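The input translation can be pictured like this. The flag values and thresholds are illustrative assumptions; only the InputActions named in the post are real. The point is that the physics system only ever sees abstract actions, never device-specific state.

```csharp
using System;
using Microsoft.Xna.Framework.Input;

[Flags]
public enum InputActions
{
    None        = 0,
    Accelerate  = 1 << 0,
    RotateLeft  = 1 << 1,
    RotateRight = 1 << 2,
    FireWeapon  = 1 << 3,
}

public static class InputTranslator
{
    /// <summary>Maps raw gamepad state to abstract InputActions.</summary>
    public static InputActions ReadActions(GamePadState pad)
    {
        InputActions actions = InputActions.None;

        if (pad.Triggers.Right > 0.1f)            actions |= InputActions.Accelerate;
        if (pad.ThumbSticks.Left.X < -0.2f)       actions |= InputActions.RotateLeft;
        if (pad.ThumbSticks.Left.X >  0.2f)       actions |= InputActions.RotateRight;
        if (pad.Buttons.A == ButtonState.Pressed) actions |= InputActions.FireWeapon;

        return actions;
    }
}
```

The same enum also lets the AI system drive entities by emitting InputActions directly, which is exactly why homing missiles and allies plug into the same pipeline.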

Host Side Networking
The networking system has to send the physical state of the game entities to all clients. For important game entities like player ships, this is done a few times per second; for AI objects much less frequently, because AI objects behave more or less deterministically (as long as the client and host are in sync!). Some important physical state changes trigger an immediate network send. For example, if a game entity is firing, we want to create the fired projectile on the clients as fast as possible, to minimize the divergence between the host world and the client worlds.
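The send policy boils down to a small decision per entity per frame, something like the sketch below. The intervals and names are illustrative assumptions, not the actual values from my code.

```csharp
public class EntitySendPolicy
{
    public float PlayerSendInterval = 0.1f;  // player ships: ~10 sends per second
    public float AiSendInterval     = 1.0f;  // AI objects: rarely, they are (almost) deterministic

    /// <summary>Decides whether an entity's physical state goes out this frame.</summary>
    public bool ShouldSend(bool isPlayerControlled, bool firedThisFrame,
                           float secondsSinceLastSend)
    {
        // Important state changes (e.g. firing) bypass the normal interval
        // to minimize divergence between host and clients.
        if (firedThisFrame)
            return true;

        float interval = isPlayerControlled ? this.PlayerSendInterval : this.AiSendInterval;
        return secondsSinceLastSend >= interval;
    }
}
```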

Client Side Networking
On the client side, the networking system receives the physical state and calculates a prediction based on the latency, the received velocity and the current acceleration. In the next change-distribution phase, the current physical state in the physics system is overwritten by the predicted physical state. Then it takes one more frame until the information is actually visible on the client side. On the Xbox my game runs at 60 frames per second on average. This means that if a player fires her weapon, this is seen at the earliest 66 ms (4 frames) + X ms later on another peer. X depends on the latency L and the point in time at which the network packet is received. If the packet arrives after the networking system has already finished updating, it has to wait for the next frame until it gets processed.
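The prediction itself is plain dead reckoning. A sketch of the idea (my reconstruction, not the actual code): extrapolate the received state over the measured latency using velocity and acceleration.

```csharp
using Microsoft.Xna.Framework;

public static class StatePrediction
{
    /// <summary>
    /// Extrapolates a received physical state over the latency:
    /// p' = p + v*t + 0.5*a*t^2, and v' = v + a*t.
    /// </summary>
    public static void Predict(ref Vector3 position, ref Vector3 velocity,
                               Vector3 acceleration, float latencySeconds)
    {
        position += velocity * latencySeconds
                  + 0.5f * acceleration * latencySeconds * latencySeconds;
        velocity += acceleration * latencySeconds;
    }
}
```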

Conclusion
The latency is a disadvantage of this game architecture. It could be reduced, for example, by merging the input system into the physics system; a separate input system does not improve performance. But merging the two systems would reduce maintainability, and therefore I have no plans to do this in the near future. The separation of concerns between the systems is really a big plus: you could even replace a system completely without affecting the other systems.
One important part of my networking code is not finished yet: the interpolation. Right now the old physical state is simply overwritten by the new one received from the host. If the two states differ significantly, the movement of the game entity looks jerky. To solve this, the difference between the old and the new physical state can be spread evenly and applied over the next few frames.
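Spreading the error over several frames could look roughly like this. This part of the code is explicitly not finished, so the sketch below is purely illustrative; all names are my invention.

```csharp
using Microsoft.Xna.Framework;

/// <summary>
/// Instead of snapping to the received state, apply the positional
/// error in equal steps over the next N frames.
/// </summary>
public class ErrorSmoother
{
    private Vector3 remainingError;
    private int remainingFrames;

    public void OnStateReceived(Vector3 predictedPosition, Vector3 receivedPosition,
                                int smoothingFrames)
    {
        this.remainingError = receivedPosition - predictedPosition;
        this.remainingFrames = smoothingFrames;
    }

    /// <summary>Returns the correction to add to the entity's position this frame.</summary>
    public Vector3 NextCorrection()
    {
        if (this.remainingFrames <= 0)
            return Vector3.Zero;

        Vector3 step = this.remainingError / this.remainingFrames;
        this.remainingError -= step;
        this.remainingFrames--;
        return step;
    }
}
```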

Wednesday, October 16, 2013

Networking, Part 1

I'm back :-) This summer I finished my thesis. Yay! I wanted to continue working on my game while writing my thesis, but sadly a day has only twenty-four hours...

Anyway, now I'm ready to finish my game. Over the last two months I worked mostly on my network code; it's almost finished. Networking is one of the bad parts of XNA. Don't get me wrong, the networking is nicely integrated into the Xbox environment and easy to use. However, you can't use XNA networking on Windows, yet you are forced to use it on the Xbox, since the normal .NET networking classes are not accessible there. Therefore one needs two different networking implementations for Windows and Xbox. This sucks!

To support both implementations in my framework, I wrote a wrapping layer which provides access to the XNA- and Windows-specific implementation on the respective platform. Since I didn't want to reinvent the wheel, the code of the wrapping layer is based on some classes from the IndieFreaks Game Framework. The interface of the wrapping layer is modeled on the XNA networking capabilities; for XNA, the wrapping layer is therefore very thin.

The Windows networking implementation uses the Lidgren networking library. Lidgren is really an excellent library and widely used in .NET-based games (AI War, XenoMiner, the IndieFreaks Game Framework, etc.). Basically, it is a layer over the unreliable, connectionless UDP protocol. Thus, more complex features like session management (player joining, player leaving, starting/ending a session, ...) are not supported, which means I had to implement them on my own. But this wasn't very difficult. Below you can see a sketch of the sequence diagram for a client join. It doesn't get more complicated than that. In XNA this is all hidden behind the provided API.
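The client side of such a join could start out like the snippet below. NetPeerConfiguration, NetClient and the hail-message overload of Connect are real Lidgren types and calls; the message payload and all the concrete values are hypothetical placeholders for my session-management protocol.

```csharp
using Lidgren.Network;

public static class ClientJoin
{
    public static NetClient ConnectToHost()
    {
        string playerName  = "Player1";       // hypothetical payload
        string hostAddress = "127.0.0.1";     // placeholder address
        int hostPort       = 14242;           // placeholder port

        // The app identifier must match on host and client.
        var config = new NetPeerConfiguration("MyGame");
        var client = new NetClient(config);
        client.Start();

        // Lidgren supports a "hail" message sent along with the connect request;
        // the join request rides on it.
        NetOutgoingMessage hail = client.CreateMessage();
        hail.Write("JoinRequest");
        hail.Write(playerName);

        client.Connect(hostAddress, hostPort, hail);

        // The host approves the connection and answers with the session state
        // (players, settings, ...); from then on gameplay messages flow.
        return client;
    }
}
```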
In the next post I will go into more detail about how my networking system works at the game-logic level.