Posted on : 09-12-2010 | By : Chaitanya Munjuluri | In : Game Development, Outsourced Product Development
As part of an initiative to explore technologies and methodologies that could revolutionise video game development, we have been looking at programming languages that help build scalable servers. Multi-threaded C/C++ code, design patterns, libraries, frameworks, and finally the programming languages themselves all fall under this umbrella.
Following my earlier blog post on Erlang (refer: http://blogs.sierraatlantic.com/2010/10/erlang-and-experiments-with-scalable-servers/), we whipped up a quick server to test its scalability. Srini (a fresh graduate from University of Hyd) and Vijayender worked on the server side of the game, while Chaitanya (not me) wrote a client using the Blender API.
It was in many ways an experiment in programmer productivity. Vijayender, with his vast experience, was mostly involved in design, while Srini and Chaitanya were responsible for the actual implementation. Keep in mind that Chaitanya and Srini each have just about two months of experience, and I was already asking them to write multi-threaded code. However, the results definitely surprised me.
We planned on building a very simple server that could cater to multiple incoming connections. The overall idea: one thread runs the networking code, one thread does the event management, and one thread performs collisions and the like. We chose this design because it is a very simple form of threading (the producer-consumer pattern).
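The Erlang code itself isn't reproduced in this post, but the three-thread producer-consumer pipeline described above can be sketched in a few lines. Here is a minimal Python illustration (the message shapes, queue names, and `("move", msg)` event format are all invented for the example, not our actual protocol):

```python
import threading
import queue

incoming = queue.Queue()    # network thread -> event manager
to_physics = queue.Queue()  # event manager -> collision thread

def network_thread(raw_messages):
    # Producer: the real server would read these from sockets;
    # here we just feed a fixed list of messages.
    for msg in raw_messages:
        incoming.put(msg)
    incoming.put(None)  # sentinel: no more input

def event_thread():
    # Consumer of network messages, producer of physics work.
    while True:
        msg = incoming.get()
        if msg is None:
            to_physics.put(None)  # pass the sentinel downstream
            break
        to_physics.put(("move", msg))

def collision_thread(results):
    # Final consumer: stands in for "collisions and the like".
    while True:
        work = to_physics.get()
        if work is None:
            break
        results.append(work)

results = []
threads = [
    threading.Thread(target=network_thread, args=([1, 2, 3],)),
    threading.Thread(target=event_thread),
    threading.Thread(target=collision_thread, args=(results,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

Because each queue has a single producer and a single consumer, no locks beyond the queues themselves are needed, which is exactly what makes this pattern friendly to inexperienced multi-threaded programmers.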
So how long did it take us to finish this application? Two months. Yes, that’s right. Just two months with a brand-new pair of hands who had no experience writing multi-threaded code. The number of bugs reported? Fewer than 5, and all of them were deviations from the specification. None were technology-related bugs.
The following graph depicts the throughput. The X-axis shows the number of clients, while the Y-axis shows the average time a single update took (in milliseconds).
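For readers who want to reproduce the Y-axis metric, here is a rough sketch of how an average per-update time could be measured. `run_update` is a hypothetical stand-in for one server tick over all clients, not our actual server code:

```python
import time

def run_update(clients):
    # Stand-in workload: one tick of per-client computation.
    total = 0
    for c in range(clients):
        total += c * c
    return total

def average_update_ms(clients, samples=50):
    # Average the cost of `samples` updates to smooth out noise.
    start = time.perf_counter()
    for _ in range(samples):
        run_update(clients)
    elapsed = time.perf_counter() - start
    return (elapsed / samples) * 1000.0  # milliseconds per update

ms = average_update_ms(800)
print(f"{ms:.3f} ms per update for 800 clients")
```

Averaging over many samples matters here: a single update is short enough that timer resolution and scheduler jitter would otherwise dominate the measurement.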
The point to notice here is not how long each update took. It is how the server scales automatically when we throw more cores at it. Performance is bound to improve further if we move all of the collision and mathematical calculations to a C/C++ interop (maybe even CUDA/OpenCL… hmmm).
Notice how, with 800 clients on an 8-core processor, the server consistently delivers more than 30 fps (i.e. each update completes in under about 33 ms). Every time we anticipated an increased load on the server, all we had to do was toss more cores at it, and performance improved pretty much linearly.
Having worked extensively on multi-threaded code, I have come to believe that the future lies in languages designed with threading in mind. Multi-core architectures are not going away anytime soon, which is exactly why writing code that scales well is of extreme importance. I believe it is better to toss more hardware at a problem than to toss more programmers at it, and Erlang seems to be a good choice for that.
Here is a list of things that are left:
- Mathematics interop in C/C++
- Checking the numbers with the HiPE-enabled VM
- Checking the numbers on multiple operating systems
- Hopefully trying it out on the cloud
- Persistent store (database integration)
Here is the current setup:
- Windows 7
- Intel i7 CPU (8 hardware threads)
- 6 GB RAM (although we were never limited by the RAM)
- Erlang R14B 5.8.11