The ins and outs of Intel’s OpenSim scaling

By now, you’ve probably read about Intel’s experiments in boosting the performance of open-source Second Life workalike OpenSim to very large numbers of users – or at least very large numbers of users compared to a traditional Second Life simulator.

You may have seen the video; if not, it’s here:

All of this, ultimately, is apparently going to become an open part of the OpenSim codebase.

Unfortunately, the potential utility of this is a bit limited. It works fine for ScienceSim at present (though even there it’s considered more of a demonstration than a practical system), but unless you’re already a well-heeled organisation, your chances of deriving large benefits from it are slim.

The system uses a ‘distributed scene-graph’ technology, a form of what is sometimes referred to as distributed or cluster computing. The distributed scene-graph slices the simulation space up into optimal chunks, based on workload, and parcels the workload out to other servers, while keeping processing in lockstep so that no part of the simulation races ahead or falls behind. Here are Intel’s Dan Lake’s slides on how it works.
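Intel hasn’t published the partitioning algorithm itself, so treat the following as a toy sketch of the core idea in the slides rather than the real thing: slice the space into chunks sized by workload, and hand each chunk to a server. Every name in it (Cell, partition_by_load and so on) is invented for illustration.

    # Toy sketch only -- not Intel's actual distributed scene-graph code.
    from dataclasses import dataclass

    @dataclass
    class Cell:
        """One fixed-size chunk of the simulation space."""
        idx: int
        load: float  # e.g. avatars plus active physics objects in this cell

    def partition_by_load(cells, n_servers):
        """Greedily group contiguous cells so that each server carries
        roughly an equal share of the total workload."""
        target = sum(c.load for c in cells) / n_servers
        groups, current, acc = [], [], 0.0
        for c in cells:
            current.append(c)
            acc += c.load
            if acc >= target and len(groups) < n_servers - 1:
                groups.append(current)
                current, acc = [], 0.0
        groups.append(current)  # whatever is left goes to the last server
        return groups

    # Example: twelve cells with a crowd concentrated in the middle.
    cells = [Cell(i, load) for i, load in
             enumerate([1, 1, 2, 8, 20, 25, 22, 9, 3, 1, 1, 1])]
    for i, group in enumerate(partition_by_load(cells, 4)):
        print(f"server {i}: cells {[c.idx for c in group]},"
              f" load {sum(c.load for c in group)}")

The lockstep part is then just a barrier: no server may begin tick N+1 until every server has finished tick N (and, presumably, exchanged whatever boundary state its neighbours need).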

The first barrier to this solution, then, is hardware. You need a number of capable servers, and because everything runs in lockstep, the simulation can wind up limited by the ability of the slowest server to cope with its share of the load.
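To make that concrete: in a lockstep design the cluster ticks at the speed of its slowest member, not at the average. The numbers below are invented purely for illustration.

    # Hypothetical per-tick compute times (ms) for a four-server cluster.
    frame_times_ms = [18, 19, 21, 45]  # one machine is struggling

    lockstep_ms = max(frame_times_ms)  # the barrier: all wait for the slowest
    print(f"cluster tick: {lockstep_ms} ms"
          f" -> {1000 / lockstep_ms:.0f} ticks/sec")
    # cluster tick: 45 ms -> 22 ticks/sec, even though the other three
    # servers could individually manage around 50.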

On the other hand, the same cluster can deal with a number of simulators concurrently, so long as things don’t get busy enough to overwhelm the hardware.

The biggest issue, really, is bandwidth. The servers need to shovel a quite astonishing amount of data between them, and the cluster as a whole also needs to be able to deliver bandwidth to every client with a viewer.

If each viewer has its bandwidth slider set to no more than 500, then we’re looking at up to 500Kbps of data for one user. Ten users is up to 5Mbps, and the 500 users shown in the video could run up to 250Mbps. Many Second Life users will tell you that 500Kbps for the viewer doesn’t exactly yield a snappy response when things get busy, so the peak bandwidth loads back to individual viewers could potentially be much higher.
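The arithmetic is trivial, but it scales linearly with the user count, which is exactly why it bites. A quick sanity check, taking the slider value as the per-viewer ceiling in kilobits per second as above:

    def peak_downstream_mbps(users, per_viewer_kbps=500):
        """Worst case the cluster must push if every viewer draws its
        full bandwidth-slider allowance."""
        return users * per_viewer_kbps / 1000

    for n in (1, 10, 500):
        print(f"{n:>3} users: up to {peak_downstream_mbps(n):g} Mbps")
    # 1 user: 0.5 Mbps; 10 users: 5 Mbps; 500 users: 250 Mbps

And that’s before any headroom for busy moments, or for the inter-server traffic mentioned above.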

So, what we’ve got here is a great technology, and a solid step forward in virtual-environment simulation, but its practical use is limited to very-high-speed local networks, or to organisations for whom the costs of hardware and high-capacity network connections aren’t much of a consideration.

Comments

  1. Gosh, this is nothing less than impressive…

  2. Cristopher Lefavre says

    Awesome… Great post!

    Just a little thought about the impracticalities: running any OpenSim grid with 500 concurrent logins will require a lot of hardware for the simulators anyway, even if the avatars are distributed across a lot of sims. And on the bandwidth issue: most of that traffic is texture downloads, which aren’t handled by the simulator servers. And again: 500 logins will generate those texture downloads anyway.

  3. w00t! go OpenSim!
