dagid4

Member
I want to start a topic about concurrency in UO emulators :) Hope somebody will join me. Let's start with this quotation about ServUO core usage:

Core usage does not scale linearly with demand. The server relies heavily on a single core with other jobs like save file generation and player vendor search using other cores...

According to this, ServUO does not use concurrency for world processing (players, mobiles, items). Actually, I don't know of any other community UO emulator doing it either. Now, several questions (feel free to add your opinion):

1. How can it be done?
I will use a quotation from Raph Koster (former lead designer of UO) answering a question: What was the technology stack driving the original Ultima Online servers?

Raph Koster said:
...Each shard (the term sharding probably originated with UO) was actually multiple game servers that pointed at one persistence DB, and that did data mirroring across the boundaries. The load balancing within a shard was statically determined by config files, and was simply boxes on the map -- nothing fancy...
You can see that an Origin shard actually used multiple game servers (utilizing concurrency). I have even found an image with the mentioned boxes on the map:
(attached image: uo-areas.png, the map divided into server boundary boxes)
So it was done back then. But...

Raph Koster said:
...Race conditions here led to most of the dupe bugs, by the way...
problems arise. This supports my opinion that it is much easier to do it single-threaded. Even a company like Origin had problems doing concurrency correctly.

2. What is the main advantage?
The first advantage that comes to my mind is the ability to have more players (we're talking about thousands of players). Imagine today's 16-core processors: you could potentially divide the map into 16 boxes and have about 500 to 1,000 players in each box, for a total of more than 10,000 players, assuming they don't all gather in one box.
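To make that concrete, here is a rough sketch (purely hypothetical, not ServUO code; the map size and grid are example values) of computing such a static split into 16 boxes:
C#:
// Hypothetical sketch only (not ServUO code): statically split a map into a
// grid of "boxes", one box per core/worker. Map size and grid are examples.
public static class MapPartitioner
{
    public static (int X, int Y, int Width, int Height)[] Split(
        int mapWidth, int mapHeight, int columns, int rows)
    {
        var boxes = new (int X, int Y, int Width, int Height)[columns * rows];

        int boxWidth = mapWidth / columns;
        int boxHeight = mapHeight / rows;

        for (int row = 0; row < rows; row++)
        {
            for (int col = 0; col < columns; col++)
            {
                // The last column/row absorbs any remainder so the whole map is covered.
                int w = col == columns - 1 ? mapWidth - col * boxWidth : boxWidth;
                int h = row == rows - 1 ? mapHeight - row * boxHeight : boxHeight;

                boxes[row * columns + col] = (col * boxWidth, row * boxHeight, w, h);
            }
        }

        return boxes;
    }
}

// 4 x 4 = 16 boxes for a 16-core machine, each box owned by a single worker:
// var boxes = MapPartitioner.Split(6144, 4096, 4, 4);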

But which shards even have more than 500 players? Maybe Outlands. So this is not interesting if you are targeting smaller shards.

3. How else could it be useful?
Currently in ServUO, most BaseCreatures only process their AI if there is a player nearby. Let's prove it:
C#:
public virtual bool PlayerRangeSensitive { get { return (CurrentWayPoint == null); } }
If there is no waypoint, the default BaseCreature will check for players in range:
C#:
else if (m_Owner.m_Mobile.PlayerRangeSensitive) //have to check this in the timer....
{
    Sector sect = m_Owner.m_Mobile.Map.GetSector(m_Owner.m_Mobile);

    if (!sect.Active)
    {
        m_Owner.Deactivate();
        return;
    }
}
and the AI is deactivated when there are no players nearby, which is determined like this:
C#:
private bool PlayersInRange(Sector sect, int range)
{
    for (int x = sect.X - range; x <= sect.X + range; ++x)
    {
        for (int y = sect.Y - range; y <= sect.Y + range; ++y)
        {
            Sector check = GetRealSector(x, y);

            if (check != m_InvalidSector && check.Players.Count > 0)
            {
                return true;
            }
        }
    }

    return false;
}
within 2 sectors of the creature's sector:
C#:
public const int SectorSize = 16;
public static int SectorActiveRange = 2;
which is in the worst case 48 tiles.

So it basically checks whether a player is within 32 to 48 tiles (SectorActiveRange * SectorSize = 32 tiles measured from the edge of the creature's sector, plus up to 16 tiles depending on where the creature stands inside its own sector); otherwise it deactivates the AI.

Now, here is what comes to my mind: using concurrency, we could remove this limit and process all mobiles all the time, regardless of nearby players.
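As a rough sketch of what I mean (hypothetical code, not how ServUO works; Mobile, Sector and AIThink() are simplified stand-ins), the AI tick could run over all sectors in parallel, with each worker owning only its own sectors' mobiles:
C#:
// Hypothetical sketch only: tick the AI of ALL mobiles every pass, using sectors
// as the unit of parallelism so each worker touches only its own sectors' data.
// Mobile, Sector and AIThink() are simplified stand-ins, not the ServUO types.
using System.Collections.Generic;
using System.Threading.Tasks;

public class Mobile
{
    public virtual void AIThink() { /* pathfinding, target search, etc. */ }
}

public class Sector
{
    public List<Mobile> Mobiles { get; } = new List<Mobile>();
}

public static class ParallelAiTicker
{
    public static void Tick(IReadOnlyList<Sector> sectors)
    {
        // Safe only as long as a mobile never mutates state owned by another
        // sector during its think step; otherwise locking would be required.
        Parallel.ForEach(sectors, sector =>
        {
            foreach (Mobile mobile in sector.Mobiles)
            {
                mobile.AIThink(); // no PlayerRangeSensitive / Sector.Active check
            }
        });
    }
}
The comment in the middle is the catch: this only stays safe as long as a mobile's think step never touches data owned by another sector, which is exactly the boxes-on-the-map problem again.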

Removing the limit could enable sophisticated AI that simulates wolves eating rabbits, a dragon coming to a village, etc. The idea is actually not new; Raph Koster talked about it:

Raph Koster said:
...
At one point, we had effects like beggars who had desires for GOLD, and thus followed rich players in the street. We had bears who would hang out in their cave, but would wander over to hang around near beehives. All that stuff worked. The problem was that the constant radial searches were incredibly expensive, and so was all the pathfinding.

One of the first things to go was the search frequency. Next, the step was taken to put every creature to “sleep” when players were not nearby. This change alone really ruins the large-scale ecological applications completely.
...

So what about using the CPU power gained via concurrency to build sophisticated AI? What do you think: is it worth it?
 
It's hard to say. It would probably be a ton of work and only the larger shards might notice the benefits.

Rather than dividing the world up into segments, why not use different cores for different tasks? One core for each world map. One core per map solely dedicated to sophisticated NPC AI, handling all the NPC interactions that occur when no players are around. You don't even need to have the NPCs physically in the world to model their behavior when players aren't around. It can run in the background as a simulation and the NPCs can be placed into the world when players are near.

(I'm a real novice on multi-threading and concurrency, so apologies in advance if my ideas aren't making any sense from a programming perspective.) :p
 
It's hard to say. It would probably be a ton of work and only the larger shards might notice the benefits.
Definitely, it would be an enormous amount of work. And sure, for smaller shards it doesn't make much sense.

Rather than dividing the world up into segments, why not use different cores for different tasks?
This is actually a good question. The reason for dividing the world is that mobiles and players only interact within a certain range (I think 14 tiles). So if they are in groups separated by more than 14 tiles, those groups can behave completely independently.

For effective concurrency you need independent data to work with. Otherwise, you have to lock (synchronize threads), and that is slow. If you don't lock, race conditions appear.

One core per map is the easiest thing to do, because maps are isolated. I would personally start with this.
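A minimal sketch of the "one loop per map" idea (hypothetical; GameMap and TickOnce() are stand-ins for the real types, and the tick interval is just an example):
C#:
// Hypothetical sketch: run each map's world loop on its own long-running task.
// GameMap and TickOnce() are simplified stand-ins, not ServUO types.
using System.Threading;
using System.Threading.Tasks;

public class GameMap
{
    public string Name { get; }

    public GameMap(string name)
    {
        Name = name;
    }

    public void TickOnce()
    {
        // process this map's mobiles, items, timers...
    }
}

public static class PerMapLoops
{
    public static Task[] Start(GameMap[] maps, CancellationToken token)
    {
        var loops = new Task[maps.Length];

        for (int i = 0; i < maps.Length; i++)
        {
            GameMap map = maps[i];

            loops[i] = Task.Factory.StartNew(() =>
            {
                while (!token.IsCancellationRequested)
                {
                    map.TickOnce();
                    Thread.Sleep(50); // rough tick interval, example value
                }
            }, token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
        }

        return loops;
    }
}

// Usage: one loop per map (Felucca, Trammel, Ilshenar, Malas, Tokuno, ...):
// Task[] loops = PerMapLoops.Start(maps, cts.Token);
Anything truly global (accounts, global chat, cross-map gates or teleports) would still need synchronization between these loops, so even this "easy" case is not completely free.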

But splitting concurrency by task is not the right way to go. I will give two examples (a small sketch of the second one follows the list):
  1. Imagine one thread for player movement and a second thread for creature AI movement. The creature checks the player's position, which is currently on the next tile. Then the creature begins an attack, calculating some complicated attack mechanic. In the meantime, the player runs away, but to the creature he is still standing right there. This is a race condition: you read something to calculate with, but in the meantime it changes without you knowing.
  2. The same thing with money. Imagine one thread for trading and a second thread for stealing. You begin to trade with someone, giving him all your money. In the meantime, a thief comes and steals all the money from your backpack. Now you are giving away money you no longer have. And it will work, because the first thread doesn't know you no longer have the money.
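Here is a tiny hypothetical illustration of the second example (Player, the amounts and the threads are made up; this is not ServUO code). Both threads do "read gold, then write gold" without any synchronization:
C#:
using System;
using System.Threading.Tasks;

public class Player
{
    public int Gold;

    public Player(int gold)
    {
        Gold = gold;
    }
}

public static class RaceDemo
{
    public static void Main()
    {
        var victim = new Player(100);
        var buddy = new Player(0);
        var thief = new Player(0);

        // Thread 1: trade all of the victim's gold to a friend.
        Task trade = Task.Run(() =>
        {
            int amount = victim.Gold;   // reads 100
            // ...some longer trade calculation happens here...
            buddy.Gold += amount;
            victim.Gold -= amount;      // by now the gold may already be gone
        });

        // Thread 2: steal all of the victim's gold at the same time.
        Task steal = Task.Run(() =>
        {
            int amount = victim.Gold;
            thief.Gold += amount;
            victim.Gold -= amount;
        });

        Task.WaitAll(trade, steal);

        // Depending on timing, the victim ends up negative and the same
        // 100 gp exists twice: once at the buddy and once at the thief.
        Console.WriteLine($"victim={victim.Gold} buddy={buddy.Gold} thief={thief.Gold}");
    }
}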
To solve these problems, you need something like what database systems call transactions, or locking in programming. For example, you would say: I'm currently calculating a trade for these two players, so don't mess with them. Basically, nobody can steal the money while it is being traded.
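Continuing the same hypothetical sketch, the usual fix in C# is a lock, so that checking the balance and moving the gold happen as one unit:
C#:
// Hypothetical continuation of the sketch above: guard the gold with a lock,
// so checking the balance and moving the gold become one atomic step.
public static class SafeGold
{
    private static readonly object GoldLock = new object();

    public static bool Transfer(Player from, Player to, int amount)
    {
        lock (GoldLock) // nobody can trade or steal this gold at the same time
        {
            if (from.Gold < amount)
            {
                return false; // the money is already gone, fail cleanly
            }

            from.Gold -= amount;
            to.Gold += amount;
            return true;
        }
    }
}
A single global lock like this is simple but serializes every transfer; finer-grained locks (for example one per player) scale better, but then they have to be taken in a consistent order, or you run into exactly the deadlock problem mentioned below.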

Locking complicates things; you can easily forget to lock something. It also slows the threads down, or can freeze them completely in the case of a deadlock. It is better to avoid locking if you can.
 
While everything needs to be synchronized on the main core loop, there are a few places where threading is applied, not just for timers.

Some features use ThreadQueue, some use TPL, and some use explicit threads: custom house design compression is threaded, as well as background world saves, save backup archiving, delta queue processing, console input handling, etc.

They all end up synchronizing their output with the core thread in one way or another, using expensive locks.

Fortunately .NET 7 locks are extremely fast, so we don't have to worry as much when we get there.

As for whether it makes sense to do this [splitting the world by server lines]... it would be a massive waste of time unless you're looking to serve tens of thousands of players.
 
I understand: saving files, compressing data, or archiving things in the background. These are all tasks suitable for concurrency. Nothing challenging, but it serves the purpose.
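For example, this kind of background job typically does the heavy lifting off-thread and hands only the finished result back to the core loop, e.g. through a thread-safe queue (a generic sketch, not ServUO's actual implementation):
C#:
// Generic sketch of the pattern described above (not ServUO's actual code):
// the expensive work runs on a background task and only the finished result
// is handed back to the core loop through a thread-safe queue.
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class BackgroundWork
{
    private static readonly ConcurrentQueue<byte[]> CompletedJobs = new ConcurrentQueue<byte[]>();

    // Called from anywhere: runs the expensive part off the main thread.
    public static void QueueCompression(byte[] rawData)
    {
        Task.Run(() =>
        {
            byte[] compressed = Compress(rawData); // expensive, off-thread
            CompletedJobs.Enqueue(compressed);     // lock-free handoff
        });
    }

    // Called once per core loop iteration: applies finished results safely.
    public static void ProcessCompletedJobs()
    {
        while (CompletedJobs.TryDequeue(out byte[] compressed))
        {
            // send packet / write file / etc. on the main thread
        }
    }

    private static byte[] Compress(byte[] data)
    {
        // placeholder for the real compression routine
        return data;
    }
}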

Currently I have to agree. It would be a waste of time.

But the future is in parallelism. Today's CPUs run at roughly the same frequency but have more and more cores. ServUO cannot benefit from these cores. How many are effectively used?

Again, I agree that it doesn't make sense to do it now. Currently a few cores are enough. We will see whether sophisticated AI, or even neural-network AI, comes into play in the future. Then these ideas could become useful.
 
Multithreading under the hood is concurrent and handled automatically in .NET, and as for the emulator, .NET handles the cores, no need to upgrade. Right-click ServUO in Task Manager and check 'Set affinity'; it should be using all cores! Unless I am missing something?
 
As long as you don't explicitly use multiple threads, as in make the code multithreaded, it won't be multithreaded.

Not all code can be multithreaded either.
In ServUO, not everything is multithreaded; some parts are.
 
Multithreading under the hood is concurrent and handled automatically in .NET, and as for the emulator, .NET handles the cores, no need to upgrade. Right-click ServUO in Task Manager and check 'Set affinity'; it should be using all cores! Unless I am missing something?
Concurrency is definitely not automatic; .NET only offers you the tools (TPL, threads, etc.). You need to use them to achieve concurrency.
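A trivial sketch: nothing below uses more than one core until you explicitly reach for the TPL:
C#:
// Trivial sketch: the sequential loop uses one core no matter how many the
// machine has; only the explicit Parallel.For call spreads the work out.
using System.Threading.Tasks;

public static class OptInDemo
{
    public static void Run(int[] work)
    {
        // Single-threaded: .NET will not parallelize this for you.
        for (int i = 0; i < work.Length; i++)
        {
            work[i] = Crunch(work[i]);
        }

        // Multi-threaded only because we explicitly used the TPL.
        Parallel.For(0, work.Length, i => work[i] = Crunch(work[i]));
    }

    private static int Crunch(int value) => value * value;
}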

As Voxpire mentioned, reworking ServUO to use concurrency in world processing would require a massive amount of time.

Also, as PyrO mentioned, not all problems can be parallelized. Sometimes the performance is even worse, because of context-switching overhead or locking.
 
Concurrency is definitely not automatic; .NET only offers you the tools (TPL, threads, etc.). You need to use them to achieve concurrency.

As Voxpire mentioned, reworking ServUO to use concurrency in world processing would require a massive amount of time.

Also, as PyrO mentioned, not all problems can be parallelized. Sometimes the performance is even worse, because of context-switching overhead or locking.
I was thinking of something different with how the OP schedules the threads; not enough coffee. You're correct!
 
