
SyncStep Stepping Packets (prevent out of sync from past commands).

Started by DrSuperGood, December 07, 2017, 08:03:25 AM


DrSuperGood

People have complained for years about the nonsense known as going out of sync due to a command arriving for the past. Why do pre-2000 RTS games not have this problem while Simutrans does? What on earth is it even? Is it not about time someone did something about it?

For some reason the Simutrans net code works by loosely synchronizing commands from the server onto a client that is running in parallel. Since going back in time is not possible, the clients are purposely run chronologically behind the server to accommodate the communication latency between server and client. If the client is not sufficiently far behind, the server may send it a command that was due to be executed on a previous sync step frame. That is impossible to honour, so the server almost always has to drop the client for going out of sync.

As a user one would experience this as a convoy starting, or a building being built, a few frames off from when it should have happened on a client that was not far enough behind. In true butterfly-effect fashion this small difference causes that client's state to diverge greatly over time. Any client whose state diverges from the server's is dropped by the server, since it might start to perform actions that no longer make any sense in the server's game.

In a perfect world this synchronization model works flawlessly. There is a constant, reliable delay between server and client, so the client runs far enough behind the server to never see packets scheduled for the past. Even clock drift is corrected thanks to periodic timing packets. However, the world is not perfect...

What happens in reality is that most of the time packets arrive constantly and reliably, but for some reason or another one badly behaved packet sporadically comes in several seconds late. Unless a time check is explicitly expected during this window, the client keeps running, thinking the server simply has no commands to send rather than that the commands are horrendously delayed. The result is that all commands issued by the server during this delay are scheduled to have run in the past on the client, so the client has to drop due to out of sync. The client's player is angry about being dropped, all other clients are angry because he now has to rejoin; everyone is angry this happened.

Of course a solution to this already exists, if one can call it that. Raising the delay between client and server gives more tolerance to command packets arriving late. For example, setting it to 60 seconds of delay would mean the error practically never occurs, as the TCP socket would time out before then. However, this is a usability nightmare because no one likes waiting 60 seconds between pressing a button and the action happening. Players have forever been trying to tune their client delay for both synchronization reliability and a good play experience. Of course that only applies to those lucky enough to know the feature exists; many just had to put up with dropping out of sync every 5-10 seconds.

Instead of loosely synchronizing the clients with the server, one could synchronize them explicitly. Every time the server completes a sync step it sends a step packet to all clients notifying them to advance their own sync step. Since these packets are ordered with respect to other command packets, it is impossible to receive a packet scheduled for a sync step covered by a previously received step packet: a very strong ordering guarantee. The system is also tolerant of sporadic packet delay, because the client cannot advance its sync step beyond the last step packet it received. If packets are delayed, the client waits or pauses until they arrive (TCP reliability) instead of advancing past the point where the server's commands need to be executed. The end result is that it becomes theoretically impossible for this annoying out of sync error to occur anymore.
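The barrier mechanism described above can be sketched roughly like this (the identifiers here are illustrative, not the actual patch code; the real patch works inside the Simutrans karte_t machinery):

```cpp
#include <cstdint>

// Minimal sketch of the step-packet barrier: the client may only advance
// its sync step while it stays below the last step the server announced.
struct client_state_t {
    uint32_t sync_step = 0;          // frame the client has executed up to
    uint32_t sync_step_barrier = 0;  // highest frame a server step packet allows
};

// Called when a step packet arrives from the server. TCP keeps these
// ordered relative to command packets, so the barrier only moves forward.
void on_step_packet(client_state_t &c, uint32_t server_sync_step) {
    if (server_sync_step > c.sync_step_barrier) {
        c.sync_step_barrier = server_sync_step;
    }
}

// True if the client may execute the next sync step; otherwise it must
// wait for more packets instead of running past the server's commands.
bool may_advance(const client_state_t &c) {
    return c.sync_step < c.sync_step_barrier;
}
```

A client that hits the barrier simply idles until the next step packet arrives, which is exactly the "pause instead of desync" behaviour described above.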

The only downside is that the server has to send one packet per sync step per client. This has bandwidth implications, but it should be nowhere near as bad as something like Overwatch, which sends 60 packets per mobile object per second per client or something similarly ludicrous. Hopefully it will be well under 1 kB/s, which is trivial in 2017.

Attached is a hacky patch implementing stepping packets which will hopefully put an end to past-command out of sync errors once and for all. Players and server owners with unreliable internet connections, rejoice! In theory a lot of the previous checks can be removed if this works, cleaning up the code a bit. Network bandwidth could also be optimized by removing some fields from some packets and sending others less frequently. I also have no idea yet whether this actually works for remote connections; I have only tested it on a locally run server with several locally run clients, where it seemed to work.

If this works reliably I would strongly recommend it be incorporated into Simutrans Extended, as at least one person needs it to play on the big test games.

Dwachs

We could use the already existing nwc_check_t command: it would be sent by the server after each sync step. Then in karte_t::process_network_commands we adjust the sync_step_barrier. We should then test this approach on a server to see whether it works under real-world conditions ;)

A note on the patch: NWC_COUNT (network_cmd.h) is used as a marker for the maximal possible packet id, so any new NWC_bla has to be inserted before it.

Another approach would be to ignore desyncs altogether: only desync the client if a command fails on the client but not on the server (and vice versa).
Parsley, sage, rosemary, and maggikraut.

DrSuperGood

QuoteWe could use the already existing nwc_check_t command: It would be sent by the server after each sync-step. Then in karte_t::process_network_commands we adjust the sync_step_barrier.
That packet contains extra fields that are not necessary to establish command order. Hence I thought a new kind of packet would be more appropriate. Literally all it has to do is signal to clients that it is safe to advance up to a certain sync step.

One solution would be to create a combined command packet. This packet holds the data for all tool commands for a given sync step and is sent every sync step. Each time it is received, the client knows which commands to run before or after a sync step, and that it can advance exactly one sync step. Keeping track of the exact sync step number between client and server is no longer needed, as no packets are sent for either the future or the past. It also saves bandwidth by reducing packet header overhead. This is a non-trivial change due to the insanity that is the Simutrans net code, hence I did not go for it.
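The combined-packet idea could look something like this (a hypothetical wire structure, not the actual Simutrans format; tool_command_t and step_bundle_t are names invented for illustration):

```cpp
#include <cstdint>
#include <vector>

// One hypothetical bundle per sync step: delivering it both carries the
// tool commands for that step and licenses exactly one step of advance.
struct tool_command_t {
    uint16_t tool_id;
    uint32_t player;
    // ... tool-specific payload would follow
};

struct step_bundle_t {
    std::vector<tool_command_t> commands;  // may be empty on quiet steps
};

// Client side: apply the bundle, then advance exactly one sync step. No
// explicit step numbers are needed because bundles arrive in order on TCP.
uint32_t apply_bundle(uint32_t sync_step, const step_bundle_t &bundle,
                      void (*execute)(const tool_command_t &)) {
    for (const tool_command_t &cmd : bundle.commands) {
        execute(cmd);
    }
    return sync_step + 1;
}
```

Because an empty bundle must still be sent on quiet steps, this scheme trades the per-command headers of the current code for one fixed packet per step.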

QuoteA note on the patch: NWC_COUNT (network_cmd.h) is used as a marker for the maximal possible packet id, so any new NWC_bla has to be inserted before it.
How is anyone meant to know that? This is why comments are important!

QuoteAnother approach would be to ignore desyncs altogether: only desync the client if a command fails on the client but not on the server (and vice versa).
Or one can synchronize properly like this and never have to deal with that desync at all, like games have managed to do for over 20 years.

Dwachs

I think nwc_step_t only needs to be sent if no nwc_check_t is emitted. The nwc_check_t handling in process_network_commands can set the sync_step_barrier too. In fact, this can be done directly in network_world_command_t::execute on the clients.

Then we can drop (or at least ignore for now) the timing-related settings settings.get_server_frames_ahead() and (uint32)env_t::additional_client_frames_behind, as the timing of the clients is determined by the server.

The downside of this patch is that the client's simulation is then also dependent on network latency. 

Quote from: DrSuperGood on December 07, 2017, 08:59:22 AM
Or one can synchronize properly like this and never have to deal with that desync ever. Like how games have managed to do it for over 20 years.

Do you know about a good reference here? I would love to read this up.

prissi

When I wrote the first network code, I used the approach OpenTTD did. It was rather optimised for small upload, so it would work over an ADSL line.

I am also worried about how smooth such a game would feel. The more players join, the more random the timing will become, especially when people run servers over ADSL with low upload speeds, or when people with varying latency join a game.

I think there has to be a time buffer, like for a video stream.

(On a side note: 20-year-old games used exactly the Simutrans approach, like the LAN TTD mode from 1995. Actually, I do not remember many multiplayer games before 2005 or so; it was text-based MUDs before that, imho.)

jamespetts

This is very interesting - thank you for your work on this. I am certainly interested in incorporating this into Extended when it is ready, as I am aware of a player who has significant trouble connecting due to dropped packets.

How much testing do you anticipate this requiring before it is ready to be incorporated?
Download Simutrans-Extended.

Want to help with development? See here for things to do for coding, and here for information on how to make graphics/objects.

Follow Simutrans-Extended on Facebook.

Dwachs

@prissi: The server advances the simulation without waiting for clients. The timing of one client then depends only on the server and the line from the server to that particular client.

In the patched version we could also maintain a specific lag of a certain number of sync steps between server and clients. Then one packet arriving late, with the next packets soon after, will not make the client hiccup too much.

Edit: Interesting read about the networking technique of 'Age of Empires'
https://www.gamasutra.com/view/feature/131503/1500_archers_on_a_288_network_.php

HyperSim

This patch sounds interesting to me.
I introduced my friends to network Simutrans and they showed an interest in playing.
However, one of them worried about the load on the server machine.

Quote
If this works reliably I would strongly recommend it be incorporated into Simutrans Extended as at least 1 person needs it to play on the big test games.

You mean you wish to integrate this patch into Simutrans Extended first and Simutrans Standard later?

DrSuperGood

QuoteEdit: Interesting read about the networking technique of 'Age of Empires'
That approach solves a different problem, one we really do not care about in Simutrans. In RTS games one wants everyone to experience similar latency for fairness, so that even the host has a 200 ms delay on his commands, the same as clients from another continent. This makes a huge difference, as with low latency one could perform something like the infamous baneling split from StarCraft II more easily than someone with high latency. In Simutrans we do not care if one player can do something fractionally faster than another, since the game is very slow paced and not even that competitive.

It also used UDP, which is very rare these days. As far as I am aware no one uses UDP for games anymore. Blizzard games as far back as Diablo II (and probably earlier) have exclusively used TCP for multiplayer, just like Simutrans does. I would not be surprised if the AoE remakes also changed to TCP. There is little benefit to using UDP for video games, as ultimately one has to implement retransmit mechanics that TCP already does more efficiently.

QuoteI am also worried how smooth such a game would feel. The more players join, the more random the timing will become. Especially when people run servers over ADSL with low upload speeds. Or when people with varying latency join a game.
Does smoothness matter? I think people are far more concerned with staying connected reliably than with the odd stutter in game, especially since this is not an RTS but a much slower paced economic/transport simulator.

QuoteIn the patched version we could also maintain a specific lag of a certain number of sync steps between server and clients. Then one packet arriving late and the next packets soon after will not make the client hickup too much.
It is currently implemented on top of the existing sync model. Buffering and everything still works. The difference is that the client will now pause when it is no longer safe to continue execution, rather than continue and go out of sync. If the connection is reliable people should not notice any difference; if there are sporadic lag bursts the client will stutter to some extent, which is a huge improvement over going out of sync.

QuoteHowever, one of them worried about the load for server machine.
In the case of jamespetts's server, every time a client joins, the server needs to upload a 20+ MB save file to it and then reload that save, taking several minutes. This change might mean the server has to send 1 kB/s per client (not real numbers, only guesses). All it takes is one less disconnect per hour and it easily pays for itself. It also provides a quality of service to players which one could argue is invaluable; we all hate those servers that constantly go out of sync due to unreliable connections. It should also make running multiple servers per system easier, as network resource contention should be less likely to cause latency spikes.

Yes, slow ADSL might be a problem. However such connections are practically unplayable due to out of sync disconnects as is.
QuoteHow much testing do you anticipate this requiring before it is ready to be incorporated?
I am not sure how to even get this stuff tested. I am aware of a few issues which I will fix shortly, updating this post (or making a new one).

EDIT:
This patch should fix many of the problems with the previous one, especially the lack of initialization.

jamespetts

Splendid, thank you for that. Do you think that it might be worthwhile applying this to Extended straight away and using the existing server games to test this, or do you think that more offline testing is needed before taking this step?

DrSuperGood

I will give it another offline test later where I will purposely stall the server for a few seconds to see if it works. If it does I suggest integrating it into extended for more complete testing.

jamespetts

Quote from: DrSuperGood on December 08, 2017, 12:33:50 AM
I will give it another offline test later where I will purposely stall the server for a few seconds to see if it works. If it does I suggest integrating it into extended for more complete testing.

Splendid, thank you very much. Do let me know when you have completed your test.

DrSuperGood

I have completed my tests and everything looks good for a larger scale test. Use r2 version.

jamespetts

I have now incorporated this - thank you very much for your work on this. I should be grateful for any feedback following the next nightly update.

Dwachs

@DrSuperGood imho you could commit your patch. Then it can be tested in multiplayer games with the nightly builds.

One comment: the sync_step_barrier can also be increased for arriving NWC_CHECK packets; then the server only needs to send NWC_STEP if it did not emit an NWC_CHECK (i.e. put the block sending nwc_step_t in an else branch after the block for nwc_check_t).

TurfIt

Was this tested with Nagle on? I suspect a rather poor experience will result... Since OSX can't disable it, and assuming a poor experience does result, nwc_check needs to be yet another option so it can be disabled.

When the client has run ahead, this code continuously spins the interactive loop. It should at least use the 5 ms timeout in the network check to slow things down.

What's the client timing profile during recovery from running ahead? It looks like it will massively overreact and end up way behind before speeding ahead again. Perhaps some speculative advancement of next_step_time?

Receipt of the nwc_step should perform the client timing adjustment for the 'way ahead' clause (more than half the margin gone). That would make it less likely to get so far ahead as to trigger the full pause.

I agree with Dwachs' suggestion for nwc_check to also adjust the barrier, and to not send nwc_step on sync steps where nwc_checks are going out.


Quote from: DrSuperGood on December 07, 2017, 08:59:08 PM
It also used UDP, which is very rare. As far as I am aware no one uses UDP anymore for games. With Blizzard games even Diablo II (and probably earlier) has exclusively used TCP for multiplayer, just like Simutrans does. I would not be surprised if the AoE remakes also changed to TCP. There is little benefit to using UDP for video games as ultimately one has to incorporate retransmit mechanics which TCP does more efficiently.
UDP is very much used. If you're streaming current positions of objects to a client, the client only cares about the last packet received. Retransmitting a 'stale' position report makes no sense, and delaying the processing of already-received newer updates while waiting for the retransmitted one (TCP guarantees in-order delivery) - hello warp/lag.
Arguably nwc_step properly belongs on a UDP connection rather than TCP - only the highest received sync step barrier matters; if an earlier one gets lost, leave it lost...
NOTE: I'm not suggesting actually adding a UDP connection too - TCP for nwc_step should be 'good enough'. This isn't an action game!


Quote from: DrSuperGood on December 07, 2017, 08:03:25 AM
The only down side is that it means the server has to send 1 packet per sync step per client. This has bandwidth implications due to all these packets, however it should be nowhere near as bad as something like Overwatch which sends 60 packets per mobile object per second per client or something ludicrous. Hopefully it will be well under 1kb/sec which is trivial in 2017.
For 16 clients on a server running at 25 fps, I see 172 kbit/s required, at 400 packets per second.
Of course people deciding to run servers at 25 fps is part of the desync problem - the defaults (bad as they are) were set/tested with 10 fps on servers.
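The figures above can be checked with a bit of arithmetic (the 54 bytes per packet is my assumed Ethernet+IP+TCP header overhead for a near-empty step packet; the real payload would add a little on top):

```cpp
// One step packet per sync step per client, so the packet rate is just
// clients * frames_per_second.
int step_packets_per_second(int clients, int frames_per_second) {
    return clients * frames_per_second;
}

// Convert a packet rate and per-packet size into kilobits per second.
int kilobits_per_second(int packets_per_second, int bytes_per_packet) {
    return packets_per_second * bytes_per_packet * 8 / 1000;
}
```

With 16 clients at 25 fps this gives 400 packets per second, and at roughly 54 bytes each that is about 172 kbit/s of header overhead alone, matching the numbers quoted.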


Quote from: Dwachs on December 07, 2017, 08:46:26 AM
We could use the already existing nwc_check_t command: It would be sent by the server after each sync-step. Then in karte_t::process_network_commands we adjust the sync_step_barrier. Then we should test this approach on a server to see whether it works under real world conditions ;)
nwc_check adjusts client timing. You do *not* want to adjust timing every frame. Nausea. Vomiting. Bad things.
More bad defaults: you definitely want adjustments more often than every 10 seconds (or 25 seconds if leaving frames between checks at 256 and setting fps to 10). An nwc_check every second works pretty well in my experience.

IMHO the defaults should change for both clients and servers, since apparently nobody bothers to set themselves up properly and hence annoys everyone with their frequent reconnections. This patch would help with that, but a better experience can still be had.
server_frames_ahead = 4
additional_client_frames_behind = 4
server_frames_between_checks = 25

These should be the bare minimum for frames_per_second = 25. This puts the client 320 ms behind and gives a timing adjustment every second. Frames behind totalling 10 would be better for jittery connections; that gives the same 400 ms that the default 4 frames did at the 10 fps the original network testing was done at. Ideally these settings would be time based rather than frame based, so they would not need adjusting for different fps. Even more ideally they could self-adjust to back the client off if it keeps running ahead.
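The timing arithmetic behind those settings is simple enough to sanity-check (helper names here are mine, not actual Simutrans settings accessors):

```cpp
// Total frames behind times the frame duration gives the client's lag
// behind the server, in milliseconds.
int client_lag_ms(int server_frames_ahead, int additional_client_frames_behind,
                  int frames_per_second) {
    const int frame_ms = 1000 / frames_per_second;
    return (server_frames_ahead + additional_client_frames_behind) * frame_ms;
}

// Frames between checks times the frame duration gives how often the
// client receives a timing adjustment, in milliseconds.
int check_interval_ms(int server_frames_between_checks, int frames_per_second) {
    return server_frames_between_checks * (1000 / frames_per_second);
}
```

At 25 fps, 4 + 4 frames behind is 8 × 40 ms = 320 ms of lag with a check every second, while 10 total frames behind reproduces the 400 ms that 4 frames gave at the original 10 fps.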

DrSuperGood

QuoteWas this tested with Nagle on?  I suspect a rather poor experience will result... Since OSX can't disable, assuming a poor experience does result, nwc_check needs to be yet another option so it can be disabled.
Well, it cannot be any worse than currently, where the client is prone to disconnect with Nagle on due to arbitrary packet delay.
QuoteWhen the client has run ahead, this code results in continuously spinning the interactive loop. It should at least use the 5ms timeout in the network check to slow things up.
It uses the same delay as when paused, I thought? If this is a problem then it is also a problem with pause.
QuoteWhat's the client timing profile during recovery from the run ahead? Looks like it will massively overreact and end up way behind before speeding ahead again. Perhaps some speculative advancement of next_step_time?
Not really a problem, and I am sure few players will care. They mostly want to remain connected instead of going out of sync and having to reconnect repeatedly. Any flaws here existed in the old algorithm too, except now an extra safety net stops the client from advancing further than is safe.
QuoteReceipt of the nwc_step should do the client timing adjustment for the 'way ahead' clause (more than half margin gone). Would make it less likely to get so far ahead as to trigger the full pause.
Probably, however technically one could operate the system without timing at all, since the server acts as a clock source to some extent.

All the timing algorithm has to do is delay the client enough to smooth over any latency variance. I understand the existing one does not do that, but I thought it would still be better than nothing.
QuoteUDP is very much used. If you're streaming current positions of objects to a client, the client only cares about the last packet received. Retransmitting a 'stale' position report makes no sense, and delaying the processing of already received newer updates while waiting for the retrasmitted (TCP gauranteed in order) - hello warp/lag.
No game I know of uses UDP. Even World of Warcraft and Diablo II/III use TCP.
QuoteFor 16 clients on a server running at 25 fps, I see 172 kbit/s required, at 400 packets per second.
Of course people deciding to run servers at 25 fps is part of the desync problem - the defaults (bad as they are) were set/tested with 10 fps on servers.
I have never seen a server with 16 clients connected either.

One could address the bandwidth problem by sacrificing latency. Instead of sending a tick every frame, only send one every 2+ frames. Then some client-side timing algorithm can space out the 2+ tick advancement so that it appears smooth (adjustments like you suggested). I am guessing this is what many RTS games do instead, and why they have fixed latency.

QuoteShould be the bare minimum for frames_per_second = 25. This puts the client 320ms behind and gives a timing adjustment every second. Frames behind totaling 10 would be better for jittery connections... gives the same 400ms that the default 4 frames did for the 10 fps the original network testing was done at. Ideally these settings would be time based rather than frames so you don't need to adjust them for different fps. Even more ideally they could self adjust to back the client off if it keeps running ahead.
Server frames ahead and client frames behind should be somewhat dynamic. If one is connected via a reliable LAN then values of 0 for both might even work. The client can theoretically decide how many frames behind it must be, judging by how many sync steps it is currently behind and the longest recent delay between sync step tick (and equivalent) packets. Basically, if the client keeps pausing while waiting for the host, it adds extra frames behind until the pauses stop; if it is running excessively far behind, it removes extra frames behind to keep latency tight.
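The self-adjusting margin suggested above could be sketched like this (purely illustrative; nothing like this exists in the patch, and the names and thresholds are invented for the example):

```cpp
// Hypothetical adaptive margin: widen the slack when the client keeps
// hitting the barrier and pausing, tighten it when the client is
// consistently running much further behind than the target.
struct adaptive_margin_t {
    int extra_frames_behind = 0;
};

void on_frame(adaptive_margin_t &m, bool hit_barrier_pause,
              int frames_behind_server, int target_frames_behind) {
    if (hit_barrier_pause) {
        ++m.extra_frames_behind;  // too tight: add slack to stop the pauses
    } else if (frames_behind_server >
               target_frames_behind + m.extra_frames_behind) {
        if (m.extra_frames_behind > 0) {
            --m.extra_frames_behind;  // too loose: claw back latency
        }
    }
}
```

A real implementation would want hysteresis and rate limits so the margin does not oscillate, but the feedback direction is the point here.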

TurfIt

Quote from: DrSuperGood on December 10, 2017, 02:29:39 AM
Well, it cannot be any worse than currently, where the client is prone to disconnect with Nagle on due to arbitrary packet delay.
No. The current packet rate is low enough that inter-packet timing is beyond the timeouts; you actually end up with a steady stream. This patch puts things right into the problem zone.


Quote from: DrSuperGood on December 10, 2017, 02:29:39 AM
It uses same delay as when paused I thought? If this is a problem then it is also a problem with pause.
No. There's no problem with pause. There is a problem with this patch: next_step_time...


Quote from: DrSuperGood on December 10, 2017, 02:29:39 AM
Not really a problem and I am sure few players will care. They mostly want to remain connected instead of going out of sync and having to reconnect repeatedly. Any flaws here existed with the old algorithm except now an extra safety net stops the client from advancing more than is safe to do so.
It's highly probable that once a client hits the safety net, it will get one frame behind before advancing into the net again, i.e. it never recovers. Again, next_step_time is not being properly handled.
Simutrans timing is rather complex (overly so IMHO); the control variables all behave completely differently depending on whether the game is paused, fast-forwarding, normal, or in network mode, and the control is performed in many unrelated places. You can't just stop updating the timing state as this patch does.


DrSuperGood

QuoteIt's highly probable that once a client hits the safety net, they will get one frame behind before advancing into the net again. i.e. never recovers. Again, next_step_time is not being properly handled.
I tested by pausing a server with debugger attached and checked if the clients stopped. I then resumed the server from the debugger and all clients went back to normal and remained in sync.

Edit:
Attached is an updated version addressing some of the above. I hope to commit it sometime tomorrow.

I slightly changed the functionality of server_frames_ahead so it now controls how often tick packets are generated by the server. A value of 0 means every sync step frame, a value of 1 every second frame, 2 every third, etc. This seemed reasonable, as ultimately the tick rate determines latency, so it ties in exactly with the functionality of server_frames_ahead. It also solves the problem with Nagle, as setting the value sufficiently high will prevent it from delaying excessively.
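The tick cadence described there amounts to a simple modulo test (my reading of the description, not the actual patch code):

```cpp
// With server_frames_ahead == 0 a tick packet goes out every sync step,
// == 1 every second step, == 2 every third, and so on.
bool should_send_tick(unsigned sync_step, unsigned server_frames_ahead) {
    return sync_step % (server_frames_ahead + 1) == 0;
}
```

The trade-off is direct: each extra frame between ticks cuts the packet rate but adds one frame of worst-case command latency.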

Check packets are also used for timing, and the server will never generate a tick packet in the same frame as one. This slightly saves on bandwidth.

Tick packets are now used for timing as well. This should make frame rates more stable.

I added some arbitrary delay when paused. This might or might not be the right thing to do, but it does not appear to have broken anything beyond giving pause mode a slightly lower frame rate.

TurfIt

Quote from: DrSuperGood on December 10, 2017, 05:21:25 AM
I tested by pausing a server with debugger attached and checked if the clients stopped. I then resumed the server from the debugger and all clients went back to normal and remained in sync.
When you pause a server, it knows it is behind and sends the check packets, which get the clients going. Manipulating the network flows is a much better test; on Windows, play with 'clumsy' for a simple tool.

With r2, I can get it into an unstable, hiccupy state just with some delay jitter.
Your stealth edit adding r3 wasn't seen, so I will need to try with that.

Tick packets for timing sounds bad - you don't want a timing adjustment per frame.
Server frames ahead can never be zero; that would mean the server expects the client to be executing the command this tick. Still haven't looked at r3 though...

Nagle on actually improved the stability! (Just for some corner cases though; it is still better off for those with quality connections.)

DrSuperGood

The tick packets define the maximum frame clients can advance to. They are used for timing much like the check packets currently are, but they are simpler and more regular.

Also, they are no longer sent every frame; client_frames_ahead now controls how often they are sent, as it overlaps with that functionality to some extent. This solves potential bandwidth problems.

TurfIt

Attached is my suggested version of the patch.

Only gross timing adjustments on NWC_STEP. Both gross and fine from NWC_CHECK.
Changed pause timing. r3 broke single player pause fps.
Reverted to NWC_STEP every sync_step. With it every 4, it was far too likely for the client to actually run into it. I don't see bandwidth as a real concern; if a server can't keep up with this stream, it would never transfer the save in a reasonable time anyway...
Renamed step to server_sync_step in "pull out server sync step" since step != sync_step.
Removed apparent stow away addition:

if(  nwc==NULL  &&  !network_check_server_connection()  ) {
	dbg->warning("karte_t::process_network_commands", "lost connection to server");
+	*ms_difference = 0;
+	next_step_time = ms + fix_ratio_frame_time;
	network_disconnect();
	return;

and I stuck in my suggested revision to the default timing settings. It behaves so much better with values integral to frames_per_step, and with the extra 4 frames of margin.

DrSuperGood

QuoteChanged pause timing. r3 broke single player pause fps.
Pause timing illogical... Comments missing lol.
QuoteReverted to NWC_STEP every sync_step. With it every 4, it was far to likely for the client to actually encounter it. I don't see bandwidth as really a concern; If a server can't keep up this stream, it'd never transfer the save in a reasonable time anyway...
Except you complained it was too much per second, hence I added it...

One can counteract running into the sync step wall by adding a client frame behind, or at least that is my theory. One could set the server to 1 or 2 frames behind with the client an extra 2 or 3 frames behind and it would drastically reduce bandwidth requirements.
QuoteRemoved apparent stow away addition:
Timing is a real mess if that is not needed... Since the function adjusts timing in response to some packets, it seemed only logical to set it to a reasonable default value on disconnect.

TurfIt

Quote from: DrSuperGood on December 11, 2017, 05:40:00 AM
Pause timing illogical... Comments missing lol.
FTFY. All the timing spread out all over the place, mixing up frame timing with step timing, is headache inducing.
Although pause is even worse; my patch to fix the fps limit is stalled trying to get pause to hit the target better.


Quote from: DrSuperGood on December 11, 2017, 05:40:00 AM
Except you complained it was too much per second hence I added it...
Did I? I thought I just pointed out what it was...


Quote from: DrSuperGood on December 11, 2017, 05:40:00 AM
One can counteract running into the sync step wall by adding a client frame behind, or at least that is my theory. One could set the server to 1 or 2 frames behind with the client an extra 2 or 3 frames behind and it would drastically reduce bandwidth requirements.
1 or 2 plus 2 or 3 is not enough. Even on a LAN things would jump around too much, never mind trans-Atlantic hops.
Also, things work much better as integrals of the step frequency, since stepping takes so much longer than sync-stepping normally does in network mode. Extended completely dies because of its extra work - it runs several frames super fast, pauses, 5 super fast, pauses, ... terrible smoothness.


Quote from: DrSuperGood on December 11, 2017, 05:40:00 AM
Timing is a real mess if that is not needed... Since the function adjusts timing in response to some packets it seemed only logical to set it to a reasonable default value on disconnect.
Does no harm; it just looked like a stowaway. Not really needed IMO, since it's a one-time event here - disconnect.

DrSuperGood

Since your version had a lot more testing/fine tuning I committed it.

It might be a good idea to do a release later this month as there have been a ton of features/changes made since the last one.