Recent Posts

The tick packets define the maximum frame that clients may advance to. They are used for timing much like the check packets currently are, but they are simpler and more regular.

Also, they are no longer sent per frame; client_frames_ahead now controls how often they are sent, as the two features overlap to some extent. This solves potential bandwidth problems.
I tested by pausing a server with a debugger attached and checking whether the clients stopped. I then resumed the server from the debugger, and all clients went back to normal and remained in sync.
When you pause a server, it knows it's behind and sends the check packets, which get the clients going again. Manipulating the network flows is a much better test; on Windows, play with 'clumsy' for a simple tool.

With r2, I can get it into an unstable, hiccupy state just with some delay jitter.
Your stealth edit adding r3 wasn't seen, so I will need to try with it.

Tick packets for timing sounds bad - you don't want a timing adjustment per frame.
Server frames ahead can never be zero; that would mean the server expects the client to be executing the command this very tick. Still haven't looked at r3 though...

Nagle on actually improved the stability! (Only for some corner cases, though; it is still better off for those with quality connections.)
Bug Reports / Re: Bug in Just_in_Time=2 ?
« Last post by TurfIt on Today at 02:24:16 AM »
The attached patch above should still be valid. I don't like stepping on others' sections of code, so I didn't commit...
This is not part of Standard. In Extended, it pushes the elements of the convoy to the top of the component selection list in the order in which they were selected. In Standard, it just shows/hides elements as appropriate and does not put the elements of the current convoy at the top of the convoy component selection list.
Bug Reports / Re: Bug in Just_in_Time=2 ?
« Last post by DrSuperGood on Today at 02:19:32 AM »
I will look into this soon. I think I might know the cause, as I did not consider farm production changes when caching state. In retrospect, caching state was perhaps a bad idea...

In the server tests, no one noticed this because the game is save/load cycled every time someone joins, which happens quite frequently. The cached state is rebuilt on load, so it gets corrected.
The reason goods are not profitable to transport is that one vegetable farm only makes 1 unit every 4 or 5 months. Convoys sit idle most of the time.
The idea of different qualities of accommodation and handling is interesting, but is likely to increase the complexity of implementing this exponentially. It is also likely to be very difficult to get accurate data on the significance of cooling: just how much longer can milk be transported in a modern refrigerated lorry than it could be in milk churns in an ordinary box wagon on a railway train? What of UHT milk (and at what stage does it become UHT milk)? Does refrigeration have more of an effect for fish and/or meat than for milk? What about vegetables? What about frozen meat - could that be transported for an indefinite time?

As to stock losses, would the player just receive slightly less revenue based on the amount actually delivered, or would the player have to pay compensation? If so, that would have to be calibrated to the actual value of the stock lost, which would be a whole other set of research to do and pakset data to implement.
Splendid, thank you for confirming. Can anyone upload a saved game in which this can reliably be reproduced to assist Ves in his debugging of this? It would be very helpful.
The Bridgewater-Brunel server has been crashing recently as a result of heap corruption. I am having trouble tracking down the ultimate cause, and any thoughts that anyone might have would be appreciated.

The backtrace for one of the crashes on the server was:

Code: [Select]
(gdb) backtrace
#0  __GI___libc_free (mem=0x800030300) at malloc.c:2951
#1  0x00000000004628a9 in strasse_t::~strasse_t() ()
#2  0x000000000046bb26 in objlist_t::~objlist_t() ()
#3  0x00000000004517e0 in boden_t::~boden_t() ()
#4  0x00000000006dcf17 in planquadrat_t::~planquadrat_t() ()
#5  0x0000000000718cc9 in karte_t::destroy() ()
#6  0x000000000072ce15 in karte_t::load(loadsave_t*) ()
#7  0x000000000073067e in karte_t::load(char const*) ()
#8  0x00000000005a385e in nwc_sync_t::do_command(karte_t*) ()
#9  0x0000000000725ae2 in karte_t::do_network_world_command(network_world_command_t*) ()
#10 0x000000000072618b in karte_t::process_network_commands(int*) ()
#11 0x000000000073d06f in karte_t::interactive(unsigned int) ()
#12 0x00000000006cd6c2 in simu_main(int, char**) ()
#13 0x00000000006e1807 in sysmain(int, char**) ()
#14 0x00007ffff6b45830 in __libc_start_main (main=0x40efb0 <main>, argc=21, argv=0x7fffffffe458, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,
    stack_end=0x7fffffffe448) at ../csu/libc-start.c:291
#15 0x000000000040f019 in _start ()

This can seemingly be reproduced (albeit sporadically) on the clients, too, and Dr. Memory gives me the following output, which seems relevant:

Code: [Select]
Error #1: INVALID HEAP ARGUMENT to free 0x08000303
# 0 replace_operator_delete_array                             [d:\drmemory_package\common\alloc_replace.c:2998]
# 1 minivec_tpl<>::~minivec_tpl<>                             [c:\users\james\documents\development\simutrans\simutrans-extended-sources\tpl\minivec_tpl.h:20]
# 2 strasse_t::~strasse_t       
# 3 strasse_t::`scalar deleting destructor'
# 4 objlist_t::~objlist_t                                     [c:\users\james\documents\development\simutrans\simutrans-extended-sources\dataobj\]
# 5 grund_t::~grund_t                                         [c:\users\james\documents\development\simutrans\simutrans-extended-sources\boden\]
# 6 boden_t::~boden_t           
# 7 boden_t::`scalar deleting destructor'
# 8 planquadrat_t::~planquadrat_t                             [c:\users\james\documents\development\simutrans\simutrans-extended-sources\]
# 9 `vector destructor iterator'
#10 planquadrat_t::`vector deleting destructor'
#11 karte_t::destroy                                          [c:\users\james\documents\development\simutrans\simutrans-extended-sources\]
#12 karte_t::~karte_t                                         [c:\users\james\documents\development\simutrans\simutrans-extended-sources\]
#13 karte_t::`scalar deleting destructor'
#14 simu_main                                                 [c:\users\james\documents\development\simutrans\simutrans-extended-sources\]
#15 sysmain                                                   [c:\users\james\documents\development\simutrans\simutrans-extended-sources\]
#16 WinMain                                                   [c:\users\james\documents\development\simutrans\simutrans-extended-sources\]
Note: @0:20:28.141 in thread 14116

I have tried to run Valgrind on the server, but I cannot get this to work.

The issue appears to be some sort of heap corruption relating to roads, although it is not clear how this can have arisen or why it occurs sometimes rather than others. Any thoughts would be much appreciated.

Edit: Adding an explicit destructor to strasse_t makes it a little easier to find the proximate cause of this issue in the debugger: the problem seems to be in the delete [] data line in minivec_tpl. A minivec of connected buildings is part of every road tile in Simutrans-Extended. It is not at all clear why this is crashing, especially when (1) the minivec_tpl is a member of strasse_t and not a pointer to another value on the heap; and (2) the data[] in question consists of pointers to buildings, and this error occurs even when I explicitly clear the minivec in the destructor of strasse_t. Is there some known issue with minivec_tpl?

Edit 2: I tested replacing the minivec_tpl with a vector_tpl, and this makes no difference: the crash still occurs when deleting the vector. Interestingly, the problem seems always to occur at the same location on the map: a stop at 838,1177.

Edit 3: I am having great difficulties reproducing this reliably enough to run any meaningful tests regarding this. I have, however, just pushed a speculative fix to the server: when the server restarts in about 20 minutes from now with the new version, it will be instructive to see whether the crashes still recur.
No, it’s not fixed yet. I would like some savegames from people who still experience this (I think it was AP?) or anyone else. The problem that remained was that the counter still counts wrongly when it starts to count the next good after mail.