Started by jamespetts, July 22, 2020, 12:34:08 PM
Quote
Can I ask people to elaborate on the nature of the performance issues that people are experiencing?
Quote from: jamespetts on November 18, 2020, 05:32:07 PM
I note that some people have reported performance problems on the Bridgewater-Brunel server. I have considered whether excessive population growth is the issue, and collected the following data.
In 1777, the population of the Bridgewater-Brunel region was recorded as 1,629,866. In 1856, the population was 2,932,966. In 1943, the population was 4,062,888.
Between 1777 and 1856, the population therefore grew by ~80%. Between 1856 and 1943, it grew by a further ~40%. In 1943, the population was ~2.5 times what it was in 1777. However, according to this website, this level of population growth is actually less than the historical population growth of England during the same period; so curtailing growth is not the way forward.
Quote from: Freahk on November 19, 2020, 12:54:13 AM
In principle yes, but I suspect I confused it with the private car route lookup. The precalculated journey times are not stored in a hashtable but in a matrix (2D array), which is very fast and efficient.
Quote
There are still some options to decrease the number of lookups and thus increase performance, but these are not straightforward. Here is the draft of an idea I had posted when discussing poor passenger success rates: define a heuristic that systematically underestimates journey times and use it to filter out destination candidates.
Concretely:
- In the path explorer run, extract the best-case (Manhattan) distances that can be reached from a specific town within a small set of times, using the origin-destination time matrix. For example, store the best-case distances that can be reached in 8, 16, 32, 64, 128, 256, 512 and 1024 minutes.
- Store that information somewhere (a few bytes per town really don't hurt anyone).
- When a passenger makes a journey attempt, randomly select destination candidates.
- Select the best-case distance of the smallest journey time greater than the time budget.
- Compare that distance against the (Manhattan) distance from origin to destination. If the best-case distance is smaller than our actual distance, we can immediately reject that destination; otherwise we will have to look up all origin-destination pairs as-is.
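The rejection test described above could be sketched roughly as follows. This is only an illustration of the idea, not Simutrans-Extended code; all names (`town_reach_t`, `can_reject`, the bucket values) are assumptions chosen to mirror the post.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Hypothetical sketch of the proposed heuristic. Time buckets double
// from 8 to 1024 minutes, as suggested in the post.
static const uint32_t time_buckets[8] = {8, 16, 32, 64, 128, 256, 512, 1024};

struct town_reach_t {
    // best_case_distance[i] = greatest Manhattan distance reachable from
    // this town within time_buckets[i] minutes, filled by the path explorer.
    uint32_t best_case_distance[8];
};

static uint32_t manhattan(int x1, int y1, int x2, int y2) {
    return (uint32_t)(abs(x1 - x2) + abs(y1 - y2));
}

// Returns true if the destination can be rejected without looking up the
// full origin-destination journey-time matrix.
static bool can_reject(const town_reach_t &origin, uint32_t time_budget,
                       int ox, int oy, int dx, int dy) {
    // Select the smallest bucket whose time is >= the passenger's budget
    // (budgets above the last bucket fall back to the 1024-minute entry).
    int i = 0;
    while (i < 7 && time_buckets[i] < time_budget) {
        ++i;
    }
    // If even the best case cannot cover the distance, reject immediately.
    return origin.best_case_distance[i] < manhattan(ox, oy, dx, dy);
}
```

Because the bucket chosen is at least as generous as the budget, the filter only ever underestimates journey times, so it can never reject a destination that was actually reachable.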
Quote
Sorry about the confusion in the very last section. By "these structures" I was not referring to the hashtable implementation, but to the way routes from one stop to all stops are stored, which is the reason why the memory consumption of the routing data grows quadratically with the number of stops.
In principle, it is not necessary to store all origin-destination pairs. If we cut the world into some kind of areas, only the pairs within a single area as well as the pairs between all hubs need to be stored explicitly, where a hub is any stop that has at least one service (including walking) which leaves the area.
That way, the memory consumption could be drastically reduced at the cost of a two-level lookup. This won't increase the performance, though.
How exactly those areas are defined is a different matter. Choosing these well will lead to better results, but even a rather simple approach of fixed rectangles, say 1/10 of the map's size, should already do a good job.
Quote from: jamespetts on November 21, 2020, 12:33:58 PM
This is an interesting idea, but the heuristic implicitly assumes a maximum speed, and it is not clear how one would calculate this.
Quote from: jamespetts on November 21, 2020, 12:33:58 PM
If this would not improve memory bandwidth, it is unlikely ultimately to be worthwhile (unless this would reduce loading/saving times greatly, perhaps?).
Quote from: jamespetts on November 21, 2020, 05:47:16 PM
Also, if towns are used as areas, I am not clear on how journeys starting or ending outside a town would work with this system. Can you assist with this?
Quote from: Freahk on November 21, 2020, 10:26:19 PM
Specifically, computing a best-case distance for a specific area means:
1. Determine all stops that could serve as the origin stop for a dwelling in the origin town, that is, any stop within the city borders plus stop coverage. I'll call these "area stops".
2. Check the journey times from those stops to all destinations and update the current area's best-case data accordingly.
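The two steps above could be sketched as a single update pass. Again, this is only an illustrative sketch: `update_best_case`, `stop_t`, and the bucket values are assumed names, and the journey-time matrix is taken as already computed by the path explorer.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

static const uint32_t buckets[8] = {8, 16, 32, 64, 128, 256, 512, 1024};

struct stop_t { int x, y; };

// Step 2: given the origin town's "area stops" (step 1: all stops within
// city borders plus stop coverage), raise the town's best-case distances
// from the precalculated journey-time matrix.
static void update_best_case(uint32_t best_case[8],
                             const std::vector<int> &area_stops,
                             const std::vector<stop_t> &stops,
                             const std::vector<std::vector<uint32_t>> &times) {
    for (int o : area_stops) {
        for (size_t d = 0; d < stops.size(); ++d) {
            const uint32_t dist = (uint32_t)(abs(stops[o].x - stops[d].x)
                                           + abs(stops[o].y - stops[d].y));
            for (int i = 0; i < 8; ++i) {
                // A bucket covers this pair if the journey fits in its time;
                // keep the largest distance seen for each bucket.
                if (times[o][d] <= buckets[i] && dist > best_case[i]) {
                    best_case[i] = dist;
                }
            }
        }
    }
}
```

Since each town only keeps eight distances, this matches the earlier remark that the stored data is "a few bytes per town".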
Quote from: jamespetts on November 22, 2020, 02:20:58 AM
If I understand correctly, your approach involves, for each town, taking each stop in that town and computing the minimum journey time for a set of journeys each with a minimum distance to any ultimate end point.
Quote
(although what the origin point of that distance would be is not entirely clear)
Quote from: jamespetts on November 22, 2020, 02:20:58 AM
Using speed in this way would prevent the computations from being useless in the way that they would be for global journey times, would be considerably simpler to implement and would reduce the computational overhead of the background calculations necessary for this.
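The simpler speed-based variant discussed here amounts to a one-line check. `assumed_max_speed` is a hypothetical tuning parameter (e.g. tiles per minute), not an existing setting, which reflects the earlier objection that the heuristic implicitly assumes a maximum speed:

```cpp
#include <cassert>
#include <cstdint>

// Minimal sketch: reject a destination candidate if, even travelling at the
// assumed maximum speed for the whole journey, it cannot be reached within
// the passenger's time budget.
static bool can_reject_by_speed(uint32_t manhattan_dist, uint32_t time_budget,
                                uint32_t assumed_max_speed) {
    return manhattan_dist > time_budget * assumed_max_speed;
}
```

The trade-off is exactly as stated in the quote: no per-town tables to maintain, at the cost of a looser bound that filters fewer candidates.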
Quote from: freddyhayward on November 23, 2020, 10:29:26 PM
Although it isn't strictly a problem, public transport passenger numbers on bridgewater-brunel have globally declined ~10% since this change.
Quote from: jamespetts on December 13, 2020, 12:38:28 PM
I have adjusted some settings in the server's simuconf.tab, reducing the framerate from 15 to 10 and also reducing server_frames_between_steps. The latter should hopefully improve smoothness.
Given the recent improvements in performance resulting from the passenger route finding heuristics, I am hoping that the server will be able to keep up with this rate. I should be grateful if people could post their findings when these changes become effective from to-morrow morning.
Quote from: Matthew on December 14, 2020, 08:05:49 PM
My subjective first impression is that the online game is faster to join and has fewer freezes now than last week. It's good to make progress! But last week it sometimes took 45 minutes to begin play, so it's against a low bar. My first join took 'only' 10 minutes, but that is obviously only one data point.
I notice that the server now saves promptly on the hour. That is a hopeful sign that the server is keeping up with real time.
Quote from: jamespetts on December 21, 2020, 02:28:13 PM
Thank you for this. It is odd that the server's run time should affect the joining time. The memory usage is not notably different, and, running an analysis using Visual Studio's performance analysis tools, I was unable to detect any memory leak. For example, yesterday evening at perhaps just before midnight, memory usage was 94.7%; now, it is 92.9% (but someone is in the process of joining; checking again after the person had finished joining, it was 95.3%).
It seems very unlikely that such a relatively small difference in memory consumption (even if we extrapolate over a longer period) would be capable of causing a five-fold increase in server join times, and the memory consumption varies sufficiently over time when people join/leave to make it seem unlikely that there is any significant difference between memory consumption at different times of day in any event.
Quote
It is thus most odd that there should be this difference. May I ask whether anyone else was online at any of these joining times? If someone else is connected and that other person has a slow download speed from this server, that might well delay that person's joining and thus delay everyone; if that person happens to be online more at some times of day than others, this may be causative of the difference that you report.
Quote from: prissi on December 22, 2020, 12:51:46 AM
The memory usage on a non-broken Linux will always hover above 90% once you start swapping. I strongly suspect the memory size exceeds the main memory and the server starts swapping. Use htop or similar and see how much swap usage you have. That will dramatically influence performance.
              total    used    free  shared  buff/cache  available
Mem:           7961    7655     143       1         162         92
Swap:         18047    4005   14042