
Oscillating asset values

Started by whaleberg, September 07, 2018, 04:02:32 AM


whaleberg

I noticed something weird with asset depreciation. It seems like the asset value of a vehicle fluctuates over time before stabilizing into a downward slope. I noticed this at first because I was trying to include more information in the vehicle information charts, including the depreciation of the vehicle's value. I assumed at first it was something I had done wrong, but I reproduced it on the published nightly builds on both OSX and PC.

How to reproduce:

1. Start a new game in 1750 (I haven't tested other start years).
2. Immediately create a route with 1 vehicle.
3. Start the vehicle on the route.
4. Turn on fast motion.
5. Wait for a while.

The same problem doesn't happen in standard Simutrans, as far as I can tell.

I put a savegame showing the issue on github here https://github.com/whaleberg/simutrans-extended-bug-repro/blob/master/README.md#asset

The save was created in Simutrans 120.2.1 Extended Nightly development build 14.0 #a3b3c5e

whaleberg

I can try to track down the cause of the bug if you want. I've never used a C++ debugger before, so it would be a useful learning project. I didn't want to do that without checking to make sure it wasn't already known / had a fix in the works / was intended...

jamespetts

Thank you for your report - that is most helpful. I am quite time limited at present, and I am not aware of this as an issue, so please do see if you can track it down: that would be most helpful. It is possible that it might transpire to have an innocent cause, but it seems worth investigating in any event.
Download Simutrans-Extended.

Want to help with development? See here for things to do for coding, and here for information on how to make graphics/objects.

Follow Simutrans-Extended on Facebook.

whaleberg

It seems to be due to a bug in the custom implementation of pow() for float32e8_t.

I ran pow() for both `double` and `float32e8_t` with a base of 0.997 and exponents 0 to 7. You can see the results here:

https://docs.google.com/spreadsheets/d/e/2PACX-1vR2AmJpzO-mwPFQ-_kw6ltNLU0KXxkJn45QBTnMHn3OAYjFSXgnPvKwotS7tg2meYPPs1ivx7CRcTqM/pubhtml

The float32e8_t version seems to be critically broken. Either that, or the float32e8_t.to_double() method, which I'm using to inspect the values, is critically broken...

I haven't looked into what the actual bug is yet, just that the math seems to be incorrect. 

jamespetts

Thank you for looking into that: that is most helpful. The float32e8_t code was written by somebody who has largely retired from Simutrans-Extended development, and unfortunately I do not know how this code works: I suspect that rather advanced mathematical knowledge beyond that which I possess may be needed to understand it.

The problem is equally likely to be in the .to_double() method as in the pow() method itself. Indeed, I suspect (but am not sure) that it is more likely to be in to_double(), as the pow() method is used for the physics, and the physics would behave very strangely if this method were broken, whereas in fact they do not appear to behave very strangely.

If anyone can assist in hunting the fault, that would be much appreciated.

ACarlotti

I have traced the error to exp2(), and will investigate and fix the error later.

I have also found a separate error in pow(base, expo). If base is 0, then pow will always return zero, even if expo is not positive. I believe the correct behaviour is to raise an exception, since 0^0 is undefined and 0^negative is an infinity, which our floating point implementation doesn't handle. Unless anyone sees reason to disagree, I'll change this later as well.
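One way the zero-base edge cases could be handled, as a sketch (checked_pow is an illustrative name, not the actual float32e8_t interface; whether 0^0 should be 1 or an error is debated later in the thread):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical edge-case handling for a software pow(base, expo).
double checked_pow(double base, double expo) {
    if (base == 0.0) {
        if (expo > 0.0)  return 0.0;  // 0^positive is 0
        if (expo == 0.0) return 1.0;  // 0^0: C99's pow chooses 1
        // 0^negative diverges, and the software float has no infinities,
        // so fail loudly instead of silently returning 0.
        assert(false && "pow: zero base, negative exponent");
    }
    return std::pow(base, expo);
}
```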

jamespetts

That is very helpful - thank you very much.

DrSuperGood

Quote
since 0^0 is undefined
Wikipedia disagrees...
https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero

Quote
IEEE floating-point standard
The IEEE 754-2008 floating-point standard is used in the design of most floating-point libraries. It recommends a number of operations for computing a power:
- pow treats 0^0 as 1. If the power is an exact integer, the result is the same as for pown; otherwise the result is as for powr (except for some exceptional cases).
- pown treats 0^0 as 1. The power must be an exact integer. The value is defined for negative bases; e.g., pown(−3, 5) is −243.
- powr treats 0^0 as NaN (Not-a-Number, undefined). The value is also NaN for cases like powr(−3, 2) where the base is less than zero. The value is defined by e^(power × log(base)).
The pow variant is inspired by the pow function from C99, mainly for compatibility. It is useful mostly for languages with a single power function. The pown and powr variants have been introduced due to the conflicting usage of the power functions and the different points of view (as stated above).

Programming languages
The C and C++ standards do not specify the result of 0^0 (a domain error may occur), but as of C99, if the normative annex F is supported, the result is required to be 1 because this value is more useful than NaN for significant applications (for instance, with discrete exponents). The Java standard and the .NET Framework method System.Math.Pow also treat 0^0 as 1.
So 0 to the power of 0 is generally considered 1 or NaN with regard to programming.

ACarlotti

Quote from: DrSuperGood on September 09, 2018, 04:56:48 PM
So 0 to the power of 0 is generally considered 1 or NaN with regard to programming.
NaN was what I meant when I said 'undefined'. However, I think 1 is probably the right definition for 0^0 in this case.

Also, pow is currently broken for negative bases (e.g. pow(-1.0, 1.0) outputs 1.0). log2 does call dbg->error for non-positive numbers, but then quite happily returns zero. So there's more fixing to do.
EDIT: James: it looks like you changed that deliberately in commit 09003f (replacing dbg->fatal with dbg->error). Why?

I would suggest prohibiting negative bases for float^float, and also 0.0^negative, but then adding a signature for float^int which additionally allows raising negative numbers to positive integer powers.
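The proposed split could look something like this sketch (pow_ff and pow_fi are hypothetical names, not the actual float32e8_t interface): float^float rejects negative bases and 0^negative, while a float^int overload permits negative bases via exponentiation by squaring.

```cpp
#include <cassert>
#include <cmath>

// float^float: prohibit negative bases and 0^negative.
double pow_ff(double base, double expo) {
    assert(base > 0.0 || (base == 0.0 && expo >= 0.0));
    return std::pow(base, expo);
}

// float^int: well-defined for negative bases and integer exponents.
double pow_fi(double base, int expo) {
    assert(base != 0.0 || expo >= 0);  // still no 0^negative
    unsigned n = (expo < 0) ? 0u - (unsigned)expo : (unsigned)expo;
    double result = 1.0, b = base;
    for (; n != 0; n >>= 1) {          // exponentiation by squaring
        if (n & 1u) result *= b;
        b *= b;
    }
    return (expo < 0) ? 1.0 / result : result;
}
```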

jamespetts

I must confess that so much time has passed since then that I have no recollection of what prompted this change. Thank you for looking into this, however.

DrSuperGood

It is important to note that some of those functions might be performance sensitive. As such, one may have to make a compromise between producing correct results and producing results fast. Functions that make such a compromise should have it clearly documented, in case people accidentally use them expecting accurate results.

Are floats really so platform specific? I would imagine AMD, Intel and ARM (with FPUs) CPUs could all be made to produce equivalent results using their FPU hardware. How efficiently each performs is another question, but I highly doubt it would be slower than a software floating point solution.

jamespetts

Quote from: DrSuperGood on September 10, 2018, 01:00:33 AM
Are floats really so platform specific? I would imagine AMD, Intel and ARM (with FPUs) CPUs could all be made to produce equivalent results using their FPU hardware. How efficiently each performs is another question, but I highly doubt it would be slower than a software floating point solution.

The problem was not a difference between architectures, but between operating systems: Windows clients would not stay in sync with Linux servers and vice versa when using native floats.

If anyone can find a workable way around this and demonstrate that it works robustly, I should very happily include it, but I am not aware of any possible means of doing this.

ACarlotti

Quote from: DrSuperGood on September 10, 2018, 01:00:33 AM
It is important to note that some of those functions might be performance sensitive.

Noted. I think I've already improved the performance and accuracy of the log2 function, and got sidetracked by checking the accuracy of some hardcoded values. I don't think I'll make anything slower.
The inaccuracies in the exp2 function are on the order of 30%, which is unacceptable. If the error were in the least significant few bits of the mantissa, that would be a different matter.

DrSuperGood

Quote
The problem was not a difference between architectures, but between operating systems: Windows clients would not stay in sync with Linux servers and vice versa when using native floats.

If anyone can find a workable way around this and demonstrate that it works robustly, I should very happily include it, but I am not aware of any possible means of doing this.
This link makes reference to how companies like GPG got around the problem with Supreme Commander. Unfortunately, this is not a viable or good enough solution for Simutrans Extended...

Out of interest, why are floating points needed over fixed point? Are the numbers being calculated really ranging that much in orders of magnitude?
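For illustration, the fixed-point alternative raised here is straightforward when the dynamic range is bounded. A minimal 32.32 sketch (note __int128 is a GCC/Clang extension, used to keep the 64x64-bit product exact):

```cpp
#include <cstdint>

// Minimal 32.32 fixed-point sketch: deterministic across platforms
// because it uses only integer arithmetic.
typedef int64_t fix32_32;  // 32 integer bits, 32 fractional bits

fix32_32 fix_from_int(int32_t v) { return (int64_t)v << 32; }
int32_t  fix_to_int(fix32_32 f)  { return (int32_t)(f >> 32); }

fix32_32 fix_mul(fix32_32 a, fix32_32 b) {
    // 128-bit intermediate avoids overflowing the 64-bit product.
    return (fix32_32)(((__int128)a * b) >> 32);
}
```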

Also, surely there is a ready-made library somewhere that does all this and has been properly tested and optimized? I cannot help but feel that some of this is reinventing the wheel, with it occasionally being square at first.

jamespetts

Floating point calculations are used principally in the physics engine. This was written by Bernd Gabriel (who also wrote the floating point implementation), so I cannot comment in detail on the order of magnitude ranges required, but I assume that these must have been necessary for Bernd to spend such a lot of effort creating the synthetic floating point type.

If there are any robust C++ libraries with compatible licences available, that might be of interest, but if their interfaces are significantly different from Bernd's implementation, trying to make this work with the physics code might be beyond what I can achieve given that I do not know how this code functions internally. Also, there would need to be extensive testing to see whether this is actually any faster.

DrSuperGood

Quote
Also, there would need to be extensive testing to see whether this is actually any faster.
Speed is less of a reason to use such a thing; rather, one would be using it for reliability and accuracy. For example, chances are it would not have broken exp and log functions.

That said, it is now time to question why floating point maths is being used for periodic asset depreciation in the first place. Simply converting an asset value to and from float could change its value for some of the more valuable assets, such as ships or aircraft.

ACarlotti

I have pushed (to my master) fixes or improvements to log2 (improve accuracy/bias and speed), exp2 (improve accuracy/bias, speed and correctness) and to_double (improve accuracy by using library pow).

I believe there may be bugs currently obscured by float32e8 not raising fatal errors, but it is hard to tell so far because, when I change this, the first fatal I encounter is in computing default config settings. This needs further investigation.

I also intend to make minor changes to one or two of the basic operations to reduce bias (which will lead to further improvements in dependent operations).
Note that the accuracy/bias improvements are based on my mathematical understanding and have had little computational verification (I checked that the answers were about right in a number of cases, but didn't do an extensive side-by-side comparison with the old methods).

DrSuperGood

IEEE 754 single : exponent = 8b, fraction = 23b
float32e8_t : exponent = 10b, fraction = 32b
IEEE 754 double : exponent = 11b, fraction = 52b

Is there a reason a non-standard floating point range is needed that sits between the single and double precision standards? I would imagine packing in accordance with IEEE 754 single would yield performance improvements, despite the additional masking operations, due to the reduction in memory footprint from 8 bytes to 4 bytes.

ACarlotti

I suspect only Bernd Gabriel could confirm the reason for those choices - there doesn't seem to have been much discussion about float32e8_t on the forum.

Your table is slightly wrong; I've fixed/expanded it here:
IEEE 754 single : exponent = 8b [-126,127], significand = 24b (including implicit leading one)
float32e8_t : exponent = 11b [-1023,1023], significand = 32b (no subnormals, infinities or NaNs)
IEEE 754 double : exponent = 11b [-1022,1023], significand = 53b (including implicit leading one)

From the corrected table, I would guess that a 32 bit significand was chosen to fit in a uint32, and the 11 bit exponent was chosen to reflect the range of a double. Judging by the (misleading) name, I suspect that the exponent was 8/9 bits at some point in development (before entering version control).

I think attempting to pack it into an IEEE 754 single would make the code more complicated and/or significantly less precise. The version we have now hopefully works well enough (with fixes), and if there are any particular functions whose running time is affecting performance, then we (I?) could see if that can be improved.
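The layout in the corrected table can be illustrated with a toy decoder. The exact scaling convention here is a guess for illustration only, not the real float32e8_t layout: a normalised 32-bit mantissa (top bit set) treated as a value in [0.5, 1), scaled by a signed exponent.

```cpp
#include <cmath>
#include <cstdint>

// Toy decoder for a float32e8_t-style value (layout is assumed).
double decode_sketch(uint32_t mantissa, int exponent, bool negative) {
    // mantissa represents a value in [0.5, 1) scaled by 2^32,
    // so the stored value is mantissa * 2^(exponent - 32).
    double value = std::ldexp((double)mantissa, exponent - 32);
    return negative ? -value : value;
}
```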

As for using IEEE 754 floats reproducibly: having just read a lot about this, my understanding is that it can be done, but it requires careful control of various compiler settings, with a probable reduction in floating-point performance across the code base (I don't know how much floats are used in graphics, so this might not be significant). It also requires carefully resetting floating point settings after any calls to external libraries. So it can be done in theory, but I wouldn't trust Simutrans development to be able to achieve this reliably and consistently (as compared to a more controlled professional development team), and the fact that Standard is not facing this issue (due to currently not using any sort of floats in the game engine) means that they have little incentive to support such efforts.

Maintaining a custom software floating-point library, however, only requires that the library itself be reproducible, reasonably accurate, and fast enough. I have already found some easy improvements to accuracy and speed (so far without adding complexity), and if necessary I could look for (and probably find) some more in any critical areas.

DrSuperGood

Quote
From the corrected table, I would guess that a 32 bit significand was chosen to fit in a uint32, and the 11 bit exponent was chosen to reflect the range of a double. Judging by the (misleading) name, I suspect that the exponent was 8/9 bits at some point in development (before entering version control).
In the code...

static const uint8 bpm = 32; // bits per mantissa
static const uint8 bpe = 10; // bits per exponent

I know that 10 != 11 so something is not quite right there...

ACarlotti

I hadn't noticed those particular lines, but there is a similar inconsistency in the .cc file. Apparently Bernd (or whoever originally wrote this code) didn't include the sign of the exponent when considering how many bits it has (it's an sint11, except that it isn't allowed to represent -1024).

I should probably write some better documentation in comments at some point.

whaleberg

I was busy and didn't get a chance to check back on this for a few days; thanks for looking into it and fixing the error. I have to say, I was pretty confused about why there was a custom floating point implementation, but I can see why it's useful for calculation stability across operating systems.

jamespetts

Thank you very much for this - this is extremely helpful. I have now incorporated this, and the fix should appear in the next nightly build.