Difference between decimal, float and double in .NET?
Accepted Answer
float and double are floating binary point types. In other words, they represent a number like this:
10001.10010110011
The binary number and the location of the binary point are both encoded within the value.
decimal is a floating decimal point type. In other words, it represents a number like this:
12345.65789
Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.
The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
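A quick sketch of both effects (a standard illustration, not part of the original answer):
double dSum = 0.1 + 0.2;             // 0.1 has no exact binary representation, so error creeps in
Console.WriteLine(dSum == 0.3);      // False
Console.WriteLine(0.1m + 0.2m == 0.3m); // True - 0.1 is exactly representable as a decimal
Console.WriteLine(1m / 3 * 3 == 1m); // False - decimal is still floating point, so 1/3 is approximated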
As for what to use when:
- For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
- For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
Popular Answer
Precision is the main difference.
Float   - 7 digits (32 bit)
Double  - 15-16 digits (64 bit)
Decimal - 28-29 significant digits (128 bit)
Decimals have much higher precision and are usually used in financial applications that require a high degree of accuracy. Decimals are much slower (up to 20 times in some tests) than a double/float.
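To get a feel for the speed difference on your own machine, a rough sketch like this (the exact ratio will vary by hardware and runtime; treat the "20x" figure as indicative only):
var sw = System.Diagnostics.Stopwatch.StartNew();
double d = 1.0000001;
for (int i = 0; i < 10_000_000; i++) d *= 1.0000001;   // binary floating point, done in hardware
Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

sw = System.Diagnostics.Stopwatch.StartNew();
decimal m = 1.0000001m;
for (int i = 0; i < 10_000_000; i++) m *= 1.0000001m;  // base-10 arithmetic, done in software
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");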
Decimals and floats/doubles cannot be compared without a cast, whereas floats and doubles can. Decimals also allow the encoding of trailing zeros.
float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);
Result:
float: 0.3333333
double: 0.333333333333333
decimal: 0.3333333333333333333333333333
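A quick sketch of the comparison and trailing-zeros points mentioned above (illustrative only):
double d = 1.0;
decimal m = 1.0m;
// Console.WriteLine(d == m);        // compile error: no implicit conversion between double and decimal
Console.WriteLine((decimal)d == m);  // True - works once you cast explicitly

float f = 1.0f;
Console.WriteLine(f == d);           // compiles fine - float widens to double implicitly

Console.WriteLine(1.0m);             // 1.0  - decimal preserves the scale (trailing zeros) you wrote
Console.WriteLine(1.00m + 0.10m);    // 1.10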
The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:
- A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
- Decimal is much (much) slower than float and double for most operations, primarily because floating point operations are done in binary, whereas Decimal arithmetic is done in base 10 (i.e. floats and doubles are handled by FPU/SIMD hardware such as SSE, whereas decimals are calculated in software).
- Decimal has an unacceptably smaller value range than double, despite the fact that it supports more digits of precision. Therefore, Decimal can't be used to represent many scientific values.
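A small sketch of that last range point, just printing the framework constants:
Console.WriteLine(double.MaxValue);  // roughly 1.8 x 10^308
Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (about 7.9 x 10^28)
Console.WriteLine(double.Epsilon);   // about 4.9 x 10^-324 - far smaller in magnitude than anything decimal can hold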
+---------+----------------+---------+----------+-----------------------------------------------+
| C#      | .NET Framework | Signed? | Bytes    | Possible Values                               |
| Type    | (System) type  |         | Occupied |                                               |
+---------+----------------+---------+----------+-----------------------------------------------+
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                   |
| short   | System.Int16   | Yes     | 2        | -32768 to 32767                               |
| int     | System.Int32   | Yes     | 4        | -2147483648 to 2147483647                     |
| long    | System.Int64   | Yes     | 8        | -9223372036854775808 to 9223372036854775807   |
| byte    | System.Byte    | No      | 1        | 0 to 255                                      |
| ushort  | System.UInt16  | No      | 2        | 0 to 65535                                    |
| uint    | System.UInt32  | No      | 4        | 0 to 4294967295                               |
| ulong   | System.UInt64  | No      | 8        | 0 to 18446744073709551615                     |
| float   | System.Single  | Yes     | 4        | Approximately ±1.5 x 10^-45 to ±3.4 x 10^38   |
|         |                |         |          | with 7 significant figures                    |
| double  | System.Double  | Yes     | 8        | Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 |
|         |                |         |          | with 15 or 16 significant figures             |
| decimal | System.Decimal | Yes     | 16       | Approximately ±1.0 x 10^-28 to ±7.9 x 10^28   |
|         |                |         |          | with 28 or 29 significant figures             |
| char    | System.Char    | N/A     | 2        | Any Unicode character (16 bit)                |
| bool    | System.Boolean | N/A     | 1 / 2    | true or false                                 |
+---------+----------------+---------+----------+-----------------------------------------------+
I won't reiterate the tons of good (and some bad) information already covered in other answers and comments, but I will answer your follow-up question with a tip:
When would someone use one of these?
Use decimal for counted values
Use float/double for measured values
Some examples:
money (do we count money or measure money?)
distance (do we count distance or measure distance? *)
scores (do we count scores or measure scores?)
We always count money and should never measure it. We usually measure distance. We often count scores.
* In some cases, what I would call nominal distance, we may indeed want to 'count' distance. For example, maybe we are dealing with country signs that show distances to cities, and we know that those distances never have more than one decimal digit (xxx.x km).
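As a hedged sketch of how that rule of thumb might look in code (the type and property names here are made up purely for illustration):
public class DeliveryRecord
{
    public decimal AmountCharged { get; set; }        // money: counted, so decimal
    public double MeasuredDistanceKm { get; set; }    // a physical measurement, so double
    public decimal SignpostedDistanceKm { get; set; } // 'nominal' distance (xxx.x km): effectively counted, so decimal
}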
float has about 7 digits of precision
double has about 15 digits of precision
decimal has about 28 digits of precision
If you need better accuracy, use double instead of float. On modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which practically matters only if you have a lot of them.
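To illustrate the space point (a trivial sketch; sizeof for these built-in types does not require an unsafe context):
Console.WriteLine(sizeof(float));  // 4 bytes
Console.WriteLine(sizeof(double)); // 8 bytes
// An array of a million floats therefore takes roughly half the memory of
// the same array of doubles - about the only practical reason to prefer float today.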
I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic
No one has mentioned that, in the default settings, floats (System.Single) and doubles (System.Double) never use overflow checking, while decimal (System.Decimal) always does.
I mean that this:
decimal myNumber = decimal.MaxValue;
myNumber += 1;
throws OverflowException.
But these do not:
float myNumber = float.MaxValue;
myNumber += 1;
&
double myNumber = double.MaxValue;
myNumber += 1;
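Put together as a runnable sketch (the class and method names are mine, just for illustration):
using System;

class OverflowDemo
{
    static void Main()
    {
        try
        {
            decimal myNumber = decimal.MaxValue;
            myNumber += 1;                       // decimal arithmetic is always checked
        }
        catch (OverflowException e)
        {
            Console.WriteLine(e.Message);        // reports the overflow
        }

        float f = float.MaxValue;
        f += 1;                                  // no exception: the +1 is lost in rounding
        Console.WriteLine(f == float.MaxValue);  // True

        double d = double.MaxValue;
        d *= 2;                                  // no exception: the result saturates
        Console.WriteLine(d);                    // positive infinity
    }
}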