The names Single and Double come from single-precision and double-precision numbers. Double-precision numbers are stored internally with greater accuracy than single-precision numbers. In scientific calculations, you need all the precision you can get; in those cases, you should use the Double data type.
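To see the difference, you can store a well-known constant in both types and print the results (a minimal sketch; Math.PI is used here only because its digits are easy to check):

Dim s As Single = CSng(Math.PI)   ' roughly 7 significant digits survive
Dim d As Double = Math.PI         ' roughly 15 significant digits survive
Debug.WriteLine(s)                ' 3.141593
Debug.WriteLine(d)                ' 3.14159265358979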
The result of the operation 1/3 is 0.333333. . . (an infinite sequence of the digit 3). You could fill 256MB of RAM with 3s, and the result would still be truncated. Here’s a simple example that demonstrates the effects of truncation:
In a button’s Click event handler, declare two variables as follows:
Dim a As Single, b As Double
Then enter the following statements:
a = 1 / 3
Debug.WriteLine(a)
Run the application, and you should get the following result in the Output window:
0.3333333
There are seven digits to the right of the decimal point. Break the application by pressing Ctrl+Break and append the following lines to the end of the previous code segment:
a = a * 100000
Debug.WriteLine(a)
This time, the following value will be printed in the Output window:
33333.34
The result is not as accurate as you might have expected; it isn’t even rounded properly. If you divide a by 100,000, the result will be
0.3333334
This number is different from the number we started with (0.3333333). The initial value was rounded when we multiplied it by 100,000 and stored it in a Single variable. This is an important point in numeric calculations, and it’s called error propagation. In long sequences of numeric calculations, errors propagate. Even if you can tolerate the error introduced by the Single data type in a single operation, the cumulative errors might be significant.
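To see how quickly such errors can accumulate, repeat a tiny rounding error many times (a minimal sketch; the count of 10,000 is arbitrary, and the exact totals you see may vary):

' Add 0.1 ten thousand times; the exact answer is 1,000.
' 0.1 has no exact binary representation, so every addition
' contributes a tiny error.
Dim sngSum As Single = 0
Dim dblSum As Double = 0
For i As Integer = 1 To 10000
    sngSum += 0.1F    ' single-precision accumulator
    dblSum += 0.1     ' double-precision accumulator
Next
Debug.WriteLine(sngSum)   ' drifts visibly from 1000
Debug.WriteLine(dblSum)   ' much closer to 1000

The Single total is off in its low digits after only a few thousand additions, while the Double total remains accurate to many more digits.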
Let’s perform the same operations with double-precision numbers, this time using the variable b. Add these lines to the button’s Click event handler:
b = 1 / 3
Debug.WriteLine(b)
b = b * 100000
Debug.WriteLine(b)
This time, the following numbers are displayed in the Output window:
0.333333333333333
33333.3333333333
The results produced by the double-precision variables are more accurate.
Why are such errors introduced in our calculations? The reason is that computers store numbers internally using two digits: zero and one. This is very convenient for computers because electronic circuits understand two states: on and off. In fact, all statements are translated into bits (zeros and ones) before the computer can understand and execute them.
The binary numbering system used by computers is not much different from the decimal system we humans use; computers just use fewer digits. We humans use 10 different digits to represent any number, whole or fractional, because we have 10 fingers (in effect, computers count with just two fingers). And just as some numbers can’t be represented precisely in the decimal system (1/3 is one of them), some numbers can’t be represented precisely in the binary system, either.
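The decimal value 0.1 is a classic example: it has no exact binary representation. The default output hides this, but asking for more significant digits exposes the stored value (a minimal sketch; the "G17" format specifier requests 17 significant digits):

Dim d As Double = 0.1
Debug.WriteLine(d)                   ' 0.1 (the error is hidden)
Debug.WriteLine(d.ToString("G17"))   ' 0.10000000000000001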
Let me give you a more illuminating example. Create a single-precision variable, a, and a double-precision variable, b, and assign the same value to them:
Dim a As Single, b As Double
a = 0.03007
b = 0.03007
Then print their difference:
Debug.WriteLine(a - b)
If you execute these lines, the result won’t be zero! It will be −6.03199004634014E-10, a very small negative number that can also be written as −0.000000000603199004634014. Because the two data types store the value with different precision, the two versions of the number don’t quite match. What this means to you is that all variables in a calculation should be of the same type.
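When both variables are declared with the same type, they hold the same approximation of 0.03007, and the difference is exactly zero (a minimal sketch):

Dim x As Double = 0.03007
Dim y As Double = 0.03007
' Both variables hold the identical Double approximation of 0.03007,
' so the subtraction yields exactly zero.
Debug.WriteLine(x - y)   ' 0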
Eventually, computers will understand mathematical notation and will not convert all numeric expressions into values, as they do today. If you multiply the expression 1/3 by 3, the result should be 1. Computers, however, must convert the expression 1/3 into a value before they can multiply it by 3. Because 1/3 can’t be represented precisely, the result of (1/3) × 3 won’t be exactly 1. If the variables a and b are declared as Single or Double, the following statements will print 1:
a = 3
b = 1 / a
Debug.WriteLine(a * b)
If the two variables are declared as Decimal, however, the result will be a number very close to 1, but not exactly 1: it will be 0.9999999999999999999999999999 (there are 28 digits after the decimal point).
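If you want to verify this, here is the equivalent Decimal version (a minimal sketch; the D type character marks Decimal literals):

Dim a As Decimal = 3D
Dim b As Decimal = 1D / a
' Decimal carries 28-29 significant digits, but 1/3 still can't be
' stored exactly, so the product falls just short of 1.
Debug.WriteLine(a * b)    ' 0.9999999999999999999999999999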