Description
As we all know very well, in binary floating-point math, 0.1 + 0.2 != 0.3.
And as users are taught and rightly expect, in the absence of other type context, a float literal defaults to FloatLiteralType (aka Double, unless shadowed).
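For reference, a minimal check of both facts, using nothing beyond the standard library:

let x = 0.1 + 0.2
print(type(of: x))  // Double, the default FloatLiteralType
print(x == 0.3)     // false: 0.1, 0.2, and 0.3 have no exact binary representation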
Therefore, the following code shows the expected behavior:
func sum<T: Numeric>(_ numbers: [T]) -> T {
    return numbers.reduce(0, +)
}
print(sum([0.1, 0.2]) == 0.3)
// Prints "false"
However:
Foundation defines a Decimal type that conforms to ExpressibleByFloatLiteral, in which (rightly) 0.1 + 0.2 == 0.3 as Decimal.
On Apple platforms, Foundation also defines a RunLoop.SchedulerTimeType.Stride type which shockingly also conforms to ExpressibleByFloatLiteral.
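To illustrate the Decimal side, here is a small sketch (it assumes only Foundation's Decimal, as described above):

import Foundation

let a: Decimal = 0.1
let b: Decimal = 0.2
print(a + b == 0.3)
// Prints "true": under the Decimal conformance these literals are represented exactly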
For...reasons, importing Foundation (without ever touching Decimal) breaks user expectations:
import Foundation
func sum<T: Numeric>(_ numbers: [T]) -> T {
    return numbers.reduce(0, +)
}
print(sum([0.1, 0.2]) == 0.3)
// Prints "true" on Linux, does not compile on macOS due to ambiguous overloads
Here (unless I'm mistaken), Foundation isn't getting any special treatment that a third-party library wouldn't get. And a third-party library shouldn't be able to break user code that uses float literals without further type context, or worse yet silently change how it executes.
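For completeness, callers can currently sidestep the problem by pinning the element type themselves; this is a workaround sketch, not behavior users should have to rely on:

import Foundation

func sum<T: Numeric>(_ numbers: [T]) -> T {
    return numbers.reduce(0, +)
}

// Spelling out [Double] removes the literal's type ambiguity, so this
// should compile everywhere and print "false" as expected.
print(sum([0.1, 0.2] as [Double]) == 0.3)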
Reproduction
import Foundation
func sum<T: Numeric>(_ numbers: [T]) -> T {
    return numbers.reduce(0, +)
}
print(sum([0.1, 0.2]) == 0.3)
// Prints "true" on Linux, does not compile on macOS due to ambiguous overloads
Expected behavior
Regardless of what libraries are imported, as long as the user isn't shadowing FloatLiteralType, the expression sum([0.1, 0.2]) == 0.3 should always compile and evaluate to false.
Environment
All versions from at least Swift 4.2 onwards (checked on godbolt.org)
Additional information
Based on the question raised on the Swift Forums in: