Using the test code below we can see the loss of accuracy that occurs for `Math.sqrt(n)` as `n` becomes larger. The first line uses the previously given method for an `Integer` sqrt, compared to `Math.sqrt(n)` as a float and converted to various integer types.

```
e = 19
puts "(10**#{e}).sqrt = #{x = (10.to_big_i ** e).sqrt}, #{x.class}"
puts "(10**#{e}).sqrt = #{x = Math.sqrt(10.to_big_i ** e).to_big_i}, #{x.class}"
puts "(10**#{e}).sqrt = #{x = Math.sqrt(10.to_big_i ** e)}, #{x.class}"
puts "(10**#{e}).sqrt = #{x = Math.sqrt(10.to_big_i ** e).to_i}, #{x.class}"
puts "(10**#{e}).sqrt = #{x = Math.sqrt(10.to_big_i ** e).to_u32}, #{x.class}"
puts "(10**#{e}).sqrt = #{x = Math.sqrt(10.to_big_i ** e).to_i64}, #{x.class}"
puts "(10**#{e}).sqrt = #{x = Math.sqrt(10.to_big_i ** e).to_u64}, #{x.class}"
```

Here the sqrt exceeds the Int32 range:

```
(10**19).sqrt = 3162277660, BigInt
(10**19).sqrt = 3162277660, BigInt
(10**19).sqrt = 3162277660.168379332, BigFloat
(10**19).sqrt = -1132689636, Int32
(10**19).sqrt = 3162277660, UInt32
(10**19).sqrt = 3162277660, Int64
(10**19).sqrt = 3162277660, UInt64
```
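The Int32 value above is ordinary two's-complement wraparound: 3162277660 doesn't fit in 32 bits, and reinterpreting its low 32 bits as a signed integer yields -1132689636. A minimal sketch of that reinterpretation in Ruby (used here only for its built-in big integers; `wrap_signed` is a hypothetical helper, not part of either standard library):

```ruby
# Reinterpret the low `bits` bits of n as a signed two's-complement integer.
# Hypothetical illustration helper, not a stdlib method in Ruby or Crystal.
def wrap_signed(n, bits)
  low = n & ((1 << bits) - 1)                      # keep only the low bits
  low >= (1 << (bits - 1)) ? low - (1 << bits) : low
end

puts wrap_signed(3162277660, 32)  # => -1132689636, the Int32 value above
```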

```
(10**20).sqrt = 10000000000, BigInt
(10**20).sqrt = 10000000000, BigInt
(10**20).sqrt = 10000000000.0, BigFloat
(10**20).sqrt = 1410065408, Int32
(10**20).sqrt = 1410065408, UInt32
(10**20).sqrt = 10000000000, Int64
(10**20).sqrt = 10000000000, UInt64
```

```
(10**30).sqrt = 1000000000000000, BigInt
(10**30).sqrt = 1000000000000000, BigInt
(10**30).sqrt = 1000000000000000.0, BigFloat
(10**30).sqrt = -1530494976, Int32
(10**30).sqrt = 2764472320, UInt32
(10**30).sqrt = 1000000000000000, Int64
(10**30).sqrt = 1000000000000000, UInt64
```

Here the sqrt exceeds the 64-bit integer range:

```
(10**40).sqrt = 100000000000000000000, BigInt
(10**40).sqrt = 100000000000000000000, BigInt
(10**40).sqrt = 100000000000000000000.0, BigFloat
(10**40).sqrt = 1661992960, Int32
(10**40).sqrt = 1661992960, UInt32
(10**40).sqrt = 7766279631452241920, Int64
(10**40).sqrt = 7766279631452241920, UInt64
```
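The Int64/UInt64 values above are wraparound again: the true root 10**20 exceeds the 64-bit range, and reducing it mod 2**64 reproduces exactly the 7766279631452241920 printed. This can be checked directly in Ruby, whose integers are arbitrary precision:

```ruby
# The true sqrt of 10**40 is 10**20, which overflows 64 bits.
# Reducing it mod 2**64 gives the wrapped value printed above.
root = 10**20
puts root % (2**64)   # => 7766279631452241920
```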

This is the accuracy limit for `Math.sqrt(n)` as a float:

```
(10**76).sqrt = 100000000000000000000000000000000000000, BigInt
(10**76).sqrt = 100000000000000000000000000000000000000, BigInt
(10**76).sqrt = 100000000000000000000000000000000000000.0, BigFloat
(10**76).sqrt = 0, Int32
(10**76).sqrt = 0, UInt32
(10**76).sqrt = 687399551400673280, Int64
(10**76).sqrt = 687399551400673280, UInt64
```

The next exponent value, 77, creates errors in `Math.sqrt(n)` for float values:

```
(10**77).sqrt = 316227766016837933199889354443271853371, BigInt
(10**77).sqrt = 316227766016837933199889354443271853371, BigInt
(10**77).sqrt = 316227766016837933200000000000000000000.0, BigFloat
(10**77).sqrt = 1467897147, Int32
(10**77).sqrt = 1467897147, UInt32
(10**77).sqrt = 4387618993402172731, Int64
(10**77).sqrt = 4387618993402172731, UInt64
```
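Notice that the BigFloat value here agrees with the exact root in roughly its first 19 significant digits. That is consistent with a mantissa of about 64 bits (an assumption about the default BigFloat precision in play), since 64 bits carry about 19 decimal digits:

```ruby
# Decimal digits representable in a 64-bit mantissa: floor(64 * log10(2))
puts (64 * Math.log10(2)).floor   # => 19
```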

Now the conversion `Math.sqrt(n).to_big_i` produces an incorrect result too:

```
(10**78).sqrt = 1000000000000000000000000000000000000000, BigInt
(10**78).sqrt = 999999999999999999993126004485993267200, BigInt
(10**78).sqrt = 999999999999999999993000000000000000000.0, BigFloat
(10**78).sqrt = 0, Int32
(10**78).sqrt = 0, UInt32
(10**78).sqrt = 0, Int64
(10**78).sqrt = 0, UInt64
```

Even worse now for `Math.sqrt(n)`:

```
(10**99).sqrt = 31622776601683793319988935444327185337195551393252, BigInt
(10**99).sqrt = 31622776601683793319988935444319318001775823290368, BigInt
(10**99).sqrt = 31622776601683793320000000000000000000000000000000.0, BigFloat
(10**99).sqrt = 0, Int32
(10**99).sqrt = 0, UInt32
(10**99).sqrt = 0, Int64
(10**99).sqrt = 0, UInt64
```

Sometimes `Math.sqrt(n)` will now produce correct float values for even exponents:

```
(10**100).sqrt = 100000000000000000000000000000000000000000000000000, BigInt
(10**100).sqrt = 99999999999999999999999999999986929427981463977984, BigInt
(10**100).sqrt = 100000000000000000000000000000000000000000000000000.0, BigFloat
(10**100).sqrt = 0, Int32
(10**100).sqrt = 0, UInt32
(10**100).sqrt = 0, Int64
(10**100).sqrt = 0, UInt64
```

But it produces inaccurate results for large odd exponents:

```
(10**101).sqrt = 316227766016837933199889354443271853371955513932521, BigInt
(10**101).sqrt = 316227766016837933199889354443266966994053071110144, BigInt
(10**101).sqrt = 316227766016837933200000000000000000000000000000000.0, BigFloat
(10**101).sqrt = 0, Int32
(10**101).sqrt = 0, UInt32
(10**101).sqrt = 0, Int64
(10**101).sqrt = 0, UInt64
```

Thus, `Math.sqrt(n)` becomes inaccurate after a certain size, both as a float and in all integer type conversions, making it unusable for whole classes of numerical algorithms applied to large numbers (cryptography, number theory, combinatorics, prime number theory, factorization, elliptic curves, etc.).

Crystal is excellent for doing numerical processing. This is a standard issue having to do with the limits of how floating point values can represent numbers past a certain point, which is why Ruby (and other languages) have at least an `Integer#sqrt` equivalent.
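For comparison, a minimal sketch in Ruby (not Crystal) of its exact `Integer.sqrt` next to the float path, for the 10**78 case above:

```ruby
n = 10**78

# Integer.sqrt returns the exact integer floor of the square root,
# at any size, without ever passing through a float.
exact = Integer.sqrt(n)
puts exact == 10**39          # => true

# The float path squeezes n through a 53-bit Float64 mantissa first,
# so the converted result drifts from the true root at this size.
approx = Math.sqrt(n).to_i
puts approx == 10**39         # => false
```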