# Increasing Floating Point Digits of Precision

I’m trying to get Crystal to generate more accurate digits of the
`Hardy-Littlewood Twin Prime Constant`.

`C2 = 0.660161815846869573927812110014... (30 digits)`
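For reference, this is the Euler product over the odd primes; the code below evaluates it as two running products, `r1` and `r2`:

$$C_2 = \prod_{p \ge 3} \frac{p\,(p-2)}{(p-1)^2} = \prod_{p \ge 3} \left(1 - \frac{1}{(p-1)^2}\right)$$

where `r1` accumulates $p/(p-1)$ and `r2` accumulates $(p-2)/(p-1)$.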

The code below lets me pick various numbers of primes to compute it with,
from 350M up to 1000M. I have 40 GB of RAM, and can do up to 750M primes.

``````crystal
require "big"

# Generate an array of all primes <= val, using a mod-30 wheel sieve.
def soz(val)
  md, rscnt = 30u64, 8
  res  = [7, 11, 13, 17, 19, 23, 29, 31]
  bitn = [0,0,0,0,0,1,0,0,0,2,0,4,0,0,0,8,0,16,0,0,0,32,0,0,0,0,0,64,0,128]

  kmax = (val - 2) // md + 1
  prms = Array(UInt8).new(kmax, 0)
  modk, r, k = 0, -1, 0

  loop do
    if (r += 1) == rscnt; r = 0; modk += md; k += 1 end
    next if prms[k] & (1 << r) != 0
    prm_r = res[r]
    prime = modk + prm_r
    break if prime > Math.isqrt(val)
    res.each do |ri|
      kn, rn = (prm_r * ri - 2).divmod md
      bit_r = bitn[rn]
      kpm = k * (prime + ri) + kn
      while kpm < kmax; prms[kpm] |= bit_r; kpm += prime end
    end
  end

  primes = [2, 3, 5] of UInt64
  prms.each_with_index do |resgroup, k|
    res.each_with_index do |r_i, i|
      if resgroup & (1 << i) == 0
        primes << md * k + r_i
      end
    end
  end
  primes
end

# Select pth number of primes to use

#pth, pthprime = 350_000_000, 7594955549
#pth, pthprime = 400_000_000, 8736028057
pth, pthprime = 500_000_000, 11037271757
#pth, pthprime = 600_000_000, 13359555403
#pth, pthprime = 700_000_000, 15699342107
#pth, pthprime = 750_000_000, 16875026921
#pth, pthprime = 760_000_000, 17110593779
#pth, pthprime = 770_000_000, 17346308407
#pth, pthprime = 780_000_000, 17582163853
#pth, pthprime = 800_000_000, 18054236957
#pth, pthprime = 1_000_000_000, 22801763489
primespth = soz(pthprime)
print primespth.last(5)
puts

r1, r2 = 1.to_big_f, 1.to_big_f

primespth[1..].each_with_index do |p, i|
  p_1 = (p - 1).to_big_f
  r1 *= p / p_1
  r2 *= (p - 2) / p_1
  c2  = r1 * r2
  pth_prime = i + 2
  ith_value = pth // 10
  if pth_prime.divisible_by? ith_value
    puts "\nUpto #{pth_prime}th prime #{p}"
    puts "r1 = #{r1} \nr2 = #{r2} \nC2 = #{c2} \n"
  end
end
``````

Here is the output using 500M primes to compute C2.

``````
➜  time crystal run --release computeC2.cr
[11037271613, 11037271633, 11037271747, 11037271757, 11037271769]

Upto 50000000th prime 982451653
r1 = 18.4390726850470216036
r2 = 0.0358023327502957080337
C2 = 0.660161815876941998773

Upto 100000000th prime 2038074743
r1 = 19.0889045801585151122
r2 = 0.034583535848653902319
C2 = 0.660161815859445675877

Upto 150000000th prime 3121238909
r1 = 19.4684651626370355963
r2 = 0.0339092892191758448472
C2 = 0.660161815853308542037

Upto 200000000th prime 4222234741
r1 = 19.7375265434149726891
r2 = 0.0334470387866361888708
C2 = 0.660161815849861899147

Upto 250000000th prime 5336500537
r1 = 19.946093415804209269
r2 = 0.0330972989088870887354
C2 = 0.660161815847456600165

Upto 300000000th prime 6461335109
r1 = 20.1164207996513208323
r2 = 0.0328170613659564308093
C2 = 0.660161815845559730981

Upto 350000000th prime 7594955549
r1 = 20.2603725855298737404
r2 = 0.0325838931666755768264
C2 = 0.66016181584394804167

Upto 400000000th prime 8736028057
r1 = 20.3850267951978571712
r2 = 0.0323846430262248914225
C2 = 0.660161815842511833202

Upto 450000000th prime 9883692017
r1 = 20.4949476825011607233
r2 = 0.0322109539418266175212
C2 = 0.660161815841191062585

Upto 500000000th prime 11037271757
r1 = 20.593250120372128703
r2 = 0.0320571940796696729072
C2 = 0.66016181583995018321
crystal run --release computeC2.cr  137.73s user 10.36s system 99% cpu 2:28.10 total
``````

According to the docs, `BigFloat` is supposed to give arbitrary
precision, but using 500M primes only gives 10 stable digits, as shown:

C2 = 0.6601618158…

and using more primes just makes the trailing digits drift further.

What do I need to do to get more accurate digits of precision?

https://wiki2.org/en/Polignac’s_conjecture

OK, I figured out how to set the level of precision.
Looking at the `BigFloat` docs for how to set the float precision, it mentions two class methods at the top: `default_precision` and `default_precision=(prec : Int)`.

https://crystal-lang.org/api/1.11.2/BigFloat.html

So I put this at the top of the code to check and change the precision.

``````crystal
puts "default floating point precision is #{BigFloat.default_precision}"
BigFloat.default_precision = 100
puts "setting floating point precision to #{BigFloat.default_precision}"
``````
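One caveat I believe applies here (this is how GMP, which backs `BigFloat`, behaves): each value gets its precision when it is created, so the setting has to come before any `to_big_f` calls. A minimal sketch:

``````crystal
require "big"

a = 1.to_big_f / 3          # created while the default was still 64 bits
BigFloat.default_precision = 256
b = 1.to_big_f / 3          # created at 256 bits
# `a` keeps its original lower precision; only `b` gets the new setting.
``````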

Apparently the default precision is `64`, which I assume means bits?
When I set it to 100 it reported the precision as `128`, and gave 39–40 digits of output.
When I set it to 200 it reported the precision as `256`, and gave 79–80 digits of output.
When I set it to 300 it reported the precision as `320`, and gave 98 digits of output.

So apparently whatever precision you request, it rounds up to the next multiple of 64.
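That matches GMP (which backs `BigFloat`) allocating precision in whole 64-bit limbs. The digit counts also roughly fit the rule of thumb that each bit of binary precision buys about $\log_{10} 2 \approx 0.301$ decimal digits; a quick check:

``````crystal
# Each bit of binary precision is worth log10(2) ≈ 0.30103 decimal digits.
[64, 128, 256, 320].each do |bits|
  puts "#{bits} bits ≈ #{(bits * Math.log10(2)).floor.to_i} decimal digits"
end
``````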

It would be very helpful if checking and setting the floating point precision were clearly documented and explained, with examples. I was able to figure it out because Ruby has something similar, and I’ve seen the same pattern in other languages too. But if you’re a novice programmer, or new to OOP, and don’t know what a class method is, you’d be completely lost.

Here’s the output using 750M primes with a precision value set to 100 (internally 128).

``````
➜  Prime Generator Theory time crystal run --release computeC2.cr
number of generated primes are 750000000
[16875026843, 16875026867, 16875026887, 16875026911, 16875026921]
default floating point precision is 64
setting floating point precision to 128

Upto 75000000th prime 1505776939
r1 = 18.81935209474441794677153992510738654028
r2 = 0.03507888117206093126277283191849016218049
C2 = 0.6601618158667154097528023870013712114389

Upto 150000000th prime 3121238909
r1 = 19.46846516272046571148227039422638736422
r2 = 0.0339092892191758448472155710176500609936
C2 = 0.6601618158561375979430851632326004739631

Upto 225000000th prime 4777890881
r1 = 19.84762981506118330673819567999696050951
r2 = 0.03326149379065179482099753318934548071914
C2 = 0.660161815852812979349657267498792592082

Upto 300000000th prime 6461335109
r1 = 20.11642079982343713864755251167483699107
r2 = 0.0328170613659564308092548537845233481726
C2 = 0.6601618158512080823691030468942695298284

Upto 375000000th prime 8164628191
r1 = 20.32478350391486844333390448908240018824
r2 = 0.03248063211709545440287448830774594064517
C2 = 0.6601618158502691612673004566609085154499

Upto 450000000th prime 9883692017
r1 = 20.49494768276394019372981437330517068452
r2 = 0.03221095394182661752116650389507542904337
C2 = 0.6601618158496554400058731790666931664625

Upto 525000000th prime 11616020609
r1 = 20.63876371195958538892207512845843212102
r2 = 0.03198650001824861371046796014191833435227
C2 = 0.6601618158492241044839302557863022780197

Upto 600000000th prime 13359555403
r1 = 20.7633022890559570691311490469056864798
r2 = 0.03179464454442138339436987353079100165699
C2 = 0.660161815848905007135639083038893474658

Upto 675000000th prime 15112928683
r1 = 20.87312226823996366956691126419373650578
r2 = 0.03162736304444237131550304843167064906968
C2 = 0.6601618158486597524432204436875546573215

Upto 750000000th prime 16875026921
r1 = 20.97133557074293422304603229725881444777
r2 = 0.03147924525939377668717617974298213474027
C2 = 0.6601618158484655942952777920101009891609
crystal run --release computeC2.cr  243.10s user 14.19s system 99% cpu 4:17.66 total
``````

So now I’m able to generate 11 stable digits, with the rest trending in the correct direction.
I have a way to generate as many primes as I want, one at a time, so I don’t run out of memory by generating and storing them all at once. That will let me produce more stable digits, and I may keep going for a while to see how many primes each additional digit of precision costs.
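There is also a clean way to estimate how many primes each digit costs. By the prime number theorem, the tail of the product beyond $x$ satisfies roughly

$$\prod_{p > x}\left(1 - \frac{1}{(p-1)^2}\right) \approx \exp\!\left(-\frac{1}{x \ln x}\right) \approx 1 - \frac{1}{x \ln x},$$

so truncating at $x \approx 1.7 \times 10^{10}$ (750M primes) leaves a relative error on the order of $10^{-12}$, consistent with the 11 stable digits above, and each additional digit costs roughly ten times as many primes.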

If you’re interested in the math, I just released a paper showing the correct twin prime constant value is actually a factor of 2 larger.
