I’m sorry, but your claims are getting more and more uninformed and ridiculous.
Modular arithmetic was 1st semester math in my physics course. That’s far from your surreal concept of “old, unknown math”.
It’s even one of the basics in programming (a certain 256-element cyclic group is something that most people here encountered at some point - albeit under the more common name of “byte”).
And, for some mysterious reason, nearly every programming language includes modular arithmetic as a CORE concept, often even with its own operator symbol.
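To make that concrete, here's a trivial Ruby illustration (my own snippet, not from anyone's code in this thread) of that 256-element cyclic group via the % operator:

# byte-style arithmetic: values wrap around mod 256
p (250 + 10) % 256  # => 4
p (16 * 16) % 256   # => 0
p 1234 % 256        # => 210, the low byte of 1234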
Actually, there are now such things as computer proofs, or close to it.
Those are FORMAL proofs. What you do is just verify the conjecture for finitely many numbers. That is not a proof, because infinitely many numbers are still missing.
My code gives the number of pcp prime pairs, and first|last pair values, for any n.
Okay, what about 2^65? Can your code give me at least one single pair of prime summands to that number? What about 2^100 or 2^1000? Maybe a googolplex? No?
In another place and time people would be really interested in asking: How’d he do that!
This does not seem to be that place and time.
You know, you didn’t even answer my (really not that complicated) question on that topic, but threw in some self-promotion links instead, continued to ridicule scientists and made even more absurd claims.
This is a programming forum. Not a platform for your increasingly excessive self-promotion.
Two days ago (Monday, Jan 13, 2025) I finally received the new laptop
I ordered after Thanksgiving (Nov 2024). I unboxed it|started setting it up yesterday.
I still have a ways to go to completely set it up, but it’s usable enough now.
So while waiting for the devs to update a Crystal release with the index fix,
I ran a Ruby version on it to see how large an input n would run.
These are the results run with the latest Ruby 3.4.1 (Xmas 2024 release).
The largest amount of mem used was ~47.6 GB for n = 1,000,000,000 (1 Trillion).
So I didn’t come close to hitting the limit for how large n could go on it.
I ran with 100M increments up to 1Tr. Here are the Ruby outputs with times.
The number of pcp prime pairs is a function of the number of residues for n.
The number of the residues is a function of the number|size of its prime factors.
Thus the large variability of the pcp is a result of the factorization profiles of n.
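As a quick illustration of that relationship (a sketch of mine, separate from the timing runs), one can count both kinds of residues directly:

# phi(n): total residues of Zn; lhr_count(n): the odd residues
# 3 <= r <= n/2 with gcd(r, n) == 1 that the algorithm works with
def phi(n)       = (1...n).count { |r| r.gcd(n) == 1 }
def lhr_count(n) = 3.step(n / 2, 2).count { |r| r.gcd(n) == 1 }
[38, 40, 46, 50].each { |n| p [n, phi(n), lhr_count(n)] }
# => [38, 18, 8] [40, 16, 7] [46, 22, 10] [50, 20, 9]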
Undoubtedly the Crystal (Rust, C|C++, D, et al) versions will run faster.
But it’s good to know that now, at the start of the 2nd quarter of the 21st Century,
we know how to identify the number of prime pairs for any n, and also print|store
all of them if desired.
Here’s the Ruby code.
def prime_pairs_lohi(n)
  return puts "Input not even n > 2" unless n.even? && n > 2
  return (pp [n, 1]; pp [n/2, n/2]; pp [n/2, n/2]) if n <= 6
  # generate the low-half-residues (lhr) r < n/2
  lhr = 3.step(n/2, 2).select { |r| r.gcd(n) == 1 }
  ndiv2, rhi = n/2, n-2                # lhr:hhr midpoint, max residue limit
  lhr_mults = []                       # for lhr values not part of a pcp
  # store all the powers of the lhr members < n-2
  lhr.each do |r|                      # step thru the lhr members
    r_pwr = r                          # set to first power of r
    break if r > rhi / r_pwr           # exit if r^2 > n-2, as all others are too
    while r < rhi / r_pwr              # while r^e < n-2
      lhr_mults << (r_pwr *= r)        # store its current power of r
    end
  end
  # store all the cross-products of the lhr members < n-2
  lhr_dup = lhr.dup                    # make copy of the lhr members list
  while (r = lhr_dup.shift) && !lhr_dup.empty?  # do mults of 1st list r w/others
    ri_max = rhi / r                   # ri can't multiply r with values > this
    break if lhr_dup[0] > ri_max       # exit if product of consecutive r’s > n-2
    lhr_dup.each do |ri|               # for each residue in reduced list
      break if ri > ri_max             # exit for r if cross-product with ri > n-2
      lhr_mults << r * ri              # store value if < n-2
    end                                # check cross-products of next lhr member
  end
  # convert lhr_mults vals > n/2 to their lhr complements n-r; store them,
  # along with those < n/2, in lhr_del; it now holds the non-pcp lhr vals
  lhr_del = lhr_mults.map { |r_del| r_del > ndiv2 ? n - r_del : r_del }
  pp [n, (lhr -= lhr_del).size]        # show n and pcp prime pairs count
  pp [lhr.first, n-lhr.first]          # show first pcp prime pair of n
  pp [lhr.last, n-lhr.last]            # show last pcp prime pair of n
end

def tm; t = Time.now; yield; Time.now - t end  # to time runtime execution

n = ARGV[0].to_i                       # get n input from terminal
puts tm { prime_pairs_lohi(n) }        # show execution runtime as last output
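(To try it, save the code to a file and pass n on the command line, e.g. ruby prime_pairs_lohi.rb 1000000; the filename is arbitrary.)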
Nice to see that we returned to the relevant discussion of code efficiency.
But it’s good to know that now, at the start of the 2nd quarter of the 21st Century,
we know how to identify the number of prime pairs for any n, and also print|store
all of them if desired.
If that’s your goal, your code is just incredibly inefficient. If I use your first Crystal implementation, the calculation for 10^8 alone takes 22 seconds and about 8.3 GB (with release flag).
However, by using my own quick and dirty algorithm to calculate all prime summand pairs, I get the result for the ten numbers from 10^8 to a billion (10^9 is not a trillion, by the way) in about 43 seconds TOTAL (with release flag).
In Crystal. With at max 400 MB of memory consumption. And it still stores each pair as an array of integer tuples, so there’s still potential for memory reduction. Oh, and most of the time is wasted at startup for some redundant calculations. Generating the pairs for the billion took less than a second.
Here is my code:
require "bit_array"

def calculate_primes_up_to(number : UInt64)
  no_prime = BitArray.new(size: number + 1, initial: false)
  no_prime[0] = true
  no_prime[1] = true
  max_potential_factor = Math.sqrt(number + 1).ceil.to_i
  2u64.upto(max_potential_factor) do |i|
    (i*i).step(to: number, by: i) do |j|
      no_prime[j] = true
    end
  end
  return no_prime
end

# NOTE: the 2 is just a placeholder so the sieve parameter has a default
def get_prime_pairs(number : UInt64, no_prime = calculate_primes_up_to(2))
  if number >= no_prime.size  # sieve too small for this number, rebuild it
    no_prime = calculate_primes_up_to(number)
  end
  prime_pairs = [] of Tuple(UInt64, UInt64)
  2u64.upto((number / 2).to_u64) do |i|
    next if no_prime[i] || no_prime[number - i]
    prime_pairs.push Tuple(UInt64, UInt64).new(i, number - i)
  end
  return prime_pairs
end

sieve = calculate_primes_up_to(1000000000u64)
1.upto(10) { |i| get_prime_pairs(100000000u64 * i, sieve) }
# Print them or something
# For example, the elements 0 and -1 correspond to your results
This algorithm first calculates all prime numbers up to a billion using a simple implementation of the sieve of Eratosthenes and then stores the result in a bit array (since we only need to know if a number is prime or not).
Then the next function just goes through all primes (until half of the value) and pushes a tuple into the resulting array if the difference between the value and the prime is also a prime. That’s just way faster than whatever you’re doing - and it gives the exact same results with more compact code.
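For anyone following along in Ruby rather than Crystal, here is a rough sketch of the same sieve-and-scan idea (my translation, with my own names; not the Crystal code above):

# sieve of Eratosthenes: true marks composites
def sieve_up_to(limit)
  no_prime = Array.new(limit + 1, false)
  no_prime[0] = no_prime[1] = true
  (2..Integer.sqrt(limit)).each do |i|
    next if no_prime[i]
    (i * i).step(limit, i) { |j| no_prime[j] = true }
  end
  no_prime
end

# scan i = 2..n/2 and keep pairs where i and n - i are both prime
def prime_pairs(n, no_prime = sieve_up_to(n))
  (2..n / 2).filter_map { |i| [i, n - i] unless no_prime[i] || no_prime[n - i] }
end

pairs = prime_pairs(100)
p pairs.size, pairs.first, pairs.last  # => 6, [3, 97], [47, 53]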
By the way, your code is not even doing anything else than mine. You just throw out all even numbers, check the odd numbers for any common divisor with your number (and throw these out, since they aren’t viable as prime summand pairs - see Proof 1 below) and then check the remaining numbers for any common multiples, so only primes remain. After that, you check for prime summand pairs.
But like 50% of your code isn’t even necessary. You don’t need to check for common divisors with your initial number n: that part takes around O(n*log(n)) time while only excluding multiples of AT MOST log(n)/log(2) prime factors (see Proof 2 below) - and those would be excluded ANYWAY by the simple check on the differences later on, where their exclusion costs around O(log(n)). So skipping it is faster by a factor of n (which is MASSIVE for numbers like a billion).
(EDIT: Rephrased wording of the sentence above a bit)
Then, your code part that is throwing out all powers of the “lhr” members also doesn’t seem necessary to me. Nothing changes if I just comment it out (except for the performance). Are you sure you don’t do the same thing in the block after it anyway?
Proof 1: Suppose you have a number x=ab, where a is a prime and b is a number, both greater than 2 for simplicity. Then x-a = a(b-1) is a product of two numbers and by definition not prime - and therefore {a, x-a} not a prime summand pair.
Proof 2: The lowest prime is 2, so if we replace all prime factors in a number n (greater than 1) by 2, we always get a lower value 2^m <= n with the same number m of prime factors as n. Therefore, m <= log(n)/log(2) for all n >= 2.
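A quick numerical sanity check of Proof 2 (my own snippet), using Ruby's prime library:

require 'prime'
# Omega(n): prime factors counted with multiplicity; always <= log2(n)
def omega(n) = n.prime_division.sum { |_, e| e }
[48, 210, 1024].each { |n| p [n, omega(n), Math.log2(n).round(2)] }
# => [48, 5, 5.58] [210, 4, 7.71] [1024, 10, 10.0]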
Let n>6 be an even integer, and define
L = { r | 3 ≤ r < n/2, r is odd, gcd(r,n)=1 }
Iteratively remove from L any r (and its complement n-r) that can be written as a product or a power of other elements in L. Denote by L′ the set of elements left after no more removals are possible.
Claim: Every r in L′ is a genuine prime; hence n-r is also prime, so n always decomposes as r + (n-r) with both terms prime.
No proof yet guarantees that all composite numbers are eliminated in every case, leaving this conjecture open.
I don’t understand math at all, so I asked my friend ChatGPT(o1) for help. (I didn’t pay a thousand dollars, but I did pay OpenAI a small consultation fee.) The AI extracted the above conjecture—does this align with your intention?
Not really a fan of reviving the math part here, but maybe this helps in understanding why all this stuff from jzakiya is nothing new and FAR from any proof of the Goldbach conjecture.
Jzakiya’s Conjecture
Let n>6 be an even integer, and define
L = { r | 3 ≤ r < n/2, r is odd, gcd(r,n)=1 }
Iteratively remove from L any r (and its complement n-r) that can be written as a product or a power of other elements in L. Denote by L′ the set of elements left after no more removals are possible.
Claim: Every r in L′ is a genuine prime; hence n-r is also prime, so n always decomposes as r + (n-r) with both terms prime.
No proof yet guarantees that all composite numbers are eliminated in every case, leaving this conjecture open.
If that is a summary of jzakiya’s papers, it at least gives SOME understanding. Sadly ChatGPT isn’t really viable for such proofs, so its result goes a bit in the wrong direction.
If you take the set of residues r with GCD(n, r)=1 and remove all products and powers of its elements (and their complements), it actually DOES eliminate all the composite numbers.
But this can easily be done without the GCD(n, r)=1 check, which simply eats away massive amounts of time. If one has GCD(n, r) != 1, n-r is never a prime and will always be filtered out later (see Proof 1 earlier again), so there’s nothing new.
However, checking each r and also n-r for being prime is still necessary. As an example, if you take n=32 and r=11, with GCD(32, 11)=1, the difference is 21, which is not prime.
Anyway, if this is done, the only numbers that remain are exactly those making up the prime summand pairs. But there’s no guarantee at all that the set of numbers generated this way isn’t empty.
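That claim is easy to spot-check with a few lines of Ruby (my sketch; it brute-forces both variants for the non-trivial pairs with r < n/2):

require 'prime'
# with and without the gcd filter, the surviving r are identical, because
# gcd(r, n) > 1 already forces n - r to be composite (Proof 1)
def pairs_plain(n) = (3...n / 2).select { |r| r.prime? && (n - r).prime? }
def pairs_gcd(n)   = (3...n / 2).select { |r| r.gcd(n) == 1 && r.prime? && (n - r).prime? }
p (8..2000).step(2).all? { |n| pairs_plain(n) == pairs_gcd(n) }  # => true

(The range r < n/2 deliberately excludes the trivial pair n/2 + n/2 for n = 2p; that pair is the one case where the gcd filter and a bare primality test genuinely disagree.)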
Just because the first billion even numbers always give such pairs, the proof that this holds for ALL even numbers is missing here. It’s probably true, but this needs to be proven explicitly instead of by trying out the first billion numbers. And the GCD stuff, which jzakiya uses for his “new math”, is not even relevant to it AT ALL.
It’s just a premature optimization attempt that worsens the code by a whole lot.
It seems that the remaining numbers are primes, regardless of the GCD condition.
And there doesn’t seem to be any evidence that prime pairs remain—at least not within the code itself.
Looking at it this way, the “Jzakiya’s Conjecture” written by ChatGPT almost feels like sharp irony. While I personally enjoy irony to some extent, I want to make it clear that I did not post this with any ironic intent.
Here’s the algorithm done by hand for 50, from my paper.
1) For even n, treat it as the modular group Zn, and identify its low-half-residues (lhr) r < n/2.
The residue values r are odd integers coprime to n, i.e. gcd(r, n) = 1.
The 1st canonical residue is 1, but 1 is not prime, so it can't be part of
a prime pair; thus test the odd numbers from 3 to < n/2 to identify the lhr.
2) Store in lhr_mults the powers of the lhr members that are < n-2; these are composite residues.
We test up to n-2 because n-1 is the mcp for 1, which can't be part of a pcp.
Once the square of an ri (i.e. ri^2) > n-2, exit the process, as the squares of all larger rj are too.
We test against quotients of n-2 (rather than computing the powers directly) to prevent arithmetic overflow.
3) Store in lhr_mults the cross-products ri*rj < n-2.
Starting with smallest ri lhr, test cross-products with all larger lhr.
If ri > sqrt(n-2), exit process, as ri^2, and all other ri lhr cross-products, are > n-2.
If for next larger residue rj, ri*rj > n-2, exit process, as no others are < n-2.
Otherwise save in lhr_mults cross-product ri*rj < n-2, repeat for next larger rj lhr.
lhr_mults now has all the non-prime composite residue values < n-2.
4) Remove from the lhr the non-pcp lhr_mults values. The pcp prime pairs remain.
a) For lhr_mults values r > n/2, convert them to their mcp values n-r.
b) All are now non-pcp values < n/2; remove them from the lhr list.
The lhr list now contains all prime residues whose mcp make a pcp prime pair.
For the remaining primes pn, the pcp for n are the prime pairs [pn, n - pn].
Let’s now do a non-trivial example that performs all the algorithmic parts.
Example: Find the pcp prime pairs for n = 50.
1) Identify the lhr values < n/2.
Write out the list of odd numbers from 3 to < 25 = 50/2
[3 5 7 9 11 13 15 17 19 21 23]
The lhr are values r coprime to n; i.e. gcd(r,50) = 1.
[3 7 9 11 13 17 19 21 23]
Thus: lhr = [3, 7, 9, 11, 13, 17, 19, 21, 23]
2) Store in lhr_mults the powers of the lhr < 48 = 50-2.
a) for 3: lhr_mults = []
3 * 3: lhr_mults = [9], as 9 < 48
3 * 9: lhr_mults = [9, 27], as 27 < 48
3 * 27: stop powers for 3, as 81 > 48
b) for 7: exit powers process; as 7*7 = 49 > 48; no other lhr power can be < 48.
3) Store in lhr_mults the lhr cross-products < 48.
a) for 3: lhr_mults = [9, 27]
3 * 7: lhr_mults = [9, 27, 21], as 21 < 48
3 * 9: lhr_mults = [9, 27, 21, 27], as 27 < 48
3 * 11: lhr_mults = [9, 27, 21, 27, 33], as 33 < 48
3 * 13: lhr_mults = [9, 27, 21, 27, 33, 39], as 39 < 48
3 * 17: stop cross-products process for 3, as 3 * 17 = 51 > 48
b) for 7: lhr_mults = [9, 27, 21, 27, 33, 39]
7 * 9: exit total cross-product process, as 7 * 9 = 63 > 48;
4) Remove from the lhr the lhr_mults values
lhr = [3, 7, 9, 11, 13, 17, 19, 21, 23]
lhr_mults = [9, 27, 21, 27, 33, 39]
a) Convert lhr_mults values > 50/2 = 25 to their modular complement value 50-r
lhr_mults = [9, 23, 21, 23, 17, 11]
b) Remove from lhr values in lhr_mults (software equivalent is: lhr -= lhr_mults)
lhr = [3, 7, 9, 11, 13, 17, 19, 21, 23]
lhr_mults = [9, 23, 21, 23, 17, 11]
b1) For lhr_mults val 9; remove from lhr:
lhr_mults = [9, 23, 21, 23, 17, 11]
^
lhr = [3, 7, 11, 13, 17, 19, 21, 23]
b2) For lhr_mults val 23; remove from lhr:
lhr_mults = [9, 23, 21, 23, 17, 11]
^
lhr = [3, 7, 11, 13, 17, 19, 21]
b3) For lhr_mults val 21; remove from lhr:
lhr_mults = [9, 23, 21, 23, 17, 11]
^
lhr = [3, 7, 11, 13, 17, 19]
(The second 23 in lhr_mults removes nothing; 23 is already gone.)
b4) For lhr_mults val 17; remove from lhr:
lhr_mults = [9, 23, 21, 23, 17, 11]
^
lhr = [3, 7, 11, 13, 19]
b5) For lhr_mults val 11; remove from lhr:
lhr_mults = [9, 23, 21, 23, 17, 11]
^
lhr = [3, 7, 13, 19]
The lhr list now contains only primes, as the original composite lhr residues have been removed.
Thus lhr = [3, 7, 13, 19] now contains 4 prime pcp residues for n = 50.
Their 4 pcp prime pairs values for 50 are:
pcp for 3: [3, 50 - 3] = [3, 47]
pcp for 7: [7, 50 - 7] = [7, 43]
pcp for 13: [13, 50 - 13] = [13, 37]
pcp for 19: [19, 50 - 19] = [19, 31]
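A one-line cross-check of the hand result (my snippet, using Ruby's prime library):

require 'prime'
p Prime.each(24).select { |q| q.odd? && (50 - q).prime? }.map { |q| [q, 50 - q] }
# => [[3, 47], [7, 43], [13, 37], [19, 31]], matching the hand computation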
Every even integer n > 6 has at least one pcp prime pair.
This is a property of modular groups Zn over even integers n.
I urge people to do the algorithm by hand for 38, 40, and 46. Why?
38 is the largest even n with only one pcp prime pair.
(It’s of type n = 2p, so it has a trivial prime pair of 38 = 19 + 19.)
Thus it has (p-1) = (19-1) = 18 residues.
Do the algorithm and see what those values are, and how they interact.
Now do 40. It has a different prime factor profile.
See how those residues interact.
Then do 46, because it’s the next larger type n = 2p; 46 = 2*23 = 23 + 23.
It has (23-1) = 22 residues.
Because they have more|larger values, there’s more spacing between them.
Doing the algorithm by hand creates a visceral cognitive experience of how the math works.
When you see the math done in steps, you can begin to realize how the outcomes must be.
So the (modern) Goldbach’s Conjecture states:
Every even number > 2 can be written as the sum of 2 primes (not necessarily unique).
The PGT Goldbach Conjecture equivalent states:
Every even integer n > 6, has for its modular group Zn at least one prime complement pair (pcp).
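The two statements can at least be checked against each other mechanically (a sketch of mine; as discussed at length above, agreement on a finite range proves nothing, but it shows the two formulations track each other):

require 'prime'
def goldbach?(n) = (2..n / 2).any? { |p| p.prime? && (n - p).prime? }
def pcp?(n)      = 3.step(n / 2, 2).any? { |r| r.gcd(n) == 1 && r.prime? && (n - r).prime? }
p (8..10_000).step(2).all? { |n| goldbach?(n) == pcp?(n) }  # => true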
FYI 1.
It seems a PR has been committed to fix the out-of-bounds indexing issue.
FYI 2.
I ran more n values on my new TUXEDO laptop, and made it to n = 1_500_000_000 (1.5 billion).
Again, the number of pcp for n is a function of the number of residues it has.
The number of residues it has is a function of its prime factorization profile.
There should be no doubt in anyone’s mind, that as the numbers get larger and larger,
it’s not possible for some number n to have no (0|zero) pcp prime pairs.
I’ll take bets from anyone willing to claim they can prove otherwise.
The really more interesting (and fun) question is:
Can one determine the expected|minimum number of pcp for an n just from its prime factors?
The onus isn’t on us to prove that your ideas aren’t true, but for you to prove that your ideas are true. Showing that your program works for arbitrarily big numbers isn’t a proof either.
You all keep saying the same thing over and over, and I haven’t even presented the proof! But I have provided empirical evidence that I apparently know enough math to create a 40 line program that generates an increasing number of prime pairs as n gets larger. And I haven’t heard any of you challenge the validity of those prime pair values.
Actually, from my perspective, I feel the onus is on YOU to honestly, and objectively, assess the proof and accept whatever outcomes the verified math establishes.
It’s truly fascinating how some of you seem sooooooooo invested in dismissing and demeaning a body of work you haven’t even seen the proof of, and are working so hard to fluff off results no one else in the history of mankind has previously created so easily.
But hey. People still challenge Evolution, General Relativity, and claim the earth is flat.
And I haven’t heard any of you challenge the validity of those prime pair values.
I am not challenging the validity of the program you’ve created or the data you’ve found; I’m challenging the idea that this somehow correlates to a general proof for any input, which it does not. All you have demonstrated is that you’ve found prime pairs up to some N, not any N.
But hey. People still challenge Evolution, General Relativity, and claim the earth is flat.
To be fair to Ruby, the previous code didn’t enable using YJIT.
This code adds a line at the top to enable YJIT for Ruby >= 3.3.
# Enable YJIT if using CRuby >= 3.3
RubyVM::YJIT.enable if RUBY_ENGINE == "ruby" && RUBY_VERSION.to_f >= 3.3
def prime_pairs_lohi(n)
  return puts "Input not even n > 2" unless n.even? && n > 2
  return (pp [n, 1]; pp [n/2, n/2]; pp [n/2, n/2]) if n <= 6
  # generate the low-half-residues (lhr) r < n/2
  lhr = 3.step(n/2, 2).select { |r| r.gcd(n) == 1 }
  ndiv2, rhi = n/2, n-2                # lhr:hhr midpoint, max residue limit
  lhr_mults = []                       # for lhr values not part of a pcp
  # store all the powers of the lhr members < n-2
  lhr.each do |r|                      # step thru the lhr members
    r_pwr = r                          # set to first power of r
    break if r > rhi / r_pwr           # exit if r^2 > n-2, as all others are too
    while r < rhi / r_pwr              # while r^e < n-2
      lhr_mults << (r_pwr *= r)        # store its current power of r
    end
  end
  # store all the cross-products of the lhr members < n-2
  lhr_dup = lhr.dup                    # make copy of the lhr members list
  while (r = lhr_dup.shift) && !lhr_dup.empty?  # do mults of 1st list r w/others
    ri_max = rhi / r                   # ri can't multiply r with values > this
    break if lhr_dup[0] > ri_max       # exit if product of consecutive r’s > n-2
    lhr_dup.each do |ri|               # for each residue in reduced list
      break if ri > ri_max             # exit for r if cross-product with ri > n-2
      lhr_mults << r * ri              # store value if < n-2
    end                                # check cross-products of next lhr member
  end
  # convert lhr_mults vals > n/2 to their lhr complements n-r; store them,
  # along with those < n/2, in lhr_del; it now holds the non-pcp lhr vals
  lhr_del = lhr_mults.map { |r_del| r_del > ndiv2 ? n - r_del : r_del }
  pp [n, (lhr -= lhr_del).size]        # show n and pcp prime pairs count
  pp [lhr.first, n-lhr.first]          # show first pcp prime pair of n
  pp [lhr.last, n-lhr.last]            # show last pcp prime pair of n
end

def tm; t = Time.now; yield; Time.now - t end  # to time runtime execution

n = ARGV[0].to_i                       # get n value from terminal
puts tm { prime_pairs_lohi(n) }        # show execution runtime as last output
It makes a difference: Here for Ruby 3.4.1.
I would recommend enabling YJIT by default in your environment.
You wanted to get ideas for performance improvement in this thread. I gave you a complete code example which speeds up your calculations massively and decreases your RAM consumption by a factor of over ONE HUNDRED.
Seriously, why do you STILL use your old, inefficient code with all of its completely redundant calculations?
Here are the intentions I’ve picked up in people’s replies to you in this thread:
- a cautious curiosity about your reasoning in how you structured your code
- a desire to help make your code faster
- a frustration that you’re repeatedly denying that anyone would have reason to be skeptical of a stranger on the internet claiming to have proved a conjecture that, conservatively, tens of thousands of mathematicians have attempted to prove (more on skepticism below)
- a desire to gently remind you about the difference between a list of examples (even a very, very large list) and a proof
- a desire to save you time and money (!!!) by warning you about potentially-predatory academic journals (or, rather, more-predatory-than-usual ones)
Nobody is invested in dismissing you. The people who dismissed you completely didn’t bother to reply to this thread.
I don’t have any degree in math, my formal training stopped at the requirements for my computer science degree (linear algebra was almost too much for me at the time), and I haven’t put in the time and effort I believe you have to become knowledgeable about math as an amateur. I’m not claiming any kind of expertise except as a software dev and someone who uses the internet.
People are right to be skeptical of your claim. That isn’t a specific judgment of you; skepticism is the safest default attitude toward an internet stranger making these kinds of high-significance claims. Skepticism is also the starting point to evaluating novel proofs in mathematics, whether the author is an amateur or a world-class, international award-winning mathematician. Nobody is saying that they’re certain you’re wrong, let alone that they’re certain you’re not capable of proving Goldbach’s Conjecture, but it’s reasonable for people to look at the history of mathematics and reason that it’s unlikely that you have a correct proof. Not impossible, but unlikely.
I think you think that creating a program that gives examples of the accuracy of Goldbach’s Conjecture should make people less skeptical, but it’s not really evidence either way. If you wrote a program that found a counterexample that disproved Goldbach’s Conjecture, that would be quite impressive (and easy to verify). But a quick glance at Wikipedia leads to results from 2013 that show that Goldbach’s Conjecture holds up to 4 * 10^18. If your code ran in linear time, based on “10^8 in 22s”, it would take more than 25,000 years (4*10^18 / (10^8 / 22 seconds), in case I’ve made an error in my arithmetic) to reach 4 * 10^18. If your computer is 1000 times as fast as @Hadeweka’s (unlikely), then it would take 25 years. Your program isn’t bad, it’s just not an impressive feat.
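The arithmetic is easy to double-check:

# 4*10^18 (verified bound) / (10^8 values per 22 s), converted to years
seconds = 4e18 / (1e8 / 22)
p seconds / (365.25 * 24 * 3600)  # => ~27885.5, so "more than 25,000 years"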
People here want to engage and be helpful re Crystal stuff, and they’re not attacking you when they’re not convinced about your claims. Frankly, I think engaging is probably a waste of my time and mental and emotional energy because I expect you will dismiss me, but I’ve already written this whole reply, so I’m just gonna post it.
If you want me to take your code seriously, then you need to write it up with comments, have it produce the same outputs my program does, and include timing information.
Then provide a|b output results between my|your code, run on your system (with specs of it) so everyone can see that what you’ve done can mimic mine.
Until you do that, there is nothing for me (or anyone else) to empirically evaluate.
I read your reply and this is similar to what is written in the “heuristic justification” section of the Wikipedia page on Goldbach’s conjecture.
There is also a page with a fascinating figure called “Goldbach’s comet”, and I have a feeling that it says something similar to what you are saying, if I may guess from the words.
It would be wonderful if you rediscovered a fascinating idea on your own, but I think we should also respect the ideas of your predecessors.
If you like to stick to your inefficient code, go for it. I have neither the motivation nor the obligation to help you anymore, after you made derogatory comments about me and several other people without even a single apology afterwards.
Honestly, it's not my problem anymore if you aren’t willing or able to understand 20 (give or take) lines of code and my very extensive explanation below. Your conventions don’t even follow usual rules, so why should I play by your rules?
Unless you apologize for your rude comments, I don’t see any reason to engage with you directly anymore, and I hereby politely ask you to do the same.
Since I just saw your most recent reply: The Goldbach conjecture is probably true, but simply saying “There is a pattern” is just not enough here. If you are interested, I will elaborate a bit on this.
A VERY good example for such conjectures is the conjecture that GCD(n^17 + 9, (n+1)^17 + 9)=1 is true for any natural number n. Is this conjecture correct or not?
Let’s assume you are a person interested in that and attempting a proof. Let’s try to imagine you are that person. It’s a bit longer, but it will be worth the read:
If you test every UInt64 value for n, the GCD always comes out as 1, so this seems to hold. But why 17 and 9? Is it some deep fundamental symmetry of nature? Maybe a natural constant? Because mathematically, there is no reason for this to be true.
You decide to test further and check all UInt128 values (somehow). Still always 1, so you believe you have found something astonishing and groundbreaking about nature. We are already far beyond the range of values tested for the Goldbach conjecture, for example.
So is this a pattern? A proof? You dedicate your life to it, because you can’t find a counterexample. You tested quadrillions of numbers, all give a GCD of 1. You HAVE to be correct. Why don’t the mathematicians listen? Why do they laugh at you?
Years later, you extended your checked numbers to 2^160 and still always get a GCD of 1. Even more proof, but even more rejection. Why are people so blind? I found some deep meaning in nature and they won’t even read my paper…
Finally, after years and years, now known as the obsessive guy trying to prove what mathematicians don’t believe to be true, you get a letter from a stranger. In it is only a single number: 8424432925592889329288197322308900672459420460792433.
You immediately run it as a new value for n through your calculation program. Within a second, you get the resulting GCD, fully expecting it to be 1 again. It is not.
The simple answer is: Humans are very good at recognizing patterns. Especially in places where there are none to be found. Just because a pattern holds for 8424432925592889329288197322308900672459420460792432 numbers, it doesn’t mean that it’s true for all the infinitely many others. And in the case of our little conjecture above, it suddenly fails at exactly that high number (and not any earlier!).
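Since Ruby integers are arbitrary precision, anyone can verify that failure point directly (the value below is the number from the letter):

n = 8424432925592889329288197322308900672459420460792433
p (n**17 + 9).gcd((n + 1)**17 + 9) == 1  # => false: the pattern breaks here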
And even if the pattern from Goldbach’s comet definitely seems intriguing, there might still be a number that breaks the pattern. Maybe it’s 8424432925592889329288197322308900672459420460792433, we don’t even know that. The article itself even discusses how the probability of zero prime summand pairs never truly goes to zero. The pattern is simply not enough without a formal proof.
I guess the last paragraph is the closest mathematicians will come to a statement about the validity of Goldbach’s Conjecture based on empirical data analysis.
It is interesting to speculate on the possibility of any number E having zero prime pairs, taking these Gaussian forms as probabilities, and assuming it is legitimate to extrapolate to the zero-pair point. If this is done, the probability of zero pairs for any one E, in the range considered here, is of order 10^(−3700). The integrated probability over all E to infinity, taking into account the narrowing of the peak width, is not much larger. Any search for violation of the Goldbach conjecture may reasonably be expected to have these odds to contend with.
But you see, they consider the primes to be randomly distributed so they resort to using probability to model where they occur. From PGT we know the primes are the residues of modular groups Zn and are uniformly distributed over them.
But I have to say, a probability of 10^(-3700) for an n with no prime pairs is a lot of zeros to the right of the decimal point.
Please continue your efforts. It may not be fair for me to deny your claim that you have proven Goldbach’s conjecture when the supporting paper has not yet been published, and I do not understand mathematics.
Reading your comments, I sometimes reflect on whether I might behave similarly to you in certain situations within the complex theater of life. While this could be an interesting scenario in a literary work, thinking that the real world might be like that feels isolating.
Even so, we continue to face challenges. At the very least, your work could contribute to improving numerical calculations in Crystal.
I want to emphasize that people are questioning your claim, not you as a person. I hope you can understand this. Recognizing that will help protect your well-being.