Goldbach's Conjecture and Crystal Code

I’m finishing a math paper proving Goldbach’s Conjecture (1742).
It states: every even n > 2 can be written as the sum of at least one prime pair.

Here is a Crystal program that will provide the count of all the prime pairs for n,
and the first|last prime pair values (within the limits of your hardware and time).

What I want to do is speed it up (it’s already blazing fast) by doing in
parallel the code blocks for the residue powers and residue cross-products.

I tried doing it using fibers and waitgroups, but it didn’t operate in parallel.

  1. Can Crystal currently do what I want to do in parallel with fibers?
  2. If so, can someone show me how?
# Compile: $ crystal build --release --mcpu native prime_pairs_lohi.cr
# Run as:  $ ./prime_pairs_lohi <n>

def prime_pairs_lohi(n)
  return puts "Input not even n > 2" unless n.even? && n > 2
  return (pp [n, 1]; pp [n//2, n//2]; pp [n//2, n//2]) if n <= 6

  # generate the low-half-residues (lhr) r < n/2
  lhr = 3u64.step(to: n//2, by: 2).select { |r| r.gcd(n) == 1 }.to_a
  ndiv2, rhi = n//2, n-2           # lhr:hhr midpoint, largest residue val
  lhr_mults = [] of typeof(n)      # for lhr values not part of a pcp

  # store all the powers of the lhr members < n-2
  lhr.each do |r|                  # step thru the lhr members
    r_pwr, root = r, 2             # for this lhr, and its square power
    break if r > rhi ** (0.5)      # exit if r^2 > rhi, as all others are too
    while    r < rhi ** (1.0/root) # if r < nth root of rhi
      lhr_mults << (r_pwr *= r)    # store its nth power of r
      root = root.succ             # check for next power of r
    end
  end

  # store all the cross-products of the lhr members < n-2
  lhr_res = lhr.dup                # make copy of the lhr members list
  while (r = lhr_res.shift) && !lhr_res.empty? # do mults of 1st list r w/others
    break if r > rhi ** (0.5)      # exit if r^2 > rhi; all other mults are > too
    break if r * lhr_res[0] > rhi  # exit if product of consecutive r’s > rhi
    lhr_res.each do |ri|           # for each residue in reduced list
      break if (r_mult = r * ri) > rhi  # compute cross-product with current r
      lhr_mults << r_mult          # store value if < rhi
    end
  end

  # remove lhr_mults duplicates, convert vals > n/2 to their lhr complements,
  # store them, and those < n/2, in lhr_del; lhr_del now holds non-prime lhr vals
  lhr_del = [] of typeof(n)
  lhr_mults.uniq.each do |r_del|
    lhr_del << (r_del > ndiv2 ? n - r_del : r_del)
  end

  lhr -= lhr_del                   # remove from lhr its non-pcp residues

  lo, hi = lhr.first, lhr.last     # lhr first|last prime residue pcp of n
  pp [n, lhr.size]                 # show n and its number of pcp prime pairs
  pp [lo, n-lo]                    # show first pcp prime pair of n
  pp [hi, n-hi]                    # show last  pcp prime pair of n
end

def tm; t = Time.monotonic; yield; Time.monotonic - t end

def gen_pcp
  n = (ARGV[0].to_u64 underscore: true)
  puts tm { prime_pairs_lohi(n) }
end

gen_pcp

First of all: I can’t really help you with optimizing your code, sorry. But if you don’t mind, I have some questions to the topic.

I was wondering, why do you need this code in the first place?

The conjecture is known to be true for most UInt64 values. Of course it would be nice to actually check them up to a higher number, but this would take AGES with your code - regardless of fibers or not.

Besides that, your code already fails for UInt32::MAX-1 because it allocates WAY too much memory, so your bottleneck is absolutely not the speed. You won’t find any number violating the Goldbach conjecture with this code, so what do you hope to achieve with it?

I’m sure you already know this, but not being able to find a counterexample doesn’t prove a conjecture, so I’m even more confused about how this would help in a proof of the conjecture. I’m curious to know!

I’m writing a paper on the GC proof, and the code is an empirical expression of the conceptual framework and mathematics of the proof. The code works because the underlying math that validates the GC proof works, and showing that it works is part of my paper.

Why make it faster? Because it’s fun to do so! :grinning:

I’m all in for performance optimization for fun, but I still think you should rather fix the memory issues first (if even possible). Is there maybe any way to discard values that aren’t needed anymore?

And as a scientist I’m still sceptical about the proof you’re mentioning. I’m not aware of any accepted proof for the conjecture yet. Also I still fail to see the code as empirical data, since it stops working at an arbitrary number.

This is getting off-topic (but you may send me a message, if you like to continue), so I’m stopping here, but your claims just seem… hard to believe, sorry.


First the software.

The purpose of the software is to show the underlying math for the GC proof, based on what I developed and call Prime Generator Theory (PGT).

PGT is based on the residue properties of modular groups Zn, where n are even numbers. Not only do I show the GC is true, but that Bertrand’s Postulate (BP) is just a consequence (given property) of having prime pairs over Zn groups.

The GC was proposed in 1742 (the BP in 1845) and has been an open question for about 283 years now. That’s because academic mathematicians use the same classical numerical tools that are completely insufficient for modeling the problem correctly.

I show the GC falls simply and easily out of modular groups Zn as a property of the mirror image symmetry of their residues. That is to say, all the residues of Zn groups exist as modular complement pairs (mcp), one < n/2 and one > n/2. It’s easy then to understand and show that every even n > 6 has at least one mcp made up of prime residues, i.e. a prime complement pair (pcp): 2 primes that sum to n.
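As a small concrete illustration of mcp/pcp (my own sketch, not the author's code; the `prime?` helper is made up for the demo): for n = 20 the low-half residues coprime to n are 3, 7, 9, giving three mcps, two of which are prime pairs.

```crystal
# Hypothetical sketch of mcp/pcp for a small n (helper names are mine).
def prime?(x : UInt64) : Bool
  return false if x < 2
  d = 2_u64
  while d * d <= x
    return false if x % d == 0
    d += 1
  end
  true
end

n = 20_u64
# modular complement pairs (mcp): each residue r < n/2 coprime to n, paired with n - r
mcp = (3_u64...n // 2).select { |r| r.gcd(n) == 1 }.map { |r| {r, n - r} }
# prime complement pairs (pcp): the mcps whose members are both prime
pcp = mcp.select { |(a, b)| prime?(a) && prime?(b) }
pp mcp  # => [{3, 17}, {7, 13}, {9, 11}]
pp pcp  # => [{3, 17}, {7, 13}]
```

So 20 = 3 + 17 = 7 + 13, matching the claim that every even n > 6 has at least one pcp.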

In fact, only the 5 numbers: 8|{3, 5}, 10|{3, 7}, 12|{5, 7}, 14|{3, 11} and 38|{7, 31} have just one pcp prime pair (not including types: n = p + p). All others have 2 or more, which increase substantially as n become larger.

So the purpose of the software, which you see is short|simple, is to demonstrate that the math works, and produces empirical data: pcp prime pairs counts, and prime pair values.

The software eats memory because it stores the residue values as u64 types. Since it stores the odd values from 3 to n/2, the number of residues grows with n.

The amount of memory used can be significantly reduced by using bit-arrays to encode the positions of the residue values, and for their multiples array. But these are implementation issues that have nothing to do with the underlying math.
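For what it’s worth, here is a minimal sketch of that bit-array idea (my own, not the author's implementation, assuming Crystal's stdlib `BitArray`): each odd candidate 3 ≤ r ≤ n/2 maps to bit index (r - 3) // 2, so a residue costs one bit instead of a 64-bit word.

```crystal
# Sketch only: encode the low-half residues of n as bits at index (r - 3) // 2.
n = 1_000_000_u64
size = ((n // 2 - 3) // 2 + 1).to_i
residues = BitArray.new(size)          # all bits start as false
3_u64.step(to: n // 2, by: 2) do |r|
  residues[((r - 3) // 2).to_i] = true if r.gcd(n) == 1
end
# deleting a non-prime residue later would be a single bit clear:
# residues[((r_del - 3) // 2).to_i] = false
puts residues.count(true)              # count of low-half residues coprime to n
```

This cuts the residue storage by roughly 64x, though as noted it only shifts the ceiling rather than removing it.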

If you want to discuss further I can direct you to papers I’ve written and/or published on these related topics.

Here’s one place you can find most of my papers.

So just email me if you want to talk.


Hi jzakiya, I’m always interested in your comments about Crystal performance optimizations. But your claim that you have proven Goldbach’s conjecture is hard to believe (no matter who claims it).

However, I don’t think it’s a waste of time to write another incorrect proof. If the base of the mountain is not wide, the mountain will not be high. In the same way, amateur efforts are necessary in any field.

I once read an essay by a mathematician who wrote that he receives dozens of letters a year from amateur mathematicians proving “unsolved conjectures.” He wrote that most of these letters are simply wrong, so these days he just throws them away without reading them. So it may be difficult to find someone who will take your “proof” seriously.

I hope you can find a mathematician who will look at your proof and carefully think about what’s wrong with it!

Regarding the code

The amount of memory used can be significantly reduced by using bit-arrays to encode the positions of the residue values, and for their multiples array.

This would definitely help, but only shift the problem, as you still need a huge amount of memory. You might get one or two orders of magnitude higher with this, but that’s probably it.

Regarding the math:

I have to agree with @kojix2 in the point that I’ve seen several simple “proofs” of the Goldbach conjecture. All of them had some errors on technicalities or were based on circular logic.

Based on your comment it seems to me like you want to defend your proof publicly (which I’m totally fine with - please tell me if you aren’t) - otherwise you could’ve sent me a mail in the first place. I’m a scientist myself, so I’m always interested in such discussions.

I show the GC falls simply and easily out of modular groups Zn as a property of the mirror image symmetry of their residues.

Here however I simply fail to understand what you mean by that. I suppose you mean the cyclic rings (Z/nZ), but how do you define a residue, then? The rest of your proof heavily depends on this definition. Could you maybe provide one or two examples, ideally in a mathematically unambiguous way?

Because I don’t see how this leads to the later results yet. Also the claim that every even number has 2 or more prime complement pairs (as you dubbed them) is something that I don’t see proven yet. Does this immediately follow from the paragraph before? If so, how?

EDIT: Since you posted shortly after my answer to a previous post: Your new post is not a proof, it just shows some values for which the GC is already known to be true. This only contradicts the statement “The GC is not true for any number”.

Nelson Mandela: It always seems impossible until it’s done.

My work speaks for itself.
One just must be open to listen to what it says, and learn from it.

Listen:

Crystal 1.15.0
➜  ~ ./prime_pairs_lohi 150_000_000

[150000000, 835256]
[43, 149999957]
[74999917, 75000083]
00:00:09.905725291

Listen:

        N	       # PCP 	 First Prime Pair	  Last Prime Pair
    1,000,000	    5,402	  [17, 999983]	     [499943, 500057]
    2,000,000	    9,720	  [7, 1999993]	     [999961, 1000039]
    3,000,000	   27,502	  [43, 2999957]	     [1499857, 1500143]
    4,000,000	   17,630	  [29, 3999971]	     [1999853, 2000147]
    5,000,000	   21,290	  [37, 4999963]	     [2499949, 2500051]
    6,000,000	   49,783	  [7, 5999993]	     [2999911, 3000089]
    7,000,000	   34,284	  [3, 6999997]	     [3499967, 3500033]
    8,000,000	   31,753	  [7, 7999993]	     [3999763, 4000237]
    9,000,000	   70,619	  [7, 8999993]	     [4499953, 4500047]
   10,000,000	   38,807	  [29, 9999971]	     [4999913, 5000087]
   11,000,000	   46,812	  [3, 10999997]	     [5499979, 5500021]
   12,000,000	   90,877	  [11, 11999989]     [5999947, 6000053]
   13,000,000	   53,398	  [3, 12999997]	     [6499841, 6500159]
   14,000,000	   62,026	  [19, 13999981]	 [6999997, 7000003]
   15,000,000	  110,140	  [19, 14999981]	 [7499939, 7500061]
   16,000,000	   58,383	  [11, 15999989]	 [7999913, 8000087]
   17,000,000	   65,592	  [37, 16999963]	 [8499979, 8500021]
   18,000,000	  129,501	  [13, 17999987]	 [8999777, 9000223]
   19,000,000	   71,656	  [3, 18999997]	     [9499811, 9500189]
   20,000,000	   70,730	  [19, 19999981]	 [9999739, 10000261]
   21,000,000	  177,440	  [23, 20999977]	 [10499963, 10500037]
   22,000,000	   85,476	  [23, 21999977]	 [10999811, 11000189]
   23,000,000	   83,727	  [7, 22999993]  	 [11499949, 11500051]
   24,000,000	  165,922	  [19, 23999981]	 [11999927, 12000073]
   25,000,000	   85,838	  [17, 24999983]	 [12499481, 12500519]
   26,000,000	   97,209	  [67, 25999933]	 [12999919, 13000081]
   27,000,000	  184,050	  [19, 26999981]	 [13499939, 13500061]
   28,000,000	  113,922	  [101, 27999899]	 [13999691, 14000309]
   29,000,000	  101,387	  [61, 28999939]	 [14499973, 14500027]
   30,000,000	  202,166	  [11, 29999989]	 [14999969, 15000031]
   31,000,000	  107,710	  [11, 30999989]	 [15499943, 15500057]
   32,000,000	   106,627	  [61, 31999939]	 [15999871, 16000129]
   33,000,000	  243,780	  [17, 32999983]	 [16499939, 16500061]
   34,000,000	  120,272	  [107, 33999893]	 [16999859, 17000141]
   35,000,000	  138,452	  [31, 34999969]	 [17499793, 17500207]
   36,000,000     237,050     [7, 35999993]      [17999959, 18000041]
   37,000,000     124,922     [3, 36999997]      [18499919, 18500081]
   38,000,000     131,603     [13, 37999987]     [18999973, 19000027]
   39,000,000     277,089     [7, 38999993]      [19499983, 19500017]
   40,000,000     130,164     [17, 39999983]     [19999697, 20000303]

Now Learn!

I am not defending the proof, which I have yet to provide. I’ve merely provided code to generate all the primes pairs for any n, as a demonstration of the underlying math that is the foundation of the proof.

I provided you a link to my papers. Have you looked at any of them?

You will not be able to understand the software or my proof without first understanding residues and modular groups. I don’t use (or need) classical numerical techniques, so don’t expect to see them.

On the link to my Academia.edu page there is a link to Draft Chapter 1 of the book I’m writing on Prime Generator Theory (PGT). It is an abc to understand modular groups Zn, residues, mirror image symmetry, modular complement pairs (mcp), primorials|reduced, and Prime Generators.

It has lots of pictures and graphs, and is written so that anyone who can think should be able to follow it (no calculus, algebra, probability, or complex numbers).

If you take the time to read it, and my papers|code|video on Twin Primes, you will have the minimal mathematical understanding of what is explained for the GC|BP.

We are a quarter of the way thru the 21st Century, but most academic mathematicians (at least in number theory) seem trapped in the 19th|20th.

Polignac’s (Twin Primes) Conjecture and the GC|BP are easily visually seen to be true, and explained, if you understand every odd prime exists as a residue, or congruent residue values, of modular groups Zn. And they are deterministically and uniformly distributed along their residues.

The Primes DO NOT exist randomly.
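That congruence claim is easy to spot-check. A throwaway sketch (mine, not from the paper) verifying that every prime above 5 lands on one of the eight residues of Z30 coprime to 30:

```crystal
# Quick check: every prime p > 5 is congruent mod 30 to a residue coprime to 30.
residues = (1..29).select { |r| r.gcd(30) == 1 }
pp residues  # => [1, 7, 11, 13, 17, 19, 23, 29]
# crude trial-division primes, fine for a demo range
primes = (7..1000).select { |x| (2..x - 1).none? { |d| d * d <= x && x % d == 0 } }
puts primes.all? { |p| residues.includes?(p % 30) }  # => true
```

This only demonstrates the residue-class structure of the primes, of course; it proves nothing about their distribution.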

So, if you take the time to learn about PGT you should be able to understand, and appreciate, the simplicity and clarity that proofs of all these conjectures provide when the primes are modeled and understood to be the residues of modular groups Zn.


I provided you a link to my papers. Have you looked at any of them?

You will not be able to understand the software or my proof without first understanding residues and modular groups.

It has lots of pictures and graphs, and is written so that anyone who can think should be able to follow it

most academic mathematicians (at least in number theory) seem trapped in the 19th|20th.

I did indeed look into your papers, but sadly you didn’t answer my questions and instead used some slightly derogatory assumptions and ideas about others in your post.

This sounds way too much like the typical pseudoscientific “You have to follow me instead of all the ignorant fools”. I really hoped it wouldn’t.

I will therefore quit this discussion now. Have a nice day.


I don’t know any mathematical terminology, but a short reply like, “It’s almost done, so please wait until I upload it. But I will send you a draft via email.” might have made a better impression…


It’s one thing to engage in an open and objective discussion on the technical merits of something.

It’s quite another to subjectively project your limitations to do something onto someone else you don’t even know.

FYI
I just received an email notice this morning that a paper on another old famous math conjecture just got published, whose math is also based on Prime Generator Theory (PGT).

On The Infinity of Twin Primes and other K-tuples

International Journal of Mathematics and Computer Research (IJMCR)

IJMCR Vol 13 No 1 (2025): VOLUME 13 ISSUE 01:

https://ijmcr.in/index.php/ijmcr/issue/view/143

Paper Link:
https://ijmcr.in/index.php/ijmcr/article/view/867

Paper PDF:
https://ijmcr.in/index.php/ijmcr/article/view/867/678

Kinda nice for an amateur! :smirk:

You are aware that the IJMCR is generally considered what’s called a predatory journal?

They seem like authentic journals, but they just take the processing fee, skip most of the review process and publish nearly anything that doesn’t look like total nonsense.

I’m truly sorry that you paid money for publishing there.

Really dude, it was only $75 and they published it within a day.

Do you know what real predatory is?

I submitted it to the Journal of Number Theory last year, and they jerked me around.

First they took their time saying they needed to get a reviewer. I asked that the person be versed in discrete and modular group math, and that they contact me with any questions. So the editor says the person reviewing it wanted to see the code that generated the data, so I emailed him ALL the code… Then the editor says the reviewer didn’t know Ruby|Crystal, which most of the code was written in. So he asks me to submit “experts” in the languages who could review (at a cost), so I sent him Matz, the Crystal devs, and other people. So then the damn editor said Ruby wasn’t a “serious” language and asked me to rewrite the code in Python. Oh, by the way, did I say he wanted me to pay $1,000 for this esteemed treatment.

Well at this point I figure they were just trying to churn some fees out of me because if the paper didn’t merit publishing why are you making ME jump through all these hoops. So I kindly told him you can KMA, I’ll publish it elsewhere.

So I consider the JNT, AMA, arXiv, et al, all to be pompous, elitist, and predatory.
So I put my stuff on Academia.edu, and publishers come to me asking to publish my papers, which is what IJMCR (et al) did for $75, and within one day.

You see, I do this for fun! I’m not in some publish-or-perish institution or job. I do it because I find it interesting, and I’m intellectually curious. Have been all my life. And I like to push myself to learn new things.

So if I were you, I would study the paper. Try to tear it mathematically apart if you can, but provide a rigorous mathematical explanation of why anything in it is wrong. No one has since I released the initial version in 2018.

And guess what. No one will either when I release my paper on Goldbach’s Conjecture. Because I know what I’m talking about, and can write software that produces empirical results to demonstrate everything I profess.

Excuse me for the rant, but I’m human too, and get tired of all the BS sometimes.

Predatory journals usually charge much lower fees than so-called “legitimate” journals. I think many of their authors are scientists from developing countries who don’t have much research funding. For them, these journals might feel like a safe haven.

Online, it seems that publishing companies are more aggressive in fighting predatory journals than the scientific community itself.

At the same time, these companies’ journals often have sky-high publication fees—much more than several predatory journals combined. Looking at it financially, calling them “apex predator journals” seems fitting.

I get it—keeping good editors and a solid peer review system running costs money.

But it’s hard to ignore how unfair it feels when these “apex predators” criticize newcomers by labeling them as “predators.” (Though I don’t quite understand why you think arXiv is predatory.)

I’m not smart enough to write academic papers, so this doesn’t affect me directly. Still, I want to be clear—everything I’ve said is just my personal opinion. I’m not trying to push it on anyone.

That said, I think it’s great that jzakiya made their paper public. I wonder if your proof of Goldbach’s Conjecture will be public someday too.

Even so, I doubt anyone here, myself included, thinks that making the paper public means the proof must be correct.

In mathematics, as seen in Shinichi Mochizuki’s case, peer review and publication are considered the standard for determining the correctness of a paper. There seems to be a belief that this process can create a world where only mostly correct papers are published. However, that’s specific to mathematics. In other fields, a published paper isn’t necessarily correct from the start.

I think the same will apply to this paper.

I hope that one day your “Proof of Goldbach’s Conjecture” gets uploaded somewhere and that an expert takes the time to explain exactly where it might be wrong.

Also, even if there are mistakes in your thesis, that doesn’t make you any less of a person. Writing and sharing a paper is an achievement in itself. It’s important to stay open to feedback and be willing to learn and improve.

(Translated from Japanese by ChatGPT)


That being said, if they are willing to review your paper for $1000, I think that’s a fair price, so I would rewrite the code in Python.

I think it might be helpful if I provide a bit more context about papers, journals and general science. This is already off-topic for some while, so if the mods think this should be moved or deleted, I’m totally fine with this.

First of all, $1,000 is not much for a processing fee. If you want your article to be openly accessible in one of the big ones, like Nature, you pay $12,690. This may sound “pompous”, “elitist” and “predatory” - but there’s an important caveat. If you’re fine with your article not being in open access, you usually don’t have to pay anything.

The prices are also usually clearly communicated on the journal websites, so I don’t understand your surprise and outrage. Why didn’t you just choose a free model that doesn’t include Open Access? If you just did such an outstanding thing like you claim to have done, why would this be an issue? In fact, if you think your proof to be correct, this would be the best (and cheapest) way to get mathematicians to read it.

But speaking of correct proofs - usually scientists should always look out for anything that disproves their hypotheses, especially if these contain groundbreaking claims. People tend to always favor their own hypotheses over others (and see them as their own work). This introduces some massive bias that needs to be accounted for.

So far, I don’t see a single bit of skepticism in what you wrote (including the papers). This is one of the biggest indicators for pseudoscience, so please forgive me being highly skeptical, especially after your derogatory comments towards other researchers.

It’s always great to see people going for interesting fields of math, and a proof for some conjecture may come out of nowhere. But ask yourself: What elevates your proof above the thousands of other proofs out of nowhere that reviewers and scientists are sent over their lifetimes - especially if you don’t even adhere to typical mathematical naming conventions?

This just makes your paper extremely hard to read and understand. Even your explanation with the clock face is just confusing, because you just invent terminology on the fly without further explanation. This will just drown your paper in the noise of other such submissions. Which would actually be a pity in case you’d actually have a valid proof.


Some other nitpicking: As @kojix2 alluded to, arXiv is not even a journal. It’s a simple storage place for preprint papers with some very generous reviewing. If you won’t even make it to there, your paper has some serious issues. Like the problematic naming conventions, at least.

Also yes, neither Ruby nor Crystal is commonly used in science (as sad as this is). Python is the de-facto standard. You might have some luck with C/C++, Fortran or R, too. But everything else requires the reviewers (and readers) to learn an entirely new language. I would reject a paper, too, if the author presented me code for something groundbreaking in COBOL or something. This is the same issue as with the mathematical naming conventions.


Disclaimer: This whole essay might feel like a personal attack. This is not my intention.


I don’t think you can write a program to prove “the math works”.
You can write a program to prove “the math implemented in this program delivers expected results for the range of inputs provided”.

I’m also not sure if you should even use the word “empirical data” in the presence of mathematicians, they are highly allergic to it, and tend to spontaneously combust when they hear it. :smiley:


When running the code you may have encountered arithmetic overflow messages.
The characteristics of certain n values currently trigger these errors.
It’s an incorrect error message. There’s no arithmetic overflow in the code.
The Ruby version runs fine, within your system’s memory limit.

The devs attribute the real error to an out-of-bounds indexing issue.
Read about it here, in the GitHub issue I submitted about it.

Here is the more efficient|faster updated code.
It eliminates arithmetic overflow possibilities, and is shorter|clearer.
Once the indexing issue is fixed, it should run until your memory is exhausted.

# Compile: $ crystal build --release --mcpu native prime_pairs_lohi.cr
# Run as:  $ ./prime_pairs_lohi 123_456

def prime_pairs_lohi(n)
  return puts "Input not even n > 2" unless n.even? && n > 2
  return (pp [n, 1]; pp [n//2, n//2]; pp [n//2, n//2]) if n <= 6

  # generate the low-half-residues (lhr) r < n/2
  lhr = 3u64.step(to: n//2, by: 2).select { |r| r.gcd(n) == 1 }.to_a
  ndiv2, rhi = n//2, n-2           # lhr:hhr midpoint, max residue limit
  lhr_mults = [] of typeof(n)      # for lhr values not part of a pcp

  # store all the powers of the lhr members < n-2
  lhr.each do |r|                  # step thru the lhr members
    r_pwr = r                      # set to first power of r
    break if r > rhi // r_pwr      # exit if r^2 > n-2, as all others are too
    while    r < rhi // r_pwr      # while r^e < n-2
      lhr_mults << (r_pwr *= r)    # store its current power of r
    end
  end

  # store all the cross-products of the lhr members < n-2
  lhr_dup = lhr.dup                # make copy of the lhr members list
  while (r = lhr_dup.shift) && !lhr_dup.empty? # do mults of 1st list r w/others
    ri_max = rhi // r              # ri can't multiply r with values > this
    break if lhr_dup[0] > ri_max   # exit if product of consecutive r’s > n-2
    lhr_dup.each do |ri|           # for each residue in reduced list
      break if ri > ri_max         # exit for r if cross-product with ri > n-2
      lhr_mults << r * ri          # store value if < n-2
    end                            # check cross-products of next lhr member
  end

  # remove from lhr its lhr_mults, convert vals > n/2 to lhr complements first
  lhr -= lhr_mults.map { |r_del| r_del > ndiv2 ? n - r_del : r_del }

  pp [n, lhr.size]                 # show n and pcp prime pairs count
  pp [lhr.first, n-lhr.first]      # show first pcp prime pair of n
  pp [lhr.last,  n-lhr.last]       # show last  pcp prime pair of n
end

def tm; t = Time.monotonic; yield; Time.monotonic - t end  # time code execution

def gen_pcp
  n = (ARGV[0].to_u64 underscore: true) # get n input from terminal
  puts tm { prime_pairs_lohi(n) }       # show execution runtime as last output
end

gen_pcp

Actually, there are now such things as computer proofs, or close to it.

My code gives the number of pcp prime pairs, and first|last pair values, for any n.
All of this in less than 40 lines of actual code.

In another place and time people would be really interested in asking:
How’d he do that!

This does not seem to be that place and time.

I’m not going to stereotype all (academic) mathematicians, as I consider myself sorta one too. But data from other people’s physical experiments confirmed Einstein’s theory, as he performed none himself.

So if you propose a theory that produces no confirmable real-world results, it won’t last long.

But since I haven’t released the proof yet, it’s fascinating to see all the statements, discussion, and projections about something people have never seen.

But if you actually want to understand the proof (like for my Twin Prime Conjecture proof too) you’ll have to learn and understand the concepts, logic, language, and math of modular groups.

It’s really not that hard (much simpler than calculus) but it’s a different way of thinking. So if you really want to understand my proofs|papers, read Draft Chapter 1 of my book on PGT.

Is the idea that future chapters of the “There is No Last Twin Prime” will contain proofs for the assertions made in chapter 1?

The 1st Chapter will provide all the basic concepts, logic, language|terminology, properties, and arithmetic of modular groups that form the basis of Prime Generator Theory (PGT).

I put all the basic stuff there so I don’t have to repeat it when I write the chapters, and present code, on the various number theory problems (Polignac’s Conjecture, Goldbach’s Conjecture, Bertrand’s Postulate, Hardy-Littlewood Twin Primes Constant, Riemann Hypothesis, Zeta Function, et al) I will write on, and various applications, like my prime sieves (Sieve of Zakiya (SoZ), Twin Primes Segmented Sieve of Zakiya (SSoZ), Prime Pairs, et al).

Most of the math and properties provided in it is just basic modular math.

I’m not inventing any new math here. I’m just grouping stuff together and putting my name on things to formulate a consistent holistic framework I can refer to. This is old math, that unfortunately most people who’ve even completed graduate degrees in technical fields never get introduced to.

But the math and concepts of modular groups|forms is finally getting a push because, like here, it can more easily solve problems that classical numerical techniques can’t handle.

If you want to start seeing the real beauty|power of PGT, check out Draft Chapter 2.