I see the base of this GC is stop-the-world mark-and-sweep, but it supports some tweaking. I just read this post from Unity about incremental mode.
How is it tuned by Crystal?
I get the idea of how the GC is used in C, but does Crystal intersperse those calls somehow in the generated binary? Does the GC run in its own operating system thread?
A brief look through the code and I don’t see anything that enables incremental mode. From reading their home page, it seems like Boehm has incremental support enabled by default on OSes that support virtual memory: https://www.hboehm.info/gc/
> It provides incremental and generational collection under operating systems which provide the right kind of virtual memory support. (Currently this includes SunOS[45], IRIX, OSF/1, Linux, and Windows, with varying restrictions.)
I didn’t know Unity uses Boehm! That’s very interesting. Unity games usually run pretty well, though there are some hiccups from time to time. I wonder if it’s the GC…
I tried enabling incremental collection in the past. Here’s me doing it on macOS:
# foo.cr
lib LibGC
  fun enable_incremental = GC_enable_incremental
  fun is_incremental_mode = GC_is_incremental_mode : Int32
end
LibGC.enable_incremental
puts LibGC.is_incremental_mode
$ bin/crystal foo.cr
GC Warning: Memory unmapping is disabled as incompatible with MPROTECT_VDB
GC Warning: Can't turn on GC incremental mode as fork() handling requested
0
In Crystal, the GC provides a function to allocate memory, GC_malloc, and we use it for all allocations; the GC then handles everything for us. When you call that function and there’s no free memory left, the GC tries to reclaim some (this is when the GC runs). When there’s nothing to reclaim, it grows the heap, and so on.
From the documentation, that’s how it works in C too: you call GC_MALLOC() or similar, and it does the work as you described. There is a GC_FREE() function in case you need it, but the point is to not use it.
So, for example, the code that creates a class object (where is it?) calls the Crystal function that ends up invoking GC_MALLOC()? Do allocations like the ones for the unsafe internal buffers of Array go through it too?
The array buffer could be allocated in another way, and then we’d have a finalizer that would deallocate it. But we found it much simpler to have everything go through the GC and not worry about writing finalizers. Plus, if people use Pointer it’ll be safe because it goes through the GC, and so on.
As far as I know, Crystal basically uses the “default” bdwgc build (correct me if I’m wrong; it links against libgc.a at compile time, so you get whatever that was compiled with). However, you can tweak some of those settings by compiling your own libgc and linking against that.
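A rough sketch of what that could look like; the install prefix and the configure flags shown are just examples (check bdwgc’s own README for the options you actually want):

```shell
# Build a custom bdwgc and install it to a hypothetical prefix.
git clone https://github.com/ivmai/bdwgc.git
cd bdwgc
./autogen.sh
./configure --prefix=/opt/libgc --enable-parallel-mark --disable-munmap
make && make install

# Point the Crystal compiler at the custom build when linking.
crystal build foo.cr --link-flags "-L/opt/libgc/lib"
```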