In any case, you should really be able to just store the result of the first
@ht[ht.from]? call in a local variable and replace all further calls with that variable.
Oh, now I get you. The newborn isn’t letting me sleep more than a few hours at a time, let’s blame it on that.
I thought you wanted me to make a TechEmpower-style optimization and cache the first request so that any subsequent request would be returned immediately, effectively gaming the benchmark.
Updated the repo with your suggestions, thank you.
Go output became ~131 kb/s faster, while Crystal output became ~61 kb/s slower? I bumped the total number of requests up to 25k and ran the benchmark several times against each server just to be sure.
This is a toy benchmark, but one of the things that attracted me to Crystal was the “as fast as C” part. I’d love to learn whatever optimizations would move me toward that goal.
Is it perhaps the lack of parallelism? Go schedules goroutines across all cores, while IIRC Crystal currently uses single-core concurrency; maybe that explains the gap.
That’s not protected though, so with concurrent execution, I believe the current code could already run into conflicts.
Perhaps later on I’ll add a mutex to both implementations; that’s how you’d do it in Go, at least (without breaking the API).