I’m not sure you do understand the use case, then. There are multiple tradeoffs associated with that approach.
One of them is that the person writing the query needs to write out every type returned in the response. There can be quite a few types involved in even a single query, especially since GraphQL APIs commonly use `edges { node { ... } }` for every connection between entities. One of my simpler GraphQL queries involves 7 types in the result.
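To make that concrete, here’s a sketch of what a single connection-style query can force you to define. The query shape, type names, and fields below are all hypothetical, but the `edges { node { ... } }` nesting is the standard Relay-style connection pattern:

```crystal
require "json"

# Hypothetical query (the surrounding {"data": ...} wrapper is omitted for brevity):
#   {
#     user(id: "1") {
#       name
#       posts(first: 10) {
#         edges { node { title comments(first: 5) { edges { node { body } } } } }
#       }
#     }
#   }
#
# Deserializing the response with JSON::Serializable requires one type per
# level of nesting, eight in total for this one query:

struct UserResponse
  include JSON::Serializable
  getter user : User
end

struct User
  include JSON::Serializable
  getter name : String
  getter posts : PostConnection
end

struct PostConnection
  include JSON::Serializable
  getter edges : Array(PostEdge)
end

struct PostEdge
  include JSON::Serializable
  getter node : Post
end

struct Post
  include JSON::Serializable
  getter title : String
  getter comments : CommentConnection
end

struct CommentConnection
  include JSON::Serializable
  getter edges : Array(CommentEdge)
end

struct CommentEdge
  include JSON::Serializable
  getter node : Comment
end

struct Comment
  include JSON::Serializable
  getter body : String
end

# Usage: UserResponse.from_json(response_body)
```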
Another is that such an explosion of types has a huge impact on compile times. If you have 500 distinct GraphQL queries throughout your application, at roughly 10 types per query that could realistically add 5000 or more JSON::Serializable types to compile, unless you are very diligent about finding and reusing them, which comes with its own tradeoffs. The compilation times associated with that can be illustrated with this code:
require "json"
abstract struct Model
include JSON::Serializable
macro define(type, *ivars)
struct {{type}} < Model
{% for ivar in ivars %}
getter {{ivar}}
{% end %}
end
end
macro create_for_benchmark(type, count)
{{@type}}.define {{type}},
{% for i in 1..count %}
var{{i}} : String{% if i < count %},{% end %}
{% end %}
end
end
{% for i in 1..env("MODEL_COUNT").to_i %}
Model.create_for_benchmark Foo{{i}}, {{env("IVAR_COUNT").to_i}}
{% end %}
# Ensure the compiler compiles all of our objects' `initialize(JSON::PullParser)` methods
{% begin %}
json = {
{% for i in 1..env("IVAR_COUNT").to_i %}
var{{i}}: "asdf",
{% end %}
}.to_json
pp ({{(1..env("MODEL_COUNT").to_i).map { |i| "Foo#{i}" }.join(" | ").id}}).from_json(json)
{% end %}
Comparing compilation times across permutations of the number of types and the number of ivars per type:
```
# 10 ivars per type
MODEL_COUNT=10 IVAR_COUNT=10 crystal build -s examples/graphql_compile_time.cr  2.28s user 0.32s system 167% cpu 1.556 total
MODEL_COUNT=100 IVAR_COUNT=10 crystal build -s -o example  3.22s user 0.43s system 151% cpu 2.404 total
MODEL_COUNT=500 IVAR_COUNT=10 crystal build -s -o example  18.27s user 0.94s system 187% cpu 10.251 total

# 20 ivars per type
MODEL_COUNT=10 IVAR_COUNT=20 crystal build -s examples/graphql_compile_time.cr  2.85s user 0.44s system 196% cpu 1.673 total
MODEL_COUNT=100 IVAR_COUNT=20 crystal build -s -o example  7.30s user 0.50s system 155% cpu 5.002 total
MODEL_COUNT=500 IVAR_COUNT=20 crystal build -s -o example  35.89s user 1.27s system 167% cpu 22.131 total

# 30 ivars per type
MODEL_COUNT=10 IVAR_COUNT=30 crystal build -s examples/graphql_compile_time.cr  3.06s user 0.45s system 183% cpu 1.908 total
MODEL_COUNT=100 IVAR_COUNT=30 crystal build -s -o example  10.06s user 0.58s system 133% cpu 7.975 total
MODEL_COUNT=500 IVAR_COUNT=30 crystal build -s -o example  61.72s user 1.84s system 150% cpu 42.331 total
```
Memory usage on the last compilation exceeded 5GB. This is just the cost of compiling the types for deserializing query results and doesn’t count the compilation cost of anything else the program does. I couldn’t run a benchmark for 5000 model types because it crashed the compiler.
None of this is to say it’s a bad approach. But there are tradeoffs to consider, and I don’t think they’re being understood.
FWIW, I don’t like GraphQL. I think it’s unnecessarily complicated even in a dynamic language like Ruby, and it throws away 25 years of tried-and-true ideas from REST. For example, you lose HTTP Cache-Control because everything’s a POST, so GraphQL’s solution to caching is quite literally “build your own”. Unfortunately, the only API available for some of the things I need to do uses GraphQL.
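To illustrate that caching point, here’s a minimal sketch (the paths, payloads, and max-age are all hypothetical): a REST-style GET response can declare Cache-Control, and browsers, CDNs, and proxies will cache it with no extra work, while a GraphQL endpoint receives every query as a POST to a single URL, which those same caches pass straight through:

```crystal
require "http/server"

server = HTTP::Server.new do |context|
  case {context.request.method, context.request.path}
  when {"GET", "/posts/1"}
    # REST: intermediaries can cache this response for 5 minutes
    # based on nothing but standard HTTP semantics.
    context.response.headers["Cache-Control"] = "public, max-age=300"
    context.response.content_type = "application/json"
    context.response.print %({"title": "Hello"})
  when {"POST", "/graphql"}
    # GraphQL: every query arrives as a POST, so standard HTTP caches
    # won't store the response and you have to build caching yourself.
    context.response.content_type = "application/json"
    context.response.print %({"data": {}})
  else
    context.response.status = HTTP::Status::NOT_FOUND
  end
end

server.bind_tcp 8080
server.listen
```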