class X < Exception
  def foo
  end
end

class Y < Exception
  def foo
  end
end

class Z < Exception
end

class Foo(T)
  def initialize
    pp! T
  end

  def run(&block)
    yield
  rescue ex : T
    ex.foo
  end
end

Foo(X | Y).new.run do
  raise X.new
end
Error in line 26: instantiating 'Foo(Exception)#run()'
in line 22: undefined method 'foo' for Exception (compile-time type is Exception+)
X | Y turns into Exception+, while I expected it to stay X | Y. Is this working as intended?
Yes, it works as expected. Any union X | Y where X and Y inherit from a common base class (other than Reference) gets turned into that base type. It can be a little surprising, but it's done that way to avoid creating many different union types across a program.
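A minimal sketch of that folding, using hypothetical names (Box, type_name are mine, just to make the collapsed type argument observable):

```crystal
class Base
end

class A < Base
end

class B < Base
end

# Hypothetical generic used only to inspect its type argument.
class Box(T)
  def type_name : String
    T.to_s
  end
end

# The union A | B is folded into the common (non-Reference) base class,
# so the type argument reported here is Base, not A | B.
puts Box(A | B).new.type_name
```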
What you can do is define a base exception type from which X and Y inherit, say BaseEx, but then instead of writing X | Y just use BaseEx (because it’s the same).
Eventually X | Y should probably give a compile error saying "that’s the same as Exception".
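A hedged sketch of that workaround, reusing the Foo(T) shape from the snippet above (the "handled" return value is mine, just to make the rescue path observable):

```crystal
# Shared base exception declaring the method the rescue clause calls.
class BaseEx < Exception
  def foo
    "handled"
  end
end

class X < BaseEx
end

class Y < BaseEx
end

class Foo(T)
  def run(&block)
    yield
  rescue ex : T
    ex.foo
  end
end

# Parameterizing on BaseEx instead of X | Y gives the rescue clause a
# concrete type that declares #foo, so the call compiles.
puts Foo(BaseEx).new.run { raise X.new } # => handled
```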
@asterite, @bcardiff thanks for your responses. I totally get it, and I understand that changing this behavior is unlikely. However, I've faced this issue in my use case and I just want you to know about it.
As I’ve said before, I’m building (and have almost finished) a background job processing library. In my design, a worker must be explicitly assigned the jobs it will perform, which allows granular scaling. For example:
struct JobA < ProjectName::Job
  def perform
  end
end

struct JobB < ProjectName::Job
  def perform
  end
end

struct JobC < ProjectName::Job
  def perform
  end
end

class GenericWorker < ProjectName::Worker(ProjectName::Job)
  def perform(job : T)
    job.perform
  end
end
In the very beginning I use a single worker to perform all the jobs, and it works nicely. But at some point I start to see that JobA and JobB consume too many resources, and I need another worker for those two jobs only, because JobC performs fine with a single worker.
So it would be logical to create another worker like this:
class WorkerAB < ProjectName::Worker(JobA | JobB)
  def perform(job : T)
    job.perform
  end
end
Unfortunately, JobA | JobB turns into ProjectName::Job and it covers JobC as well. That’s not what I want.
So I have to introduce needless overhead, such as an empty module that JobA and JobB include, so the union doesn’t turn into ProjectName::Job.
Alternatively, I could simply not require "job_c", but that approach is unreliable, because some other code could accidentally require it.
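One way the marker-module idea can be sketched, with hypothetical names (ResourceHeavy, accepts?) standing in for the real ProjectName API: instead of parameterizing the worker with the union JobA | JobB, parameterize it with a module that only those two jobs include, so no union folding ever comes into play.

```crystal
abstract struct Job
end

# Hypothetical marker module: only the jobs the dedicated worker
# should handle include it.
module ResourceHeavy
end

struct JobA < Job
  include ResourceHeavy

  def perform
  end
end

struct JobB < Job
  include ResourceHeavy

  def perform
  end
end

struct JobC < Job
  def perform
  end
end

# Simplified stand-in for ProjectName::Worker(T); accepts? is mine,
# just to show that matching works through the module type.
class Worker(T)
  def accepts?(job) : Bool
    job.is_a?(T)
  end
end

worker = Worker(ResourceHeavy).new
puts worker.accepts?(JobA.new) # => true
puts worker.accepts?(JobC.new) # => false
```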
Without knowing how workers are registered/instantiated and how jobs are delivered, it’s easy to make wrong assumptions and head in the wrong direction regarding your scenario.
It’s a bit weird to me that a WorkerAB can perform JobA and JobB with the same perform definition.
It’s also not clear if a job can be potentially performed by multiple workers or not.
Depending on those decisions, the proposed solution will differ. So, do you mind telling us a bit more about what you are imagining?
Thanks for the offer to help, Brian. I’ll get back to you in a couple of days, when I release the working version, so it will be easier to understand how it works (I can’t explain it briefly right now because the project is quite complex). I’ll stick with T because, as I mentioned in the previous post, the issue is avoidable with empty modules…
I had been exploring using Crystal for some DDD modeling, and this bit me a couple of times, to the point where I gave up using Crystal for that purpose.
It was particularly frustrating when I had a method that took an Array(A | B) and a literal array of A’s and B’s, but the compiler would complain because the method compiled to Array(Base) while it had inferred my array to be Array(A | B). The inconsistency was frustrating, and the fact that fixing it meant changing my code to express something other than my intent was a deal breaker.
I must admit I don’t understand the logic Crystal uses when it automatically widens type parameters: if I had wanted a method that took the base class, I would have written that. I didn’t; I wrote a union of specific types because that was what I wanted.