HTTP::Client and fibers

So I’m fiddling with making a Zulip client (oh I wish there was a Crystal Zulip chat, oh well).

“Listening” is just making an HTTP request, which the server will block on for a while if there are no new events.

So I’ve made a class that just creates an HTTP::Client and loops over requests, and it works smoothly. But I spotted this in the docs:
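For reference, that listening loop is roughly this shape (a sketch only; the host, endpoint, and query parameter are placeholder assumptions, not necessarily the actual Zulip API):

```crystal
require "http/client"

# Rough shape of the listening loop: one long-lived client, where each
# GET blocks server-side until there are new events. Host and endpoint
# are placeholders for illustration.
client = HTTP::Client.new("example.com", tls: true)
last_event_id = -1

loop do
  response = client.get("/api/v1/events?last_event_id=#{last_event_id}")
  break unless response.success?
  # ... parse events from response.body and bump last_event_id ...
end
```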

WARNING A single HTTP::Client instance is not safe for concurrent use by multiple fibers.

Which I should take note of, as the idea is that I’ll be making other requests to the server while listening for new events. Of course, I could just use HTTP::Client.get, but it seems a bit excessive to create a new client for each request?

It does make sense that a single client instance can only handle one request at a time, but is that the gist of the warning, or is there something that makes it unsafe to use a client that’s been previously used in another fiber, even if that request is completed?

As the Zulip client will be firing off HTTP requests left and right, I would like some HTTP client reuse, to keep things tidy.

So what would the cool kids do?

Cannot perform concurrent HTTPS GET requests from a single client session · Issue #12412 · crystal-lang/crystal · GitHub covers some of the reasoning for adding that warning.

Otherwise you could probably leverage GitHub - j8r/ A simple thread-safe generic pool. to have a pool of HTTP::Client instances that are available to handle requests, i.e. so that each request gets its own client, while still being able to reuse them for efficiency.
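A minimal sketch of that idea, using the same pool API that shows up later in the thread (a factory block for `.new`, and `get` yielding a checked-out client); the host and setup here are placeholder assumptions:

```crystal
require "http/client"
require "pool"

# The factory block builds a new client whenever the pool needs one.
pool = Pool(HTTP::Client).new { HTTP::Client.new("example.com", tls: true) }

# Each request checks out its own client and hands it back afterwards,
# so concurrent fibers never share a client.
pool.get do |client|
  response = client.get("/")
  puts response.status_code
end
```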


crystal-db allows you to use a pooled http client also. Allow DB::Pool to be used a generic connection pool by jgaskins · Pull Request #131 · crystal-lang/crystal-db · GitHub
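Roughly, that usage looks like this; the exact constructor options differ between crystal-db versions, so treat the keyword arguments here as illustrative assumptions:

```crystal
require "db"
require "http/client"

# DB::Pool used as a generic pool of HTTP clients. The factory block
# builds a client; max_idle_pool_size bounds how many idle clients are
# kept around (option name is an assumption from older crystal-db APIs).
pool = DB::Pool(HTTP::Client).new(max_idle_pool_size: 10) do
  HTTP::Client.new("example.com", tls: true)
end

# checkout yields a pooled client and returns the value of the block.
body = pool.checkout do |client|
  client.get("/").body
end
```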


Seconding @bcardiff’s suggestion of crystal-db here — I put in the PR that enabled that usage specifically because DB::Pool is the single best connection pool implementation I’ve seen in Crystal.

  • Thread-safe if you need that
  • Customizable elasticity via max_idle_pool_size, so you aren’t stuck holding onto hundreds of connections just because your service had a brief surge in work to do
  • Disposing of connections is simply a matter of closing them

The first is a matter of mutex usage, but I don’t think any other implementation in Crystal meets those last two needs.

It was designed for RDBMS clients, but I also use it for redis and elasticsearch. I also have a couple other clients for Google and Microsoft APIs (which I still need to extract from one of my apps) that use it.


Wonder if it would make sense to split it out into its own shard? Just feels kinda weird requiring a DB shard just to use it to create a connection pool.


So if pools work, it’s not that HTTP::Client is fundamentally incompatible with fibers; it just can’t handle more than one request at a time. Which makes sense, as there’s only one IO in there.

Oh, excellent. I was thinking of doing some sort of pooling, but this makes it pretty simple.

That’s handy.

Seconded. Would also improve visibility.

I created Extract `pool` implementation to dedicated shard · Issue #190 · crystal-lang/crystal-db · GitHub to track that idea.


I must be missing something obvious here, but take this example:

require "http/client"
require "pool"

class Test
  @pool : Pool(HTTP::Client)

  def initialize
    @pool = Pool(HTTP::Client).new { HTTP::Client.new("", tls: true) }
  end

  def test
    @pool.get { |client|
      res = client.get("/")
    }
    res.body
  end
end

This fails because the compiler obviously can’t know that the @pool.get block is always executed, so it can’t be sure of the type of res. But if we move the res.body inside the block, test doesn’t return anything, because @pool.get doesn’t have a return type. And it can’t have one, as there’s no way for it to know.

I figure that

client = @pool.get
res = client.get("/")
@pool.add client

Ought to work, but that’s a bit unpretty. I could monkey-patch in my own variant of get that has a return type (or subclass it, which would be a tad prettier). Or just add a helper on the class that uses the client.

Am I missing something, or is this just the price you have to pay for static typing?

What failure message? :)

Why not just do everything in the #get block? Or maybe try the DB shard based one.

 19 | res.body
Error: undefined local variable or method 'res' for Test

Defining the variable before the get block just exchanges that for Error: read before assignment to local variable 'res'. Which makes sense.

I believe the DB one would behave similarly. And, well, doing everything in the #get block would hold on to the client for the duration of processing, which goes a bit against the original goal of efficiency. Not to mention that you’re pushing your business logic from the outside (ask for something and work with it) to the inside (which blocks help out with, of course). And this is getting more theoretical, but you’re right: if we get to the point where that becomes a factor, you’d probably do just the minimal handling of the result in the block and stuff something in a queue for further processing.
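That hand-off could look something like this, using a Channel as the queue (a sketch under those assumptions, reusing a pool like the ones from earlier posts):

```crystal
# Do only minimal work while holding a client; push the rest to a queue.
queue = Channel(String).new(32)

# Consumer fiber: the heavier processing happens here, without holding
# a pooled client.
spawn do
  while body = queue.receive?
    # ... actual business logic on the response body ...
    puts body.bytesize
  end
end

# Producer: grab just the body inside the pool block, then the client
# goes straight back to the pool.
pool.get do |client|
  queue.send client.get("/").body
end
```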

But a little helper method did wonders:

    protected def client(&)
      client = @pool.get
      res = with client yield
      @pool.add client
      res
    end

Which gives a nice little micro DSL:

      response = client do
        post("/api/v1/register", headers: @headers)
      end

The DB pool implementation returns the value of the block.
I don’t know why GitHub - j8r/ A simple thread-safe generic pool. doesn’t do that as well. It arbitrarily restricts the return type to Nil, which seems completely unnecessary.


Oh, you’re right.

Oh, but I do, I was experimenting a bit. Because it uses rescue ... else, the return value of the begin block is actually the return value of add resource. The comment explains why it’s not using ensure, but I think this should work, though it’s not as pretty:

      res = yield resource
      add resource
      res
    rescue ex
      @total_resources.delete resource
      raise ex
    end