The Crystal Programming Language Forum

Accessing a class property is significantly slower than passing it

This isn’t exactly something that needs help so much as attention. Hopefully this is the right place to post this.

Accessing a property of a class within a method via @bar or self.bar is much slower than passing that same value as an argument. In the example below the fast version runs about 900 times faster on my machine. Is this intended behavior? It seems extremely expensive for a simple access and limits the benefits of using objects.

require "benchmark"

class Foo
  property bar : Float64
  
  def initialize(@bar)
  end
  
  def slow
    1000.times do
      @bar = (@bar + @bar) / @bar
    end
  end

  def fast(bar : Float64)
    1000.times do
      @bar = (bar + bar) / bar
    end
  end
end

fooA = Foo.new 10.0
fooB = Foo.new 10.0

Benchmark.ips do |bm|
  bm.report "Slow modification" { fooA.slow }
  bm.report "Fast modification" { fooB.fast fooB.bar }
end

Slow modification 62.78k ( 15.93µs) (± 1.48%) 0.0B/op 825.11× slower
Fast modification 51.80M ( 19.30ns) (± 6.63%) 0.0B/op fastest

Those two methods don’t do the same thing.
#fast repeats the same operation with the same operands, which the optimizer can likely hoist out of the loop or eliminate entirely, while #slow depends on the value of @bar, which changes on every iteration (it is changed to the same value after the first pass, but it is still a fresh load and store each time).
I think only #slow actually executes the #times block more than once.
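To make the "same value" point concrete, here is a minimal sketch (plain Crystal, no benchmarking involved): after the first iteration the update (x + x) / x lands on 2.0, which is a fixed point, so every later iteration recomputes an identical value even though #slow still loads and stores the ivar each time.

```crystal
x = 10.0
x = (x + x) / x # 20.0 / 10.0 => 2.0
x = (x + x) / x # 4.0 / 2.0   => 2.0, a fixed point from here on

# In #slow each iteration still re-reads and re-writes @bar even
# though the value has stabilized; in #fast the operands are
# loop-invariant locals, which the optimizer can fold away.
puts x # => 2.0
```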


Look at this change:

    require "benchmark"

    class Foo
      property bar : Float64

      def initialize(@bar)
      end

      def slow
        1000.times do
          @bar = (@bar + @bar) / @bar
        end
      end

      def fast(bar : Float64)
        1000.times do
          bar = (bar + bar) / bar
          @bar = bar
        end
      end
    end

    fooA = Foo.new 10.0
    fooB = Foo.new 10.0

    Benchmark.ips do |bm|
      bm.report "Slow modification" { fooA.slow }
      bm.report "Fast modification" { fooB.fast fooB.bar }
    end
Slow modification 251.32k (  3.98µs) (± 0.18%)  0.0B/op        fastest
Fast modification 251.31k (  3.98µs) (± 0.11%)  0.0B/op   1.00× slower

You are not passing bar (which is what I think you mean by "it"). In fast you are simply passing a Float64 value, so fast is going to be far faster. slow is re-accessing, and re-assigning the value to, the ivar on each iteration. You can see the difference here.

@girng_github I’m afraid your example doesn’t show anything. First of all while you require "benchmark", there’s no actual benchmark. Second of all carc.in doesn’t turn on compile time optimizations, so it’s useless for benchmarking. And third your code only ever writes to bar inside slow, but never reads it, which is a prime example of an operation that compile time optimizer just gets rid of entirely.
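The write-only pattern described above can be sketched like this (hypothetical Counter class, not from the thread): if an optimized build never reads @value after the loop, every store is dead and the optimizer is free to delete the whole loop; reading the ivar back and returning it keeps the stores observable.

```crystal
class Counter
  property value : Float64 = 1.0

  # Each store overwrites the previous one; if @value is never read
  # afterwards, an optimizing build may eliminate this loop entirely.
  def write_only(v : Float64)
    1000.times { @value = v * 2.0 }
  end

  # Reading @value inside the loop and returning it makes each
  # iteration's load/store observable, so the work cannot be removed.
  def read_back(v : Float64) : Float64
    1000.times { @value = (@value + v) / v }
    @value
  end
end

c = Counter.new
c.write_only(3.0)
puts c.value         # => 6.0
puts c.read_back(3.0)
```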

@jhass My example shows passing by value, not a reference to the ivar, which is presumably what the OP means by "passing it". bar itself is not being passed; it’s the value of bar that is used in the iteration, so it’s far faster. Value = copy.

And of course I’m not doing a benchmark, because I’m not using Benchmark.ips. I’m just showing an example. Here, let me edit my code and remove the require "benchmark" line.

Here is the new link: https://play.crystal-lang.org/#/r/7qvd

Oof, I feel silly for not testing it with other values. You’re right, they’re not the same.

The example in no way proves that anything is faster, and I don’t see how an 8-byte copy within stack memory is obviously faster than a pointer dereference into heap memory.