As this is an initial release, I wouldn’t recommend using it in production just yet. However, if you use Amber and have some spare time, I would appreciate you trying it out and letting me know what you think!
A little info about the project:
Motion is a framework for building reactive, real-time frontend components. If you have used Lucky, you’ll notice it’s very similar, and that’s because the actual HTML generation & DSL was pulled & modified from Lucky. This project wouldn’t have been possible without @paulcsmith & LuckyFramework, so I would like to give those devs a thank you!
Furthermore, motion.cr components are designed to be reactive. All components can connect to the server via websockets and updates can be streamed to the frontend based on frontend events. This functionality was inspired by Phoenix LiveView but the design is loosely based on this gem.
Please let me know what you think along with what you think can be improved!
One thing I haven’t solved yet (but I’ve been noodling on it for quite a while) is how to scale it across multiple instances of the application. For example, mine breaks if the WebSocket request doesn’t connect to the same instance of the application as the original request for the HTML, so you have to have your load balancer route all requests from the same user to the same backend. Does motion.cr avoid this problem?
As an initial release, I haven’t gotten to distributed systems yet. If I am being honest, I have little experience with them, so I would love to chat if anyone has thoughts on this. My first thought would be to store the serialized objects in Redis so that other instances can access them, but I don’t know how viable that is given my lack of experience.
This is also how I was thinking of solving it. Basically making the LiveViews serializable (something like JSON::Serializable or maybe MessagePack::Serializable to minimize serialization overhead), giving them UUIDs, and storing their state as keys with a TTL in Redis — WebSocket pings would refresh the TTL and closing the WebSocket could delete the key explicitly. The TTL is mainly to pick up the case where an on_close hook doesn’t get invoked due to a dirty shutdown or something so we don’t leak data in Redis.
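To make the idea above concrete, here is a minimal sketch of the UUID-key-plus-TTL scheme using the stefanwille/crystal-redis shard. `LiveViewState` and the `motion:view:` key namespace are hypothetical names, not anything motion.cr actually ships:

```crystal
# Sketch: persisting serialized LiveView state in Redis under a UUID key
# with a TTL. Assumes the crystal-redis shard is in shard.yml.
require "json"
require "uuid"
require "redis"

class LiveViewState
  include JSON::Serializable

  property counter : Int32

  def initialize(@counter); end
end

redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))
ttl = 30 # seconds; short, because pings keep it alive

id = UUID.random.to_s
state = LiveViewState.new(counter: 1)

# Store under a namespaced key with a TTL so a dirty shutdown
# (no on_close hook) can't leak data in Redis forever.
redis.setex("motion:view:#{id}", ttl, state.to_json)

# On each WebSocket ping, refresh the TTL instead of rewriting the payload.
redis.expire("motion:view:#{id}", ttl)

# On a clean close, delete the key explicitly.
redis.del("motion:view:#{id}")
```

Any app instance that receives the WebSocket can then look the state up by UUID, which is what removes the sticky-session requirement.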
I haven’t gone with that yet for a couple reasons. The first is that my apps that use these live views haven’t needed to scale beyond a single core, though I would like the redundancy.
The other and, if I’m being perfectly honest, bigger reason is that any objects you reference from the LiveView would also need to be serializable using the same methods, which can get funky in nontrivial views. For example, if a view updates inventory counts as items are sold, we may store that item on the LiveView state:
<% items.each do |item| %>
<%= InventoryItemView.new(item) %>
<% end %>
If the live view is serialized to Redis via JSON, then the item needs to include JSON::Serializable, too. The way I write my apps, this is probably fine most of the time. But I don’t know what Lucky and Amber models do (I use plain DB::Serializable), so I’m not sure how to generalize this yet. I also haven’t measured how much space this would take up in Redis, especially for models backed by wide DB tables or ones with big text columns (for example, product descriptions, comment text, etc), so I’m not sure what it looks like at scale yet.
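The constraint being described is roughly this: anything the LiveView holds a reference to must itself round-trip through the same serializer. A tiny sketch with a hypothetical `InventoryItem`:

```crystal
# Sketch of the transitive-serializability constraint: a model referenced
# from serialized LiveView state must itself round-trip through JSON.
require "json"

struct InventoryItem
  include JSON::Serializable

  getter name : String
  getter count : Int32

  def initialize(@name, @count); end
end

item = InventoryItem.new("widget", 3)

# The LiveView would embed this JSON inside its own serialized state.
restored = InventoryItem.from_json(item.to_json)
restored.count # => 3
```

With a wide DB-backed model, every column would land in that JSON blob, which is where the Redis space concern above comes from.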
One doubt I have… and don’t take it the wrong way, I’m just clueless about this “live view” trend.
When using the calculator there’s a noticeable delay (well, a hundred milliseconds, but it’s noticeable). Is such delay also present in Phoenix LiveView, or is it an implementation detail of motion.cr? (I can’t find a calculator demo for Phoenix).
What I’m saying is… all of this is really fancy, but is it performant for users? I think doing as much as possible on the client side is the best way to go, because then it offloads some work from the server, plus it’s responsive.
I have yet to try out motion.cr, but I can share my experience with Phoenix LiveView.
The calculator is definitely a bad demo and will be removed in future updates. The more I use this, the more I find there are particular use cases it suits. One use case I love, and am still thinking about how to implement, is streaming model updates when they occur. For example, if you and I had a shared todo list, you adding a todo would render that new todo on my screen just because I am viewing that component/model.
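The shared-todo-list idea could be sketched as a registry of subscribers per model, where saving a record broadcasts to every component currently viewing it. All the names here (`ModelBroadcaster`, the string-keyed registry) are hypothetical, just illustrating the shape:

```crystal
# Sketch: process-local pub/sub keyed by model name. Saving a Todo would
# call `broadcast`, and each subscribed component re-renders with the
# payload. A multi-instance version would swap this for Redis pub/sub.
class ModelBroadcaster
  @@subscribers = Hash(String, Array(Proc(String, Nil))).new do |hash, key|
    hash[key] = [] of Proc(String, Nil)
  end

  # A component viewing a model registers interest here.
  def self.subscribe(model : String, &block : String -> Nil)
    @@subscribers[model] << block
  end

  # Called after a record is saved; notifies every viewing component.
  def self.broadcast(model : String, payload : String)
    @@subscribers[model].each &.call(payload)
  end
end

rendered = [] of String
ModelBroadcaster.subscribe("Todo") { |json| rendered << json }
ModelBroadcaster.broadcast("Todo", %({"title":"buy milk"}))
# rendered now holds the new todo's payload
```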
I need to take a look at the calculator this weekend. I thought it was ping related but the form validation round trip is 50ms on my network. The calculator is taking anywhere from 100-400ms round trip. I don’t know elixir but I plan to start dabbling with LiveView since it’s the most mature. Should help me quite a bit.
With that said, it’s going to be trial and error, along with researching other libs, to figure out what should & shouldn’t be “motions”. I plan to put something in the README about this when I have more data & input from others. My goal with this release was to open it up to the community and start getting feedback.
As for live-viewing everything, I can agree with that. I am not aiming to replace JS, just to reduce the amount of work in JS that I have to do. In a year’s time, I may find some serious problem, like distributed systems, that I can’t overcome. I am just rolling with it until that kind of problem occurs or I no longer like working on the project. We will see where it goes.
For the Redis part, that’s pretty much what I was thinking. The TTL would be tough to get right but would be very helpful. I already have a configuration class that can expose certain settings. Adding crystal-redis as a dependency and exposing some settings would probably be the best course of action. It’s already a dependency of Amber, so it wouldn’t really add more overhead. Users could configure the Redis URL, TTL, and whatever other variables pop up during implementation. Or you can go without Redis, and deserialized objects can sit on the server like they do now for smaller projects.
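The configuration surface described above might look something like this; the property names and `Motion.configure` block are a guess at an API, not motion.cr’s actual settings:

```crystal
# Sketch of an opt-in Redis configuration: a nil redis_url means state
# stays in-process on the server, as it does today.
module Motion
  class Config
    property redis_url : String? = nil # nil => in-memory state, no Redis
    property state_ttl : Int32 = 30    # seconds; refreshed on each ping
  end

  @@config = Config.new

  def self.configure
    yield @@config
  end

  def self.config
    @@config
  end
end

Motion.configure do |c|
  c.redis_url = ENV["REDIS_URL"]?
  c.state_ttl = 60
end
```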
I am currently refactoring some classes and JS. However, after that’s done, I do want to look into this more. The next feature I want to add is streaming updates from models, which would give me some time to sit down and play with serializing models. I think after that’s done, I’ll start looking into Redis.
Redis would also give us the ability to do a global key/value store for other components to access.
I also wonder how SQLite would perform. I think Redis is probably the go-to, but SQLite may be better if models are huge. I’m not sure it would be fast enough, though.
It requires a round-trip to the server, so responsiveness will definitely depend on latency. I would definitely not use it as a general-purpose replacement for a full client-rendered app, but I would definitely see it as a better solution than Rails-style UJS. The choice there isn’t between client rendering and server-rendering, but server-rendering a full page over cold HTTP or server-rendering a tiny portion of it over a WebSocket connection that has already gone through its TCP handshake, TLS negotiation, TCP slow start, and even authentication.
It’s tradeoffs either way. I can add more capacity server-side to handle load, but I can’t make my users’ devices faster, for example. If your app requires a round-trip to the server on a given interaction, both a client-rendered app and a server-rendered app will have poor perceived performance for those interactions unless the client-rendered app does things like optimistic updates (in which case, what do you do when it fails?) or UI transitions while the request is in-flight to reduce the perception of latency. Either way, it takes great UX designers to make that not terrible.
However, if a client-rendered app would not require a round-trip to the server for an interaction (data is cached, pre-fetched, or it’s a pure-UI update like an accordion control), I 100% agree that it would offer a far better UX than anything that would, live-view or otherwise.
My 2c on the topic.
The attractiveness of LiveView in Phoenix/Elixir doesn’t lie with just “no JS”. I think it’s partially that it’s self-contained and doesn’t require any dependencies beyond Elixir. Adding Redis to the mix somewhat contradicts that. I’m not sure how they manage session management for sockets, but they’re probably using one of the built-in tools from OTP/Erlang (like ETS).
Having said that, I do not want to criticize this great attempt at LiveView for Crystal! I prefer other languages to JS when I have the option, myself. This is just a warning against feature/tool bloat: replacing some JS with a more complex infrastructure requirement (such as maintaining a Redis DB) may not be a desirable trade-off. Or maybe it is. Just something to consider and keep an eye on. (Saying this as a DevOps engineer who has to juggle multiple requirements on a legacy Rails app.)
I agree with keeping it simple. With that said, Phoenix LiveView does support redis, it’s just disabled by default from my understanding.
I had some free time yesterday and made some changes for Redis support. I broke things into “adapters”, where the server adapter is the default. I certainly need to study this more before I begin making recommendations. However, I don’t see a need for a single server to use the Redis adapter; there’s no point in making extra round trips or adding that overhead. I think this is the best way to go: it’s easy for people to get started and deploy, but motion.cr also supports horizontal scaling when you need it.
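For anyone curious what the adapter split might look like, here is a rough sketch of the shape: a common interface, an in-process server adapter as the default, and a Redis adapter slotting in behind the same interface. The names are hypothetical, not motion.cr’s actual API:

```crystal
# Sketch of the adapter pattern: both adapters satisfy one interface,
# so the rest of the framework doesn't care where state lives.
abstract class StateAdapter
  abstract def write(id : String, state : String) : Nil
  abstract def read(id : String) : String?
end

# Default: state lives in this process, as motion.cr does today.
class ServerAdapter < StateAdapter
  @store = Hash(String, String).new

  def write(id : String, state : String) : Nil
    @store[id] = state
  end

  def read(id : String) : String?
    @store[id]?
  end
end

# A RedisAdapter would implement the same two methods, delegating to
# crystal-redis with a TTL, which is what enables horizontal scaling.
```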