Welcome to My Blog.

Here, you will find posts, links, and more about code (primarily Ruby), business (bootstrapped SaaS), and a little of everything in between.

Render Components from Turbo Broadcasts

I was surprised to learn there is no built-in way to render components from turbo broadcasts.

Components (Phlex or ViewComponent) still feel very underutilized in Rails. Perhaps that will change in the future, but we made 20 years without them, so perhaps not.

Either way, I wanted an easy way to re-use them when broadcasting changes.

The easiest option I could find was something that looked like this:

def broadcast_created
  component = Component.new(self)
  html = ApplicationController.render(component, layout: false)

  broadcast_append_later_to(
    "the_stream",
    target: "the_target",
    html: html
  )
end

This works fine for a one-off. I could always add something like .as_html to my base component class to make it less repetitive. But to make a component: option "just work" everywhere, I added the following as an initializer in my Rails app.

module TurboStreamsBroadcastExtensions
  def broadcast_action_later_to(*streamables, action:, target: nil, targets: nil, attributes: {}, **rendering)
    render_component_to_html(rendering)
    super
  end

  def broadcast_action_to(*streamables, action:, target: nil, targets: nil, attributes: {}, **rendering)
    render_component_to_html(rendering)
    super
  end

  private

  def render_component_to_html(rendering)
    if rendering&.dig(:component)&.respond_to?(:render_in) && rendering&.dig(:html).blank?
      rendered_component = ApplicationController.render(rendering.delete(:component), layout: false)
      rendering[:html] = rendered_component
    end
  end
end

Turbo::Streams::Broadcasts.prepend TurboStreamsBroadcastExtensions
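The override works because Module#prepend puts the extension ahead of the original method in the lookup chain, so super still reaches Turbo's implementation. Here is a stdlib-only sketch of the pattern (module and method names are illustrative, not Turbo's real internals):

```ruby
module Broadcasts
  def broadcast(**rendering)
    "broadcast html=#{rendering[:html]}"
  end
end

module ComponentExtension
  def broadcast(**rendering)
    # Translate a :component option into :html before handing off to the original.
    if rendering[:component]
      rendering[:html] = "rendered-#{rendering.delete(:component)}"
    end
    super(**rendering)
  end
end

# Prepending puts ComponentExtension before Broadcasts in the ancestor chain.
Broadcasts.prepend ComponentExtension

class Channel
  include Broadcasts
end

Channel.new.broadcast(component: "Toast")
# => "broadcast html=rendered-Toast"
```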

With this in place, I can simply pass component: to my broadcast_append_later_to call.

def broadcast_created
  broadcast_append_later_to(
    "the_stream",
    target: "the_target",
    component: Component.new(self)
  )
end

Perhaps this will be built someday, but for now, I think it will do the trick.

If you want to try this, here is a gist with the code.

#

Solid Queue in Development

In Rails 8, Solid Queue is included by default. However, in development, you default to the async (in-memory) job adapter. This is OK for simple tasks, but I prefer running jobs in a separate process.

In a world of Overmind (and Foreman), this adds very little extra work to your development experience. In addition, knowing I have seen it all work together makes me much more confident about going to production.

I stumbled twice on this, so I figured I would document it for others (and likely my Google search in the future).

Here are the changes you need to make:

First, modify your database.yml file to specify the database you want to use. You could technically reuse your development database, but another database (especially with SQLite) costs nothing locally, so I don't see why you would bother.

The core changes here:

  1. We specify the first database is now primary.
  2. We list the queue database.
  3. We list the migrations path for the queue database.

development:
  primary:
    <<: *default
    database: storage/development.sqlite3
  queue:
    <<: *default
    database: storage/development_queue.sqlite3
    migrations_paths: db/queue_migrate

Next, in your development.rb environment file, specify that you want to use Solid Queue and the database it should connect to.

config.active_job.queue_adapter = :solid_queue
config.solid_queue.connects_to = {database: {writing: :queue}}

Finally, to always start the worker process alongside the app, add this to your Procfile.dev:

worker: bundle exec rake solid_queue:start

Now, when you run bin/dev, your jobs will run in their own process.

#

Plucking Nice

Put this one under "the more you know."

I had no idea ActiveSupport provided a pluck method for plain enumerables, just like ActiveRecord's.

Thanks to Standard, I now know.

pluck.png

This code cleans up nicely with:

prompts = message_prompts.pluck(:content).join(" ")
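For a sense of what ActiveSupport is doing, here is a plain-Ruby sketch of Enumerable#pluck for a single key (the real version also accepts multiple keys; the method is renamed here to avoid clashing with ActiveSupport):

```ruby
module Enumerable
  # Simplified re-implementation of ActiveSupport's Enumerable#pluck.
  def pluck_sketch(key)
    map { |element| element[key] }
  end
end

message_prompts = [{content: "Write a haiku"}, {content: "about Ruby"}]
message_prompts.pluck_sketch(:content).join(" ")
# => "Write a haiku about Ruby"
```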
#

Keynote

A presenter is an object that encapsulates view logic. Like Rails helpers, presenters help you keep complex logic out of your templates.

This gem has barely seen an update in years but still works just as you would expect. It is likely the closest thing to "done" in my Gemfile.

keynote: Flexible presenters for Rails.

#

Building a Multi-Step Job with ActiveJob

Most Rails applications start pretty simple. Users enter data; data gets saved in the database.

Then, we move on to something a bit more complex. Eventually, we realize we should not do all the work in one request, so we start pushing work onto a background process with some form of job.

Great. However, as complexity increases, we realize we are doing too much work in a single background job. So, the next logical option is to split it into multiple jobs. Easy enough, of course, but then we run into some gotchas:

  1. Do any jobs require other jobs to be completed first? And do any of those sub-jobs require other sub-jobs (and so on)?
  2. How do we mentally keep track of what is going on? How do we make it easy for someone to jump into our code base and understand what is happening?

Over the years, KickoffLabs has processed billions of jobs. Breaking tasks down into small chunks has been one way we have managed to scale. One of the things I have found challenging over the years is keeping track of when/what is processed in the background jobs.

So, when I started to experiment with a new product idea, I wanted to find a way to tame this problem (and eventually roll it back into KickoffLabs, too).

I had seen Shopify's JobIteration library before but never had a chance to use it.

Meet Iteration, an extension for ActiveJob that makes your jobs interruptible and resumable, saving all progress that the job has made (aka checkpoint for jobs).

It recently popped up on my radar again, and I noticed it supports iterating over arrays. This gave me an idea. Typically, this library is used for iterating over a large number of items in a job and tracking your place. If the job restarts (or even raises an error), you can safely resume where you left off.
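The checkpointing idea itself can be sketched in plain Ruby (method and step names here are illustrative, not JobIteration's actual internals):

```ruby
# Cursor-based resumption: skip everything completed before the cursor,
# run the rest. JobIteration persists the cursor after each iteration so
# a restarted job resumes at the next unprocessed element.
def run_from_cursor(steps, cursor: 0)
  executed = []
  steps.each_with_index do |step, index|
    next if index < cursor # already completed before the interruption
    executed << step
    # Here, `index + 1` would be checkpointed as the new cursor.
  end
  executed
end

run_from_cursor([:format, :split, :enhance], cursor: 1)
# => [:split, :enhance]
```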

With that functionality alone, it is likely a quite helpful library for most projects. But what if we used it to define a series of steps a job needs to take? This way, we can have a single job that handles all of the processing for a necessary task.

If things can be run in parallel, one or more of the steps can create new child jobs as well.

With that in mind, here is "SteppedJob":

class SteppedJob < ApplicationJob
  include JobIteration::Iteration
  queue_as :default

  class_attribute :steps, default: []

  class << self
    def steps(*args)
      self.steps = args
    end
  end

  def build_enumerator(*, cursor:)
    raise ArgumentError, "No steps were defined" if steps.blank?
    raise ArgumentError, "Steps must be an array" unless steps.is_a?(Array)
    Rails.logger.info("Starting #{self.class.name} at cursor #{steps[cursor || 0]}")
    enumerator_builder.array(steps, cursor:)
  end

  def each_iteration(step, *)
    Rails.logger.info("Running step #{step} for #{self.class.name}")
    send(step, *)
    Rails.logger.info("Completed step #{step} for #{self.class.name}")
  end
end

This could also be a module, but I have it set up as a base class.

To use it:

  1. Create a job that derives from SteppedJob
  2. Define an array of steps
  3. Add a method for each step

Here is a sample job. This job is enqueued like any other ActiveJob: ProcessRssContentJob.perform_later(content)

From there, each job step is executed, and the content argument is passed along to each step.

class ProcessRssContentJob < SteppedJob
  queue_as :default

  steps :format_content, :create_content_parts, :enhance_content_parts

  def format_content(content)
    content.text = BlogPostFormatter.call(content:)
    content.processing_status!
  end

  def create_content_parts(content)
    ContentPartsForContentService.call(content:)
  end

  def enhance_content_parts(content)
    EnhanceContentPartsService.call(content:)
  end
end
#

Minitest::Mock with Keyword Arguments

I had a small gotcha that derailed my afternoon. Hopefully, a Google search (or our new AI overlords) will lead you here if you hit the same thing.

First, here is the method I am trying to mock: identifier.call(transcript:)

My initial approach looked like this:

mock_identifier = Minitest::Mock.new
mock_identifier.expect :call, ["abc"], [{transcript: "some text"}]

But I kept getting an error like this: mocked method :call expects 1 arguments, got []

Eventually, via some debugging (binding.irb for the win), I found that I could call my mock like this:

identifier.call(:transcript => "some text")

So that led me to believe something was getting confused with the mock + method signature.

I tried to double-splat the arguments but ended up with the same error.

mock_identifier = Minitest::Mock.new
mock_identifier.expect :call, ["abc"], [**{transcript: "some text"}]

Finally, I went with an alternative way to set and verify the arguments of the mock:

mock_identifier = Minitest::Mock.new
mock_identifier.expect :call, ["abc"] do |args|
   args == {transcript: "some text"}
end
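For reference, here is that final approach as a self-contained snippet (the transcript value is illustrative):

```ruby
require "minitest/mock"

mock_identifier = Minitest::Mock.new

# Validate the keyword arguments in a block instead of a positional list.
mock_identifier.expect :call, ["abc"] do |args|
  args == {transcript: "some text"}
end

result = mock_identifier.call(transcript: "some text")
mock_identifier.verify # raises MockExpectationError if expectations were unmet
```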

And we are back in business...well, onto the next issue. But we are almost all ✅ now. 😀

#

Still Learning While Using AI To Code - Or, Enumerable#chunk_while

For someone who thoroughly enjoys the Ruby language, one of the more rewarding aspects of using tools like Cursor and Supermaven has been exposure to some interesting Ruby methods I have not seen before.

Today, it was Enumerable#chunk_while.

The docs say:

Creates an enumerator for each chunked elements. The beginnings of chunks are defined by the block. This method splits each chunk using adjacent elements, elt_before and elt_after, in the receiver enumerator. This method split chunks between elt_before and elt_after where the block returns false. The block is called the length of the receiver enumerator minus one times.

Yes, that meant nothing to me as well. But here is a sample that should help. Let's take a grocery shopping list and group all the items by the aisle they are found in.

# Sample grocery shopping list
shopping_list = [
  { aisle: 'Produce', item: 'Apples' },
  { aisle: 'Produce', item: 'Bananas' },
  { aisle: 'Dairy', item: 'Milk' },
  { aisle: 'Dairy', item: 'Cheese' },
  { aisle: 'Canned Goods', item: 'Chickpeas' },
  { aisle: 'Canned Goods', item: 'Tomato Sauce' },
  { aisle: 'Snacks', item: 'Chips' },
]

chunked_items = shopping_list.chunk_while do |item1, item2|
  item1[:aisle] == item2[:aisle]
end

grouped_items = chunked_items.map do |aisle_group|
  {
    aisle: aisle_group.first[:aisle],
    items: aisle_group.map { |item| item[:item] }
  }
end

In the end, we have an array that looks like this:

[
  {:aisle=>"Produce", :items=>["Apples", "Bananas"]},
  {:aisle=>"Dairy", :items=>["Milk", "Cheese"]},
  {:aisle=>"Canned Goods", :items=>["Chickpeas", "Tomato Sauce"]},
  {:aisle=>"Snacks", :items=>["Chips"]}
]
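One thing worth noting: chunk_while only groups adjacent elements. If the same aisle showed up again later in the list, it would start a new chunk, which is the key difference from group_by:

```ruby
aisles = ["Produce", "Dairy", "Produce"]

# chunk_while splits whenever adjacent elements differ, so repeats
# that are not adjacent land in separate chunks.
aisles.chunk_while { |a, b| a == b }.to_a
# => [["Produce"], ["Dairy"], ["Produce"]]

# group_by collects all matching elements regardless of position.
aisles.group_by(&:itself).values
# => [["Produce", "Produce"], ["Dairy"]]
```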
#

Ruby Map With Index

Ruby has a built-in helper on Enumerable called each_with_index:

# An array of fruits
fruits = ["Apple", "Banana", "Cherry", "Date"]

# Using each_with_index to print each fruit with its index
fruits.each_with_index do |fruit, index|
  puts "#{index}: #{fruit}"
end

Unfortunately, there is no equivalent for Enumerable#map.

['a', 'b'].map_with_index { |item, index| } # => undefined method 'map_with_index'

There are easy ways to work around this, but the cleanest is chaining with_index onto map.

fruits = ["Apple", "Banana", "Cherry", "Date"]

result = fruits.map.with_index do |fruit, index|
  "#{index}: #{fruit}"  # Format the output with index
end
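As a bonus, with_index accepts an optional starting offset, which each_with_index does not:

```ruby
fruits = ["Apple", "Banana", "Cherry"]

# Start counting at 1 instead of 0.
fruits.map.with_index(1) { |fruit, i| "#{i}. #{fruit}" }
# => ["1. Apple", "2. Banana", "3. Cherry"]
```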

Why? My guess is to keep the number of methods smaller over time. Technically, each_with_index could even be deprecated since Enumerable#each.with_index is available. This way, we do not need a _with_index variant of everything on Enumerable.

(1..100)
  .select
  .with_index { |n, index| (n % 2 == 0) && (index % 5 == 0) }

# [6, 16, 26, 36, 46, 56, 66, 76, 86, 96]
#

AeroSpace Tile Manager

In the never-ending pursuit of optimal app/window/space management on my computer, I recently switched to AeroSpace.

AeroSpace bills itself as:

an i3-like tiling window manager for macOS

If you are like me and have never used i3, you may be asking, what is this?

Honestly, it is hard to describe, but in a nutshell, it has replaced my usage of three different applications:

  1. OS X Spaces (grouping of windows/projects/etc)
  2. Magnet - window spacing
  3. Alt+Tab - intelligent, quick app switching

Those three apps worked as expected (and I have nothing but praise for Magnet, which always did its job), but I now work in a much more consistent and repeatable environment and spend far less time navigating between apps.

I recommend checking out this video for a detailed overview.

#

Turbo Streams -- append_all

While adding revisions (auto-save and version history) to PhrontPage, I needed a way to add a new element to the page. Using turbo_stream#append, turbo_stream#update, etc. would work, but only if there is an element on the page with a known DOM id. Eventually, I would like this item (a toast component) to be more generally available, so I did not want anything hardcoded on the page.

In addition to append, update, etc., there are matching methods append_all, update_all, etc., that allow targeting one or more items with more flexible query selectors.

Append to the targets in the dom identified with targets either the content passed in or a rendering result determined by the rendering keyword arguments, the content in the block, or the rendering of the content as a record.

In my case, the Toast is added to the page, displays, and then removes itself.

Hat tip to Matt Swanson for highlighting returning a component from the turbo_stream response.

  def render_success_toast
    component = Admin::Toast::Component.new(text: "Your changes have been saved")
    render turbo_stream: turbo_stream.append_all("body", component)
  end

toast.gif

#