Welcome to My Blog.

Here, you will find posts, links, and more about code (primarily Ruby), business (bootstrapped SaaS), and a little of everything in between.

Mise and Puma-Dev

Over on 🦋, I have been praising Mise.

Yesterday, I updated to the latest version of Mise and had a setback or two. The biggest one was that puma-dev stopped working.

I was greeted with the following on all my apps using puma-dev:

unexpected exit:
	bash: line 27: exec: puma: not found

Unfortunately, searching Google was not helpful. But then, a quick search for mise on the puma-dev repo delivered the answer.

Add the following to a .pumaenv file:

command -v mise >/dev/null && eval "$(mise activate --shims)"

It would be better if this file could be created once in the home directory, but it needs to be in each project using mise (at least for me). 🤷

#

BlueSky API - Domains

I have been experimenting with the BlueSky API.

I recently encountered what I thought was an authentication bug, but it turned out I was using the API incorrectly.

I find most of the BlueSky API to be overbearing, but they are just getting started, and if they can keep it open, it will be worth it in the end. Hopefully, in the long term, there will be some nice libraries that focus on the common paths and obscure most of the protocol stuff.

But for today, you need to be willing to work through a couple of things. One of those items is knowing which API endpoint (domain) to use.

BlueSky has an overview here: API Hosts and Auth

The short version:

For unauthenticated access, you likely want to use: api.bsky.app (and maybe even public.api.bsky.app for cached access)

For authenticated access (i.e., you are querying your own feed via app.bsky.feed.getAuthorFeed), you want to use bsky.social.
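
As a quick illustration, here is a minimal sketch of an unauthenticated call against the public host (the actor handle here is just a placeholder):

require "net/http"
require "json"

# Unauthenticated request against the public, cached host.
uri = URI("https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed")
uri.query = URI.encode_www_form(actor: "example.bsky.social", limit: 5)

response = Net::HTTP.get_response(uri)
feed = JSON.parse(response.body).fetch("feed", [])
feed.each { |item| puts item.dig("post", "record", "text") }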

#

Rails’ Partial Features You (didn’t) Know

Partials have long been an integral part of Rails. They are conceptually simple to understand, but they pack quite a few smart, lesser-known features. Let’s look at all of them!
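
As a taste, here are a couple of the long-standing options (these are standard Rails partial options, not necessarily the article's list):

# In a view (inside <%= %>): collection rendering with a few extras.
render partial: "products/product",
  collection: @products,
  as: :item,                           # expose each element as `item`
  spacer_template: "products/divider", # rendered between items
  cached: true                         # enables collection caching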

#

Render Components from Turbo Broadcasts

I was surprised to learn there is no built-in way to render components from turbo broadcasts.

Components (Phlex or ViewComponent) still feel very underutilized in Rails. Perhaps that will change in the future, but we made it 20 years without them, so perhaps not.

Either way, I wanted an easy way to re-use them when broadcasting changes.

The easiest option I could find was something that looked like this:

def broadcast_created
  # Render the component to an HTML string, then broadcast it as usual.
  component = Component.new(self)
  html = ApplicationController.render(component, layout: false)

  broadcast_append_later_to(
    "the_stream",
    target: "the_target",
    html: html
  )
end

This works fine for a one-off. I could always add something like .as_html to my base component class to make it less repetitive. But to make the "component" option just work everywhere, I added the following as an initializer in my Rails app.

module TurboStreamsBroadcastExtensions
  def broadcast_action_later_to(*streamables, action:, target: nil, targets: nil, attributes: {}, **rendering)
    render_component_to_html(rendering)
    super
  end

  def broadcast_action_to(*streamables, action:, target: nil, targets: nil, attributes: {}, **rendering)
    render_component_to_html(rendering)
    super
  end

  private

  # If a renderable component was passed (and no html: was), render it to a
  # string and hand it to Turbo as the html: option.
  def render_component_to_html(rendering)
    if rendering&.dig(:component)&.respond_to?(:render_in) && rendering&.dig(:html).blank?
      rendered_component = ApplicationController.render(rendering.delete(:component), layout: false)
      rendering[:html] = rendered_component
    end
  end
end

Turbo::Streams::Broadcasts.prepend TurboStreamsBroadcastExtensions

With this in place, I can simply add "component" to my broadcast_append_later_to call:

def broadcast_created
  broadcast_append_later_to(
    "the_stream",
    target: "the_target",
    component: Component.new(self)
  )
end

Perhaps this will be built someday, but for now, I think it will do the trick.

If you want to try this, here is a gist with the code.
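
As a lighter-weight alternative to the initializer, the .as_html idea mentioned above might look something like this (a rough sketch; ApplicationComponent and as_html are my own names, assuming a ViewComponent-style base class):

class ApplicationComponent < ViewComponent::Base
  # Render this component to an HTML string outside of a request,
  # ready to pass as html: to the broadcast helpers.
  def as_html
    ApplicationController.render(self, layout: false)
  end
end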

#

Solid Queue in Development

In Rails 8, SolidQueue is included by default. However, in development, you default to using the Async/in-memory job processor. This is OK for simple tasks, but I prefer running it in a separate process.

In a world of Overmind (and Foreman), this adds very little extra work to your development experience. In addition, knowing I have seen it all work together makes me much more confident about going to production.

I stumbled twice on this, so I figured I would document it for others (and likely my Google search in the future).

Here are the changes you need to make:

First, modify your database.yml file to add the database Solid Queue should use. You could technically put the queue tables in your development database, but the cost of another database (especially with SQLite) is zero locally, so I don't see why you would bother.

The core changes here:

  1. We specify that the first database is now the primary.
  2. We list the queue database.
  3. We list the migrations path for the queue database.

development:
  primary:
    <<: *default
    database: storage/development.sqlite3
  queue:
    <<: *default
    database: storage/development_queue.sqlite3
    migrations_paths: db/queue_migrate

Next, you need to go into your development.rb environment file and specify that we want to use Solid Queue and the database it should connect to.

config.active_job.queue_adapter = :solid_queue
config.solid_queue.connects_to = {database: {writing: :queue}}

Finally, to make it easy to always start the worker process, add this to your Procfile.dev:

worker: bundle exec rake solid_queue:start

Now, when you run bin/dev, your jobs will run in their own process.
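
If you want a quick sanity check that jobs really land in that process, a throwaway job like this works (the class name and log line are just placeholders):

class PingJob < ApplicationJob
  queue_as :default

  def perform
    Rails.logger.info("PingJob ran at #{Time.current}")
  end
end

# From a Rails console: PingJob.perform_later
# The log line should show up in the worker process, not the web server.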

#

Plucking Nice

Put this one under "the more you know."

I had no idea ActiveSupport provided a pluck method like ActiveRecord's.

Thanks to Standard, I now know.

[Image: pluck.png]

This code cleans up nicely with:

prompts = message_prompts.pluck(:content).join(" ")
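
For context, ActiveSupport adds pluck to Enumerable, so it works on plain arrays of hashes too. A quick illustration (the data here is made up):

require "active_support/core_ext/enumerable"

message_prompts = [
  {role: "system", content: "You are helpful."},
  {role: "user", content: "Write a haiku."}
]

message_prompts.pluck(:content).join(" ")
# => "You are helpful. Write a haiku."
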
#

Keynote

A presenter is an object that encapsulates view logic. Like Rails helpers, presenters help you keep complex logic out of your templates.

This thing has barely seen an update in years but still works just as you would expect. It is likely the closest thing to "done" in my Gemfile.

keynote: Flexible presenters for Rails.

#

Building a Multi-Step Job with ActiveJob

Most Rails applications start pretty simple. Users enter data; data gets saved in the database.

Then, we move on to something a bit more complex. Eventually, we realize we should not do all the work in one request, and we start using some form of job to push work onto a background process.

Great. However, as complexity increases, we realize we do too much work in a single background job. So, the next logical option is to split the background job into multiple jobs. Easy enough, of course, but then we run into some gotchas:

  1. Do any jobs require other jobs to be completed first? And, of course, do any of those sub-jobs require other sub-jobs (and so on)?
  2. How do we mentally keep track of what is going on? How do we make it easy for someone to jump into our code base and understand what is happening?

Over the years, KickoffLabs has processed billions of jobs. Breaking tasks down into small chunks has been one way we have managed to scale. One of the things I have found challenging over the years is keeping track of when/what is processed in the background jobs.

So, when I started to experiment with a new product idea, I wanted to find a way to tame this problem (and eventually roll it back into KickoffLabs, too).

I had seen Shopify's JobIteration library before but never had a chance to use it.

Meet Iteration, an extension for ActiveJob that makes your jobs interruptible and resumable, saving all progress that the job has made (aka checkpoint for jobs).

It recently popped up on my radar again, and I noticed it supports iterating over arrays. This gave me an idea. Typically, this library is used for iterating over a large number of items in a job and tracking your place. If the job restarts (or even raises an error), you can safely resume where you left off.
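
For reference, that typical use looks something like this (a sketch following the library's README pattern; the Newsletter model and mailer call are placeholders):

class DeliverNewsletterJob < ApplicationJob
  include JobIteration::Iteration

  def build_enumerator(newsletter_id, cursor:)
    # Iterate the relation record by record, checkpointing the cursor so an
    # interrupted job resumes where it left off.
    enumerator_builder.active_record_on_records(
      Newsletter.find(newsletter_id).subscribers,
      cursor: cursor
    )
  end

  def each_iteration(subscriber, newsletter_id)
    NewsletterMailer.with(subscriber:, newsletter_id:).issue.deliver_later
  end
end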

With that functionality alone, it is likely quite a helpful library for most projects. But what if we used it to define a series of steps a job needs to take? This way, we can have a single job that handles all of the processing for a necessary task.

If things can be run in parallel, one or more of the steps can create new child jobs as well.

With that in mind, here is "SteppedJob":

class SteppedJob < ApplicationJob
  include JobIteration::Iteration
  queue_as :default

  class_attribute :steps, default: []

  class << self
    # DSL for subclasses to declare their steps in order, e.g. `steps :a, :b`
    def steps(*args)
      self.steps = args
    end
  end

  def build_enumerator(*, cursor:)
    raise ArgumentError, "No steps were defined" if steps.blank?
    raise ArgumentError, "Steps must be an array" unless steps.is_a?(Array)
    Rails.logger.info("Starting #{self.class.name} with at cursor #{steps[cursor || 0]}")
    enumerator_builder.array(steps, cursor:)
  end

  def each_iteration(step, *)
    Rails.logger.info("Running step #{step} for #{self.class.name}")
    send(step, *)
    Rails.logger.info("Completed step #{step} for #{self.class.name}")
  end
end

This could also be a module, but I have it set up as a base class.

To use it:

  1. Create a job that derives from SteppedJob
  2. Define an array of steps
  3. Add a method for each step

Here is a sample job. This job is enqueued like any other ActiveJob: ProcessRssContentJob.perform_later(content)

From there, each job step is executed, and the content argument is passed along to each step.

class ProcessRssContentJob < SteppedJob
  queue_as :default

  steps :format_content, :create_content_parts, :enhance_content_parts

  def format_content(content)
    content.text = BlogPostFormatter.call(content:)
    content.processing_status!
  end

  def create_content_parts(content)
    ContentPartsForContentService.call(content:)
  end

  def enhance_content_parts(content)
    EnhanceContentPartsService.call(content:)
  end
end
#