This is the SQLite search code example I started with for re-implementing search in PhrontPage.
Welcome to My Blog.
Here, you will find posts, links, and more about code (primarily Ruby), business (bootstrapped SaaS), and a little of everything in between.
ChatGPT - Work with Apps
The new ChatGPT VS Code connection is interesting.
I asked it a simple question about PhrontPage's BlogConfiguration concern.
"What is the purpose of BlogConfiguration?"
It may be a nice way to dig into an unknown code base.
Rails’ Partial Features You (didn’t) Know
Partials have been an integral part of Rails. They are conceptually simple to understand, but they pack quite a few smart, lesser-known features. Let's look at all of them!
Render Components from Turbo Broadcasts
I was surprised to learn there is no built-in way to render components from turbo broadcasts.
Components (Phlex or ViewComponent) still feel very underutilized in Rails. Perhaps that will change in the future, but we made it 20 years without them, so perhaps not.
Either way, I wanted an easy way to re-use them when broadcasting changes.
The easiest option I could find was something that looked like this:
def broadcast_created
  component = Component.new(self)
  html = ApplicationController.render(component, layout: false)

  broadcast_append_later_to(
    "the_stream",
    target: "the_target",
    html:
  )
end
This works fine for a one-off. I could always add something like .as_html to my base component class to make it less repetitive. But if I wanted the component: option to just work, I added the following as an initializer in my Rails app:
module TurboStreamsBroadcastExtensions
  def broadcast_action_later_to(*streamables, action:, target: nil, targets: nil, attributes: {}, **rendering)
    render_component_to_html(rendering)
    super
  end

  def broadcast_action_to(*streamables, action:, target: nil, targets: nil, attributes: {}, **rendering)
    render_component_to_html(rendering)
    super
  end

  private

  def render_component_to_html(rendering)
    if rendering&.dig(:component)&.respond_to?(:render_in) && rendering&.dig(:html).blank?
      rendering[:html] = ApplicationController.render(rendering.delete(:component), layout: false)
    end
  end
end

Turbo::Streams::Broadcasts.prepend TurboStreamsBroadcastExtensions
With this in place, I can simply pass component: to my broadcast_append_later_to call:
def broadcast_created
  broadcast_append_later_to(
    "the_stream",
    target: "the_target",
    component: Component.new(self)
  )
end
Perhaps this will be built someday, but for now, I think it will do the trick.
If you want to try this, here is a gist with the code.
Solid Queue in Development
In Rails 8, SolidQueue is included by default. However, in development, you default to the async, in-memory job adapter. That is OK for simple tasks, but I prefer running jobs in a separate process.
In a world of Overmind (and Foreman), this adds very little extra work to your development experience. In addition, knowing I have seen it all work together makes me much more confident about going to production.
I stumbled twice on this, so I figured I would document it for others (and likely my Google search in the future).
Here are the changes you need to make:
First, modify your database.yml file to specify the database you want to use. You could technically keep these tables in your development database, but the cost of another database (especially with SQLite) is zero locally, so I don't see why you would bother.
The core changes here:
- We specify that the first database is now primary.
- We list the queue database.
- We list the migrations path for the queue database.
development:
  primary:
    <<: *default
    database: storage/development.sqlite3
  queue:
    <<: *default
    database: storage/development_queue.sqlite3
    migrations_paths: db/queue_migrate
Next, go into your development.rb environment file and specify that you want to use SolidQueue and which database it should connect to:
config.active_job.queue_adapter = :solid_queue
config.solid_queue.connects_to = {database: {writing: :queue}}
Finally, to make it easy to always start the worker process, add this to your Procfile.dev:
worker: bundle exec rake solid_queue:start
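For reference, a complete Procfile.dev might look like the following. The web and css lines are only illustrative; keep whatever your file already has and just add the worker line.

```
web: bin/rails server
css: bin/rails tailwindcss:watch
worker: bundle exec rake solid_queue:start
```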
Now, when you run bin/dev, your jobs will run in their own process.
Plucking Nice
File this one under "the more you know."
I had no idea ActiveSupport provided a pluck method like ActiveRecord's.
Thanks to Standard, I now know.
This code cleans up nicely with:
prompts = message_prompts.pluck(:content).join(" ")
Keynote
A presenter is an object that encapsulates view logic. Like Rails helpers, presenters help you keep complex logic out of your templates.
This thing has barely seen an update in years but still works just as you would expect. It is likely the closest thing to "done" in my Gemfile.
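The pattern itself is simple enough to sketch in plain Ruby; Keynote's value is wiring presenters into the Rails view context and helpers. This standalone class is illustrative, not Keynote's actual API:

```ruby
# A minimal hand-rolled presenter: it wraps a model and owns the view
# logic that would otherwise clutter the template.
class PostPresenter
  def initialize(post)
    @post = post
  end

  def byline
    "by #{@post[:author]} on #{@post[:published_on]}"
  end
end

post = {author: "Ada", published_on: "2024-01-01"}
PostPresenter.new(post).byline
# => "by Ada on 2024-01-01"
```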
Building a Multi-Step Job with ActiveJob
Most Rails applications start pretty simple. Users enter data; data gets saved in the database.
Then, we move on to something a bit more complex. Eventually, we realize we should not do all the work in one request, so we start using some form of job to push work onto a background process.
Great. However, as complexity increases, we realize we do too much work in a single background job. So, the next logical option is to split the background job into multiple jobs. Easy enough, of course, but then we run into some gotchas:
- Do any jobs require other jobs to be completed first? And, of course, do any of those sub-jobs require other sub-jobs (and so on)?
- How do we mentally keep track of what is going on? How do we make it easy for someone to jump into our code base and understand what is happening?
Over the years, KickoffLabs has processed billions of jobs. Breaking tasks down into small chunks has been one way we have managed to scale. One of the things I have found challenging over the years is keeping track of when/what is processed in the background jobs.
So, when I started to experiment with a new product idea, I wanted to find a way to tame this problem (and eventually roll it back into KickoffLabs, too).
I had seen Shopify's JobIteration library before but never had a chance to use it.
Meet Iteration, an extension for ActiveJob that makes your jobs interruptible and resumable, saving all progress that the job has made (aka checkpoint for jobs).
It recently popped up on my radar again, and I noticed it supports iterating over arrays. This gave me an idea. Typically, this library is used for iterating over a large number of items in a job and tracking your place. If the job restarts (or even raises an error), you can safely resume where you left off.
With that functionality alone, it is likely a quite helpful library for most projects. But what if we used it to define a series of steps a job needs to take? This way, we can have a single job that handles all of the processing for a necessary task.
If things can be run in parallel, one or more of the steps can create new child jobs as well.
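Before looking at the class itself, the core trick here, walking an array of steps and resuming from a saved cursor, can be sketched in plain Ruby. JobIteration does this bookkeeping (plus interruption handling and checkpointing) for you:

```ruby
# Plain-Ruby sketch of cursor-based resumption over an array of steps.
STEPS = [:format_content, :create_content_parts, :enhance_content_parts]

def run_steps(from_cursor = 0)
  executed = []
  STEPS.each_with_index do |step, index|
    next if index < from_cursor # already completed before the interruption
    executed << step            # stand-in for send(step, ...)
  end
  executed
end

run_steps     # fresh run: executes all three steps
run_steps(2)  # resumed run: executes only the last step
```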
With that in mind, here is "SteppedJob":
class SteppedJob < ApplicationJob
  include JobIteration::Iteration

  queue_as :default

  class_attribute :steps, default: []

  class << self
    def steps(*args)
      self.steps = args
    end
  end

  def build_enumerator(*, cursor:)
    raise ArgumentError, "No steps were defined" if steps.blank?
    raise ArgumentError, "Steps must be an array" unless steps.is_a?(Array)

    Rails.logger.info("Starting #{self.class.name} at step #{steps[cursor || 0]}")
    enumerator_builder.array(steps, cursor:)
  end

  def each_iteration(step, *)
    Rails.logger.info("Running step #{step} for #{self.class.name}")
    send(step, *)
    Rails.logger.info("Completed step #{step} for #{self.class.name}")
  end
end
This could also be a module, but I have it set up as a base class.
To use it:
- Create a job that derives from SteppedJob
- Define an array of steps
- Add a method for each step
Here is a sample job. This job is enqueued like any other ActiveJob: ProcessRssContentJob.perform_later(content)
From there, each step is executed in order, with the content argument passed along to every step.
class ProcessRssContentJob < SteppedJob
  queue_as :default

  steps :format_content, :create_content_parts, :enhance_content_parts

  def format_content(content)
    content.text = BlogPostFormatter.call(content:)
    content.processing_status!
  end

  def create_content_parts(content)
    ContentPartsForContentService.call(content:)
  end

  def enhance_content_parts(content)
    EnhanceContentPartsService.call(content:)
  end
end
Minitest::Mock with Keyword Arguments
I had a small gotcha that derailed my afternoon. Hopefully, a Google search (or our new AI overlords) will lead you here if you hit the same issue.
First, here is the method I am trying to mock: identifier.call(transcript:)
My initial approach looked like this:
mock_identifier = Minitest::Mock.new
mock_identifier.expect :call, ["abc"], [{transcript: "some text"}]
But I kept getting an error like this: mocked method :call expects 1 arguments, got []
Eventually, via some debugging (binding.irb for the win), I found that I could call my mock like this:
identifier.call(:transcript => "some text")
So that led me to believe something was getting confused with the mock + method signature.
I tried to double-splat the arguments but ended up with the same error.
mock_identifier = Minitest::Mock.new
mock_identifier.expect :call, ["abc"], [**{transcript: "some text"}]
Finally, I went with an alternative way to set and verify the arguments of the mock:
mock_identifier = Minitest::Mock.new
mock_identifier.expect :call, ["abc"] do |args|
  args == {transcript: "some text"}
end
And we are back in business...well, onto the next issue. But we are almost all ✅ now. 😀
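Putting the working version together as a self-contained snippet, including the verify call:

```ruby
require "minitest/mock"

mock_identifier = Minitest::Mock.new

# Block form: Minitest passes the actual arguments to the block, and the
# expectation is satisfied when the block returns true. The keyword
# arguments arrive here as a single Hash.
mock_identifier.expect :call, ["abc"] do |args|
  args == {transcript: "some text"}
end

result = mock_identifier.call(transcript: "some text")
mock_identifier.verify
result # => ["abc"]
```

As an aside, I believe newer Minitest releases (5.16+) also accept keyword arguments passed directly to expect, but the block form works across versions.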
Still Learning While Using AI To Code - Or, Enumerable#chunk_while
For someone who thoroughly enjoys the Ruby language, one of the more rewarding aspects of using tools like Cursor and Supermaven has been exposure to some interesting Ruby methods I have not seen before.
Today, it was Enumerable#chunk_while.
The docs say:
Creates an enumerator for each chunked elements. The beginnings of chunks are defined by the block. This method splits each chunk using adjacent elements, elt_before and elt_after, in the receiver enumerator. This method split chunks between elt_before and elt_after where the block returns false. The block is called the length of the receiver enumerator minus one
Yes, that meant nothing to me as well. But here is a sample that should help. Let's take a grocery list in JSON and group all the items by the aisle they are found in.
# Sample grocery shopping list
shopping_list = [
  { aisle: 'Produce', item: 'Apples' },
  { aisle: 'Produce', item: 'Bananas' },
  { aisle: 'Dairy', item: 'Milk' },
  { aisle: 'Dairy', item: 'Cheese' },
  { aisle: 'Canned Goods', item: 'Chickpeas' },
  { aisle: 'Canned Goods', item: 'Tomato Sauce' },
  { aisle: 'Snacks', item: 'Chips' }
]

chunked_items = shopping_list.chunk_while do |item1, item2|
  item1[:aisle] == item2[:aisle]
end

grouped_items = chunked_items.map do |aisle_group|
  {
    aisle: aisle_group.first[:aisle],
    items: aisle_group.map { |item| item[:item] }
  }
end
In the end, we have an array that looks like this:
[
  {:aisle=>"Produce", :items=>["Apples", "Bananas"]},
  {:aisle=>"Dairy", :items=>["Milk", "Cheese"]},
  {:aisle=>"Canned Goods", :items=>["Chickpeas", "Tomato Sauce"]},
  {:aisle=>"Snacks", :items=>["Chips"]}
]
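One caveat worth knowing: chunk_while only merges adjacent elements, so this approach assumes the list is already ordered by aisle. Here is a quick comparison with Enumerable#group_by, which ignores ordering entirely:

```ruby
list = [
  { aisle: 'Produce', item: 'Apples' },
  { aisle: 'Dairy', item: 'Milk' },
  { aisle: 'Produce', item: 'Bananas' } # same aisle, but not adjacent
]

# chunk_while splits here because the Produce items are separated by Dairy:
chunks = list.chunk_while { |a, b| a[:aisle] == b[:aisle] }.to_a
chunks.length # => 3

# group_by collects by key regardless of position:
grouped = list.group_by { |item| item[:aisle] }
grouped["Produce"].length # => 2
```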