Welcome to My Blog.

Here, you will find posts, links, and more about code (primarily Ruby), business (bootstrapped SaaS), and a little of everything in between.

When an HTTP POST Becomes a PATCH

I was implementing the preview feature in PhrontPage.

Part of this involves grabbing the current form and sending the data via an HTTP POST (using requestjs-rails) to the server, then letting turbo_stream do its thing and update the screen.

const form = new FormData(formElement)
const response = await post(this.urlValue, {
  body: form,
  responseKind: "turbo-stream"
})

Full source in preview_controller.js

This was wired up to a controller like this:

post "/previews", to: "previews#show"

When working with new Posts or Pages, everything worked as expected.

However, once I tried to preview an existing Post or Page, I started getting 404 errors. Thankfully, it did not take me long to spot the error.

I believe it was Rails 4 when the switch was made to use the PATCH verb for updates. However, standard browser form submissions only support GET and POST. To get around this, Rails (and other frameworks) adds a hidden field called _method.

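For reference, the hidden field Rails renders in an edit form looks roughly like this (attribute details vary by Rails version):

<input type="hidden" name="_method" value="patch">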

The browser declares a POST request on the form. However, when Rails spots the _method parameter, it knows to look for the matching PATCH route. In my case, that route did not exist at the time.

There are some simple fixes for this:

  1. Remove the _method from my FormData
  2. Supply a different route to handle existing Posts and Pages
  3. Update the route to work for both POST and PATCH

I went with option #3. For the preview feature, it does not matter if the content already exists. I need to take whatever is on the form and send it to the server.

The updated route looks like this:

match "/previews", to: "previews#show", via: [:patch, :post]
#

Rails Direct Uploads to a Custom Folder

One of my must-have features for PhrontPage was drag-and-drop direct uploads to a plain Markdown editor. I have always liked this functionality on GitHub issues. Rails ships with Trix support for dropping files on an editor, but that is not how I want to write.

I had bookmarked this example by Jeremy Smith a while ago, and it was a great help in getting this feature implemented.

However, after a bit of testing, I quickly found my R2 bucket to be a bit messy and wanted to, at a minimum, direct all my blog uploads to a single folder. Surprisingly, there is not a built-in way to do this.

I created a custom ActiveStorage Service that derives from the S3Service and provides an option to append a folder to all the uploads that go through the service.

require "active_storage/service/s3_service"
module ActiveStorage
  class Service
    class S3WithPrefixService < S3Service

      def path_for(key)
        "uploads/#{key}"
      end

      def object_for(key)
        bucket.object(path_for(key))
      end
      
    end
  end
end

The full PhrontPage implementation can be seen here, with code to pull the folder from an ENV variable and handle any extra "/". The one above is just the basics to get started.

Once added to your project, all you have to do is add it to your storage.yml file, and you should be all set.
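For reference, a minimal storage.yml entry might look like the following. This sketch assumes the class file lives at lib/active_storage/service/s3_with_prefix_service.rb so Rails can require it by naming convention; the region, bucket, and credential values are placeholders:

amazon:
  service: S3WithPrefix
  access_key_id: <%= ENV["AWS_ACCESS_KEY_ID"] %>
  secret_access_key: <%= ENV["AWS_SECRET_ACCESS_KEY"] %>
  region: us-east-1
  bucket: my-blog-uploads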

#

Stimulus Controllers with a Single Target

For Stimulus controllers with a single target, I have been defaulting to using this.element instead of requiring a target.

However, some don't like this approach since your controller essentially becomes hardcoded to your markup.

I was updating the footer of this site earlier today and decided to use the following pattern:

  static targets = ["footer"]

  footerElement() {
    return (this.hasFooterTarget && this.footerTarget) || this.element
  }

  1. If the target exists, use it
  2. If there is no target, then use the element.

This feels like a happy middle ground. There is no need for a target attribute cluttering the markup, but if the markup ever gets more complicated, no code changes are needed in the controller.

If you are curious, here is the entire controller.

import { Controller } from "@hotwired/stimulus"

export default class extends Controller {
  static targets = ["footer"]

  connect() {
    this.adjustFooter()
    window.addEventListener('resize', this.adjustFooter)
  }

  disconnect() {
    window.removeEventListener('resize', this.adjustFooter)
  }

  footerElement() {
    return (this.hasFooterTarget && this.footerTarget) || this.element
  }

  adjustFooter = () => {
    const footer = this.footerElement()
    if (document.body.offsetHeight <= window.innerHeight) {
      footer.classList.add('fixed', 'bottom-0')
      footer.classList.remove('relative')
    } else {
      footer.classList.add('relative')
      footer.classList.remove('fixed', 'bottom-0')
    }
  }
}

When I initially built the app, I had the footer permanently fixed at the bottom. However, when reading the site, I hated the extra space the fixed footer was taking up. Now, we get the best of both worlds with the stimulus controller above. On pages with limited content, the footer is fixed at the bottom of the page. On pages with a full screen of content, the footer goes back to being relative (who writes more than 200 characters at a time these days....😀).

#

Handling a Stuck Heroku Release Command (Maybe)

I am unsure how to title this post, but it feels like a good idea to share what happened so that you have some steps to follow if you are in a bit of a panic.

First, here is what happened. Earlier today, I deployed a change that would add a column to a table. There was no backfill on the column. There are no required indexes. Nothing. It's a blank column on a relatively small (20k rows) table. I even held back the code that would use this column for a second deployment.

Our Heroku deploys have been taking quite a while recently (15 minutes or so). I haven't had time to dig into why yet, so I wasn't overly concerned about the duration initially. Then I got the first ping that something was offline (the Sidekiq error queue). That one felt unrelated; our monitoring is set to notify us anytime the error queue has more than 100 items. Shortly after, I received notifications that other services were unavailable (timing out). A quick check on the recent deployment showed that the release phase had been running for 20 minutes.

At this point, I decided it was time to kill the deployment. I hit Command+D in the terminal and checked the current processes on Heroku (heroku ps). I could see the release command was still running. The next thing to check was the database. I could console into the database, and other apps that use this same database were all functioning as expected. In addition, as far as I could tell, all the background jobs for this app were still running as expected (we use Sidekiq+Redis, but ultimately all work is done against the PG database).

To be safe, I ran heroku pg:diagnose and could see long-running queries against the table I was attempting to migrate.

Next, I focused on killing the release phase process. In nearly 12 years of using Heroku, I have never had to kill a running process. I found references to ps:stop and ps:kill. Both reported they worked, but running heroku ps, I could see the process was still running. It turns out that you need to include the process type as well: heroku ps:kill release.2343. Better output here would have been helpful.

While this killed the process, the app's state did not improve. I restarted the processes, which again did not fix the problem. Finally, I figured something was off with the app state, so I rolled back to a previous release (note: the new code never fully deployed, so it was unavailable). This appeared to fix things for a few seconds, but everything on the main app again began to time out.

I checked heroku pg:diagnose again and could see the same long-running queries were still there. There were about 40 or so of them, but I couldn't get the output into a state where I could quickly grab the PIDs to kill each process, so I went ahead and ran heroku pg:killall. After this, I restarted the main app (and related apps), and everything appears to be working well.

So the takeaways:

  1. Never deploy before coffee. The mind is not ready for this kind of stress.
  2. My best guess is that the connection pool for the main web app somehow got into a bad state. Killing all the connections reset it.

I still have to deploy again, but I assume this was a freak condition.

#

Adding Execute Permission to a Script in Git

With my SQLite backup script, I mentioned you need to add execute permission after you deploy.

However, I cloned a Rails app off of GitHub today and noticed that the bin/setup worked as expected and had proper execute permissions. 👀

I eventually found my way to git's update-index command:

Modifies the index. Each file mentioned is updated into the index and any unmerged or needs updating state is cleared.

That description is clear as mud. 😛

But digging further reveals this option: --chmod=(+|-)x

Set the execute permissions on the updated files.

So here is how to use it.

  1. Add a new script file or modify an existing one (even with just a comment). This is important because update-index will not take effect unless there is some change to commit.
  2. Add the change to git: git add bin/backup
  3. Execute update-index: git update-index --chmod=+x bin/backup
  4. Commit the change: git commit -m "Now with execute permission"
#

Ruby Sub vs. Gsub

A little Ruby distinction I had not seen (or remembered seeing) before.

In Ruby, both String#sub and String#gsub are methods used for string substitution, but they have a subtle difference:

String#sub: This method performs a substitution based on a regular expression pattern, replacing only the first occurrence that matches the pattern.

str = "hello world"
new_str = str.sub(/o/, "a")
puts new_str

Output: hella world

String#gsub: This method also performs a substitution based on a regular expression pattern, but it replaces all occurrences that match the pattern within the string.

str = "hello world"
new_str = str.gsub(/o/, "a")
puts new_str

Output: hella warld

Hat tip to ChatGPT, who answered this question for me.

#

Configuring the SQLite BackUp Script for Hatchbox

Getting the SQLite BackUp Script running on Hatchbox took a little extra work.

First, to get access to the ENV variables (assuming you are not hardcoding them), you need to add the following:

cd /home/deploy/YOUR_APP_NAME/current
eval "$(/home/deploy/.asdf/bin/asdf vars)"

Second, where should the script live? Ideally, it would go in the Rails bin directory. This works for getting it up to the server. However, anytime you deploy, the execute permission on the script is lost.

My next attempt was to add a folder called bin to the shared directory. I set up a symlink to the file (ln -sf ../../current/bin/backup backup) and then set the execute permission with chmod +x backup. This worked, but the execute permission was again lost after a deployment.

Ultimately, I copied the script to the shared/bin directory and reset the execute permission. If I change it, I must remember to update the copy, but now it works.

Finally, I went to the Hatchbox cron page for my app and configured the following to execute several times a day:

(cd ../shared/bin ; ./backup)

Hatchbox cron jobs start in your app's current directory, so we need to navigate to the shared bin folder before we can finally execute the backup.

#

SQLite BackUp to S3

I recently moved HowIVSCode to Hatchbox. As part of their setup, they provide a shared folder for each application that is persisted across deployments.

However, at this time, there is no option to back up that data.

Side Note: Digital Ocean provides server backups, which would likely work, but I would rather my backups exist outside the network managing my servers.

What I ended up doing was writing a script that does the following:

  1. Loops through all the SQLite files in a given directory
  2. Uses the SQLite .backup command to perform a backup safely
  3. Gzips the file
  4. Uses GPG to encrypt the backup
  5. Sends the backup to a locked-down bucket on S3 via curl (so no aws cli dependency)
  6. Cleans up when done

On S3, I have the bucket configured to delete any files older than 31 days. This should keep costs in check, and you should configure this to your needs.

Before the script, I want to give a big shout-out to Paweł Urbanek and his guide for doing this with PostgreSQL + Heroku. I have been running a similar setup for a couple of years now, and knowing my data is safe outside of Heroku is excellent. I also want to shout out this Chris Parson's gist, which paved the way for sending the data to S3 without needing to install the AWS CLI.

The script uses five ENV variables (although you can hard-code your values at the top).

The BACKUP_S3_DB_PASSPHRASE must be saved somewhere you will remember. This is the passphrase used by GPG. The only thing worse than losing your database is having a backup you cannot decrypt. 😁
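It is also worth test-restoring a backup before you ever need one. A minimal sketch of the reverse path, mirroring the flag style the script uses (file names are illustrative):

gpg --yes --batch --passphrase="$BACKUP_S3_DB_PASSPHRASE" --output data-backup.sqlite3.gz --decrypt data-DATE.gpg
gunzip data-backup.sqlite3.gz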

Here is a gist of the script.

#!/bin/bash
set -e

s3_key=$BACKUP_S3_KEY
s3_secret=$BACKUP_S3_SECRET
bucket=$BACKUP_S3_BUCKET
backup_db_passphrase=$BACKUP_S3_DB_PASSPHRASE
data_directory=$SQLITE_DATABASE_DIRECTORY
# ensure each backup has the same date key
date_key=$(date '+%Y-%m-%d-%H-%M-%S')

function backupToS3()
{
  database=$1

  database_file_name=$(basename -- "$database")
  database_name="${database_file_name%.*}"

  backup_file_name="/tmp/$database_name-backup-$date_key.sqlite3"
  gpg_backup_file_name="$database_name-$date_key.gpg"

  sqlite3 "$database" ".backup $backup_file_name"
  gzip "$backup_file_name"
  gpg --yes --batch --passphrase="$backup_db_passphrase" --output "/tmp/$gpg_backup_file_name" -c "$backup_file_name.gz"

  date=$(date +"%a, %d %b %Y %T %z")
  content_type='application/tar+gzip'
  string="PUT\n\n$content_type\n$date\n/$bucket/$gpg_backup_file_name"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${s3_secret}" -binary | base64)
  curl -X PUT -T "/tmp/$gpg_backup_file_name" \
    -H "Host: $bucket.s3.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "Authorization: AWS ${s3_key}:$signature" \
    "https://$bucket.s3.amazonaws.com/$gpg_backup_file_name"

  rm "$backup_file_name.gz"
  rm "/tmp/$gpg_backup_file_name"
}

for file in "$data_directory"/*.sqlite3; do
  backupToS3 "$file"
done

Quick summary of the script:

  1. Lines 4-8 - grab the ENV variables
  2. Line 10 - grab a date we can use to append to the file name and avoid collisions
  3. Line 12 - declare a function backupToS3 that we will use at the end to iterate over each database in the directory
  4. Lines 14-17 - extract the database file name. A significant benefit of SQLite is there is no harm in having many databases for individual tasks. For HowIVSCode, I use LiteStack, which creates separate databases for Data, Cache, and Queue.
  5. Lines 22-24 - backup, zip, and encrypt
  6. Lines 26-35 - send the file to S3. If you have the AWS CLI installed, you could probably replace this with aws s3 cp "/tmp/${gpg_backup_file_name}" "s3://$bucket/$gpg_backup_file_name"
  7. Lines 37-38 - clean up the tmp files
  8. Lines 41-43 - loop through any .sqlite3 files in the directory.
#

11 Things I Learned Migrating HowIVSCode to Rails 7.1

For reasons I will share in another post, I was forced to move two personal projects off of Heroku and onto HatchBox. The first, ThocStock, went very smoothly. I deployed the code, set a couple of ENVs, verified everything worked as expected, and finally updated the DNS.

The second app, HowIVSCode, proved to be a bit more of a challenge. The first commits were about 4.5 years ago. Like many side projects with no revenue, apart from some gem security updates, it has not seen many changes over the last four years.

The app was primarily built on Rails 5, TailwindCSS, webpacker, Administrate, and Delayed Job. Deploying to HatchBox yielded errors related to Python via Webpacker and led me down the trail of ripping out Webpacker, CSS building, and JS bundling. After a while, it felt like I was running in circles, putting out new small fires. All solvable problems, but then it hit me. This app has a total of 3 models and a couple of controllers. There is little value in maintaining the source history (and I still have it if needed).

I wanted to try out LiteStack, so I told myself I could do this in an hour or two if I started from scratch and copied over the models, controllers, and views.

In typical developer fashion, 2 hours was not a realistic estimate (probably closer to 6 to 8), but I learned quite a bit along the way.

So here is what I learned upgrading a mostly kludgy Rails 5/webpacker app to a fresh Rails 7.1 app using LiteStack + ImportMaps + Avo.

NOTE: I call this a migration instead of an upgrade because I am starting mostly fresh and pulling in the relevant pre-existing parts.

Running Rails New with the --skip-bundle flag Has Potentially Unintended Consequences

I had initially mentioned on X that using import maps required that I execute the following on my own.

bin/rails importmap:install tailwindcss:install stimulus:install:importmap turbo:install:importmap

This is something I would have expected Rails to just do based on the flags I had set when running new (primarily based upon what was in my .railsrc):

--css=tailwind
--javascript=importmap
--database=sqlite3
--asset-pipeline=propshaft
--template=~/rails_template.rb
--skip-jbuilder
--skip-bundle

While writing this, I conducted several tests and discovered a few issues that may be bugs or gaps in the documentation. When using the --skip-bundle flag, the installers for Importmap, Tailwind, Stimulus, and Turbo never run. This might seem logical since installing them without bundling is challenging. However, I would have expected running bundle manually later (or perhaps bin/setup) to resolve this.

It also appears that with the --skip-bundle flag, the tailwindcss-rails gem is not included even when specified with the --css=tailwind flag.

Again, there is some chicken and egg here. It is on my list to dig into the source more to figure this out. But you will likely be up and running quicker if you do not use the skip-bundle flag.

ImportMap Limitations

The short answer here is that not everything you typically bundle with JavaScript is an option (today) with importmaps. This is something to consider beforehand, especially if you have a list of JavaScript libraries you need to use. In the case of HowIVSCode, I could not experiment with DaisyUI. However, the long-term benefit for an app that will not get many updates is too good to ignore.

OmniAuth Login 'Links' (Likely) Require an HTTP Post

The only way to log in or create an account with HowIVSCode is via GitHub + OAuth. OmniAuth now recommends adding the gem omniauth-rails_csrf_protection. I didn't think much about it and just added it.

However, once it is in place, you can no longer use a plain link (an href) as part of your sign-up flow. The reasoning for this makes sense, but in quickly trying to move to the most recent gem updates, I spun my wheels here for far longer than I care to admit.
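With the protection gem in place, sign-in needs to be a real form submission. A minimal sketch, assuming the standard omniauth-github path (the label and turbo option are illustrative):

<%= button_to "Sign in with GitHub", "/auth/github", method: :post, data: { turbo: false } %>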

No Arrays in SQLite

When relational data is small and often just a word or two, I use an array in PostgreSQL.

For example, a migration for a small blog post table might look like this:

create_table :posts do |t|
  t.text :title
  t.text :body
  t.text :tags, array: true
end

I no longer need a separate tags table or a tags_in_posts related table. PostgreSQL provides the necessary functions to query this as needed.

Unfortunately, there are no arrays in SQLite. However, all is not lost. SQLite does support JSON, so for now, I added a json column called data and store the necessary arrays there. This is just data I am recording, so we will have to see if this holds up when querying by specific tags becomes necessary.
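For illustration, here is a sketch of the SQLite-friendly version of that migration, plus one way to read the tags back. The data column name matches what I used; the Post model and tags key are illustrative:

create_table :posts do |t|
  t.text :title
  t.text :body
  t.json :data
end

class Post < ApplicationRecord
  # Tags live inside the JSON blob instead of a PostgreSQL array column
  def tags
    (data || {}).fetch("tags", [])
  end
end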

@apply warnings

For better or worse, I occasionally use @apply in my Tailwind CSS. VSCode kept complaining about an Unknown Rule (although everything still worked). The fix is to set the *.css file association to tailwindcss. Full details here.
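If you are curious, the association goes in your VSCode settings.json; per the Tailwind docs, it looks like this:

"files.associations": {
  "*.css": "tailwindcss"
}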

Default Layout for SitePress

My markdown content views previously used the markdown-views gem. This time around, I decided to go with SitePress. Overall, SitePress has been great to work with. However, one thing I struggled with was how to use a different layout for my content pages.

With my SitePress pages, everything that is not the markdown body is set in a separate layout file. This way, I did not have to overly mix Markdown and ERB in my content (and, as far as I can tell, you cannot even access SitePress's page variables from the Markdown).

I tried various ways to make this work, but I settled on creating a new controller derived from Sitepress::SiteController. Then, in my routes file, I specified that this controller should be used for my pages: sitepress_pages(controller: "content").
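Here is a sketch of both pieces; the ContentController name matches the route, and the layout name is illustrative:

# app/controllers/content_controller.rb
class ContentController < Sitepress::SiteController
  layout "content"
end

# config/routes.rb
sitepress_pages(controller: "content")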

Meta Tags with SitePress

Similar to the above concern, I still wanted to be able to set various meta tags via the meta-tags gem.

Again, trying to avoid any ERB in my Markdown as much as possible, I added a meta section to my frontmatter and wired it up like this in the SitePress layout.

<% if meta_data = current_page.data["meta"] %>
  <% set_meta_tags(meta_data.to_h) %>
<% end %>
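On the content side, the frontmatter just needs a nested meta section; a hypothetical page might start like this:

---
title: Some Page
meta:
  description: A short summary for search engines
  keywords: vscode, rails
---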

Markdown Escaping in SitePress

I have a Stimulus controller that copies your API key to the clipboard when you click on it. Previously, I rendered a partial which wired up the controller: <%= render partial: "auth_token" %>

The partial approach still worked (I know, ERB in MD), but the # in data-action="click->copy-auth-key#copy" caused Redcarpet (SitePress's markdown processor) to start escaping everything after it. The markdown-views gem uses Commonmarker for processing markdown and does not appear to have this issue (I have tests to compare if someone is interested).

If I were using more stimulus in the project, I would probably need to dig deeper and/or swap Markdown libraries. My usage was simple enough; I just dropped the data-action attribute and wired up an event listener in my controller.

import { Controller } from "@hotwired/stimulus"

export default class extends Controller {
  initialize() {
    this.element.addEventListener("click", this.copy.bind(this))
  }

  copy() {
    navigator.clipboard.writeText(this.element.innerText.trim())
  }
}

Sometimes, being lazy is the correct answer.

Importing Data from PG to SQLite

Sadly, there does not appear to be an easy off-the-shelf option to go in this direction. Most articles recommended doing a pg_dump to SQL and then massaging the file to work with SQLite. Depending on the data complexity, this might be your best option. I was going from the old to the new app with roughly the same ActiveRecord models (minus the arrays).

I found generating a couple of JSON files from the original app and then looping over each of them with my new models to be the simplest repeatable option.
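Here is a sketch of that round trip; the Tool model and attribute handling are placeholders for whatever your app uses:

# In the old (PostgreSQL) app: dump each model to JSON
File.write("tools.json", Tool.all.map(&:attributes).to_json)

# In the new (SQLite) app: loop over the dump and recreate the records,
# letting SQLite assign fresh primary keys
JSON.parse(File.read("tools.json")).each do |attrs|
  Tool.create!(attrs.except("id"))
end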

Configuring Where LiteStack Puts The SQLite Databases in Production

Hatchbox provides each app with a persistent storage location. Using the database.yml, it is simple to set this as the folder for your data.sqlite3 file. However, I wanted to be sure that the other SQLite databases, such as queue.sqlite3, are also correctly persisted. For litequeue.yml, there is a db path option, but this is relative to the main app configuration.

Looking at the LiteStack docs, there was no obvious answer. However, digging through the source, there is an ENV variable you can set: LITESTACK_DATA_PATH

The benefit of the ENV variable is that it also handles the data.sqlite3 file, so I could remove the hardcoded production path from my database.yml.
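So, on Hatchbox, the fix boils down to one environment variable (the path here is illustrative):

LITESTACK_DATA_PATH=/home/deploy/YOUR_APP_NAME/shared/storage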

Side note: It still sticks the files in a production sub-directory. It is not the end of the world, but I hope it becomes optional.

Bundle Only Supports "arm64-darwin"

I believe this is related to the --skip-bundle issue I had previously. Essentially, my bundle was only valid as-is for M1 Macs.

The fix was as easy as bundle lock --add-platform x86_64-linux

If you watch the output from a rails new without the --skip-bundle flag, you can see the --add-platform flag is set.

#