Modernizing Pedalwrencher: whatever that means.

I’ve got a side project that I’ve maintained (badly) for the past couple of years: pedalwrencher.com. It’s a pretty simple idea: if you ride bikes and use strava.com, you can sign up with pedalwrencher and set up mileage-based alerts. So if you want to replace your chain every 2,000 miles, you can get an email or SMS message (via Twilio) every 2,000 miles with that reminder. Pretty straightforward.
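
To make that mechanic concrete, here’s a minimal sketch of the kind of threshold check involved; the function and field names are made up for illustration and aren’t the actual pedalwrencher code.

```python
# Minimal sketch of the mileage-based alert check. Names and numbers are
# illustrative only, not the real pedalwrencher schema.

def miles_since_last_alert(total_miles, last_alert_miles):
    """Miles ridden since the last reminder went out."""
    return total_miles - last_alert_miles

def should_alert(total_miles, last_alert_miles, interval_miles):
    """True once the rider has covered another full interval (e.g. 2,000 miles)."""
    return miles_since_last_alert(total_miles, last_alert_miles) >= interval_miles

# Chain last flagged at 6,000 miles, reminder interval of 2,000 miles,
# rider is now at 8,150 miles -> time to send the email/SMS.
print(should_alert(total_miles=8150, last_alert_miles=6000, interval_miles=2000))  # True
```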

Architecturally, it was originally built as a single Flask app backed by PostgreSQL, running on Heroku.  Separately, there was a tiny EC2 box with a cron job running the actual batch processing to send out notifications once an hour.  This was a pretty simple way to get things up and running, but the reality of a side project is that it’s not getting babysat. If I get busy and am not riding myself, things can break; the cron job can fail and I may not know about it.  And that’s basically what happened. I didn’t have any kind of good monitoring or error reporting set up, so something silently failed for a couple of months before I started getting some user emails about it.

So I took a day a couple of weeks ago and decided to migrate the existing app onto a more “modern” deployment model, mostly just to get some experience with newer technologies, but also to get things back up and running and more observable.

The general plan was:

  • Dockerize the Flask app
  • Migrate it onto AWS ECS
  • Move the domain to Route 53
  • Set up autoscaling on the ECS cluster
  • Dockerize the batch processing job
  • Add it as a scheduled task in ECS (see the sketch after this list)
  • Migrate the database from Heroku Postgres to AWS RDS
  • Write a job to check database status for stale data, then Dockerize and schedule it
  • Forward logs from all three Docker containers to CloudWatch
  • Set up CloudWatch alerts

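For the scheduled task step, the rough shape of the wiring looks like this. It’s a sketch using boto3 and CloudWatch Events; the rule name, cluster, task definition, and role ARNs are placeholders rather than my real ones, and the same thing can be set up through the console instead.

```python
# Rough sketch of wiring the batch container up as a scheduled ECS task
# with boto3. All ARNs and names below are placeholders.
import boto3

events = boto3.client("events")

CLUSTER_ARN = "arn:aws:ecs:us-east-1:123456789012:cluster/side-projects"
TASK_DEF_ARN = "arn:aws:ecs:us-east-1:123456789012:task-definition/pedalwrencher-batch:1"
EVENTS_ROLE_ARN = "arn:aws:iam::123456789012:role/ecsEventsRole"

# Fire once an hour, the same cadence the old cron job ran on.
events.put_rule(
    Name="pedalwrencher-hourly-batch",
    ScheduleExpression="rate(1 hour)",
    State="ENABLED",
)

# Point the rule at the task definition so a fresh container is launched
# on the cluster each time the rule fires.
events.put_targets(
    Rule="pedalwrencher-hourly-batch",
    Targets=[{
        "Id": "pedalwrencher-batch-task",
        "Arn": CLUSTER_ARN,
        "RoleArn": EVENTS_ROLE_ARN,
        "EcsParameters": {
            "TaskDefinitionArn": TASK_DEF_ARN,
            "TaskCount": 1,
        },
    }],
)
```
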
It was, admittedly, a long day, but it honestly only took the one day, so not the most difficult migration out there.

The Docker containers gave me repeatable builds with pinned versions of everything, so I could more accurately test the app’s performance locally and have confidence in deployments once they were out.  Moving onto ECS let me host those Docker containers in a straightforward way with a nice build/deploy pipeline.  It also made autoscaling straightforward should I somehow end up with users who needed it, though there’s no way that actually happens with this project.  The scheduled task option lets me run the containers only for the few hours a day I need them, instead of keeping a dedicated EC2 box around for a little cron job.  Moving the domain to Route 53 and the database to RDS is more for simplicity of management than anything else.
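
By “build/deploy pipeline” I mean something like the following loop, sketched here with placeholder repository, cluster, and service names; it assumes the image has an ECR repository and that Docker is already authenticated against it.

```python
# Sketch of the build/deploy loop: build and push the image, then tell
# ECS to roll the service. Repo/cluster/service names are placeholders.
import subprocess
import boto3

ECR_REPO = "123456789012.dkr.ecr.us-east-1.amazonaws.com/pedalwrencher"
CLUSTER = "side-projects"
SERVICE = "pedalwrencher-web"

# Build the image from the pinned Dockerfile and push it to ECR
# (assumes `docker login` against ECR has already happened).
subprocess.run(["docker", "build", "-t", f"{ECR_REPO}:latest", "."], check=True)
subprocess.run(["docker", "push", f"{ECR_REPO}:latest"], check=True)

# Force a new deployment so the service pulls the freshly pushed image.
boto3.client("ecs").update_service(
    cluster=CLUSTER,
    service=SERVICE,
    forceNewDeployment=True,
)
```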

The most annoying part was probably moving the domain, but now that it’s done it should be cheaper to maintain, since AWS offers free SSL certs and those were expensive on Heroku. All in all, the total cost is roughly the same; it’s hard to judge exactly because I have some other projects running on the same ECS cluster and RDS instance.

Most importantly, the app itself is up and running, the processing job sends me emails via CloudWatch if it breaks, and the nightly job that checks database status emails me about any issues (stale data, no recent pulls from Strava, etc.).
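
I’ll cover the actual wiring in the follow-up posts, but the staleness check is roughly shaped like this; the table and column names, the connection string, and the SNS topic standing in for the email delivery are all placeholders.

```python
# Sketch of the nightly staleness check. Table/column names, the DSN, and
# the SNS topic are placeholders; the real job differs in the details.
import os
from datetime import datetime, timedelta, timezone

import boto3
import psycopg2

STALE_AFTER = timedelta(hours=24)
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pedalwrencher-alerts"

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn, conn.cursor() as cur:
    # When did we last successfully pull an activity from Strava?
    # (Assumes fetched_at is stored as timestamptz.)
    cur.execute("SELECT max(fetched_at) FROM activities")
    (last_fetch,) = cur.fetchone()
conn.close()

now = datetime.now(timezone.utc)
if last_fetch is None or now - last_fetch > STALE_AFTER:
    # Publish to an SNS topic that has my email address subscribed to it.
    boto3.client("sns").publish(
        TopicArn=TOPIC_ARN,
        Subject="pedalwrencher: stale data",
        Message=f"No new activities since {last_fetch}; the Strava pull may be broken.",
    )
```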

I know this has been light on detail, but over the next couple of weeks I’m hoping to dig into the architecture a bit more deeply, with some code samples, to show how to deploy a Python/Flask app onto ECS cheaply, with background tasks, a database, HTTPS, logging, and autoscaling.  Suffice to say, more detail later.

If there is any particular aspect of this kind of architecture that you’re interested in, leave a comment below and I can take a look at that first.