My 2015 Crowdfunding Recap

I’m one of those people that occasionally gets swept up in the wave of crowdfunded projects. I don’t know if it’s because I think I’m getting an incredible deal, or want to be a new product hipster (“I bought that back when it was on Kickstarter.”), or maybe just because a lot of people have cool ideas that I want to see made into reality. Luckily, I’m also capable of a little restraint, so even when every project on Kickstarter is screaming at me to buy it, I’ve managed to keep my purchases to (what I think is) a reasonable level. Since we’ve rolled into 2016, it seemed like a good time to review what has materialized at my doorstep as an actual product, and whether I ultimately liked it. I’ll just review them in chronological order, so off we go:

Exploding Kittens

This one broke all sorts of records on Kickstarter (most backers, most funded game, etc.) and I was drawn in for a few reasons:

  • I’ve always enjoyed The Oatmeal and had purchased from there in the past (I had this coffee poster in my cube at work for a while.)
  • I thought the kids would possibly like playing it
  • I thought it would make me laugh

Well, although I’ve yet to play a game with the NSFW deck that came in addition to the regular one, the verdict is in: the kids love playing this game (even though they play a simplified version). They love it so much that it always ends like this: someone’s gloating, someone’s angry, and someone’s crying. Which kid is which varies by outcome, but this does not deter them from asking to play again, all the time. So, it’s been worth it, and might even be fun if I can get some adults to play sometime! You can now buy it here if you’d like.

Dash 4.0 Wallet

I made the switch from full wallet to money-clip style wallet many years ago, and have never looked back. However, that wallet was getting pretty worn, and I had just started thinking about replacing it when I saw this project. It took a few days to break in, and now the cards slide in and out easily. It’s now my daily wallet, and I don’t miss the money clip at all. If you’re interested, the campaign is over, but it looks like you can preorder one here.

My Dash 4.0 wallet, loaded up for daily use.

RuitBag Backpack

Because I’m an IT guy, I go to work like a school kid with all my tech-y goodness (laptop, Kindle, tablet, assorted chargers and cables) stuffed into a backpack. My North Face backpack was starting to self-destruct a bit, and I was looking into sending it back for some repairs when I came across this backpack campaign. As a daily public transportation rider, I loved the idea of a backpack that isn’t accessible from the outside. On top of that, the compartments inside the smaller of the bags (R10) looked well suited to my needs: basically room for everything I normally carry and not much more (which is great, because, as you may have noticed with the wallet, I’m trying not to carry anything more than I need). When it arrived, I loaded it up and began using it as my daily bag.

A look at the RuitBag all loaded up for my commute.

I think it’s been a month or so now, and I have no regrets. It’s a well-made backpack, it stores all my stuff conveniently, and I’m a little less paranoid now when traveling through crowded places. The bag came with a couple of cards for the company, and I’ve already given most of them away to interested people, so I’m not the only one who thinks this idea’s got legs. You can preorder one for yourself here if you’re so inclined!

Olio Smartwatch

This was my only non-Kickstarter crowdfunded purchase. I don’t know if it’s truly crowdfunded, or if it’s just small batch, but the implications to me were basically the same, so I’m including it in my recap. I don’t remember how I found out about them, but their website completely sold me on the thing.

I soooooooo wanted to love this watch. It looked like a real watch, solid, with a nice band. I liked the idea behind it: just the barest amount of notifications necessary, and most visible at a glance. I waited months for my shipping notification, unboxed it, and it fit great; despite being larger than my daily watch, I thought it was a good size for me.

The Olio watch, which I thought was a nice fit for me.

Unfortunately, then the bumps started. Before I detail them, I will say that I wasn’t expecting a magically perfect experience. I knew it was a startup, that the software would be a work in progress, and that things would (hopefully) just continue to get incrementally better. As a software developer, I was willing to look past a lot of these (minor) bugs. So, I fire the thing up, try to follow the directions to pair the watch, and I’m met with a message saying I don’t own the watch. Without going into all the details, and with the help of their support, it took me, without exaggerating, half a day to properly pair this thing with my phone. At this point I was starting to get the feeling that maybe quality control/testing wasn’t what it should have been for a watch that does not come with a small sticker price. BUT! Given all that, it was paired with my phone and set up. The phone software could use a little polish (things like redoing the initial setup should be a whole lot easier), but I was ready to roll.

So, I make it to day 1, watch having been on the charger all night, pop it on, and head to work. I was intentionally not messing with the thing more than I normally would have: just glancing at the notifications as they came in, and the usual occasional time check. However, around lunchtime I started to get nervous about the amount of battery left, and by 1:30pm it was completely dead. Unfortunately, this was it for me. I am willing to put up with a lot, but poor battery life is something I just can’t tolerate. I NEED my devices to make it through the day. Perhaps it was bad quality control, or I just have bad luck with batteries (my Samsung S6 edge experience ended in much the same way; it couldn’t last a day), but that was it for me and my Olio. It went right back, and the process was smooth (I got an RMA from support and received my refund after 10 or so business days). Ultimately, it turns out I’m not the only one with these experiences; there have been enough of them that there’s even a parody Twitter account out there injecting itself into conversations.

Wrapping it Up

At the end of the day (year?), I’m batting .750 on my crowdfunding purchases, with two of the items already in daily use. I’d call that a pretty good 2015! I’ve already got a few items lined up for 2016, so the fun will continue.

Outdoor Bar with Built-in Cooler / Accessories

The completed outdoor bar, leaf up, with wine bucket in place.

Earlier this summer, Kristin and I were sitting outside, enjoying the patio and discussing how it would be nice to have a bar out there that matched the outdoor furniture. Then I saw a cool little outdoor cooler at a friend’s house. Next, I made the mistake of searching Pinterest for outdoor bars. Before you know it, I’m buying cedar and PVC trim and madly sketching measurements on scrap paper, and (of course, multiple) hardware store visits later, we have our bar! The goal was to have something that not only matched our white furniture, but would require pretty much no maintenance. Therefore, everything you see on there is cedar, PVC, or stainless steel, held together with stainless steel or outdoor-rated screws and nails. On to the build!

CoreOS and Docker on AWS — A Revised Adventure in Alpha

So, I put together a novel-length post on CoreOS and Docker, and within a day a component of CoreOS (etcd) had a major revision (2.0) moved into the alpha channel. Based on a talk and some things I’d read since initially putting together our cluster, I thought it was a good time to rework things a bit. Unfortunately, that made my days-old post immediately outdated. So, rather than writing up all the changes to what might not have been best practices anyway, I’m just going to do a 2.0 version of the post and pretend the previous one never happened. I’ll leave it there for historical purposes, and I apologize if you read the whole thing, because this one is going to be based off that one and be quite similar! See if you can spot the differences…

So, at work recently we’ve been playing around a bit with Docker and CoreOS (who isn’t these days?) and I thought it might be a good idea to write up how we built a small cluster to run a bunch of our non-critical systems. At the Wharton School, we’ve been switching over to Python as the preferred programming language for internal applications, and in my working group (Knowledge@Wharton), we recently decommissioned all our in-house infrastructure in favor of running on cloud providers. As we started reworking some of our small reporting and content generation tools in Python, we needed a place to run them, and that place was going to be on AWS in some way, shape, or form. Because CoreOS and Docker seem to be hovering around the top of the “new hotness” list, this was an opportunity to try them out. Before diving into the details, how about a brief discussion of the high-level parts:

Components

CoreOS

I really liked the idea of a self-updating OS. I’ve run a small number of servers fairly regularly, but it’s never been the core of my job, and updates are always a time-consuming but important part of running them. They’re not something you can ignore when you get busy, and they often require some coordination or scheduling. CoreOS removes that pain; the trade-off is that you have to assume any one of your servers may go down briefly to update at any time. You could pay for one of their products and have control over this, but my take was to just embrace it. In a good infrastructure, even a small one, you SHOULD be able to weather any machine taking a nap on you, and having it happen fairly regularly tends to make sure you don’t miss anything, or you start hearing about it from your monitoring systems.

Although I’m a big believer in the value of configuration management, the CoreOS cloud-config system, which configures your machines on boot, eliminates part of that need. The rest of it is removed by the distribution being so container-centric, which effectively moves application requirements from the system level down into the containers. So, no configuration management system is needed at the OS level, which simplifies the setup.
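To give a flavor of what that looks like, here’s a minimal cloud-config sketch along the lines of the CoreOS documentation examples, not our actual file; the discovery token is a placeholder, and the exact keys depend on which etcd version you’re running.

#cloud-config
# Hypothetical example for illustration only.
# Generate a real token at https://discovery.etcd.io/new for each new cluster.
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<your-token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start

In a setup like this, every instance can boot from essentially the same file, which is what lets you skip traditional configuration management at the OS level. Speaking of containers…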

Docker

Docker is all the rage these days, so it seemed like a good idea to take a look. Before even attempting this small cluster, we had begun experimenting with Docker and fig as a tool to run our WordPress development environments. We have had some success there and now use it to run our local development environments for the code that we end up shipping to a WordPress PaaS. Vagrant, which we used in the past, is still a part of this setup, but we’ve more or less moved from running a ton of Vagrant boxes to one Vagrant box that runs CoreOS and powers all the development environments.

Having the ability to run a variety of different setups was important because, as you just read, we’re primarily a WordPress shop when it comes to our core publishing mission. However, because the school as a larger entity is moving to Python, we also wanted to be able to run the small Python/Flask applications we’ve been building. Again, using Docker in this scenario lets us move fairly freely back and forth without too much pain. Additionally, it is really easy to incorporate a variety of tools as needed for any given app. Memcached, Redis, Postgres, and MySQL, to name a few, are just a few lines of YAML away, but that’s a whole other blog post, and this one is going to be record length as it is!
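Just to give a taste of what “a few lines of YAML” means, here’s a hypothetical fig.yml for a small Flask app backed by Redis (an illustration, not one of our actual files):

# fig.yml -- hypothetical example: a Flask app container linked to a Redis container
web:
  build: .                 # build the app image from the Dockerfile in this directory
  command: python app.py
  ports:
    - "5000:5000"          # expose Flask's default port on the host
  links:
    - redis                # makes the redis container reachable from the web container
redis:
  image: redis             # stock Redis image from the Docker Hub

A “fig up” in the project directory brings both containers up together.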

AWS

I’d be remiss if I didn’t also mention AWS. I don’t feel that running data-backed services is something I’m ready to attempt with Docker yet, so AWS lets us punt on that as much as possible. For our small infrastructure, we just use the appropriate AWS service for anything that houses data. This keeps the containers stateless, which is an ideal way to run containers, IMHO. So, as needed, we use Elasticache, RDS, S3, and ELBs. Additionally, when we do need some permanent storage, we’ll take advantage of EBS volumes. Finally, we’ll run the setup in a VPC so we can define and use a private IP range for all the instances.
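To make the “stateless containers” point concrete, the containers just get told where their state lives through environment variables; in fig-style YAML it looks something like this (the service name, image, and endpoints are made-up placeholders):

# Hypothetical service definition -- all state lives in AWS, none in the container
web:
  image: example/flask-app   # placeholder image name
  ports:
    - "80:5000"
  environment:
    - DATABASE_URL=postgresql://app:secret@example-db.abc123.us-east-1.rds.amazonaws.com:5432/app
    - CACHE_HOST=example-cache.abc123.cfg.use1.cache.amazonaws.com

Swap those endpoints for the RDS and Elasticache ones from your own account and the container itself stays disposable.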

A Disclaimer

Having run this for a bit now, there are things I like about it and parts I don’t. Also, a LOT of stuff has come out in the past few months. When we started this, Kubernetes wasn’t (as) easy to run, AWS hadn’t announced their EC2 Container Service yet, and there’s a bunch of other stuff I’m sure I’m leaving out. Some of those systems would handle parts of what we’ve done better, I’m sure, but I do value the learning experience of having built some of these things myself. So, my intention is just to take you down the path we tried; you can decide if there are parts of it that could work for you, or if you think we’re just crazy for running an infrastructure on the alpha channel of a new OS. One thing I did want to get out there, though, was an example of running something other than the standard “WordPress on Docker” tutorial I see so often. In my opinion, that’s a terrible example because it doesn’t take into account scalability or any sort of concern for where your data lives, and I really hope it isn’t the way people are actually running WordPress! I also believe that the systems I mentioned earlier all have limitations and drawbacks, and in the end you’re going to have to build some pieces yourself. Your platform/scheduler choices will just dictate which pieces they are.

So buckle up, this is gonna be a long one!

An Adventure in Alpha — CoreOS & Docker on AWS

Update: Just days after finally getting this all written up, a major component of CoreOS (etcd) had a major revision hit the alpha channel. Based on a talk I’d seen and some other things I’ve read, I took that opportunity to refactor the setup to take advantage of some of the new features and rework the cluster architecture a bit. As a result, this post is now out of date (and not even a month old!). So, my apologies if you already read it. I’m going to keep it here for historical purposes, but what you really want to read is the revised version of this post!


Adjustable Snare Cajón

The Finished Cajón

Recently I saw an acoustic trio performing, and was initially confused when I saw one performer just sitting on a box… and then he hit it! I was so pleasantly surprised by the sound that I immediately googled the thing and realized it would be pretty straightforward to build. Google a little more, and you’ll find a myriad of DIY plans for them, none of which was exactly what I wanted. But we’ll start where I did, with a number of great plans:

I stuck with the basic idea of building a 1′ x 1′ x 1.5′ box out of plywood. I see a lot of plans online using birch plywood, and although I’ve used birch, and like working with it, what they had in the project panel sizes at Home Depot when I went was sande plywood. So, sande it was. If I build a second one, I’ll probably use birch and see if it sounds any different.
Measure twice and cut once, and all you need is a 2′ x 4′ sheet of 1/2″ plywood and the same size in 1/4″ (and you’ll have enough for an extra face or two).

One of my biggest modifications was to put the box together with the Kreg jig and pocket hole screws. I’d put in a word of caution here: joining 1/2″ wood with the Kreg doesn’t leave a lot of room for error, so you’ll want to hand-tighten carefully! Also, be aware that the screws go right up to the edge of poking through, so you’ll see them again if you put any edge on the corners. The back was such a tight fit that I just tapped it in and glued it. Finally, to my planer’s dismay, I used the little 45º notch on the top edges (shearing the screw tips cleanly off in the process).

For the feet, I just purchased really low-profile rubber feet and mounted them on short pieces of some scrap 2×2 pine I had lying around. There are a number of taller feet you could probably find just as easily, but be aware of the screw length (I didn’t want the screw tips poking through into the bottom of the box). One of the plans I saw suggested just tracing a CD for the sound hole, which worked great. I cut out the hole from that tracing with a Dremel, and centered it 1/3 of the way up from the bottom of the piece. Here’s the box at this point:

Tall Bread Cutting Board

This project came about because we got a Zojirushi bread machine at home a few months ago, and we’ve been making a few loaves a week (it’s delicious, and much cheaper!). However, I’ve had a long-standing annoyance with my own inability to cut even slices, even with a good knife. There are a few loaf-cutting boards out there, but they’re either plastic-y or not tall enough (or both). Although our machine makes a more traditional-sized loaf, it still gets quite tall (as tall as 7.5″ in the center). So, I’ve been kicking around the idea of making a cutting board specifically for our bread for a while. You can actually trace the origins of the idea back to this sketch in the corner of a sheet of my notes:

The Initial Braindump

Which ultimately led to this finished maple tall bread cutting board:

The Finished Product

So, how did we do this?

Umbrella Stand and Base

We finally got some patio furniture this year, and it’s been great. Our patio gets afternoon shade, so when we envisioned sitting out in the evenings or eating dinner outside, we pretty much just pictured sitting in the shade. It’s Polywood furniture (which is made from recycled plastic bottles), which thus far seems well made and sturdy, and will supposedly last forever. However, the pieces we ordered are all white. Additionally: we have light stucco… AND a light-colored patio. So, the combination is pretty much that you’re blasted with blinding white light from all angles if you find yourself out there at lunch time. So, we needed some umbrellas. The table has a hole, and we bought a white plastic umbrella base for that one, but the umbrella by the Adirondack chairs is freestanding, and we wanted to dress it up a bit. Which is how I found myself building this little umbrella stand/table:

Umbrella Stand

The Plans

Umbrella Stand

All the credit for this plan goes to Ana White, whose plan I followed with very minor changes. Her site is great if you’re into woodworking: tons of free plans, many Pottery Barn-inspired, which is probably how I would describe our decor here. Here’s the umbrella stand plan I used as my starting point. Here are my changes:

  1. Instead of wood, I used white cellular PVC trim. You can get it in 8′ lengths in the same sizes you’d find dimensional lumber in, and it should be pretty much weatherproof.
  2. I didn’t use any glue. Instead, I went 100% pocket holes/screws. Because I don’t want to be dealing with rust, I opted for the slightly more expensive stainless steel 1 1/4″ pocket hole screws.
  3. This isn’t a substitution so much as a correction: you actually need four 8′ 1x3s, not the three the materials list suggests.

My hope is that the above changes, although they bring the materials cost up a bit, also make the table considerably more impervious to the weather. Here are a few build shots and notes:

A Quick Incoming Link Test for Site Migrations

Both at work and personally, I’ve been involved with a number of site migrations. Obviously, you want to make sure that all your redirections are working before you pull the trigger on something like this. This is particularly important for SEO reasons, as you don’t want to hand the GoogleBot the HTTP equivalent of ¯\_(ツ)_/¯. Ultimately, it would behoove you to put together a comprehensive test suite that hits all your URLs and checks the details of the returned pages, and that’s what I’ve ended up doing in the past. However, you can also throw something together really quickly with Google’s Webmaster Tools and Apache’s free JMeter tool, either as the starting point for a test suite or as an additional SEO sanity check.

Get the Incoming Links

Webmaster Tools gives you access to a piece of Google’s picture of what’s linking to your site (a component of how they rank your site). We’re gonna need that data, so from the main menu on the left: Search Traffic > Links to Your Site. Then, under “Your most linked content,” click the “more →” link. That page will list a maximum of your top 1,000 incoming links. Click the “Download this table” button and you’ll get a CSV file. Here are a few screenshots if you need them:

Day 5: Tuning and Benchmarking

Whoops. Sort of let the last day slip away there, huh? Well, no time like the present to try to wrap this up. As it turns out, just getting everything to a baseline state and enabling the ability to start fine-tuning things is more or less a whole day. So, we’ll adjust and just document that, ending with at least a baseline setup for tuning, and try to get a test in as well. Before we get to the fun part, the series for those just joining us:

Because we’re tuning specifically for the hi1.4xlarge instance, we’ll do our tuning in a role created just for that purpose. Our role tune-web-hi1-4xlarge.rb starting point:

name "tune-web-hi1-4xlarge"
description "Tune Apache/PHP for the hi1.4xlarge instance"
default_attributes()
override_attributes()
run_list()

Day 4: Install WordPress with Chef/Git

Alright, so after a few days of vacation (and a few days to catch back up) we’re at it again. Today’s goal: finish the WordPress install. Previous steps:

For our purposes, I’m going to replicate one of our production WordPress installs, which is our Knowledge@Wharton High School site. I’ve run the site both locally on VMware infrastructure and externally. I feel this is a good example because I have a pretty good idea of how to set up the site correctly for decent performance, which we also seem to get from our host. If you don’t believe me, here’s our Pingdom report for the two weeks surrounding our cut-over from local to hosted:

Pingdom Cutover

Can you tell me which day we cut over? I’m just going to take this as a sign that we’re both “doing it right”.