Sometimes the Smart Decision Sucks

Or…

Pulling the plug… for now

This is a hard post to write. I have to admit defeat. It comes at the hands of circumstance. No one thing did it, but they all piled up into the perfect storm. For the second time in a row.

I’ve decided to pull from Ironman Texas. This is the second full Ironman I’ve entered and circumstance has led me to have to pull late in the game. My mind is my worst enemy. “You’re just not capable of pulling off a full Ironman.” “Two in a row because life got in the way? Sure sounds like a great story.” “You’ve got this, so why stop unless you’re just a quitter?” My mind can be a jackass sometimes.

I’ve decided to cash in and call it quits. The last six months have been a rollercoaster. I’ve been sick with respiratory illnesses twice that have lasted more than a few days. My stomach has rendered me useless multiple times for days at a time. I had a chronic sinus headache related to work for nearly two weeks. The cherry on top was a job search that got accelerated a bit faster than I intended. Ironman training is an uphill battle against what your body wants to do in the best of circumstances. This felt more like playing defense against a siege.

Two weeks ago I set a PR at Ironman Galveston. It wasn’t the PR I wanted, but it was progress nonetheless. I felt good. I paced and fueled as if I was doing a full. Everything clicked into place. I had planned to take an easy week with some light swimming, then kick back into gear the next weekend. Or that was the plan.

Saturday morning came and I had a stuffy head with a slight tickle in the back of my throat. Must be allergies plus talking too much on Thursday and Friday, I thought. Checked in with the coach and decided to pull the plug on my Saturday ride. Better to take the weekend off from a long ride and get it in early in the week. I had the flexibility, so why not take advantage of it? I took it easy but my body didn’t let up. By Sunday talking at more than a whisper took effort. I was popping the next cough drop before the current one finished, just to dull my throat. I spent the next three days in a menthol flavored haze of allergy medicine, decongestants, and expectorants. I was going to kick this and hit the next weekend full force with the Red Poppy century.

Saturday morning came and went. No century. I was still toast. I had weaned myself from most of the medicines and was able to breathe, but it still felt like I had rubber bands triple tied around the upper part of my lungs. I spent 30 minutes in a light spin Saturday morning. My heart rate spiked and my breath got shallow any time I picked up the pace beyond casual-ride-in-the-park pace. The handwriting on the wall was now clearly illuminated. Ironman Texas isn’t going to happen for me.

I feel like I could probably finish. That “probably” is the problem. I can definitely see the scenario where something happens, throws me for a loop, and I’m toast. Like my coach said, you can’t fake a full. Regardless of whether I finished, it’s going to destroy my body. Spending more than a half day propelling myself forward is going to take its toll. Given where my fitness and health are right now, that toll is going to be bigger than it has to be.

I want my first full Ironman to be a race I’m proud of. It sounds arrogant and assholeish and all manner of wrong, but I don’t want it to be a race that I slog through just to get the medal and check off “full Ironman” from the bucket list. I want a race that, when I finish, leaves me feeling like I had the best race I could have prepared for. With everything that’s not gone according to plan, Ironman Texas isn’t that race for me this year.

I hate throwing in the towel. I’ve stayed in jobs way longer than I should have. I didn’t want to give up on the job I was hired for or the job I wanted to make for myself. I’ve stayed in relationships past their expiration date. “Relationships are hard” and they “require hard work” and all other manner of platitudes. I’ve shown up on the line at races ill-prepared and undertrained and still raced. Not starting meant fessing up that I hadn’t prepared – regardless of the reasons – to be ready for the starting gun.

My first Ironman isn’t going to be that. I’m going to do it. Hell, my eye keeps wandering to an Ultraman. But I’ll be ready for those when they come. I’m not there right now.

Time for a reset and then… keep moving forward.

Mountain Bikes and Singletrack Focus

I’ve been mountain biking for a decade and a half. Seems crazy, but knobby fat tires have been a part of my life since the early 2000s. When I started, 29ers weren’t a thing, single speeds were the province of those crazy few animals who needed an extra challenge, and there was still a debate between full-suspension and hardtail bikes. Sure, having a spring – remember, this is the dark ages; air suspension setups weren’t common yet – helped smooth out the trail, but the loss in efficiency slowed you down. Everyone knew that.

Then someone decided to test this out. If memory serves, it was Giant Bikes around 2001-2002. Giant had two cross country (XC) racing bikes: their hardtail and dual-suspension. Most of their competitors had a similar lineup. Two models, both super light, both engineered for speed.

Giant put men and women from their pro team on a loop course with both bikes. Team members would ride one lap with one bike, then switch, then repeat. They collected two important pieces of data: the actual time on the lap and the perceived effort from the racers.

Across the board, the pros thought they had been faster on the hardtail bikes. Across the board, they were wrong. They had perceived the bumpy ride – the feedback – of the hardtail bikes as proof that they were moving faster. Each root and rock they bounced off of gave them feedback. They were moving, and so fast they could barely maintain control. By contrast, the dual-suspension bike soaked up the rocks and roots keeping the wheel planted on the ground. This lack of feedback was perceived as slowness.

I love this story. It underlines something I’ve seen time and time again. Lack of feedback makes you think you’re moving inefficiently. That feedback comes in a lot of forms: rocks and roots on a mountain bike, how many unread emails you’ve got waiting in your inbox, or how many reactions to posts you’ve had since your last check.

The key here is that they thought they were moving faster, but in reality they weren’t. I know people who thrive on a phone (and now watch) that’s constantly buzzing. They feel connected. They feel alive. Like the pro mountain bikers before them, they often misread the constant feedback as proof they’re being efficient.

A stream of constant interruption might work for the Jack Dorseys of the world, but many of us need a bit more space to gather our thoughts. That constant feedback that “life is happening and you’re a part of it” is fragmenting our attention. It’s drawing our focus away from the deeper, more meaningful work that we’re capable of.

My reading and listening this year has forced me to re-evaluate what I let grab my attention. I now have all notifications turned off on my phone, save the few things that I intend to allow as interruptions: SMS, phone, and so on. Social networking tools have all of their badge numbers and push notifications turned off. My home screen has only the apps I intend to use every day and the second screen has a handful of large buckets that all of my apps are stored in – the largest of which is the catch-all Extras.

I activated the do-not-disturb feature of my phone while writing this. Those few notifications that have come through (I just checked — there were a couple) will still be there when I’m done. This let me focus my attention on getting these thoughts down and edited into a cohesive post.

Interested? Set aside some time during your day for focused work. Turn off your phone and cut wifi. Even better, change your location to some place where you don’t have access to wifi at all to avoid all temptation. Figure out what you want to tackle, then dive in. It might seem odd at first, but having stretches of time to focus intently, without distraction, is a huge productivity booster.

I’m not suggesting anything new on the technology side, but maybe this tale will help you reframe the issue and realize it’s pretty universal.

Open Source Science?

Let’s run a thought experiment. Imagine submitting a scientific paper for publication, then getting an email that reads something like this back:

Thank you for your submission to the Journal of Online Thinking and Futurism*. Your paper has been processed, but before we can proceed further with publication, please submit verification that you have properly accessed the following cited papers:

… list of every cited work

Or maybe it’s even more insidious. Maybe the letter reads like this:

Thank you for your submission to the Journal of Online Thinking and Futurism. Your paper has been processed and please note that this message is an attempt to collect a debt. According to our records, you have illegally obtained access to the following papers:

… list of every cited work they think you stole**

Please submit either:

  • Proof that you have legally obtained access to each of the papers cited above
  • Payment of $750 per article cited that you accessed illegally

Once this matter is cleared up, we will reevaluate your submission for publication consideration. If you do not respond within 30 days, this matter will be turned over to our legal department for prosecution under United States copyright law.

Two different articles on Sci-Hub have been making the rounds on social media this past week. Both of the above seem a bit far-fetched to me, but journal publishers like Elsevier are facing a literal existential crisis. If sites like Sci-Hub continue operating, what value do publishers provide to the market that lets them continue to operate? Maybe they don’t need to.

Thinking about these articles this morning over breakfast, the similarities between publishing a peer-reviewed paper and open-source software jumped out. Places like GitHub are filled with non-peer-reviewed crap code (just look at the 249 repos I have on GitHub, most of which shouldn’t be used at all), but the main projects are peer-reviewed, if not in the traditional way.

Open-source software that is useful and used ends up with a peer review by folks who use and contribute to it. My thoughts this morning turned to ways that a distributed, open model like open-source software could be used to validate scientific papers. I have no idea if it could, but it’s an interesting thing to ponder.

*Note, the Journal of Online Thinking and Futurism is meant to be a joke. If I find it really exists once I go back online, well, the joke’s on me.

Increase Your Speed, Increase Your Focus

I recently started listening to Deep Work by Cal Newport. So far, I feel like I can sum up the book with this statement:

Focused work provides more value. Focused work requires effort.

It’s full of tips and tricks on how to get the most out of your concentration. Many of them are things you’ve probably heard of before or at least intuitively know. Things like keeping track of how you spend your time. Ways to try to remove busy-work and replace it with focused work. One great quote (emphasis mine):

In an age of network tools, in other words, knowledge workers increasingly replace deep work with the shallow alternative — constantly sending and receiving e-mail messages like human network routers

So great! I still have a few hours left on the audio book, but so far it’s going to make my list of highly recommended books from 2016.

The thing I want to focus on today, however, is the book’s recommendation of productive meditation. Newport’s explanation of productive meditation is:

… [taking] a period in which you’re occupied physically but not mentally — walking, jogging, driving, showering — and focus your attention on a single well-defined professional problem.

He’s basically suggesting that you create an environment to force the creation of those “ah ha” moments where you’re in the shower or walking your dog and solve the problem you’ve been trying to work through. I love the idea of mentally loading up your brain, then kicking into something routine and letting it wander.

Over the years I’ve inadvertently used this method to prepare talk abstracts for conferences, solve bugs, and figure out user interaction designs that were causing me grief. In fact, the very first conference talk proposal I came up with involved a solo afternoon mountain bike ride where I started letting my mind wander to ideas and from ideas to outlines.

Assigning this process a name and outlining the structure gives me a way to recreate it on demand instead of hoping it occurs, as it has in the past. This is great, but I’ve had another revelation while going through this section of the book.

I’m practicing productive meditation while listening to the book. Rather than focus on a problem, I’m focusing on learning. Deep Work has been the majority of what I’ve been listening to, but it also works for podcasts and such. It’s not that I’m passively listening, however; it’s that I’m listening at 2x speed. The increased speed is key.

Rewind to a few years ago. I started co-hosting the ATX Web Show and started listening to more podcasts to get ideas on how they were structured. A few friends talked about how they listened only at 2x. I tried it and couldn’t follow along. Things moved too quickly, the words were jumbled together, and background music felt like it was from the Chipmunks. I abandoned the idea altogether.

This past fall I started thinking I could creep up the pace a bit. I now use Instacast, which supports 1.5x, 2x, and 3x. I decided to bump up the rate a bit. 1.5x felt a lot better. I could still follow along, but I did notice it took a little more effort to keep track of what was going on. The focus felt good, but it wasn’t tiring.

This past January I started listening to Deep Work. The pace at 1.5x still felt a little slow, so I decided to jump it up to 2x. This time I was able to follow along, but I had to focus to keep up. One stray thought meant rewinding 30, 60, or even 90 seconds to get back on track. To keep the pace, I had to focus on what was being said.

Looking back over my progression the past few months, I realize that I’ve been training myself to focus. Now I look forward to taking the dogs for a walk so I can have 10-15 minutes to breeze through 20-30 minutes of an audio book. Walking the dogs doesn’t require much mental energy, so it’s the perfect productive-meditation activity: just enough physical effort to engage my entire mind and body and really learn.

I wish someone had suggested I slowly ramp up my listening speed. I also wish someone had told me doing so would enhance my focus. I always took the 2x podcast listeners to be hectic, super-busy people trying to get through as much as they could. That might still be true for some folks, but I think more of them are focusing more intently by speeding up the pace to something that’s a touch beyond natural.

How to Compliment an Austinite

How The Iron Yard fits in with all the code schools in Austin

The Iron Yard has historically been in places that are not known as tech hubs. It was founded in Greenville, South Carolina and its early expansion was in North and South Carolina and Georgia – not areas known for bustling technology sectors.

Austin is different from most of the other communities that The Iron Yard is a part of: it’s a tech hub with everything from startups to Visa, Oracle, and Dell. Because of the vibrancy of the community, there are a lot of code schools in the area – from the NYC-based General Assembly to Austin’s homegrown, part-time coding school, Austin Coding Academy.

This past week I was on a call talking about The Iron Yard and its place in the Austin tech community. The topic of other code schools came up, specifically how Austin is different in the family of The Iron Yard campuses since students have a lot of choices in Austin that they might not have in other markets that we’re in.

I think it’s a great thing that folks looking to ramp up their coding skills have a lot of options available to them here in Austin. The programs we offer at The Iron Yard are great, but we don’t cover every conceivable option. I’m biased enough to think our structure is the best, but I’m not arrogant enough to think it’s the best for everybody.

For example, here in Austin we’re one of two The Iron Yard campuses that offer a UI Design course. The course presents design principles, and our graduates come out knowing HTML, CSS, and a bit of JavaScript. We touch on things like UX, but it’s not a primary focus. General Assembly, however, offers a UX-focused course. We’ve had students who apply to both schools to figure out which route is best for them. Students who are more interested in user research and workflows have a great option in GA; those interested in building those interfaces have a great option here at The Iron Yard.

We’ve also had students get in touch about a part-time course. We currently don’t offer any part-time classes in Austin, but our friends over at Austin Coding Academy do. Just last week I sent an applicant to them so she can get what she’s looking for. This cohort, I have a student that ACA sent our direction after he finished the part-time program and wanted to do a full-time, immersive program.

This type of collaboration is a key to Austin’s success. We’re not a place that hoards information, be that knowledge or contacts. We’re all about the rising tide that floats all boats.

I’m not the first person to say this. Joshua Baer mentioned this phenomenon back in 2012 when Dave Rupert and I interviewed him for the ATX Web Show. From Joshua’s perspective, one of Austin’s secrets is its collaborative nature. Instead of “oh, you shouldn’t do X because Joe Smith is already doing it” the conversation is “oh, Jane Smith is working on X too, y’all should talk – let me introduce you two!”

That mentality has stuck with me and is integral to how I try to interact with the world.

So what does all of this have to do with compliments and such from the title? On that phone call I mentioned earlier, I explained what I thought of all of the coding schools we have here through the lens I just described. Their response: “Wow, that’s great. That’s a very Austin way of looking at it.”

Want to make an Austinite’s day? Tell them their approach to the world feels like an Austin-way of doing things!

JavaScript Is Eating the World

Ok, not really, but JavaScript is the best place to start programming. I can hear the sound of the “true” programmers whipping their noses into the air as they read that last sentence, but hear me out.

JavaScript started as a quick hack to add a little bit of interactivity to the browser, but now it’s deployed around the world on several billion devices. And it’s not a bad language. All languages have their quirks, and those that do type conversion like JavaScript – 2 + “2” anybody? – have their share plus some, but it’s a solid language to start with. Why, you ask? Read on for my take.
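First, to make that 2 + “2” quirk concrete, here’s a quick sketch you can paste into any browser console or Node prompt:

// Number + string: the 2 is coerced to a string, then concatenated.
console.log(2 + "2");  // "22"

// String - number: there's no string subtraction, so "2" is coerced to a number.
console.log("2" - 2);  // 0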

Ease of deployment for testing

When you’re starting out, getting your code to run somewhere is the hardest part. That was the appeal of PHP. Write your code, copy it via FTP to your server, reload your page. The whole idea of starting a server is simple to those of us who have programmed for a while, but not to someone starting out. That increased the cost of entry for tools like Rails and Django. You had to have a mental model for how you loaded your code. For PHP you wrote a file, you put the file on a server, you loaded that file through the server. You were done. With JavaScript it’s even easier (a minimal sketch follows the steps below).

  1. Save your file to your computer
  2. Refresh your browser
  3. There is no step three, you’re already looking at the result
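Here’s a minimal sketch of that loop; the file name and contents are just my illustration:

<!-- hello.html: save it anywhere on disk, then open it directly in a browser -->
<!DOCTYPE html>
<html>
  <body>
    <h1 id="greeting"></h1>
    <script>
      // Edit this line, save, refresh the browser. That's the whole loop.
      document.getElementById('greeting').textContent = 'Hello, JavaScript!';
    </script>
  </body>
</html>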

Rise of JavaScript on the server

Server-side JavaScript wasn’t created by Node, but Node was the first thing to make it usable and fast. Taking the same skills you use to interact with events from a user and making those interact with events from a database or a caching layer means one less thing you have to learn. Yes, deployment of that application is a bit more involved than working with the browser, but you’re learning about deployment, not deployment and a new framework and a new language.
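Here’s a sketch of what I mean by the same skills (the two halves run in their own environments, and the file name is hypothetical):

// In the browser, you react to events from a user:
document.querySelector('button')
  .addEventListener('click', () => console.log('clicked'));

// In Node, you react to events from I/O with the same pattern:
const fs = require('fs');
fs.createReadStream('data.txt')
  .on('data', (chunk) => console.log(`read ${chunk.length} bytes`))
  .on('end', () => console.log('done'));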

The other thing that’s often discounted by folks in the development community is how important native Windows support is. Yes, you can run Python or Ruby or PHP on Windows, but the thought of deployment is nearly laughable. The thing that makes Node a killer platform is that you can run and deploy it inside the enterprise without having to change all of your computers.

JavaScript is here to stay, even if only as a target for other languages like CoffeeScript or TypeScript. It’s a great language to start with since it’s situated right in the middle of the web development stack – that space between design and backend development. It’s easy to get started but challenging to truly master. And it runs on just about every computing device created in the past decade.

The Next Chapter

About a year and a half ago I started looking for my next thing in a post-Tribune world and my first email was to my friend Peter Wang. A few months after we closed down Quickie Pickie talking about the future of Continuum Analytics and data science, I joined as the Web and UX Architect. During my time there I’ve had the opportunity to contribute to almost every product with a UI that the company ships. Tools like Conda and Bokeh are changing the way people deal with packaging and visualization. Under Peter and Travis’ leadership I’m sure the brain trust that is assembled at Continuum will continue to redefine the space, but an opportunity has come up that I can’t pass up.

I was once asked in an interview to give advice to people starting in data journalism. I said, “become an expert, then start over.” I’m taking my own advice. I’m not starting over completely, but I am stepping out of my comfort zone. Starting the end of June I’m leaving the world of programming and design to become the Campus Director of The Iron Yard in Austin.

The team at TIY is full of some great people (including my good friend SamKap) and is doing something really important, providing an alternate route for becoming a professional programmer or designer. To say I’m stoked is an understatement. I’m sure I’ll have plenty to say over the coming months, but for now I’ll leave it with, see ya in Austin in a couple weeks!

Workflow With Git

I’ve been toying with my Git workflow the past year at Continuum and have settled on a good process for handling semantically versioned software in Git. This post is my attempt to catalog what I’m doing.

Here’s the TL;DR version:

  • master always contains released, tagged code
  • Code gets merged back into develop before master; all work happens in feature branches off of develop
  • Bug fixes are handled in branches created from tags and merged directly back into master, then master is merged into develop.

That’s the high level overview. Below is that information in more depth.

master of code

The master branch always contains the latest released code. At any time, you can checkout that branch, build it, install it, and know that it was the same code you would have gotten had you installed it via npm, pypi, or conda.

Merges into master are always done with --no-ff and --no-commit. The --no-ff ensures a merge commit so you can revert the commit if you ever need to. Using --no-commit gives you a chance to adjust the version numbers in the appropriate metadata files (conda recipe, setup.py, package.json, and so on) to reflect the new version before committing. For most of my releases, I’m simply removing the alpha suffix from the version number.

There should only be one commit in the repository for any given version number and every commit that’s in master is considered to be released. Keep in mind, that means you can’t use GitHub’s built-in Merge Pull Request functionality for releases, but that’s ok by me. You have to go to the command line to tag anyhow.

With the appropriate changes for versions, the next step is to create the commit and then tag it as vX.Y.Z immediately. From there, you build the packages and upload them or kick off your deployment tools and the code with the new version is distributed.
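Put together, a release looks something like this (the version number and file names are placeholders for whatever your project uses):

$ git checkout master
$ git merge --no-ff --no-commit develop
... edit setup.py / package.json / conda recipe to drop the alpha suffix ...
$ git commit
$ git tag v1.3.0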

Managing Development with develop

Now you need to start working on the next feature release. All work happens in the develop branch and it should have a new version number. The first thing you should do is merge master in, then bump the version number to the next minor release with a suffix of some sort. I use alpha, but you can change that as needed depending on your language / tools.

For example, I just released v0.8.0 of an internal tool for testing yesterday (no, it’s not being used in production yet, thus the 0 major version). Immediately after tagging the new version, I checked out develop, merged master into it via a fast-forward merge, then bumped the version number to v0.9.0alpha. Now, every commit from that point forward will be the next version with the alpha suffix so I can immediately see that it was built from the repository.
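In command form, that post-release bookkeeping looks something like this (version numbers from the example above):

$ git checkout develop
$ git merge master
... bump the version in the metadata files to v0.9.0alpha, then commit ...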

Managing Branches

Everything is developed in branches. New features, refactoring, code cleanup, and so on happen off of the develop branch; bug fixes happen in branches created directly from the tagged release that the fix applies to. Let’s deal with feature branches first; they’re more fun.

I’ve gotten into the habit of adding prefixes to my branch names. New features have feature/ tacked on at the start, refactor/ is used whenever the branch is solely based on refactoring code, and fix/ is used when I’m fixing something. The prefixes provide a couple of benefits (a few example branch names follow the list):

  • They communicate the intent of the branch to other developers. Reviewing a new feature requires a slightly different mindset than reviewing a set of changes meant solely to refactor code.
  • They help sort branches. With enough people working on a code base, we’ll end up with a bunch of different types of changes in-flight at any given time. Having prefixes lets me quickly sort what’s happening, where it’s happening, and prioritize what I should be looking at. I generally don’t want any fix/ branches sitting around for very long.
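A few hypothetical branch names following that convention:

$ git checkout -b feature/csv-export develop
$ git checkout -b refactor/extract-query-helpers develop
$ git checkout -b fix/calendar-query v1.2.0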

Some people like having the developer name in the branch as well to provide a namespace. I can understand this approach, but I think it’s wrong. First, Git is distributed, so if you truly need a namespace for your code to live where it doesn’t interact with others’ code, create a new repository (or fork if you’re on GitHub).

The second, and much more important, reason I don’t like using names in branches is that they promote code ownership. I’m all for taking ownership of the codebase and particularly your changes. It’s part of being a professional: own up to the code you created and all its flaws. What I’m not for is fiefdoms in a codebase.

I worked at one company where I found a bug in the database interaction from the calendar module. I fixed the bug in MySQL, but didn’t have the know-how to fix the bug in the other databases. I talked to the engineering manager and was directed to the developer that owned the calendar. I explained the bug, my fix, and what I thought was needed for the other databases to work, and left the fix in their hands. When I left the company six months later, my fix still wasn’t applied and none of the other databases had been fixed. All because the person who owned the calendar code didn’t bother to follow through.

Having a branch called tswicegood/fix/new-calendar-query gives the impression that I now own the new calendar fix. Removing the signature from that is a small step toward increasing the team ownership of a code base and removing the temptation to think of that feature as your own.

Managing Bugfixes

So what about bugs? You want the bug fix to originate as close to the originally released code as possible. To do this, create the branch directly from the tag, bump the version number, then work on your fix. For example, let’s say you’ve found a bug in v1.2.0 that you need to fix.

$ git checkout -b v1.2.1-prep v1.2.0
... adjust version number to v1.2.1alpha, then commit

The -b v1.2.1-prep tells Git to create a branch with that name, then check it out. The v1.2.0 at the end tells Git to use that as the starting point for the branch. The next commit adjusts the version number so anything you build from this branch is going to be the alpha version of the bug fix. With that bookkeeping out of the way, you’re ready to fix the code.

For projects that have a robust test suite (which unfortunately isn’t all of them, even mine), the very next commit should be a failing test case by itself. Even when you know the fix to make the test pass, you should create this commit so there’s a single point in the history that you and other developers can check out and run the tests to see the failure. The next commit then shows the actual code that makes the test pass again.
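The history for a fix then reads as two commits, something like this (the messages are my own illustration):

... stage only the failing test ...
$ git commit -m "Add failing test exposing the bug"
... stage the actual fix ...
$ git commit -m "Make the new test pass"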

Once the fix has been tested and is ready for release, it’s time to merge back into master. You should do this with --no-ff and --no-commit, and remove the alpha suffix before committing, just like making a feature release.
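Continuing the v1.2.1 example from above, that merge looks something like:

$ git checkout master
$ git merge --no-ff --no-commit v1.2.1-prep
... change the version from v1.2.1alpha to v1.2.1 ...
$ git commit
$ git tag v1.2.1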

Once you’ve merged and tagged the code, you need to get develop up-to-date with the bug fix. Since master and develop have now diverged — remember, develop has at least one commit bumping the version number — you have to deal with a merge conflict.

Hopefully, the merge conflict is limited to the version number. If that’s the case, you can tell git merge to ignore those changes with this command:

$ git checkout develop
$ git merge -X ours master

The -X flag tells git merge which strategy option to use when merging, and using ours tells it that the code in the branch you’re merging into wins. You need to be careful with this, however. It means that any real conflicts would be swallowed up. Hopefully you know the changes well enough to realize if there’s a larger conflict, but if for some reason you don’t know, you can always try this approach:

$ git merge master
… ensure that the only conflicts are around the version
… numbers and that the develop branch code should be used
$ git reset --hard ORIG_HEAD
$ git merge -X ours master

You’ll have to manage any merge conflicts manually (or use git mergetool) if the conflicts are larger than the version number change. If you do confirm that you don’t need any of the conflicted changes, you can use git reset --hard ORIG_HEAD to reset the working tree back to its pre-merge state, then the git merge -X ours master to pull the changes in ignoring the conflicts from master.

On develop versus master

I’ve gone back and forth on this. My preference is to release often. Sometimes multiple times a day. In that case, master is just a quick staging ground. Create a branch, bump the version, write one feature, merge it, bump the version number, rinse, then repeat.

There are a few problems with this approach. First, not every team or, for that matter, every project can work that way. Sometimes the code needs more testing across multiple platforms or configurations. Sometimes there’s an integration test suite that takes a while to run. Sometimes releases need to be timed to coincide with scheduled downtime, giving you time to implement a few features while waiting for your release window.

Second, it doesn’t scale. One branch that merges one feature is fine, but if you have a team of developers working on a project you probably have multiple things being worked on in parallel. Having them all branch off master, all bump their version number, and all coordinate for an octopus merge (or merge and release separately) is a nightmare.

Having everyone branch and merge off of develop provides a base that keeps in sync with the rest of your code base. Your feature branch exists by itself, and all it needs to do to stay in sync is occasionally merge develop.

Compared to git-flow

This is very similar to the workflow called git-flow. There are a few differences.

If my memory serves, it used to call for branch names with the author’s name in it (a re-reading of it now doesn’t show that though). That’s what remote repositories are for, so I don’t want to use that.

Correction, nvie just confirmed that it’s never been there, so one of my biggest gripes with it wasn’t founded. Oops. :-/

Next, hot fixes or bug fixes in git-flow are merged to master and develop instead of only master. I want the versions going through master then back out to develop. To me, it’s a cleaner conceptual model.

Versions, a thing I’ve written about, are important. I want develop to be installable, but I don’t want it confused with any released version. There should only be one commit, a tagged commit at that, in each repository that can be built for any given version.

I don’t call out release branches in my description because my hope is that they aren’t necessary. Of course, if your project has a long QA cycle that’s independent of development or you’re trying to chase down a stray bug or two before a release, then a release branch is great, I just don’t make them required.

In Closing

The most important thing is to create some process to how code moves through your repository, document it, and stick to it. Everyone always committing directly to master is not sustainable. It also makes it much harder to revert changes if something makes it in by accident as you have to go find all the relevant commits instead of reverting one merge commit.

Worse than a free-for-all in master is the hybrid. Committing some of the time directly to master and other times to a feature branch means there’s no pattern to how your code moves. What’s the threshold for creating a feature branch? Is it based on how big the feature is, or how long it’s going to take? Answering these questions distracts you and future contributors. Providing a solid pattern of how contributions flow through your repository is an important step in making your project more accessible to fellow contributors regardless of whether those are in the open-source community or an office down the hall.

Some of the things outlined here might seem like a lot of overhead, but in the end they save you time. Most importantly, they’ll scale beyond just you.

On Versions

Versions are dead, long live versions

What version of Chrome are you using? Beyond the major version number, what version of your operating system are you on? If you deploy on Linux, what version is your Linux kernel?

My answer to those questions: I don’t know. Or didn’t. I just checked and I’m on version 42.0.2311.39 beta for Chrome, 10.10.2 for OS X, and 3.16.7-tinycore64 for my Docker VM I use for testing images. My life isn’t better for knowing that information, though.

The same is true for most of the software you create. The version number doesn’t matter, but to this day software developers don’t want to mark their software as version 1.0. 1.0 carries a lot of weight. To a lot of developers it means you’re done. It means you’re confident in it. It means things aren’t going to drastically change.

The Python community is afraid of 1.0. The only explanation I can come up with is that it’s the largest case of collective impostor syndrome I’ve ever seen.

Don’t believe me? There are 61,564 Python packages that have been released according to this page. Of those, 40,489 have a version number that begins with 0. That’s two-thirds of all packages whose version numbers tell me nothing.

For example, is virtual-touchpad more stable than Werkzeug? The former is at version 0.11 while the latter is only at 0.10.1. Of course, Werkzeug is almost certainly more stable. The download numbers seem to tell me that, with its more than 20,000 downloads in the last day. Werkzeug runs a huge chunk of the web that’s powered by Python. Flask doesn’t exist without it.

Statements like the one in the previous paragraph that begin with “of course”, however, are only obvious with the correct frame of reference. If you’re coming from a world outside the Python community, you don’t have that frame of reference.

Sane Versions

Enter Semantic Versioning (SemVer). It can be described in a tweet, but here’s the slightly more expanded version.

  • Versions begin at 0.x. Anything in 0.x hasn’t been deployed anywhere and you’re still turning it into something useful. You make no guarantees about it.
  • The first code that’s used in production is 1.0.0. Production means it’s being used and not just written.
  • Versions follow Major.Minor.Bugfix.
  • Major version numbers are for backward compatibility. If this number changes, the API has changed and it means that code written against the old version won’t work with the new version in at least one case.
  • Minor versions are for new features. Nothing should break between versions 1.0.0 and 1.1.0 or 1.101.0.
  • Bugfix versions are for bugfixes. No new features are added here, just corrections to code to make sure it does what it’s supposed to.

It’s really that simple. When I install your software package at version 1.2.0, I know that I can upgrade to anything before version 2.0.0 and it should all continue to work.
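That guarantee is what dependency ranges build on. As a sketch, with a hypothetical package in npm’s package.json, the caret range below accepts any version from 1.2.0 up to, but not including, 2.0.0:

{
  "dependencies": {
    "some-package": "^1.2.0"
  }
}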

There are some devils hiding in the details. For example, how many back versions do you support? If you find a bug in version 1.3.0 that was present all the way back to 1.0.0, do you patch versions 1.0.x, 1.1.x, and 1.2.x as well? Does each new feature mean a minor version bump?

That’s up to you as a maintainer. There are no right answers to those questions: the main point is to make sure that code that works in one release doesn’t break in the next. If it does, and sometimes it needs to, bump the major version number.

Also, it’s ok to break. SemVer gives you the opportunity to convey to the users of your code that something needed to change in ways that weren’t compatible with the previous code.

To the Python Community

Please consider adopting SemVer. What’s stopping you? Is it because you don’t think your code is ready to be called 1.0? I promise you, it is. It’s actually awesome!

All I want is for you to quit worrying about getting it perfect. Get it close to right, make it so people can use it. Then release it. If you get something wrong or need to fundamentally change the API, do it, but bump the major version number so everyone knows at what point their code might not just work™.

Software is just that: soft. It can, and should change. Don’t be afraid of v1.0 or v2.0 or v20.0.

Looking Toward the Hub

This past fall a (new) good friend offered to marry Brandi and me as we traveled to Terlingua to share our vows with each other, our families, and close friends. As Sharron prepared, she asked each of us for a favorite author or two so she could find a quote to use at the ceremony.

There are few things that will make you question your reading more than being asked for your favorite author when you’re marrying a professional writer. I read a ton, but have had very few authors who are my go-to when looking for inspiration. I’m also horrible with specifics. I remember general themes, but things like names don’t stick with me. Since I drew a blank on inspiring writers, I went with my gut: Terry Pratchett.

Regardless of where I’ve been in life in the handful of years since discovering him, I’ve reached for Terry Pratchett’s books as my release from the previous day’s activities. They’ve been the thing that lets the energy expended or pent up during the day relax into a soothing sleep. His humor and view of the world are calming.

I told Sharron that Pratchett was my favorite author, not expecting her to find much of anything. His humor is great, I knew that. But something that would fit in a wedding? That’s a different story.

The day before the wedding, we arrived and she told us what she had found:

Why do you go away? So that you can come back. So that you can see the place you came from with new eyes and extra colors. And the people there see you differently, too. Coming back to where you started is not the same as never leaving.

Emphasis mine.

Having just left Austin, having just left my family and friends, having grown up as a rolling stone, and having returned to a place dear to my heart for this special occasion, this quote carried special meaning for me.

It’s been with me ever since, and even more so this last 24 hours. #RIPTerryPratchett