The art of deployment – part 2

Let me be honest with you: a year ago I uploaded my websites with FTP and updated databases with phpMyAdmin. I had very limited knowledge of SSH, or of how servers work in general. For years, my main focus had been on writing good code and improving my skills as a web developer.
Today, less than a year after switching jobs, I'm constantly improving the process of publishing and updating websites. I've learned a lot in that time, and I want to share that knowledge with you.

In the beginning, there was FTP…

When I first deployed my sites, I did it like we all did: upload the files with FTP. The sites were relatively small and updates were manageable, but we all encountered problems and mistakes like:

  • Forgetting to upload some files to the server.
  • Forgetting to delete some old files from the server.
  • Accidentally overwriting configuration files.
  • Forgetting to restore file permissions.

These are just a few classic mistakes, and I'm sure we've all had our share of them. The fact is that this process is too manual: everything needs to be done by a human being, and humans make mistakes. And I'm not even covering the whole aspect of testing the website, or working with multiple developers who overwrite each other's files on the server, undoing previously made adjustments.

Patching your websites

I first found this method when I had to upgrade a Magento installation. It turns out Magento offers a whole set of diff files you can use to patch your Magento installation. So why not use this method to patch your own sites? At this point in my development career I was already pushing my project changes to a Git repository, so a diff file was easily made. The next thing to do was upload the site to your server once, log in with SSH, and apply patches from there.
This solved the problem of forgetting to upload some files to the server and to remove some others, and it provided a nice rollback feature with patch's -R option. At this point it seemed like the holy grail to me. But it still had some drawbacks:

  • You could still accidentally overwrite configuration files.
  • File permissions could also cause problems.
  • A developer applying a patch on a live server should know what they're doing. So in a team with more developers, logging in over SSH and messing with terminal commands can lead to unwanted results.
  • A diff expects a file to be in a given state. So if that file has been changed by another developer's patch, you're left in a world of patch hurt.

Yeah, this method was nice, but to be honest I haven't used it very much, although I am still using it today to upgrade Magento installations.
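
For reference, here is roughly what that patch workflow looked like (a minimal sketch; the tags, paths, and server names are made up for illustration):

  # On the development machine: diff the deployed version against the new one.
  $ git diff v1.0 v1.1 > upgrade-1.0-to-1.1.diff

  # Copy the diff to the server and apply it from the site's root.
  $ scp upgrade-1.0-to-1.1.diff user@server:/var/www/mysite/
  $ ssh user@server
  $ cd /var/www/mysite
  $ patch -p1 < upgrade-1.0-to-1.1.diff

  # Rolling back is the same patch applied in reverse.
  $ patch -R -p1 < upgrade-1.0-to-1.1.diff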

Capistrano

So not too long ago I wrote an article on how to deploy your website with Capistrano. I must say, I am still very excited about it. At last, it looked like I had found a tool I could trust my deployments to. Capistrano checks out a user-defined branch of your Git repository on the remote server, and lets you run custom tasks during and after this process (like flushing caches or renaming configuration files).
This allows you to set it up once, and never have to worry about uploading your site again! Bye bye FTP!
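
Under the hood, a deploy boils down to something like this on the remote host (a simplified sketch of the idea, not Capistrano's actual implementation; all paths are made up):

  # Each deploy lands in its own timestamped release directory...
  $ git clone --branch master git@example.com:me/mysite.git \
      /var/www/mysite/releases/20150311103000

  # ...and a symlink is switched to make it the live version. Rolling back
  # means pointing the symlink at the previous release again.
  $ ln -sfn /var/www/mysite/releases/20150311103000 /var/www/mysite/current

Because a release only goes live when the symlink is switched, you never serve a half-uploaded site. Still… while integrating Capistrano into the workflow at our company, we ran into some problems: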

  • Each developer needs to have Capistrano installed and run $ cap master deploy from their terminal. This could still be a bit complicated for your co-workers, and certainly for future co-workers or interns.
  • With tools like SASS, you compile your SASS files to expanded, source-mapped CSS files during development, but on the live site you want compressed CSS files (see the example after this list). You could do this in your repository before publishing, but that would mean swapping configuration files and making unnecessary commits.
  • The same goes for other assets that are better off concatenated and uglified (JavaScript, for example).
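
To make that concrete: with the Ruby Sass compiler, the development and production builds differ only in a couple of flags (an illustrative example, not our actual setup):

  # During development: readable output plus a source map for debugging.
  $ sass --style expanded --sourcemap style.scss style.css

  # For production: whitespace stripped, everything on one line.
  $ sass --style compressed style.scss style.min.css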

So we're heading in the right direction, but we still have some 'problems':

  • It should be simple for everyone.
  • We want to do some stuff to our source code before it goes to the production server.

What we need is a build server. What we need is Continuous Integration!

Jenkins

Meet Jenkins: the right man for the job. I had heard about Jenkins before, but never had the time or a use case to look into it. Now I had both. So I installed Ubuntu on a spare computer, installed Jenkins on it (it's really easy, actually), and in less than half a day I had my first job running. The job was very simple and worked as follows:

  1. Poll the master branch of my repository for changes every 5 minutes.
  2. Do a $ cap master deploy as soon as anything changes.
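
In Jenkins terms, this is just a freestyle project with SCM polling and a one-line shell build step (a sketch; the schedule uses Jenkins' cron-style syntax):

  # Build trigger: "Poll SCM", schedule:
  #   H/5 * * * *     (check the repository roughly every five minutes)
  # Build step: "Execute shell":
  $ cap master deploy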

So that was easy. I made a test setup with a repository and some webspace on one of our VPSes, and it worked as expected: Jenkins did the deployment for me. But let's take it a step further. The next thing I made the job do on each change was the following (a shell sketch follows the list):

  1. Check out the build branch (one I added for this purpose).
  2. Hard-reset it to origin/master.
  3. Perform a Grunt task I wrote that concatenates a.css and b.css into a file called c.css.
  4. Delete a.css and b.css.
  5. Commit the changes and push the build branch.
  6. Now do a $ cap build deploy.
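
As a single Jenkins shell step, those six steps look roughly like this (a sketch; the Grunt task name is hypothetical):

  $ git checkout build
  $ git reset --hard origin/master

  # Hypothetical task that concatenates a.css and b.css into c.css.
  $ grunt concat-css
  $ git rm a.css b.css
  $ git add c.css

  $ git commit -m "Build for deployment"
  # The hard reset rewrote the branch, so the push has to be forced.
  $ git push --force origin build
  $ cap build deploy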

And what do you know… Now, as soon as I push changes to my repository, the production server gets the c.css file, not the a.css and b.css I have in my source code. Magic!
This is a very simple example, but it immediately shows the power of Jenkins and the tasks it can take away from you as a developer: boring, repetitive tasks that are fun to set up, but tedious to perform each time and easy to forget (remember what I said about the human factor earlier?).

Let’s go full-blown

And this is where I am now: a full-blown deployment protocol that runs each time I (or any other developer on my team) push changes to Git: everything happens automagically from there. And it's scalable: for each project we can add as many actions as we like; unit tests, for example, or frontend testing with CasperJS.
Here is an example of what the deployment of one of our customers' sites looks like now (rough command-line equivalents of the asset steps follow the list):

  1. Check out the build-master branch (we also maintain a separate build branch for the staging server).
  2. Hard-reset it to origin/master.
  3. Run a Grunt task that does the following:
    • Install Composer vendor files.
    • Install Bower vendor files.
    • Concatenate and minify all CSS files into a single new CSS file.
    • Concatenate and uglify all JavaScript files into a single new JavaScript file.
    • Optimize all JPG, PNG, and GIF images.
    • Optimize webfonts.
    • Modify some HTML files to remove development code (like the references to all the separate CSS and JavaScript files) and replace it with the minified and uglified equivalents.
  4. Delete all unnecessary files.
  5. Commit the changes and push the build-master branch.
  6. Do a $ cap build-master deploy.
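
The Grunt specifics are beyond the scope of this post, but the asset steps boil down to commands like these (standalone CLI tools shown as stand-ins for the actual Grunt plugins; file names are illustrative):

  # Vendor files.
  $ composer install --no-dev
  $ bower install

  # One minified CSS file and one uglified JavaScript file.
  $ cleancss -o dist/app.min.css css/*.css
  $ uglifyjs js/*.js --compress --mangle -o dist/app.min.js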

Now that's what I call magic! The published website is exactly the same as my development version, except that everything is smaller and faster! And I only had to set it up once. I have to give credit here to my colleague Ferry Brouwer for setting up 99% of the Grunt build script; I only had to modify it slightly so it would fit in.
I'm also working on an experiment where the following steps are added after our grunt build step (sketched after the list):

  • Spawn a Vagrant box (our Vagrant configuration lives in each project's repository).
  • Run some unit tests in it (via Vagrant SSH and PHPUnit).
  • Perform some CasperJS tests against it.
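
A sketch of what those extra steps could look like (the box defaults and test paths are assumptions):

  # Boot the project's box, run the tests inside it, then clean up.
  $ vagrant up
  $ vagrant ssh -c "cd /vagrant && vendor/bin/phpunit"
  $ casperjs test tests/frontend/
  $ vagrant destroy -f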

The final result would be the core principle of Continuous Integration: each push to the master branch (that is, each attempt to modify the live site) triggers a whole set of instructions that take care of optimizing, testing, and deploying your website. It's pure magic!

One final note

Don't see this article as the holy grail of 'how should I deploy my websites'. In my opinion, each kind of project requires its own type of deployment. And even then, there are a million ways to achieve the same goal.
Like I said, only a year ago I was still uploading sites with FTP, so the whole concept of Continuous Integration is quite new to me. Even now, I'm not sure the steps I'm taking are the right ones. They work for me now, but I wonder whether, a year from now, I'll still be deploying websites the way I do today. But then I'll just write part 3 of this blog post.
