This blog is part 1 of a series:
We have a client with a complex webshop. Each shop is made up of three web applications which must be deployed: the front-end website and two admin websites. There is also a Web API web service and a master admin website from which all the other websites synchronise. Eight copies of this website are deployed on the server, each with different branding (different CSS, images and web.config settings) but the same code and the same binaries to deploy, and each website also has a staging counterpart.
We started out with just one website, so it was fine to deploy the site manually. This involved logging in to the server via remote desktop, backing up the current web folder into a zip, copying the new website across, then logging on to the database server (also by remote desktop), backing up the current database and running the change script. But as the number of websites grew, deployment started to take more and more time. There was also a higher chance of mistakes, because it was all done by hand: a wrong path in the web.config, for example.
The solution: Octopus Deploy
I had this in mind whilst I was at a conference. On the last day there was a chance for small discussions. There was one on continuous integration which I joined. It was there that someone mentioned that they used Octopus as their deploy strategy. It sounded great to me so after the conference I took some time to check it out.
This series of 8 posts (because an octopus has 8 legs, of course) details our journey to an automated deployment strategy which works for us. The great thing about Octopus Deploy is that it is easy to learn and start using quickly, yet amazingly customisable: with PowerShell scripts you really can get it to do anything.
Octopus Deploy Basics
Octopus Deploy is very user-friendly. It is easy to learn the basics and start deploying your website in no time. Here I give an overview of the process and the features we use.
The Octopus Deploy system consists of two parts, the Octopus and the Tentacles. The Octopus is the main server; this is where you configure all your deployments. The Tentacles are lightweight agents installed on the machines that you want to deploy to. There can be many Tentacles, but only one Octopus.
To test the software I initially installed the Octopus server on my local machine and a Tentacle on the server which hosts the websites. You need to link the Tentacle to the Octopus server, and Octopus makes this very easy with the concept of a unique thumbprint for each Tentacle: you simply enter the IP address of the machine, check that the thumbprint matches, and you are connected.
Octopus works best when it is deploying NuGet packages. To create a NuGet package of your website there is a simple plugin, developed by the people at Octopus Deploy, called OctoPack, which can be installed from NuGet into your Visual Studio solution.
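As a rough sketch (the project name here is a placeholder), adding OctoPack and producing a package during a build looks something like this:

```shell
# In the Visual Studio Package Manager Console, for the web application project:
Install-Package OctoPack

# Then, from a command prompt, build with packaging switched on;
# OctoPack turns the build output into a .nupkg
msbuild MyWebsite.csproj /t:Build /p:RunOctoPack=true
```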
There is a built-in NuGet repository which comes with the Octopus server. To use it you give the build the URL of the Octopus NuGet feed and an API key, which means that when you build, the NuGet package is pushed directly to the Octopus NuGet server ready to be deployed.
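One way to wire this up (the server URL and API key here are placeholders) is to hand the feed details to OctoPack as MSBuild properties, so the package is pushed as soon as it has been packed:

```shell
msbuild MyWebsite.csproj /t:Build /p:RunOctoPack=true ^
  /p:OctoPackPublishPackageToHttp=http://your-octopus-server/nuget/packages ^
  /p:OctoPackPublishApiKey=API-XXXXXXXXXXXXXXXX
```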
Once this is done, you need to create a project inside the Octopus server. A project maps to the Visual Studio solution which needs deploying. While I was experimenting with Octopus Deploy I created a project for one of the websites on our server which needed deploying. A project can consist of many steps which make up the deployment process; in my case I created three steps, one for each web application in the website: the admin site, the prices admin site and the front-end website. Octopus Deploy made it easy to deploy all three one after another. For each step I simply selected the NuGet package I wanted to deploy (the relevant web application) and set the path for the custom install directory.
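As an illustration (the path itself is hypothetical), the custom install directory field accepts Octopus variable syntax, so the same step can land in a different folder per environment:

```
C:\inetpub\wwwroot\#{Octopus.Environment.Name}\AdminSite
```

At deploy time `Octopus.Environment.Name` is replaced with the name of the environment being deployed to, for example staging or live.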
Environments and roles
We have staging and live environments for each website so that any changes can be tested before going live. Octopus makes this really easy to handle too. I created two environments under the Environments tab, one called staging and one called live. On the machine settings page Octopus lets you select the environments which map to the machine, so if, as in our case, both staging and live websites sit alongside each other on the same physical machine, that is no problem: I simply assigned both environments to the machine. The machine settings page also has a Roles setting. Here I entered each project as an individual role, which meant that all projects could be deployed to the same machine, in both the staging and live environments. Octopus makes all this very easy to configure.
Another very useful concept which Octopus Deploy introduces is Lifecycles. With lifecycles it is easy to manage how a deployment should progress between environments. In our case I changed the default lifecycle so that it consisted of two phases: the first deploys to the staging environment and the second to the live environment. It is a really simple rule, but it is powerful: no one can deploy to the live environment without first deploying to staging.
My first experiments with Octopus Deploy were very positive: it was very simple to get started, to create a NuGet package and deploy it to the Tentacle on the server. However, I was just scratching the surface of what Octopus enables, and this simple test was not yet good enough to replace our manual deploy process.
Next: Build scripts
In the next post I will describe how I wrote a build script to further tailor the build and the NuGet package creation.