Dedicated or shared hosting? There was a time when that was the only decision you needed to make when it came to your hosting needs. While that decision still needs to be made, it no longer ends there. There are now many more choices to be made about what’s under the hood, based on your requirements, budget and roadmap. Containerisation is one of the latest causes of headaches in data centres across the land. Would containers suit our needs? What kind of server hosting is required to run containers? And what is ‘serverless’?
THE GOOD OLD DAYS
In the ‘old days’, you would simply run an application via a web hosting package, or if you were a little more sophisticated, you would acquire a dedicated server with an operating system and a complete software stack. But now, there are far more options.
Containerisation, or operating-system-level virtualisation, uses a platform such as Docker to run isolated instances known as containers. A container is a package of software that includes everything needed for a specific application, functioning like a separate server environment. Because containers share a single OS kernel, many of them can run on one server or virtual machine (VM) without affecting one another. To the user, a container feels like its own unique environment, irrespective of the host infrastructure.
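To make that concrete, here is a sketch of a minimal Dockerfile, the recipe from which a Docker container image is built (the base image, file names and start command are illustrative, not taken from any particular application):

```dockerfile
# Start from a base image that provides the Python runtime
FROM python:3.12-slim

# Copy the application and install its dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# The command the container runs on start, wherever it is deployed
CMD ["python", "app.py"]
```

Everything the application needs – runtime, libraries and code – is baked into the image, which is why the resulting container behaves the same on a laptop, a dedicated server or a cloud VM.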
WHY CHOOSE CONTAINERS?
Containers are a very efficient way to structure your data centre because they can perform tasks that would otherwise require a whole server or VM, while consuming fewer resources. In fact, while most of our customers won’t realise it, our CloudNX Apps & Stacks services are already built on container technology, and we continue to take what we’ve learned and apply it to all our products going forward. We use these technologies internally – we’ve been the guinea pigs ourselves – and our underlying platforms have become more resilient as a result, with the additional benefit of self-healing. In the years to come, the development and adoption of containers will likely continue to accelerate.
We intentionally chose containers as the platform for these new services because they’re lightweight and agile, which allows them to be deployed, shut down and restarted at a moment’s notice, and they can be easily transferred across hardware and environments (for example, from one data centre to another). Because containers are standalone packages, they behave reliably and consistently for everyone, all the time, regardless of the local configuration. There’s a reason why Google pioneered containers and uses them across its global infrastructure too.
Containers, just like VMs or servers, need to be managed, or orchestrated. There are several orchestrators out there, but Kubernetes (which, incidentally, Google built in the first place) is the leading container orchestration tool, filling a vital role for anyone who needs to run a large number of containers in a production environment – on one or more dedicated servers, for example. Kubernetes automates the deployment, scheduling and management of containerised applications. It automatically scales containers across multiple nodes (servers or VMs) to meet current demand and performs rollouts seamlessly, while also enabling containerised applications to self-heal: if a node fails, Kubernetes restarts, replaces or reschedules containers as required.
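That automation is expressed declaratively. A sketch of a Kubernetes Deployment manifest (the name and image here are hypothetical) states the desired number of container replicas, and Kubernetes continuously works to keep reality in line with it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app               # illustrative name
spec:
  replicas: 3                 # desired state: three copies of the container
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example/web-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

If the node hosting one of those three replicas fails, Kubernetes notices the divergence from the declared state and schedules a replacement container on another node – the self-healing described above.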
AN ENVIRONMENT THAT’S RIGHT
As with traditional web hosting solutions, you can run your containers in a shared environment, where you will likely get the best value for money if you have relatively small workloads that will not fully utilise the resources of a whole cluster of nodes (VMs or servers). But if you have larger workloads or regulatory obligations to meet, a dedicated environment, or even your own cluster, may be required.
And what about ‘serverless’? Despite the name, serverless computing still runs on servers; the difference is that the developer never has to manage them. The orchestrator automatically stops, starts and scales containers on the infrastructure best placed to handle the demand at that time. This means that the developer has even less to be concerned about: code runs automatically, with no need to manually configure the infrastructure. Costs are also minimised, with all instances of a container automatically shut down when demand for it disappears.
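Serverless platforms built on Kubernetes, such as Knative, express this with scale-to-zero settings. A sketch of a Knative Service manifest (the service name, image and scale limits are hypothetical, chosen only for illustration):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-service          # hypothetical name
spec:
  template:
    metadata:
      annotations:
        # Allow the service to scale all the way down when idle
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "10"
    spec:
      containers:
        - image: example/hello:1.0   # hypothetical image
```

With a minimum scale of zero, no containers run – and no compute is billed – until a request arrives, at which point the platform starts one automatically.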
‘Microservices’ is another term often used when discussing containers. Simply put, a traditional application is built as one big block, with a single file system, shared databases and a common language across its various functions. A microservices application, by contrast, is broken down behind the scenes into individual components; for example, a product service, a payment service and a customer review service. Container platforms like Docker and orchestration tools like Kubernetes provide the platforms and management tools to implement this, enabling microservices to be lightweight and run anywhere.
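To make the contrast concrete, a Docker Compose sketch (service names and images are hypothetical) runs each of those example microservices as its own container:

```yaml
services:
  product:
    image: example/product-service:1.0    # hypothetical images throughout
    ports:
      - "8001:8080"
  payment:
    image: example/payment-service:1.0
    ports:
      - "8002:8080"
  reviews:
    image: example/review-service:1.0
    ports:
      - "8003:8080"
```

Because each service is a separate container, each can be developed, updated and restarted independently, rather than redeploying one big block for every change.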
Microservices can technically be built on traditional server hosting, but the practical reality of creating and maintaining a full microservices architecture demands a container platform like Docker, and an orchestration tool like Kubernetes.
As a business, we are building all our new services on these systems, with container technology firmly placed as the platform of the future. The question is, is it the platform for yours?