Cloud containers are increasing in popularity in the IT and Security industry. The world’s
top technology companies, including Microsoft, Google and Facebook, all use them.
Containers promise an efficient, easy-to-deploy and secure method of meeting specific
infrastructure requirements, and they also offer an alternative to virtual machines.
“A Certa Cloud Container is an instance that can be deployed in under 50 seconds, with high availability and larger resource pools than a standard VPS, and can be deployed in 7 locations worldwide.”
Containers create a border at the application level rather than at the server level. This means
that if anything goes wrong in a single container, it only affects that individual container and
not the whole VM or whole server. Containers prevent compatibility problems between
applications that exist on the same operating system (OS). So far, cloud containers have
predominantly been the domain of Linux-based servers, but Microsoft has introduced
Windows Server containers and Hyper-V containers and integrated them into Windows Server 2016.
Growth of Containers
Research conducted by 451 Research found that use of container technology is growing this
year, with revenue produced by the adoption of containers expected to reach $1.1bn.
This growth in containerisation can be linked to the benefits it brings for development and
operations; for businesses, it is seen as a practical way to build a development and
operations (DevOps) strategy, which should lead to software being released more efficiently and more frequently.
Containerised systems are noticeably more efficient, and providers are becoming aware that
containers can be used as part of their overall data centre infrastructure. While the adoption of
container technology is one trend in the data centre industry, the rise in service providers
offering more modular equipment is another.
The key difference with containers is the minimalist nature of their deployment. Unlike virtual
machines, they don’t need a full OS to be installed within the container, and they don’t need a
virtual copy of the host server’s hardware.
Containers are able to operate with the minimum amount of resources to perform the task they
were designed for; this can mean just a few pieces of software, libraries and the basics of an OS.
This means that two or three times as many containers can be deployed on a single server as
virtual machines.
Once the container has been created, it can be deployed to different servers very easily. From a
software lifecycle perspective this is great, as containers can be copied to create development,
test, integration and live environments very quickly. From a software and security testing
viewpoint, this is a large advantage, because it ensures that the underlying OS is not causing
differences in the test results.
The problem with containers is that they split your virtualisation into lots of smaller chunks.
When there are just a few containers involved, it’s an advantage, because you know exactly what
configuration you’re deploying and where. However, if you fully invest in containers, it’s quite
possible to soon have so many that they become difficult to manage.
Problems of container management are a common complaint, even with container management
systems such as Docker. Virtual machines are generally considered easier to manage, primarily
because a deployment typically involves significantly fewer VMs than containers.
Containers are deployed in one of two ways:
- Creating an image to run in a container
- Downloading a pre-created image, such as from Docker Hub.
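Both routes can be sketched with the Docker CLI. This is a minimal illustration rather than a prescribed workflow, and the image names and tags here are assumptions:

```shell
# Route 1: build your own image from a Dockerfile in the current
# directory, then run it as a container.
docker build -t myapp:1.0 .
docker run -d --name myapp myapp:1.0

# Route 2: download a pre-created image from Docker Hub and run it.
docker pull nginx:latest
docker run -d --name web -p 8080:80 nginx:latest
```

Note that `docker run` will pull an image automatically if it is not already present locally, so the explicit `docker pull` is optional.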
Although Docker is by far the largest and most popular container platform, with plenty of large
companies using its solution, there are alternatives, such as LXD and Rocket. Originally built on
a technology called LXC, Docker has become the predominant force in the world of containers.
The library of pre-created images on Docker Hub is large, and should allow most standard
requirements to be met with ease.
Safety & Security
When cloud containers became popular, one of the biggest issues was how to keep them secure.
Docker containers used to have to run as a privileged user on the underlying OS, which meant that
if key parts of the container were compromised, root or administrator access could potentially be
obtained on the underlying OS, or vice versa. Docker now supports user namespaces, allowing
containers to be run as specific users. There is also the issue of the security of images
downloaded from Docker Hub: when you download an image created by another user, the security of
the container cannot necessarily be guaranteed, although images can be scanned for vulnerabilities.
Containers for sensitive production applications should be treated in the same way as any other
deployment when it comes to security. Inside the container is software that may have vulnerabilities;
although this might not grant access to the underlying OS of the server, there still may be issues such
as denial of service that could disable a MySQL container and therefore knock a website offline. It’s
also important not to forget the security of the server hosting the containers.
Their portability, both internally and in the cloud, combined with their low cost, makes them a
great alternative to full-blown virtual machines. However, in most companies there will be a use
for both containers and virtual machines. Each has its strong points and weaknesses, and they can
complement each other rather than compete.
Want to know more? Certa’s Cloud Containers start from just 10p per hour (not incl. VAT). Our premier cloud hosting platform has layer upon layer of security in place, protecting your site from cyber criminals and malicious bugs with ultra-secure cloud Linux hosting.