Container Hosting and its Advantages compared to Cloud Servers

Container Hosting enables quick and easy Scaling for more Performance and Availability

Container hosting is becoming increasingly popular: it is simple and protects against errors. Nevertheless, the conventional way of hosting applications and websites in companies is still often a classic infrastructure based on a cloud server, on which a web server or an application server with a database (3-tier architecture) is operated. Cloud servers generally work very well and offer many advantages over on-premise servers. However, they are not maintenance-free: updates, for example, have to be installed at regular intervals, which is naturally a frequent source of errors and problems.

How does Container Hosting work in Detail?

If you want to run your applications using container hosting, you must first provision the necessary containers. The storage is then connected to them, so the setup is basically the same as for a classic infrastructure. In addition, there is a dynamic load balancer and an SSL certificate.
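To make this concrete, here is a minimal sketch of how the load-balancing and SSL part could be described in a Kubernetes .yaml file; the names, hostname and certificate secret (shop-web, shop.example.com, shop-example-tls) are placeholders, not a real configuration. A Service distributes traffic across the web containers, and an Ingress terminates TLS with the certificate:

    apiVersion: v1
    kind: Service
    metadata:
      name: shop-web
    spec:
      selector:
        app: shop-web            # routes traffic to all containers with this label
      ports:
      - port: 80
        targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: shop-web
    spec:
      tls:
      - hosts:
        - shop.example.com
        secretName: shop-example-tls   # SSL certificate stored as a Kubernetes secret
      rules:
      - host: shop.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop-web
                port:
                  number: 80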

Scale quickly with Container Hosting

At a small scale, both variants work very well, but there is a crucial difference when it comes to scaling. In a conventional cloud environment, you increase resources by setting up an additional cloud server. The data must then also be available on the second web server (a solution for data synchronization has to be installed) and access to the database must be enabled from each cloud server. In other words, you have scaled your system by an additional web server, but not your database. Scaling with container hosting, by comparison, is quick and easy: you duplicate the containers and have thereby doubled your resources. Thanks to the built-in load balancing, no further intervention is required to put the additional containers into operation, and the entire system scales almost linearly. The process can also be automated, so that new containers are started whenever monitoring detects that the load is increasing and the existing containers are too busy.
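As an illustration, automated scaling of this kind could be expressed with a HorizontalPodAutoscaler; the deployment name and the thresholds below are assumptions for the sketch, not fixed recommendations. Kubernetes then adds or removes container replicas depending on the measured load:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: shop-web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: shop-web           # the group of containers to scale
      minReplicas: 2
      maxReplicas: 10            # upper limit for automatic scaling
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 75   # add containers when average CPU load exceeds 75 %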

Fast Upgrades with Container Hosting without complications

With a normal cloud server, updates have to be installed from time to time. Usually this is not a problem, but it takes a long time. If you want to update a web server, you first take a snapshot of the cloud server so that you can revert if something goes wrong; once the server has been successfully updated, the snapshot can be deleted. In many companies this is done manually, in some it is automated. The process, including testing whether the new version works, often takes an entire working day. This is the reason why updates are unnecessarily delayed and take place too infrequently, which in turn can jeopardize security.

With container hosting, updating to a new version is done very quickly. All you have to do is initiate a re-deploy: the latest container images are downloaded, the old containers are stopped and the new ones are started up. This usually takes only a few minutes. Afterwards a test is carried out, and once it has passed, the deploy is confirmed. If something has not worked properly, you can perform a rollback (delete the new containers and start the old ones again). There is therefore no risk, as each version can be restored immediately. The process can be automated so that it takes place weekly without any extra working hours, which ensures that the latest security patches are always applied.
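As a rough sketch (image name and registry are placeholders), this update behaviour can be configured directly in the Deployment: new containers are started and checked before the old ones are stopped, and previous versions are kept so that a rollback remains possible.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: shop-web
    spec:
      replicas: 3
      revisionHistoryLimit: 5          # keep previous versions so a rollback remains possible
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0            # never stop old containers before new ones are ready
          maxSurge: 1
      selector:
        matchLabels:
          app: shop-web
      template:
        metadata:
          labels:
            app: shop-web
        spec:
          containers:
          - name: web
            image: registry.example.com/shop-web:1.4.2   # a re-deploy pulls the new image version
            readinessProbe:            # new containers only receive traffic once this check passes
              httpGet:
                path: /
                port: 8080

A rollback then simply restores the previous revision, for example with kubectl rollout undo.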

Cloud-independent thanks to .yaml file

The setup of the containers is described in a .yaml file. It contains all the important information: how many containers there are, which storage is mapped, which links there are to the database, and so on. With this file you can duplicate the exact same setup with another Kubernetes provider within minutes at any time; only the data then has to be reconnected. Another advantage is the clean separation of the systems: one container delivers the website while other containers run the database, for example.
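A hedged sketch of such a file for the web tier might look as follows; the image, database service name and storage claim (shop-db, shop-media) are illustrative placeholders. The same file can be applied unchanged at another Kubernetes provider:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: shop-web
    spec:
      replicas: 2                        # how many containers there are
      selector:
        matchLabels:
          app: shop-web
      template:
        metadata:
          labels:
            app: shop-web
        spec:
          containers:
          - name: web
            image: registry.example.com/shop-web:1.4.2
            env:
            - name: DB_HOST              # link to the separate database containers
              value: shop-db
            volumeMounts:
            - name: media                # which storage is mapped into the container
              mountPath: /var/www/media
          volumes:
          - name: media
            persistentVolumeClaim:
              claimName: shop-media      # refers to the shared storage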

Container Hosting requires a Rethink when Analyzing Data

Containers are transient. Unlike web servers, where data remains until it is actively deleted, containers can be destroyed at any time, and their data is lost unless persistent storage is mounted into the container. It is therefore important to decide which data is persistent and which is volatile; we analyze the data accordingly at the start of a project. Another crucial point is the required images. Many of these are already available off the shelf, but you can also build your own. We often create customized images for customers based on standard images. This is easy to do and has a major advantage: when the manufacturer of the base image publishes a new version, rebuilding your own image automatically gives you that new version with your own changes applied on top.
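Persistent data is then given its own storage, which outlives the containers. A minimal sketch, assuming a shared storage class is available (name and size below are placeholders); this claim is what the volume in the deployment sketch above refers to:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shop-media
    spec:
      accessModes:
      - ReadWriteMany                  # shared storage that several containers and nodes can mount
      resources:
        requests:
          storage: 5Gi
      storageClassName: shared-storage # placeholder; the actual class depends on the provider

Anything written elsewhere inside a container is volatile and disappears when the container is destroyed.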

Low Storage Consumption with Containers

Container technology requires significantly fewer resources than a traditional infrastructure. With cloud servers, the operating system is installed directly on the server or runs on a hypervisor, which ties up considerable RAM and CPU capacity. Another advantage of container technology is that significantly less data needs to be backed up, so much less storage and time is required. With a cloud server, the entire server including the operating system, all data and all processes must be saved; this quickly adds up to 100 GB, of which perhaps only 1 GB is user data. By comparison, in this example only the 1 GB of user data needs to be backed up for the containers.

We use an all-flash system from Pure Storage to provide the data. This is a high-end product that delivers high data throughput with low latency. For backups of classic cloud servers, as well as for the external databases connected to the containers, we rely on Rubrik; the containers themselves do not need to be backed up, as they can be destroyed and recreated at any time. The configuration of the infrastructure is described in the .yaml file and can be restored in exactly the same way, and the data is immediately available again via the paths to the storage. The configuration itself is stored in Kubernetes and can be exported.

Setting up a Kubernetes Cluster

A cluster can, for example, consist of four (worker) nodes on which hundreds of containers run; each node corresponds to a cloud or root server. It is important that the cluster has access to data on shared storage. If a node fails, its containers are automatically restarted on another node, so the impact on operations is minimal. Virtual machines can also be clustered, but recovery after a failure is more complex: after the failure of a cloud server in a cluster, for example, the virtual machine has to be restarted on a second physical server. This corresponds to a reboot and, depending on the technology, takes correspondingly long.
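To keep a node failure as painless as possible, the replicas of a deployment can additionally be spread across the nodes, so that a failing node only takes a fraction of the containers with it. A small sketch of such a rule, written as a fragment that would sit in the pod template (spec.template.spec) of the deployment shown earlier; the label is the same placeholder as above:

    # fragment for spec.template.spec of the Deployment sketched above
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname   # spread the containers evenly across the worker nodes
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: shop-web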

Very fast, secure Deployment

Thanks to standardisation and automation, Continuous Integration/Continuous Delivery/Continuous Deployment (CI/CD) are no longer unattainable goals. Changes can be implemented, tested and rolled out very quickly. CI/CD is not only very resource-efficient, but also very secure: the process is clearly structured and can be reversed at every stage without conflict. At the beginning, the developer commits the changes to source code management. Processes such as build, test and deploy are then triggered automatically, and the fully automated deploy can in turn trigger further processes such as a re-deploy, which updates the containers automatically. A change in the production system can therefore be achieved fully automatically simply by committing a change. Conventional cloud servers can also be automated, but the process is much more complex and the changes are not so easy to undo, which in many cases makes complex and time-consuming troubleshooting inevitable.
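A hedged sketch of what such a pipeline could look like, here in a GitLab-CI-style .gitlab-ci.yml; the registry, image name and test script are placeholders, and the stages will of course differ from project to project:

    stages:
      - build
      - test
      - deploy

    build:
      stage: build
      script:
        - docker build -t registry.example.com/shop-web:$CI_COMMIT_SHORT_SHA .
        - docker push registry.example.com/shop-web:$CI_COMMIT_SHORT_SHA

    test:
      stage: test
      script:
        - ./run-tests.sh                 # placeholder for the project's automated tests

    deploy:
      stage: deploy
      script:
        # the re-deploy: point the running deployment at the freshly built image
        - kubectl set image deployment/shop-web web=registry.example.com/shop-web:$CI_COMMIT_SHORT_SHA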

Why doesn't every Company switch to Container Hosting?

From our experience, we can say that container technology has so many advantages that it is usually the preferred option for our customers. That not all companies switch to containers straight away can mainly be explained by a lack of expertise and a general shortage of time and personnel: familiarising yourself with new technologies requires sufficient time and staff, and both are in short supply in many companies. This is why we have decided to offer Container as a Service. It gives our customers an affordable service that can be implemented quickly and allows them to benefit from our transfer of expertise.

If you have any questions, please contact sales@timewarp.at or find more information at https://timewarp.at/docker/ or https://timewarp.at/en/kubernetes-as-a-service/
