There are usually three things that IT managers want from an external host when outsourcing databases: high availability, strong performance with easy scalability, and cost-effective, fully managed operation.

Let's take a closer look at how these three goals in database hosting can be achieved through the right design of the infrastructure and the optimal selection of high-quality components:
The design of the infrastructure is closely tied to the desired availability and performance of the database service, as well as its scalability. Dedicated single-container databases are therefore popular: they offer high performance, scale easily and are thus very flexible.

In the single-container setup, the container can dynamically be allocated more compute resources (more CPU, RAM and disk I/O) and therefore scales on demand. If the host on which the container is currently running fails, the container is automatically rebuilt on another host within seconds and the database is available again.
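Because a failed container is rebuilt on another host within seconds, client applications should tolerate a short connection drop rather than fail immediately. A minimal sketch of such retry logic (`connect_db` below is a hypothetical stand-in for your database driver's connect call, not part of our platform):

```python
import time

def connect_with_retry(connect, attempts=5, delay=2.0):
    """Try to connect, retrying while the container is being rebuilt."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionError as err:
            last_error = err
            time.sleep(delay)  # give the failover a few seconds
    raise last_error

# Simulated driver: fails twice (failover in progress), then succeeds.
state = {"calls": 0}
def connect_db():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("database container not reachable")
    return "connection"

conn = connect_with_retry(connect_db, attempts=5, delay=0.01)
```

In practice the same effect is usually achieved by enabling the reconnect options of your database driver or connection pool.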
Availability with this model is 99.9%, the same as with most large public cloud providers. For many companies, this is therefore a cost-effective way to run fully managed databases from the cloud.
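For context, an availability figure translates directly into a maximum yearly downtime; this is plain arithmetic with no provider-specific assumptions:

```python
def yearly_downtime_hours(availability):
    """Maximum downtime per year implied by an availability level."""
    return (1 - availability) * 365 * 24

print(round(yearly_downtime_hours(0.999), 2))   # 99.9%  -> 8.76 hours/year
print(round(yearly_downtime_hours(0.9999), 2))  # 99.99% -> 0.88 hours/year
```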
For the upper mid-market and enterprise segment, which need higher availability and performance, we usually offer 3- or 4-node clusters spanning several data center locations. In addition to flexibility and high performance, this setup guarantees maximum reliability in the range of 99.99%.
In many companies, databases have to be available 24/7, which makes their operation very labor-intensive. It is therefore advisable to have the operation of hardware and applications, including monitoring and backup, handled by a cloud service provider. In the long run this is more cost-effective, and security is greater than with in-house 9×5 operation. With us, these services are included in database hosting.
In addition, the monitoring can be adapted to customer-specific parameters, providing the greatest possible relief, tailored to individual needs.
Incident and capacity management are important pillars of the operating offer and are partially automated (e.g. automatic provisioning of additional containers when the load increases).
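The automatic provisioning mentioned above can be pictured as a simple control loop. The threshold, maximum and metric below are hypothetical placeholders for illustration, not our actual tooling:

```python
def scale_decision(load_percent, containers, threshold=80, max_containers=10):
    """Return how many containers to run given the current load."""
    if load_percent > threshold and containers < max_containers:
        return containers + 1  # provision one more container
    return containers

# Load rises above the 80% threshold: one container is added.
print(scale_decision(load_percent=92, containers=3))  # -> 4
# Load is normal: the container count stays unchanged.
print(scale_decision(load_percent=55, containers=3))  # -> 3
```

Real capacity management additionally smooths the metric over time and scales back down when the load drops, so that short spikes do not trigger constant resizing.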
Backups are agent-based and save data consistently. As a rule, we store all backups in a second data center, so that if one location fails, the data can be restored at the other. Disaster recovery is thus included as standard. With all these measures, our goal is to keep convenience and security for our customers as high as possible.
Especially when large amounts of data are processed, performance is a highly topical issue. High performance is important to us, which is why we use NVMe storage. However, performance has to be addressed at several points in the IT system. Before you even turn a screw, we recommend a performance stress test, which we offer free of charge in the basic version on our website. You will also receive a detailed analysis of the resilience of your web platform. Different versions of the stress test are available for customer-specific requirements. Click here for the stress test: https://timewarp.at/stress-test/.
The database concept and the structure of the platform are also decisive for very high availability. With two or more nodes in a cluster across at least two locations, availability can be increased to 99.99%, which is usually higher than with classic public cloud providers. For maximum availability, a VMware cluster stretched across two data centers is the best choice, but Hyper-V and KVM are also possible as hypervisors.
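As an idealized model (assuming fully independent node failures, which real deployments only approximate), the combined availability of n redundant nodes is 1 − (1 − a)^n; contractual SLA figures such as the 99.99% above are deliberately more conservative than this theoretical value:

```python
def combined_availability(node_availability, nodes):
    """Availability of n redundant nodes, assuming independent failures."""
    return 1 - (1 - node_availability) ** nodes

# Two nodes at 99.9% each already exceed four nines in this ideal model.
print(round(combined_availability(0.999, 2), 6))  # -> 0.999999
```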
Then please contact Rainer Schneemayer (firstname.lastname@example.org).