13 February 2025, by Thomas Schumacher
Withdrawal from the cloud?
Taking stock
For years, companies have been moving their software to the cloud, driven by the promises of ‘security’ and ‘cost savings’. For some time now, some of these companies have been wondering whether this step was the right one or whether it would be better to bring data and software back into the company, i.e. ‘on-premises’ or ‘on-prem’ for short.
Where does the disappointment come from when these promises are not kept, or only partially? In this blog post, I will explore these questions, point out possible solutions and explain how adesso can support you in this context.
Security aspects
In financial services, healthcare and public-sector systems, the demands on security are extremely high because of the level of data protection required. Data loss or data theft means losing reputation, company secrets and, ultimately, money.
The major hyperscalers – Google, Amazon and Microsoft – are all based in the USA, and even though the European server locations on offer suggest that the data is safe there, US laws such as ‘RISAA’, the ‘CLOUD Act’ and the ‘PATRIOT Act’ allow the intelligence services to access data if there is a ‘justified interest’.
In view of the election result in the USA in November 2024, it is not to be expected that anything will change for the better (from a data security perspective).
Even if it is unlikely that confidential data will be analysed in the name of ‘US national security’ in a specific case, those responsible for data security still have an uneasy feeling.
In some cases, this has led to individual parts of an application, such as an LDAP server holding customer data, remaining on-premises in the company, i.e. a hybrid cloud approach is pursued. This in turn increases the traffic between the on-premises components and those in the cloud – more on this later.
A schematic representation of the hybrid cloud approach is as follows:

Cost factors in the cloud
What are the cost drivers of a cloud solution? The following points should be considered:
1. CPU time used
CPUs can be booked at the provider in different performance classes. High-performance CPUs cost correspondingly more, so for each application it must be weighed up individually in advance whether it makes more sense to run more service instances with smaller CPUs or fewer instances with larger CPUs.
These considerations should be discussed, evaluated and decided in the early project phase by the respective architect together with the stakeholders.
2. Main memory used over time (RAM seconds)
In contrast to CPU time, there is no ‘more or less’ here – the memory either fits or it does not. A microservice with a footprint of 500 MB (including heap, if a JVM language is used) simply requires 500 MB of RAM (plus an overhead for the operating system).
As a rule of thumb, RAM is relatively expensive, so it pays to look for ways to save here.
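To get a feel for the unit: a service that keeps 0.5 GB allocated around the clock for a 30-day month consumes roughly 365 GB-hours (about 1.3 million GB-seconds). With a purely illustrative price of €0.005 per GB-hour – actual rates vary by provider, region and instance type – that works out as

\[
0.5\,\text{GB} \times 730\,\text{h/month} \times 0.005\,\text{€/(GB·h)} \approx 1.83\,\text{€ per instance and month.}
\]

That sounds modest, but multiplied by dozens of services, several replicas per service and separate environments for development, test and production, it quickly becomes a noticeable item – and, conversely, a footprint reduction by a factor of ten (see below) shrinks exactly this item by the same factor.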
3. Storage (S3, database, volumes, etc.)
It should be noted that different storage classes are usually offered, which differ in latency and availability. The higher the availability, the higher the price, so you should take a close look at your own use case when making a selection.
4. Traffic
This refers to data traffic to and from systems outside the cloud, usually HTTP requests or API calls. In hybrid architectures, communication with the on-premises services is added on top.
5. Operation / Support
There are costs for supporting the infrastructure components. Depending on the service level, i.e. the desired response time and availability, a certain amount is due here. Added to this are the costs for application support, which usually has to be provided by the company's own personnel.
The causes of runaway costs can be traced directly back to these points:
- 1. CPU time: The architecture may not be optimised for cloud operation. The services could be incorrectly defined or components could be running continuously, even though they are only needed once a day for batch operations or for data delivery.
- 2. RAM seconds: The memory footprint of the microservices could be unnecessarily high, for example due to an ill-chosen Docker base image or the use of a programming framework that is not designed to be cloud native.
- 3. Storage: There are significant differences between directly available storage (‘online’) and storage that is intended for archiving and has longer latency times for access, but is cheaper.
- 4. Traffic: The hyperscalers charge a hefty price for external traffic. While ‘normal’ traffic from user interaction via HTTP is usually manageable, the data exchange in a hybrid cloud set-up can quickly add up.
- 5. Operation/support: When migrating to the cloud, it is often forgotten that the finished solution also has to be operated competently in production. Existing operations teams that have so far only dealt with server and application management on-premises often lack the expertise for the cloud solution – including rapid scaling and targeted intervention in the production environment in the event of faults.
Solution approaches
First of all, you should always have experienced architects on board for migration projects – architects who work out the non-functional requirements in advance and use them to define the architecture, in particular the service interfaces and the required service availability.
Our software architecture experts at adesso have both the technical experience and the specialist knowledge to develop and implement the appropriate architectural proposal for the customer's business.
An architecture review and, if necessary, refactoring can also help with existing projects. The storage classes in use are likewise worth checking: as mentioned above, an archiving service does not need online storage.
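How a storage class is selected depends on the provider. With S3-compatible object storage, for example, it is a single attribute on the upload request – a minimal sketch using the AWS SDK for Java v2 (bucket name, object key and file are made up):

```java
import java.nio.file.Path;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.StorageClass;

public class ArchiveUpload {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            // Archived documents go into a cheaper storage class instead of the
            // default STANDARD class; bucket, key and file name are placeholders.
            PutObjectRequest request = PutObjectRequest.builder()
                    .bucket("example-archive-bucket")
                    .key("reports/2024/annual-report.pdf")
                    .storageClass(StorageClass.GLACIER) // archive class: cheaper, but slower to access
                    .build();

            s3.putObject(request, RequestBody.fromFile(Path.of("annual-report.pdf")));
        }
    }
}
```

Moving rarely accessed data into such a class is often just a configuration detail or a one-line code change – exactly the kind of quick win an architecture review should surface.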
The following figure shows (in a highly simplified way) a problem with the service interface:

On the subject of RAM: To keep the service footprint small and thus the resource consumption low, the Docker base image should be chosen wisely. This is where a quick win can often be achieved, even with existing projects, by exchanging this image – a one-liner per container definition, usually without further adjustments to the service itself.
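What such a one-liner can look like is sketched below (image tags and paths are only examples; which slim base image is suitable depends on the JDK version and CPU architecture you need):

```dockerfile
# Before: a full JDK image, several hundred MB in size
# FROM eclipse-temurin:21-jdk

# After: a slim JRE image – often the only line that has to change
FROM eclipse-temurin:21-jre-alpine

COPY target/my-service.jar /app/my-service.jar

# Optional: cap the heap relative to the container memory limit
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-jar", "/app/my-service.jar"]
```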
Much more important, however, is choosing a framework that genuinely deserves the ‘cloud native’ label. Prominent representatives of this category are Quarkus and Micronaut, which produce very small images when compiled for the native target platform and whose containers start remarkably quickly.
It is important to note that we are not talking about a few percentage points here, but about a difference in footprint of a factor of ten or more. The gains in startup time are of a similar magnitude, which also benefits the speed of software development.
Unfortunately, the choice very often falls on frameworks that are well known and widely used in the company but that produce ‘fat’ deployment artefacts – and, as a result, high RAM-second costs. A subsequent migration to one of the frameworks mentioned is complex and expensive, but it can pay off in the long term, because RAM seconds are one of the main drivers of cloud costs.
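To make ‘cloud native’ a little more concrete, here is a minimal Quarkus REST resource as a sketch (package, class name and path are made up). The application code itself is plain Jakarta REST; the savings come from the framework doing its wiring at build time and, optionally, from compiling to a native executable – typically something like ./mvnw package -Pnative in a generated Quarkus project:

```java
package example;                     // hypothetical package and class names

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/ping")
public class PingResource {

    // A plain Jakarta REST endpoint – no reflection-heavy runtime magic,
    // which is what enables native compilation and a small memory footprint.
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() {
        return "pong";
    }
}
```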
The following figure shows the magnitude of the RAM footprint:

If you can't or don't want to compile natively, another quick win in terms of memory consumption can be a modern runtime environment, such as a JDK from version 21 onwards. A highlight of this version is the implementation of ‘virtual threads’, which allow an enormous number of threads to be created without any significant memory overhead.
This is particularly interesting for frontend-facing components, where each user request arriving via HTTP traditionally occupies its own thread and the container is therefore always run with a generous memory reserve. This reserve can be reduced significantly by switching to a modern Java version.
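A minimal sketch of what virtual threads look like in code on JDK 21 (standard library only; the sleeping task merely stands in for a real blocking call such as an HTTP request):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // One virtual thread per task: virtual threads are so cheap that a
        // thread-per-request model no longer needs a large platform thread pool.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int requestId = i;
                executor.submit(() -> {
                    Thread.sleep(100); // simulated blocking I/O, e.g. an HTTP call
                    return "handled request " + requestId;
                });
            }
        } // close() waits for all submitted tasks to finish
    }
}
```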
Regarding the topic of ‘operation’: the operating team must be able to master the technology used and, if necessary, quickly solve problems. Expert knowledge is key here, but it cannot be built up quickly, and no IT manager likes ‘learning in production’.
Our software architects and developers at adesso are proficient in various languages and frameworks and are happy to support customers in making their selection.
High security and lower costs?
You may have noticed that the topic of security was not addressed in the last section. As already explained, there is no absolute guarantee that the hyperscalers will not pass data on to the intelligence services, since they are all subject to the applicable US laws.
The range of cloud providers in Germany and Europe is limited, but they do exist. Some of them – OVHCloud, ElastX, OpenTelekom Cloud or FugaCloud, for example – rely on the open-source software ‘OpenStack’, which provides a range of standardised cloud services. Other providers only offer virtual servers, leaving the customer to organise the deployment of the infrastructure components themselves – in other words, working at a very low level.
A variant that lies between these two levels is ‘Managed Services’, where Kubernetes and database systems are administered by the provider.
I would like to single out one of these providers from the last category – admittedly not without self-interest, but definitely out of conviction: the adesso Business Cloud, or ABC for short.
‘Okay, but what's so great about it?’ I can hear you asking. Here are the key features that make ABC particularly interesting for applications and data requiring a high level of protection:
- C5-certified
- Fulfils the requirements of ISO 27001
- Georedundancy: Distribution across two locations (Frankfurt and Karlsruhe) with good data centre connections
- The data definitely remains in Germany
- Cost advantages for CPU, RAM and storage: the costs remain significantly below the hyperscalers' offerings
- No costs are charged for traffic
- Managed Kubernetes is supported
- Support for managed services and applications
Conclusion
Opting for a public cloud solution is usually beneficial, but it needs to be well prepared and planned. In particular, architecture decisions and the choice of the right framework are of great importance for performance and costs.
Even with existing applications, the measures described above can achieve further savings in running costs.
If an architecture based on containers and Kubernetes meets your needs, the adesso Business Cloud can be an interesting, secure and cost-effective alternative to the hyperscalers.