Continuing the series of articles on the design of Cloud Ready applications, here is the second entry.
If you haven't read the previous entry or don't remember it, now may be a good moment to read it again before going on.
The Elasticity Challenge:
One of the most interesting features of the Cloud is its elasticity: the ability to scale up or down as needed to reduce costs.
If we have done our homework and our application is stateless, it can grow and shrink without any problem (as long as our provider supports it), so we will have already taken the first step.
Our second step will be to use a component that lets us perform that growth and shrinkage elastically; typically this will be a load balancer.
The following image illustrates an application that meets the requirement of being stateless, has the architectural resources (a load balancer) necessary to scale out and in, and is experiencing a moment of overload:
The solution to this problem is to define scaling rules, which create computing resources to process those requests when necessary:
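A scaling rule can be sketched as a simple function. The following is illustrative only (the target, limits, and metric are hypothetical, not tied to any specific provider): a classic target-tracking rule that adjusts the instance count to keep average CPU near a target.

```python
# Hypothetical scaling rule: keep average CPU near a target by adjusting
# the instance count, bounded by a minimum and maximum. All values here
# are illustrative, not provider defaults.
import math

def desired_instances(current: int, avg_cpu_pct: float,
                      target_pct: float = 60.0,
                      min_inst: int = 2, max_inst: int = 10) -> int:
    """Target tracking: new_count = ceil(current * observed / target), clamped."""
    proposed = math.ceil(current * avg_cpu_pct / target_pct)
    return max(min_inst, min(max_inst, proposed))

print(desired_instances(current=3, avg_cpu_pct=90.0))  # overload -> scale out to 5
print(desired_instances(current=3, avg_cpu_pct=20.0))  # idle -> scale in to the floor, 2
```

Real platforms evaluate rules like this periodically and add cooldown periods to avoid oscillation, but the core arithmetic is this simple.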
Containers meet the Cloud, the challenge of portability and simplification:
Very few environments are designed with web scale in mind, and in few places can we achieve it as easily as in the Cloud.
In general, when we start talking about scalable architectures, and if we are up to date with the state of the art, we soon come across the concept of the container.
What is a container? Explained simply, it is an intermediate step between virtualization (where we create a complete virtual server, with its own software and virtual hardware) and the traditional physical server. The following image, using Docker (the most widely used container solution), perfectly illustrates the difference.
The underlying idea is that if we are able to "package" our applications in containers, we can deploy them in the cloud in a very simple way.
If we choose to use containers, we can deploy the Docker engine ourselves in IaaS mode or consume it as a PaaS service if our Cloud provider offers one.
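The "packaging" step is usually just a Dockerfile. The one below is a minimal, purely illustrative example (the application file, port, and dependency list are hypothetical) showing how little is needed to containerize a small Python service:

```dockerfile
# Illustrative only: package a hypothetical Python app into a container image.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

Building it with `docker build -t my-app .` produces an image that runs identically on a laptop, on IaaS, or on a provider's container service, which is exactly the portability the section describes.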
If you want to know more about Oracle Cloud Container Native Services, you can do so at the following link.
The container management challenge:
When we begin to feel comfortable with containers, it is common to see possibilities for deploying them everywhere, and to suffer an episode similar to the one that occurred with the massive adoption of virtualization: containers start to appear like mushrooms, and their management gets complicated.
To manage the complexity of a massive container environment, it is advisable to rely on orchestration tools. There are several:
- Docker Swarm
- Kubernetes

The most widely used today is Kubernetes.
As with Docker, we can deploy our own Kubernetes cluster in IaaS mode or, if our cloud provider offers it, consume it as a PaaS service.
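Whichever deployment mode we choose, the application itself is described the same way. The manifest below is a minimal, illustrative Kubernetes Deployment (the names and image are hypothetical) that asks the cluster to keep three replicas of a containerized app running, which is the same elasticity lever discussed earlier:

```yaml
# Illustrative only: run 3 replicas of a hypothetical container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
        ports:
        - containerPort: 8080
```

Changing `replicas` (manually or via an autoscaler) is all it takes for the cluster to scale the workload out or in.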
OCI has a managed PaaS to deploy and use Kubernetes, called OKE. You can find additional information at the following link.
Be careful about vendor lock-in:
If we opt for a container solution, whether offered by our cloud provider or installed by us on IaaS, it is important to keep in mind whether we are implementing the open versions of Docker and Kubernetes or some kind of fork of them.
The risk of using a fork is falling into some form of vendor lock-in. Using proprietary forks is not necessarily bad (perhaps in our case it is justified), but it is important to do so consciously.
In the case of OCI, NO proprietary forks are used, so the user is free to enter and leave the cloud easily; there are, however, other providers that use proprietary forks of the original projects.
The Serverless Challenge:
With our application running in the cloud, containerized, scalable, stateless, etc., it is time to make it 100% Cloud native, and we will achieve this by going serverless.
Serverless is possibly the last frontier of Cloud adoption.
The underlying idea is very simple: I want to execute a certain workload when I need it, without keeping an active server waiting to execute that load.
For this, our Cloud provider offers a pool of resources capable of "eating" those workloads and returning a result, and it will only charge us for the processing time of each workload.
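In practice, a serverless workload is just a handler function that the platform invokes per event. The sketch below is a generic, illustrative example (the event shape and field names are hypothetical, not tied to Lambda or Fn): all input arrives in the event, all output is returned, and no process of ours runs between invocations.

```python
# Illustrative shape of a serverless function: the platform calls the
# handler once per event; we pay only for the time this function runs.
import json

def handler(event: dict) -> dict:
    """Process one event and return a result; all context comes in the event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

print(handler({"name": "Cloud"}))
```

Note that the function is itself stateless, so the same discipline we applied to the application earlier is what makes serverless possible.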
There are multiple Cloud providers that offer serverless services; again, the most important thing is not to fall (at least not unconsciously) into vendor lock-in (Open Source is our friend).
One proprietary serverless example is AWS Lambda. If we develop our application on Lambda, we must be aware that those workloads can only run on AWS, and that if some day we want to move them to another cloud provider, we will have to rewrite them.
Today, one of the most widely used Open Source serverless projects is the Fn Project, which is, for example, the one used on OCI.
At this point we finish this second post on Cloud Ready application design. I hope you have found it useful and interesting.
Cloud Ready Application Design (1 of 3)