Serverless and FaaS are the first cloud computing technologies that truly deliver on the promise of the cloud: computing power as a utility, much like water and electricity.
While FaaS and Serverless are often treated as the same concept, FaaS is actually a subset of Serverless that further separates software developers from infrastructure considerations. Both approaches allow businesses to distance themselves from concerns about backend services and IT infrastructure. Serverless allows website and application features to be executed on demand, with billing precision down to the millisecond.
Where the microservices architecture makes it possible to split large applications and services into multiple smaller, easier-to-maintain microservices, the serverless architecture allows microservices to be divided further into individual functions. This allows even more granular control over each aspect of the development and operation of backend services.
Using event-based triggers such as client requests, Serverless platforms quickly spin up containers to run functions and power them down when the job is done. While each individual function is limited in scope and complexity, functions can be designed to work together in such a way that they form comprehensive backend services. When developing fully fledged serverless applications, however, things do get more complicated, and dedicated backend and infrastructure services may be required.
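As a sketch of how small such a function can be, here is a hypothetical quote-generator handler in Go; the payload shape, field names and pricing are illustrative assumptions, not part of any specific platform's API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// QuoteRequest is a hypothetical payload for a quote-generator
// function; the product and pricing fields are illustrative.
type QuoteRequest struct {
	Product  string  `json:"product"`
	Quantity int     `json:"quantity"`
	UnitCost float64 `json:"unit_cost"`
}

// Handle is the entire unit of deployment: one small, stateless
// function triggered by an incoming event such as a client request.
func Handle(payload []byte) (string, error) {
	var req QuoteRequest
	if err := json.Unmarshal(payload, &req); err != nil {
		return "", fmt.Errorf("invalid request: %w", err)
	}
	return fmt.Sprintf("Quote for %d x %s: %.2f EUR",
		req.Quantity, req.Product, float64(req.Quantity)*req.UnitCost), nil
}

func main() {
	out, _ := Handle([]byte(`{"product":"widget","quantity":3,"unit_cost":9.99}`))
	fmt.Println(out)
}
```

Because the function is stateless and single-purpose, the platform can run any number of copies in parallel and discard them the moment they are idle.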
When using a third-party serverless provider there are a few caveats. Serverless solutions generally benefit from the ‘pay-per-use’ billing scheme when dealing with peak and off-peak scenarios. If internet traffic and server load are relatively constant, however, third-party offerings often turn out to be more expensive than conventional microservice deployments. This can be mitigated by deploying a self-hosted serverless platform instead, which also alleviates concerns about vendor lock-in when Free and Open Source Software such as Knative and Kubernetes is used as the platform of choice.
Coduity not only offers FaaS software development services, we also offer comprehensive FaaS infrastructure solutions. Our solutions allow for next-level efficiency in infrastructure utilisation, whether destined for internal company use or a public-facing service.
Coduity FaaS solutions are based on high performance native code from the functions themselves down to every layer of the platform infrastructure. Our FaaS offerings have the lowest latency, offer true scaling to zero and provide the most efficient levels of server utilisation.
Our basic tier is focused on Edge Computing solutions such as those offered by Content Delivery Network (CDN) providers. CDNs such as Cloudflare offer Serverless worker solutions in each of their content delivery locations, ensuring low latency and high-speed execution.
Websites can generally enjoy serverless computing free of charge until traffic and load grow significantly. Simple services and functions such as contact forms and quote generators can easily be built in this tier, but more complex and feature-rich services are best built at tier 1 or 2.
While the free offerings of CDN providers are certainly interesting, costs can rise considerably once the free tier is outgrown, and then there is the matter of vendor lock-in: proprietary Serverless platforms require applications to be built using their non-standard libraries and APIs, making migration a costly affair further down the road.
Tier 1 of our Serverless offering is based on the lightweight K3s distribution of Kubernetes, which can be deployed on either a single virtual server or a cluster of servers. The Serverless component in our offerings is based on Knative, a Kubernetes-native solution.
Both are 100% Free and Open Source Software: no hidden costs down the road and no subscription or licensing fees, regardless of scale.
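To give an impression of the deployment model, a minimal Knative Service manifest might look like the following; the service name and container image are placeholders, not a real deployment.

```yaml
# Minimal Knative Service: Knative handles routing, revisioning and
# autoscaling automatically. Name and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: quote-generator
spec:
  template:
    metadata:
      annotations:
        # Allow the service to scale all the way down to zero when idle.
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: registry.example.com/quote-generator:latest
```

Applying a manifest like this with `kubectl apply -f service.yaml` gives the function a routable URL and scale-to-zero behaviour out of the box.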
Knative uses the same familiar Kubernetes APIs throughout and supports a wide range of programming languages for functions, including Go and Rust. Functions built in Go and Rust compile to small native binaries, giving deployments fast cold starts and a low memory footprint.
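As a sketch, assuming the conventional contract of a containerised Go service on Knative (an HTTP server listening on the port supplied via the `PORT` environment variable), a complete function container can be as small as this; the greeting logic is purely illustrative.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// greeting builds the response body; kept as a pure function so the
// HTTP handler stays a thin wrapper around it.
func greeting(name string) string {
	if name == "" {
		name = "world"
	}
	return fmt.Sprintf("Hello, %s!\n", name)
}

func main() {
	// Knative injects the port to listen on via the PORT variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, greeting(r.URL.Query().Get("name")))
	})
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Compiled to a static binary in a scratch container, a service like this starts in milliseconds, which is what makes true scale-to-zero practical.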
Our final tier is based on a full Kubernetes distribution instead of K3s, while still using Knative as the Serverless platform. It offers all the benefits of tier 1, plus additional capabilities such as multi-cloud and multi-location deployments.
Migrating from K3s to full Kubernetes is seamless, save for the added maintenance complexity and the additional infrastructure cost of a full-scale Kubernetes deployment. The gains that a multi-cloud / multi-location deployment can bring to global-scale services, however, are beyond question.
We offer tier 0 as a low-cost entry point for small-scale projects with limited functionality. We keep its scope limited mainly to avoid vendor lock-in, which can cause considerable financial pressure further down the road.
Tier 0 is generally useful for basic functionality such as contact forms, opinion polls, rendering third-party information and the like.
For more comprehensive functionality, however, we recommend tier 1, which can be deployed onto a single low-tier VPS and easily expanded should the need arise.
When relatively standard features are used, we make migration from tier 0 to tier 1 or 2 seamless by offering Knative equivalents of the functions used in tier 0.
If full-scale applications are built on tier 0, however, a software migration will be necessary. That is why the entry cost of tier 1 is kept almost as low as that of tier 0: it is best to get started on the right platform.
Migration between tier 1 and 2, however, is 100% seamless. Serverless applications are best deployed on tier 1 or 2 from the start; basic functionality can be deployed on any tier at no extra cost.
Certainly. We also offer solutions based on direct interaction with the Kubernetes APIs, as well as support for Ansible / OKD, Dapr, Porter and others. We class these as custom solutions, however, so they are not included in the default tiers.