Developers can write serverless functions in many popular programming languages: Java, Python, Node.js, Go and C#, to name a few. Because these languages are already widely used, the transition from traditional deployment models to serverless is a bit easier. Practical examples help to clarify the concept:
- Real-time data transformation. Imagine you want to put a watermark on each of your proprietary photos. Serverless lets you do this quickly, for example each time a user uploads a photo to your website: the upload event triggers the function that adds the watermark. Batch processing of large volumes is handled well too, since the required compute power scales dynamically.
- Transactions in an eCommerce web shop can be handled entirely by serverless functions. Saving items to a favorites list, adding them to the basket, checking out and paying for a basket can all run as functions in a serverless environment.
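The watermark example above can be sketched as an event-driven handler. This is a minimal illustration, not any provider's actual SDK: the event shape loosely follows the S3 upload-event convention, and the watermarking itself is stubbed out (a real function would use an image library such as Pillow).

```python
# Hypothetical serverless handler: triggered once per uploaded photo.
# The event shape and function names are illustrative assumptions.

def add_watermark(image_bytes: bytes, text: str = "(c) example.com") -> bytes:
    # Placeholder: a real implementation would draw `text` onto the image
    # with an image library; here we only append it so the sketch runs.
    return image_bytes + text.encode()

def handler(event: dict, context=None) -> dict:
    # The upload event triggers the function; it may carry several records.
    results = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # In a real function: download the object, watermark it, and
        # write it back to a destination bucket. Here we record the key.
        results.append({"key": key, "watermarked": True})
    return {"processed": results}

event = {"Records": [{"s3": {"object": {"key": "photos/cat.jpg"}}}]}
print(handler(event))
```

The platform, not your code, decides when and how often this handler runs: one upload, one invocation.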
Not having to learn a new language is a big plus for a development team. More benefits, from different perspectives, are the following:
- No infrastructure overhead: remove the operational and people overhead of setting up and maintaining infrastructure. Don't operate a Virtual Machine, don't patch it, don't secure it and don't monitor it. This frees your teams to focus on more important things in your organization.
- Performance boost when needed: serverless infrastructure scales (almost) indefinitely. No need to add extra Virtual Machines or more memory when your application needs it. This also means no service windows or downtime for your applications.
- Eco-friendly. The infrastructure components only run while serverless functions execute; no servers sit idle unneeded (e.g. over the weekend). This reduces electricity consumption and heat production, so your datacenter needs less cooling power.
- Cost efficient: you only pay per use. No costs for servers that run all month but are only used a few days, e.g. to generate a monthly sales report.
- Secure infrastructure. No need to secure (e.g. harden and patch) your servers, since you no longer maintain them. But don't forget about security in a serverless world; more on that later in the article.
This list of benefits is just the beginning. It acts as a starting point for your organization to decide whether or not to migrate traditional applications.
One of the most important benefits of serverless is the cost aspect. Before your organization can benefit from serverless, it's essential to understand how the actual costs of serverless functions are calculated. Without that understanding, serverless functions can eat up a large part of your cloud budget, ruining your investment.
Vendor lock-in equals a high price
A different cost model makes estimating hard
The pay-per-request model of serverless is fundamentally different. Cost estimations require a different approach: knowing how, and how much, your application is used. And to know that, you need to know how serverless pricing impacts the design of your application.
For instance, estimating the cost of a simple 'guestbook' application requires you to decompose the application on paper, figuring out which functions and methods the application consists of. How many enterprise architects or business analysts do you know who can do that?
Next comes figuring out the average number of calls and requests to each of the functions. That requires you to know the transactional flow users take through the application: logging in, viewing the guestbook, adding entries, and so on.
Then there is determining the average runtime of functions. Functions are billed based on their execution or processing time, so the time each function runs, times the number of times it is invoked, gives a rough cost per function. Similarly, there are bounds and limitations on memory usage, and the memory you allocate factors into the price.
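The arithmetic is simple enough to sketch. The snippet below uses the common "GB-seconds" billing model; the prices and the guestbook numbers are illustrative assumptions, not any provider's actual rates.

```python
# Back-of-the-envelope serverless cost estimate for one function.
# PRICE_* values are assumed for illustration; check your provider's rates.

PRICE_PER_GB_SECOND = 0.0000166667   # assumed compute price per GB-second
PRICE_PER_MILLION_REQUESTS = 0.20    # assumed price per million invocations

def monthly_cost(invocations: int, avg_runtime_s: float, memory_gb: float) -> float:
    """Estimate monthly cost: (runtime x memory) compute charge + request charge."""
    compute = invocations * avg_runtime_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# Hypothetical 'view guestbook' function: 2 million calls/month,
# 120 ms average runtime, 128 MB allocated memory.
print(round(monthly_cost(2_000_000, 0.120, 0.125), 2))  # -> 0.9
```

Note how allocated memory multiplies directly into the compute charge: halving the memory of a function (if it still fits) halves that part of the bill.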
But the story doesn’t end here. There’s associated costs to make functions available for consumption through API gateways, retrieving and storing data, network transfer costs, and more.
As you can see, a whole lot of information is needed to estimate the costs of running your application. You can't just refactor your application and lift-and-shift it to the cloud.
A famous Dutch phrase is "meten is weten" (roughly translated: to measure is to know). This information is probably not available in your organization, since most of these items are not very relevant when you run your own infrastructure. However, without this information it is impossible to know whether using serverless for this application is beneficial. Find more use cases and examples on the website of Simform.
Drilling a little deeper into current applications reveals a big organizational impact. We all know the big monolithic application, with its large, hard-to-manage code base and slow release cycles. We also know microservices: applications split up into multiple smaller pieces, each with its own lifecycle. And now we are talking about individual functions.
Serverless functions should be designed and coded in the most optimal way to really benefit from the "pay per usage" cost model. The functions need to "collaborate" to make up the final application. That requires a good overview of which functions are needed, and in which order, for each (business) feature of the application.
This brings us to logging and tracing of applications, which becomes much harder since there are many more "moving parts". Make sure logging and auditing is done in a similar manner for all functions, so you can quickly pinpoint any errors. Debugging is also considered fairly difficult, since not all of the well-known Integrated Development Environments (IDEs) can be used.
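One common way to log "in a similar manner" across many functions is structured JSON logging with a shared correlation id, so one user request can be traced through the whole chain. The helper below is a minimal sketch of that idea; the field names are assumptions, not a standard.

```python
# Sketch: uniform structured logging across serverless functions.
# Every function emits JSON lines with the same fields; a correlation id
# ties together all log lines belonging to one user request.
import json
import time
import uuid

def log_event(function_name: str, correlation_id: str,
              message: str, level: str = "INFO") -> str:
    """Emit one JSON log line in the shared shape and return it."""
    entry = {
        "ts": time.time(),
        "level": level,
        "function": function_name,
        "correlation_id": correlation_id,
        "message": message,
    }
    line = json.dumps(entry)
    print(line)
    return line

# The same id travels through every function that handles one request,
# so a log search on the id reconstructs the full flow.
cid = str(uuid.uuid4())
log_event("add-to-basket", cid, "item 42 added")
log_event("checkout", cid, "basket checked out")
```

Most cloud logging backends can index such JSON fields, which turns "pinpointing the error" into a single query on the correlation id.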
Besides this, what about the role and tasks of the "traditional system administrator"? We already saw the importance of SRE and how that role is changing in the organization. With serverless, and virtually no servers and systems to maintain, what will their role be? Maybe it will disappear in the future, or shift entirely to the service provider? We don't know yet. Think about this if you employ a group of system administrators: it's good for them to know about the move to serverless and the impact it will have on their jobs.
In the public cloud, security is still extremely important, but the security aspects change when moving to serverless. It's true that you no longer need to apply security patches to Virtual Machines. That does not mean security is solved, though: you definitely need to think about security for your applications.
Security concerns shift from the infrastructure layer to the application layer. Developers need to be aware of this, since the code they create runs directly in the cloud; there is no "extra layer" to provide protection (e.g. a server firewall or cross-site scripting protection). The boundaries between the outside world and the organizational (inside) world are no longer so clear. So security remains a serious topic.
Some considerations help to understand the impact:
- Bigger attack surface. This item is linked to the previous section. Every accessible function is directly exposed, making it more vulnerable to attacks. Traditional applications have a limited number of endpoints, and a lot of code is "hidden" from the outside.
- Security scanning for serverless functions is difficult. As of now, few tools are available that can scan serverless code for vulnerabilities, so developers need to check a lot of things manually. This increases the risk of security threats remaining undetected. The underlying virtual servers are inaccessible to you, since the cloud provider maintains them.
- Heavy dependence on third-party libraries. Serverless functions rely on a lot of other (open source) libraries. Those libraries can (and probably will) contain security vulnerabilities. If those are not patched (and you neither control nor easily see them), your code becomes vulnerable. Some tips to help you in this matter:
- Build a list of dependencies and their versions.
- Remove any unneeded dependencies/libraries.
- Update the packages regularly and scan them using specialized tools.
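The first tip, building a list of dependencies and their versions, can be automated. The sketch below does this for a Python runtime using only the standard library; for other runtimes the equivalent would be your package manager's lockfile.

```python
# Sketch: inventory the installed dependencies and their versions, as a
# starting point for pruning unneeded libraries and feeding a scanner.
from importlib.metadata import distributions

def dependency_inventory() -> dict:
    """Map every installed distribution name to its version string."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in distributions()
        if dist.metadata["Name"]  # skip distributions with broken metadata
    }

# Print a requirements-style listing, e.g. "requests==2.31.0".
for name, version in sorted(dependency_inventory().items()):
    print(f"{name}=={version}")
```

Running this in the function's build pipeline and diffing the output against the previous build makes unexpected new dependencies visible before they reach production.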
Security aspects of serverless become a new chapter for your teams. A big effort is required from the developers, since there is a lot of manual work involved. It's also about a change in mindset: shifting security left shifts responsibility from infrastructure teams to development teams. Therefore it is important to prepare the people involved so they understand what to expect.
For more serverless computing risks, be sure to check out the OWASP serverless top 10.
Serverless is an interesting proposition. It promises even faster development cycles and no time wasted on managing servers, containers and orchestrators, while only giving up a little flexibility. The pros outweigh the cons for many use cases.
Slowly, the early adopters are saying 'build serverless first; if needed, move to containers'. There are some major points to take into consideration, though. Cost and lock-in of adjacent services are two of the major ones, and the cloud-native architectural approach comes with a sometimes steep learning curve. It's just so different from monolithic apps in a traditional on-prem environment.
Luckily, serverless isn’t going to take over the world anytime soon. Just like with containers, adoption is ramping up, but will be nowhere near complete in the 2020s. There’s ample time to learn serverless.