In the first part of this post, I mentioned five main drivers to go serverless:
- Scalability
- Cost reduction
- High development velocity
- Resiliency
- Security
Like any other tool, serverless architecture should be used correctly to be effective. In this article, I’ll briefly mention a few gotchas that we found along our serverless journey, and I plan to go deeper into the solutions that we developed for them at Altitude Networks in future posts.
Learning Curve
First off, serverless has a learning curve. In my experience, developers with a background in service-oriented, event-driven, stateless architectures can pick up serverless patterns quickly. But there will still be some new concepts (such as infinite scalability via concurrent executions) that take getting used to.
Traceability and Debugging
Next, monitoring and debugging serverless applications has been a common pain point for early adopters like us. In a serverless architecture, your business logic runs in many more “modules”, which makes tracking down some bugs more challenging. There are third-party solutions, as well as native AWS tools such as AWS X-Ray, that can be used for distributed tracing. Some of these tools require changing the way your application formats and emits its logs. This issue is not unique to serverless applications; it is common across microservice architectures. Techniques such as correlation IDs address the traceability issue and can be implemented successfully in serverless applications, as we have done at Altitude Networks.
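As a rough illustration of the correlation-ID technique (not Altitude Networks’ actual implementation), a Lambda handler can reuse an upstream correlation ID or mint a new one, then stamp it on every log record it emits, so logs from different functions handling the same request can be joined later:

```python
import logging
import uuid

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def with_correlation_id(handler):
    """Decorator: reuse an upstream correlation ID or mint a new one,
    and attach it to every log record emitted while handling the event.
    The 'correlation_id' event key is an illustrative convention."""
    def wrapper(event, context):
        correlation_id = event.get("correlation_id") or str(uuid.uuid4())
        old_factory = logging.getLogRecordFactory()

        def record_factory(*args, **kwargs):
            # Every log record created during this invocation carries the ID,
            # so a log formatter can include %(correlation_id)s.
            record = old_factory(*args, **kwargs)
            record.correlation_id = correlation_id
            return record

        logging.setLogRecordFactory(record_factory)
        try:
            # Pass the ID along so downstream calls can propagate it too.
            return handler({**event, "correlation_id": correlation_id}, context)
        finally:
            logging.setLogRecordFactory(old_factory)
    return wrapper

@with_correlation_id
def handler(event, context):
    logger.info("processing event")
    return {"statusCode": 200, "correlation_id": event["correlation_id"]}
```

The key point is that the ID rides along with the event payload, so each function in a chain logs and forwards the same value instead of generating its own.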
Managing the throughput of concurrent executions
Surprisingly, serverless, with its on-demand, infinite scalability, can be “too fast” when interfacing with other services. This can result in throttling exceptions from external and internal services, and can overwhelm database connections. Such issues can be managed by implementing application logic that slows down the job, or by moving some calls to asynchronous patterns. Again, this issue is common in event-driven architectures and is not isolated to serverless.
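One common way to slow a job down is client-side retry with capped exponential backoff and jitter whenever a downstream service throttles. A minimal sketch, with `ThrottlingError` standing in for whatever exception your client actually raises (for example, botocore’s `ClientError` with a throttling error code):

```python
import random
import time

class ThrottlingError(Exception):
    """Placeholder for a provider-specific throttling exception."""

def call_with_backoff(fn, *args, max_retries=5, base_delay=0.1, max_delay=5.0, **kwargs):
    """Retry a throttled call with capped exponential backoff and full jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn(*args, **kwargs)
        except ThrottlingError:
            if attempt == max_retries:
                raise  # give up after the final attempt
            # Full jitter: sleep a random fraction of the capped backoff window,
            # which spreads out retries from many concurrent executions.
            delay = random.uniform(0, min(base_delay * 2 ** attempt, max_delay))
            time.sleep(delay)
```

Backoff alone only smooths bursts; when the downstream limit is hard, moving the calls behind a queue (an asynchronous pattern, as mentioned above) is the more robust fix.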
Vendor lock-in
Lastly, I want to address a “non-gotcha” that commonly comes up when serverless is discussed: vendor lock-in. The reality is that vendor lock-in applies to all cloud platforms and services. Even bare-bones services such as EC2 have vendor lock-in. This is especially true as cloud providers keep adding features across all of their services in an effort to provide more value to their customers (aka developers). Serverless services are not exempt from this either. However, the industry has come a long way in abstracting away the vendor-specific elements of serverless. For instance, deployment frameworks such as the Serverless Framework or Terraform can make serverless apps more provider-agnostic. In a future post, I will share how at Altitude Networks we developed a code architecture that minimizes the risk of significant code refactors in case of a vendor migration or considerable changes to the AWS Lambda service.
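One general way to keep that refactor risk low, shown here purely as an illustrative sketch (the names and the use case are hypothetical, not our actual design), is to keep business logic provider-agnostic and confine the Lambda-specific parts to a thin adapter:

```python
from dataclasses import dataclass

# Provider-agnostic core: takes plain data, knows nothing about Lambda.
# Only the thin adapter below would change in a vendor migration.

@dataclass
class FileScanRequest:
    file_id: str
    owner: str

def scan_file(request: FileScanRequest) -> dict:
    """Pure business logic, testable without any cloud provider."""
    return {"file_id": request.file_id, "owner": request.owner, "status": "scanned"}

def lambda_handler(event, context):
    """Thin AWS Lambda adapter: translate the provider event, then delegate."""
    request = FileScanRequest(file_id=event["file_id"], owner=event["owner"])
    return scan_file(request)
```

With this split, a move to another provider (or a change in Lambda’s event shape) touches only the adapter, while the core logic and its tests stay untouched.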