Why serverless is good for business

As a Founder and CTO, I have two equally important customers: our users (external) and our engineers (internal). The happiness of these two groups is tied together: keeping my engineers happy and productive is the best (arguably the only) way to build great software that will eventually make users happy and loyal. Users want a secure, stable, speedy application that serves them with the solutions they need. The best (again, arguably the only) way to build such an application is through iterative improvement. That brings us to this point:

The single most important metric that a startup engineering team should optimize for is agility and iteration speed.

In traditional application development, there are two distinct stages: writing the software and deploying the application. The first stage consists of a group of developers writing the code and testing it locally. The second involves following a strict workflow to deploy the application to (on-prem or cloud) infrastructure. There are two issues with that workflow: first, the disconnect between software development and deployment often results in delays, unexpected bugs, and rework. Second, the deployment stage is for most companies a “cost center”: there is hardly anything company-specific about deployment, and it is rarely tied to a company’s business logic or competitive advantage.

Serverless architecture aims to address those issues by enabling truly modular continuous deployment that is abstracted out of the development cycle and offered as a service. This lets software development teams focus on what matters most: implementing the business logic and serving end-user needs, without spending their time writing and maintaining commodity deployment code.

There are other reasons why serverless is good for business, which I explain in this article.

Scalability

Serverless does not mean there are no servers to run the application. It means that the developers can define event triggers, define business logic that goes into event processing, and leave the infrastructure requirements to the cloud platform. The cloud platform delivers the required amount of compute, storage and memory, and the developer does not even have to think about it (or code for it). For instance, if your app suddenly gets a spike in usage, the cloud platform provides more compute (and if needed, data store) units to the application on demand.

In some cases, simply throwing more resources at a problem does not fully resolve it. For instance, not all database systems are infinitely scalable, and you need a way to queue requests and respond to them asynchronously. Serverless integrates well with both patterns: a function can be triggered synchronously via an API call or asynchronously via queues, streams, or scheduled events. At Altitude Networks, we use AWS Lambda functions that are triggered directly via API Gateway calls, or indirectly via SQS queues, CloudWatch Event Rules, and SNS topics, among other triggers.
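As an illustration of how little the trigger style changes the code, here is a minimal sketch of a single Lambda handler (in Python) that serves both patterns by inspecting the shape of the incoming event. The handler and the `process` function are hypothetical, not our production code:

```python
import json


def handler(event, context):
    """Hypothetical Lambda handler that accepts both trigger styles."""
    # Asynchronous path: SQS delivers a batch of messages under "Records".
    if event.get("Records") and event["Records"][0].get("eventSource") == "aws:sqs":
        for record in event["Records"]:
            process(json.loads(record["body"]))
        return

    # Synchronous path: API Gateway (proxy integration) passes a JSON body
    # and expects an HTTP-style response.
    payload = json.loads(event.get("body") or "{}")
    result = process(payload)
    return {"statusCode": 200, "body": json.dumps(result)}


def process(payload):
    # Placeholder for the business logic each event carries.
    return {"processed": True, "input": payload}
```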


Cost reduction

In serverless, you get billed only for the duration of your usage. In addition, your resources scale up and down dynamically and natively; you do not need to tune auto-scaling settings and policies to manage the delicate tradeoff between availability and cost efficiency. This is particularly attractive for jobs that run intermittently, but it can also be a cost management solution for long-running jobs. For instance, at Altitude Networks, we use Lambda functions in different steps of our data ingestion process. While data ingestion can be a long-running process, that does not mean we need uninterrupted compute at all times; instead, Lambdas can be triggered when new data becomes available, run only for as long as they need to process that data, and shut down immediately without incurring costs for any unused duration.
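As a hedged sketch of that pattern (the bucket layout, function, and `ingest` helper below are hypothetical), an ingestion step can be wired to fire only when a new object lands in S3, so billing covers just the processing time:

```python
import boto3

s3 = boto3.client("s3")  # created outside the handler so warm invocations reuse it


def handler(event, context):
    """Hypothetical ingestion step: runs only when a new object lands in S3."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        ingest(body)  # billing stops as soon as the handler returns


def ingest(raw_bytes):
    # Placeholder for one chunk of the ingestion pipeline.
    print(f"ingested {len(raw_bytes)} bytes")
```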

While we are on the subject of cost, I should point out another important consideration. For a startup, the single biggest cost item is developers’ salaries. Any conversation about infrastructure cost optimization is meaningless unless we also factor in the cost to develop, monitor, and maintain the proposed solutions. By abstracting away the commoditized logic of running infrastructure, serverless lets your developers focus on business needs and on developing the company’s IP instead of on systems management.


Development velocity

I started this post by emphasizing the importance of continued iterative improvement for a startup. Along the same lines, anyone who has founded a startup or worked at one knows that the biggest adversary of a startup is time. For every day spent on a non-essential task, competitors get ahead and capture more market share by shipping more features. Similarly, market needs and sentiment can shift rapidly, depriving a startup of the chance to ride a macroeconomic wave. Most importantly, a startup’s fixed costs (salaries, office space, baseline infrastructure cost, etc.) still apply. In short, at a startup, the mantra of “think it, build it, ship it, tweak it” is not just a catchy motivational slogan; it’s a survival guide.

Serverless contributes to that iterative improvement in a few ways. First off, it removes the false constraint that engineers need DevOps support to put their code in production. At Altitude Networks, our engineers are well-versed in the configuration and deployment of Lambda functions, so they can make smart decisions about settings such as memory size, VPC usage, and timeout. Everything else (including all the mechanics of deployment and updates) is abstracted away from them, so they can focus on more critical tasks.
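To make that concrete, here is a minimal sketch of what declaring those settings can look like with the AWS CDK in Python; the stack and function names are hypothetical, and the values are illustrative rather than recommendations:

```python
from aws_cdk import Duration, Stack
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class IngestionStack(Stack):
    """Hypothetical stack declaring one function's runtime settings."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        _lambda.Function(
            self,
            "IngestStep",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="ingest.handler",
            code=_lambda.Code.from_asset("src"),
            memory_size=512,               # MB; sized to the workload
            timeout=Duration.seconds(60),  # fail fast instead of hanging
            # vpc=...                      # only if private resources are needed
        )
```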

More importantly, serverless architecture promotes breaking the work down into independent chunks that can be thought out, built, shipped, and tweaked in a modular way, with small update sizes and low overhead. This in turn enables parallel work with less coordination risk.

Lastly, with little to no infrastructure to manage, serverless allows for smaller DevOps teams. At a startup, this translates not only to a longer runway but also to the opportunity for DevOps to work on more business-critical tasks. At Altitude Networks, our DevOps team works on mission-critical projects such as automating customer onboarding, instead of worrying about server uptime or autoscaling policies.


Resiliency

I mentioned small update size as a key benefit of serverless architecture that contributes to faster development cycles. Another key advantage of small updates is that if (when) things go wrong, they have a smaller impact on the application. This is known as a small blast radius in IT operations and is one of the key metrics by which an organization’s IT infrastructure is measured. In a serverless architecture, each piece of business logic can be implemented as its own function and deployed on its own schedule. This means that when one component, such as a single Lambda function, causes a regression, the overall impact on the application can be contained quickly. With AWS Lambda, you get a nearly unlimited history of each function via Lambda versions, so rollbacks are as easy as changing the reference to an earlier version, which does not even require a new deployment.
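For example, if callers and triggers reference an alias rather than a specific version, a rollback is a one-line metadata change. The function name, alias, and version numbers below are hypothetical:

```python
import boto3

lam = boto3.client("lambda")

# Suppose the "live" alias currently points at version 42 and that version regressed.
# Rolling back is a single API call -- no code is redeployed.
lam.update_alias(
    FunctionName="ingest-step",  # hypothetical function name
    Name="live",                 # the alias that API Gateway / SQS triggers reference
    FunctionVersion="41",        # previous known-good version
)
```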

Serverless architecture can also be a great fit for long-running processes. By breaking long processes down into smaller chunks that run in an event-driven workflow, we maintain full control over the process. For instance, at Altitude Networks, we have defined actions such as pause, rewind, stop, and restart for long-running processes by managing the event triggers that invoke our Lambda functions. This has made us incredibly agile when ingesting large amounts of data from external sources.
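As one hedged illustration of that idea (not our exact mechanism), pausing a queue-fed pipeline step can be as simple as disabling the event source mappings that feed its function; unprocessed messages simply wait in the queue until the step is resumed. The function name below is hypothetical:

```python
import boto3

lam = boto3.client("lambda")


def set_step_enabled(function_name: str, enabled: bool) -> None:
    """Toggle every event source mapping (e.g., SQS queues) feeding one step."""
    mappings = lam.list_event_source_mappings(FunctionName=function_name)
    for mapping in mappings["EventSourceMappings"]:
        lam.update_event_source_mapping(UUID=mapping["UUID"], Enabled=enabled)


# Pause ingestion; new events accumulate in the queue.
set_step_enabled("ingest-step", enabled=False)
# Resume later; processing picks up where it left off.
set_step_enabled("ingest-step", enabled=True)
```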


Security

Serverless architecture abstracts away the manual work of patching and OS updates, reducing the risk of human errors or delays in keeping the infrastructure up-to-date. There are no servers that would require access management (via SSH, for instance), taking away one more potential source of human error.

Furthermore, each execution of a serverless function is, by definition, short-lived. There are no long-running processes, so while you would never want an unauthorized piece of code to access running memory, the window of opportunity is dramatically smaller: passwords or secrets held in memory are not exposed for long.