This is a guest post from Jack Ellis, who has a lot of experience with AWS and, specifically, with serverless. Here's how he used serverless technologies to stop worrying about servers.
Back in mid-2018, Fathom Analytics was deployed across a series of dedicated and virtual servers. The team paid a good amount of cash for them, thinking it would lead to good uptime, great redundancy and a life of very little server maintenance. Unfortunately, that wasn't the case. The servers would go offline, run into various issues, and regularly require attention from a member of the team. They were on the verge of hiring a freelance DevOps chap ($1k / month) to be available in case of an emergency. At the time, they couldn't really afford that, so they had to take responsibility and handle the issues themselves.
Fast forward to late 2018: the main developer (Danny) left the Fathom team to become a teacher, and Paul Jarvis was left with a hard decision. Should he attempt to keep Fathom going, or should he shut it down? I remember the phone call. Paul and I were working on Pico (acquired by Ghost), and Paul asked if I wanted to join the Fathom team. I immediately said yes, because I saw the potential it had if a skilled developer were brought in.
We rebuilt the system in Laravel. It took us a few months, working part-time on it, and we deployed it to Heroku in no time. Fast forward a few more months: as we were pricing out Heroku and considering how we would grow with their rigid tier jumps, Taylor Otwell announced Laravel Vapor, a serverless deployment platform for Laravel. We immediately jumped onto the waiting list and, when it launched, we were one of the first companies to deploy our entire application on it. Since then, we've grown to handle millions upon millions of requests each month.
What is this, a history lesson? I came here to read about why you chose serverless. Okay, calm down, Jonathan. Here's why we choose to use serverless infrastructure:
We don't want to manage servers
I'm sure you've all experienced this. It's 2 AM, and you're disoriented as your phone rings. Who could be calling at this time? Oh, PingPing has pinged our webhook and we're now getting a phone call. Great.
“Are you ok?” the wife asks. “I hate servers,” you reply. “Why don't you just go serverless?” your German shepherd inquires.
You fix the issue and try to get back to sleep, but the adrenaline prevents that. Four hours of sleep will have to do; time to start your day.
We don't know when we'll land that whale
Most of our customers are in the < 1 million page views range. But what happens when someone emails us asking about 500M page views per month? We can't say “Sure, just let us know when you're going to add the JavaScript snippet to your website, because our infrastructure won't be able to handle it.” Imagine how that would make them feel. Nobody wants to be your biggest customer.
Comparatively, we fear nothing these days. 500M page views? Bring it on: Lambda auto scales, and SQS has practically infinite capacity for queued jobs. When someone asks me to quote them for 500M page views, I say “Is that all you're going to need? Let me know if you need more; we can provide quotes for up to 5 billion page views.” I'm only joking, but that's how I feel.
Our service needs to be highly-available
People don't want their analytics to go offline, and we've always taken uptime incredibly seriously. Paul once told me that before I joined, he would get regular “website down” emails. We haven't received a “website down” email in a very long time, and we wouldn't have it any other way. Laravel Vapor provisions Lambda functions for us, which means we have incredible redundancy and availability. If a Lambda function “breaks”, it's simply replaced with another one.
Deployment is unbelievably simple
We use Laravel Vapor, so everything is managed from a simple YAML file (I have a lesson on this in my course). We can provision environments in less than 2 minutes. We can also tweak memory, worker memory, concurrency, attach databases, caches and more, all from a single YAML file. It's a truly wonderful way to manage deployments. I thought Heroku was great with its dyno slider, but Vapor takes it to a whole new level. I remember how much I laughed when I spun up a staging environment in under 60 seconds. Doing that on DigitalOcean would take so much longer, and even tweaking memory settings is a joke on DigitalOcean compared to Vapor.
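To give you an idea of what that looks like, here's a rough sketch of a vapor.yml file. The names and values here are illustrative (this is not Fathom's actual configuration), but the shape of the file follows Vapor's environment options:

```yaml
# vapor.yml — illustrative sketch, not our real config
id: 12345
name: my-analytics-app
environments:
  production:
    memory: 1024            # memory (MB) for the HTTP Lambda
    cli-memory: 512         # memory for artisan / CLI invocations
    queue-memory: 1024      # memory for queue worker Lambdas
    database: my-app-db     # attach a managed database by name
    cache: my-app-cache     # attach a managed cache cluster by name
    build:
      - 'composer install --no-dev'
    deploy:
      - 'php artisan migrate --force'
  staging:
    memory: 512
    build:
      - 'composer install'
```

Change a value, run `vapor deploy production`, and the new settings roll out; that's the entire workflow for tweaking memory or attaching a database.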
Varying server load
I mentioned the whale problem above (which we no longer have), but we also don't have to worry about spikes and drops. Imagine that during the day we're hitting 100M requests per hour, and at night it drops to 1M requests per hour; on traditional infrastructure, we'd need to set up auto scaling ourselves. We really don't want to spend time setting that up, and we don't want to pay to be over-provisioned “just in case”.
And I could keep writing reasons for why we use serverless until the cows come home.
We've been using Vapor since it launched in 2019, and we sleep incredibly well knowing that we don't have to spend time thinking about servers. Everything is managed. RDS, ElastiCache, DynamoDB, etc. are wonderful. We provision these services with Vapor, and then we let the AWS team do their thing. And yes, these add-on services do need to be scaled, but there are ways to control things like database load and to set up modest thresholds for notifications when it's time to scale. Oh, and scaling is handled with zero to minimal downtime in high-availability set-ups, which is huge for us.
I've said enough. If you want to become an expert at using Laravel Vapor, I can help you with that. I've spent hundreds of hours in the field and have used it at high scale since it launched. My course (Serverless Laravel) is currently only $149 ($100 discount during launch). If you're reading this article after the launch is over, drop me an email and we'll see what we can do. Anyway, this course will save you from running into common gotchas, and is an express route to Vapor mastery. Do you have any questions? Are you mad that I contradict myself, because we do still need to scale our cache / database at some point? Are you angry at me, and you want me to know that you're angry? I'm @jackellis on Twitter.