February 24, 2025
Spanish version here.
In the first chapter of this journey, we explored why migrating to AWS Serverless was the best decision for my Laravel application. We talked about automatic scalability, cost savings, and freedom from traditional maintenance. But like in every great adventure, the path has its obstacles.
One concern shared by many readers is that, since serverless does all the magic for you and scales automatically, your bill might scale right along with it.
And I must be honest with you: I fell into the trap of the magical serverless. While I celebrated the absence of servers, my code wandered through the cloud like a tourist with an unlimited credit card: enthusiastic, but wasting resources at every step.
But hey! That's why I'm here today—to reveal what no tutorial warned me about: the art of writing PHP conscious of Lambda.
Let's dive into how to turn our Laravel application into a true serverless warrior. Yes, we will tackle the necessary optimizations and the precautions we must take.
But let's start lightly with some new concepts.
AWS Lambda is, in short, the chef that cooks your code just when needed. Its event-driven model means it only activates upon requests (HTTP, SQS events, cron jobs, etc.), and like an à la carte kitchen, you only pay for the time the stove is in use. Additionally, Lambda's free tier allows you to start without worries, as long as you keep an eye on usage limits.
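To put the pay-per-use model into numbers, here is a rough sketch of the monthly compute bill for a hypothetical workload. The prices are illustrative (based on Lambda's published us-east-1 rates at the time of writing) and the free tier is ignored:

```shell
# Hypothetical workload: 1,000,000 invocations/month, 512 MB of memory,
# 200 ms average duration. Lambda bills GB-seconds plus a per-request fee.
awk 'BEGIN {
    gb_seconds = 1000000 * 0.2 * 0.5          # duration (s) * memory (GB) per invocation
    compute    = gb_seconds * 0.0000166667    # illustrative price per GB-second
    requests   = (1000000 / 1000000) * 0.20   # illustrative $0.20 per 1M requests
    printf "GB-seconds: %d\n", gb_seconds
    printf "Compute: $%.2f, Requests: $%.2f\n", compute, requests
}'
# → GB-seconds: 100000
# → Compute: $1.67, Requests: $0.20
```

A million requests cost only a couple of dollars here; the catch, as we'll see later, is code that keeps the stove on while doing nothing.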
But did you know that, among all the languages AWS Lambda natively supports, PHP is not one of them? 😤
For our peace of mind, Lambda allows us to define our own custom runtimes.
Although I'd love to have created my own solution to run PHP on Lambda like some suggested (and thanks in advance for trusting my abilities so much), sometimes it's not necessary to reinvent the wheel.
That's where Bref comes in, an essential tool that allows us to run PHP applications on Lambda without rewriting all our code.
Great! But, how does it work?
Personally, I love knowing the inner workings of things, but to keep it simple, Bref operates as follows:
When a request enters your application, it is captured by API Gateway and sent to the Lambda function, where Bref's runtime starts PHP-FPM in the background and redirects the request via FastCGI. Once the result is obtained, it is returned to API Gateway, which in turn sends it as a response to the end user.
Simple, right?
Can you imagine the amount of configurations needed to set up a Lambda function with a custom runtime that runs Bref (and with it, PHP-FPM) so that your Laravel code can work?
Well, you can relax: the Serverless Framework does all that for you. You just need to define the necessary components in a `serverless.yml` configuration file and, with a simple `serverless deploy`, you're in the cloud! 🚀
First, you need to understand that migrating to a serverless environment means that Laravel must go into "nomadic mode". This means adapting it to be stateless.
Do you remember when I mentioned that Lambda is ephemeral? The simplest way to imagine Lambda is like talking to someone with a very short memory: every time you start the conversation, you have to reintroduce yourself (e.g., session data). Did you hand them a notebook to write something down and return it to you later (file uploads)? Forget about it; you won't even find it in the limbo of Inception.
So, some essential adjustments are necessary:
  * Lambda's filesystem is read-only, with the only writable location being the `/tmp` directory. So, file uploads need to be stored in S3 (or similar).
  * You can't use a `.env` file in production; you'll need to use variables defined in the `serverless.yml` configuration file.

To make these adjustments easier, the Bref community has created a package called `bref/laravel-bridge` that automates most of the changes mentioned above.
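As a sketch of what this "nomadic mode" looks like in practice, the stateless drivers can be selected through environment variables in `serverless.yml`. The driver choices below are illustrative; `bref/laravel-bridge` already applies sensible defaults for most of them:

```yaml
provider:
    environment:
        SESSION_DRIVER: cookie    # keep session data out of the local filesystem
        FILESYSTEM_DISK: s3       # send file uploads to S3 instead of the local disk
        CACHE_STORE: dynamodb     # the cache must also live outside the instance
        LOG_CHANNEL: stderr       # write logs to stderr so Lambda ships them to CloudWatch
```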
It's time to get our hands dirty.
If you want to follow along step by step, the requirements from this point on are:
Let's start by creating a new Laravel project [^4]:
```shell
# Installing Laravel
laravel new laravel-above-the-clouds
cd laravel-above-the-clouds

# Create and migrate the database (SQLite by default)
php artisan migrate

# Install Node.js packages (Vite.js, Tailwind CSS, etc.) and compile assets
npm install && npm run build
```
Install the `bref/bref` and `bref/laravel-bridge` packages [^5]:

```shell
composer require bref/bref bref/laravel-bridge --update-with-dependencies
```
The `bref/laravel-bridge` package, as I mentioned earlier, resolves the necessary adjustments for Laravel to work in a serverless environment.

Bref already includes a preconfigured `serverless.yml` file that we can use as a starting point for our first deploy:
```shell
php artisan vendor:publish --tag=serverless-config
```
Think of the `serverless.yml` file as a set of LEGO instructions for AWS: it's the guide that allows you to set up your application flawlessly.
Deploy the project:
```shell
serverless deploy
```
When it finishes, the console will show the API Gateway URL where you can access your application on AWS!
The first thing you'll notice is that the assets (CSS and JS) don't load correctly. This happens because, in a serverless environment, we don't have Apache or Nginx to serve static assets; and here I want to make a brief pause.
Taking our application to a serverless environment means changing our mindset: no longer is a single server responsible for every task that makes our application work. In a serverless environment, each component is managed by a specialized service.
In our case, some of the main components in a Laravel application are as follows:
| Component | Dedicated Server | AWS Serverless |
|---|---|---|
| Routes, Controllers, ... | PHP-FPM | Lambda |
| Static Assets _(images, javascript, css)_ | Apache / Nginx | S3 |
| Session Data | Database / Cookies | RDS / DynamoDB |
| Cache | Local Disk | DynamoDB / Redis |
| File Upload | Local Disk | S3 |
| Scheduled Tasks _(Schedule)_ | crontab | EventBridge |
| Event Queue | SSH $ `php artisan queue:work` / maybe supervisor | SQS |
| `artisan` Commands | SSH $ `php artisan ...` | Lambda |
| Application Log | SSH $ `tail -f storage/logs/*` | CloudWatch |
But let's take it easy, solving one point at a time.
From the points mentioned, the `bref/laravel-bridge` package already solves application logs and session data: a log group is created in CloudWatch for application logs, and session data is stored in cookies (we will later see how to move session data to DynamoDB).
To serve static assets, we have two alternatives:
  * The hard way: Add the creation of an S3 Bucket to our `serverless.yml` configuration, obtain the bucket name, build the public URL, and assign it to the `ASSET_URL` environment variable.
  * The easy way: Use the `serverless-lift` plugin [^6], and let the magic flow.

Although I learned a lot the hard way (I've been following Bref's growth since 2020 [^7]), you can take advantage of community-developed tools that simplify our lives.
We install the `serverless-lift` plugin and add a new section [^8] within the `serverless.yml` file:

```shell
serverless plugin install -n serverless-lift
```
```diff
 service: laravel
 # ...

 functions:
     web:
         handler: public/index.php
         runtime: php-82-fpm
         # ...
     artisan:
         handler: artisan
         runtime: php-82-console
         # ...

+constructs:
+    website:
+        type: server-side-website
+        assets:
+            '/build/*': public/build
+            # Add here any file or directory that needs to be served from S3

 plugins:
     - ./vendor/bref/bref
+    - serverless-lift
```
Within the `constructs` section, we have defined a `website` component of type `server-side-website`. This tells the `serverless-lift` plugin to create a CloudFront distribution, which will act as a CDN to serve static files from an S3 Bucket, and also as a reverse proxy, redirecting requests for your PHP application's routes through API Gateway => Lambda.
```shell
serverless deploy
```
This deployment will take an extra 5-7 minutes, but only the first time, since CloudFront distributions are deployed globally. Subsequent deployments will be faster, as long as you don't modify configurations that affect the CloudFront distribution.
In the console output, you will see that you now have two URLs: one from API Gateway (which we will no longer use), and another from CloudFront, which will serve static assets from the S3 Bucket while the rest of the routes are processed by Lambda.
And now the CSS loads correctly! Congratulations, you have deployed your first Laravel application in a serverless environment.
You may have noticed that the first time you accessed your application, it took a few extra seconds to respond.
As I mentioned in the previous article, this is called a cold-start and occurs when Lambda initializes a new instance to run your application. This initialization can take 250ms or more, especially if your application is large.
Lambda keeps an instance alive for up to 10 minutes after processing a request, then it is automatically destroyed.
We can address this from several fronts.
In low-traffic applications, it is normal to have periods of inactivity greater than 10 minutes, so we may have a higher percentage of requests processed in a cold-start.
Bref provides a special event that we can use to keep a Lambda instance alive. We simply need to add a `schedule` event in our `serverless.yml` configuration file with the payload `{warmer: true}`. Bref will recognize this special event and respond instantly with status code 100 [^10] without executing your code, thus keeping the instance alive.
```diff
 service: laravel
 # ...

 functions:
     web:
         handler: public/index.php
         runtime: php-82-fpm
         events:
             - httpApi: '*'
+            - schedule:
+                rate: rate(5 minutes)
+                input:
+                    warmer: true
```
During a cold-start, AWS Lambda downloads the application package, decompresses it in a temporary environment, and loads the runtime with all the necessary dependencies and configurations. This process, although optimized, adds latency to the first request. Therefore, reducing the package size (for example, by removing development dependencies or unnecessary modules) can significantly shorten this time.
The default `serverless.yml` file from Bref already ignores certain directories that are not necessary in PHP, such as `node_modules`.
Before deploying, we can uninstall development packages. This will significantly reduce the size of the `vendor` directory, and consequently, the total size of the code sent to Lambda.

```shell
composer install --no-dev
```
In most of my applications, the composer package that takes up the most space is `aws/aws-sdk-php` (required by `bref/laravel-bridge`).

The good news is that the community has developed a composer script to clean up unused AWS service packages in our application [^13]. We just need to specify the services we use in the `composer.json` file:
```diff
 {
     "name": "laravel/laravel",
     "type": "project",
     ...
     "require": {
         "php": "^8.2",
         "bref/bref": "^2.3",
         "bref/laravel-bridge": "^2.5",
         "laravel/framework": "^11.31",
         "laravel/octane": "^2.8",
         ...
     },
     "scripts": {
+        "pre-autoload-dump": "Aws\\Script\\Composer\\Composer::removeUnusedServices",
         "post-autoload-dump": [
             "Illuminate\\Foundation\\ComposerScripts::postAutoloadDump",
             "@php artisan package:discover --ansi"
         ],
         ...
     },
     "extra": {
         "laravel": {
             "dont-discover": []
         },
+        "aws/aws-sdk-php": [
+            "DynamoDb",
+            "S3",
+            "Sqs"
+        ]
     },
 }
```
The cleanup script will run every time you run `composer install` or `composer update`.
Even when cold-starts occur, Laravel Octane kicks in to keep the application in memory and significantly reduce response times. By avoiding a full Laravel boot on each request, the user experience is greatly improved.
Laravel Octane acts as an accelerator that "keeps the application warm". Once Lambda initializes the application, Octane takes care of keeping it in memory, so that subsequent requests can be served almost instantly. It's like starting a car engine and leaving it ready to go at any moment, rather than starting it from scratch each time.
Steps to implement Laravel Octane:
```shell
composer require laravel/octane
php artisan octane:install
```
Once installed, update the `serverless.yml` file to use Laravel Octane as the handler [^11]. This will ensure that your application runs with the improved performance that Laravel Octane offers:
```yaml
service: laravel
# ...

functions:
    web:
        handler: Bref\LaravelBridge\Http\OctaneHandler
        runtime: php-82
        # ...
```
Finally, deploy your application with the changes made:
```shell
serverless deploy
```
By applying all the improvements mentioned, we have reduced the initialization (cold-start) time of our application. And with Laravel Octane, we have not only shortened cold-starts but also improved the overall response time of our application.
We can visualize the response times in the CloudWatch logs that were already configured automatically for us.
Cold-start without optimizations (499ms):
Cold-start with Laravel Octane and optimizations (190ms):
Note that when using Laravel Octane, the Laravel application is kept in memory between requests, so be careful of memory leaks [^12].
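To make the risk concrete, here is a contrived sketch (the class and names are hypothetical) of the kind of pattern that leaks memory under Octane, because static state now survives between requests:

```php
<?php

// Hypothetical service: under PHP-FPM this static array died with each
// request; under Octane it keeps growing for the life of the worker.
class ReportService
{
    /** @var array<int, string> */
    public static array $processed = [];

    public function handle(string $reportId): void
    {
        // Leak: entries accumulate across every request this instance serves.
        static::$processed[] = $reportId;
    }
}
```

Avoiding mutable static/global state, or explicitly flushing it between requests, keeps the long-lived worker healthy.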
You may have noticed that there is a function called `artisan` in the `serverless.yml` file. This function uses the "console" version of the Bref runtime [^14], which allows us to run Laravel Artisan commands in AWS Lambda.

Since we don't have SSH access to the Lambda instances, Bref provides a bridge to execute CLI commands using the `bref:cli` command from serverless.
```shell
serverless bref:cli --args="<artisan command and its options>"
serverless bref:cli --args="route:list"
```
A very important point I want to reinforce is that we must change our mentality when developing applications for serverless environments. In this new paradigm, it is crucial to design our processes and workflows considering the unique characteristics of serverless computing, such as the ephemeral nature of functions and the billing model based on usage.
Serverless is not expensive... but poorly adapted code is.
In my first month after migrating one of my applications to a serverless environment, I made the mistake of not thoroughly reviewing certain processes. One of them was responsible for synchronizing the products of the online store with an external billing system, including updating product images. And as I mentioned in the previous article, some processes are not ideal for execution in a serverless environment. In this case, I was paying for idle time: during each image download, my code in Lambda was doing nothing but waiting 5-8 seconds per image. With 65,000 items in the store and two daily synchronizations, this resulted in a bill of ~$530 just for the idle time during image downloads.
This incident led me to redesign the synchronization process. In a first attempt at optimization, I implemented the download of multiple images in parallel using PHP ZTS, which led me to develop the `hds-solutions/parallel-sdk` library [^15]. This version reduced costs to ~$160, but it was still not enough; my goal was to keep costs below what I was originally spending on EC2.
In a second attempt, I moved the image download process to a dedicated EC2 instance. Although this significantly reduced costs to ~$50, the application was now divided into two separate components.
Finally, in the third version of the synchronization process, after talking to the client, I implemented a small script on the billing system's server. Instead of my application downloading the images, this script sent the images directly from the billing system's server to S3 using pre-signed URLs [^16]. This eliminated the costs associated with image downloads, leaving only the cost of S3 storage.
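For reference, here is a minimal sketch of how such a pre-signed upload URL can be generated with `aws/aws-sdk-php` (the bucket name, object key, and expiry are illustrative):

```php
<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Build a PutObject command for the target location of the image.
$command = $s3->getCommand('PutObject', [
    'Bucket' => 'my-store-images',     // illustrative bucket name
    'Key'    => 'products/12345.jpg',  // illustrative object key
]);

// The signed URL lets the billing system's script HTTP PUT the file
// directly to S3, without the image ever passing through Lambda.
$request = $s3->createPresignedRequest($command, '+15 minutes');
echo (string) $request->getUri();
```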
With this, we close another chapter in our journey to the serverless world! Today, we saw how to transform Laravel into a true cloud nomad, adapting it to live without the weight of a traditional server. But this is just the beginning.
In the next article, we'll dive into the heart of integration: we'll see how to transfer session and cache data to DynamoDB, how to use S3 for file management, configure SQS for queuing and use EventBridge to orchestrate scheduled tasks. See you in the next part!