A few things about the Serverless Framework

Serverless (serverless computing) has been a hot topic in recent years, and it keeps evolving quickly. The big cloud providers (Amazon, Google, Microsoft) consistently ship serverless features in their own ways.

The main benefits of serverless are no server management, flexible scaling, and high availability. To learn briefly about serverless, follow this link.

FaaS is one of the components of serverless. It is where the actual code lives and where all the other components interact. AWS Lambda is a well-known service providing a FaaS platform. True to its slogan, pay for what you use, it costs you only when a function is triggered, whether by an HTTP endpoint or by another service. One advantage of AWS Lambda I found while writing code: to access AWS resources you don't have to include and install the AWS SDK package in package.json (in the case of the Node.js runtime). AWS makes it available in the runtime automatically, so you only have to write the require statement to use it.

If you are developing a full-fledged serverless application, deploying it to AWS by hand is a cumbersome task (in this post I will focus mainly on development on AWS).

Welcome to the Serverless Framework

It is a tool to create, deploy, and manage resources in AWS (it supports configuration for other cloud providers as well). You specify the resource configuration in YAML files: IAM roles for specific functions, DynamoDB tables, API Gateway, SQS, and so on.
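As a rough sketch of what that configuration looks like (service, table, and handler names here are all hypothetical):

```yaml
# Minimal serverless.yml sketch -- names and runtime version are illustrative
service: my-sample-service

provider:
  name: aws
  runtime: nodejs14.x
  region: us-east-1
  stage: dev

functions:
  hello:
    handler: handler.hello    # exports.hello in handler.js
    events:
      - http:
          path: hello
          method: get

resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: users-${self:provider.stage}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```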

It quickly constructs the CloudFormation stack and deploys serverless applications to AWS.

It builds packages with a single command, and, just as nicely, the entire stack can be brought down with a single command as well.

Here is a nice article about trying out the Serverless Framework.

Things I used in my application that enhanced my code

Plugins:

Plugins add custom functionality to your serverless application.

Some of the ones I used:

- serverless-dotenv-plugin:

This plugin loads environment variables into a serverless application. It provides many features out of the box: for example, you can mark env variables as required, and exclude variables that are only used in local development. Here is the npm package link.

However, recent versions of the Serverless Framework provide the useDotenv keyword to load env variables, so the plugin is no longer required for basic environment-variable features. You can refer to this article link for more information.
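With a recent framework version, the built-in loading looks roughly like this (the DB_TABLE variable name is hypothetical):

```yaml
# serverless.yml -- built-in .env loading, no plugin needed
useDotenv: true

provider:
  name: aws
  environment:
    # value is read from .env / .env.{stage} files at packaging time
    DB_TABLE: ${env:DB_TABLE}
```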

- serverless-iam-roles-per-function

It is recommended to define a role at the function level for security purposes (least privilege). You can define as many permission statements as the function needs to run, and the plugin provides an option (iamRoleStatementsName) to give the role a custom name. Here you can find info about the plugin.
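A sketch of a per-function role with this plugin (the function, table, and role names are hypothetical):

```yaml
plugins:
  - serverless-iam-roles-per-function

functions:
  createOrder:                 # hypothetical function
    handler: orders.create
    # custom role name via the plugin's iamRoleStatementsName option
    iamRoleStatementsName: create-order-role-${self:provider.stage}
    iamRoleStatements:
      - Effect: Allow
        Action:
          - dynamodb:PutItem
        Resource: arn:aws:dynamodb:${self:provider.region}:*:table/orders-${self:provider.stage}
```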

- Using pseudo parameters

To use pseudo parameters (like #{AWS::AccountId}) in resource parameters (like ARNs), the Serverless Framework provides built-in support (in recent versions; older versions needed the serverless-pseudo-parameters plugin), and the values are replaced during deployment.

Condition-based custom variables:

Suppose you have a variable (defined in the YAML file) that needs to be built based on a condition on the stage (like dev, stage, prod). In my case I had to invoke the Lambda function in local development to test it, but pseudo parameters used to construct an ARN are only resolved at deployment.
So I used a condition to resolve the value locally, based on the stage:

custom:
  CLIENT_AC_ID: ${self:custom.CLIENT_AC_ID_LOGIC.${self:provider.stage}, self:custom.CLIENT_AC_ID_LOGIC.other}
  CLIENT_AC_ID_LOGIC:
    dev-user: "21289893432"
    other: "#{AWS::AccountId}"
  STAGE1_ARN: arn:aws:states:${self:provider.region}:${self:custom.CLIENT_AC_ID}:stateMachine:stage1pipeline-datasync-${self:provider.stage}

Here CLIENT_AC_ID is evaluated based on the stage parameter in the condition. If you want more advanced conditional features in YAML, you can go for this plugin. Condition-based functions are another feature you can use for custom requirements.

Serverless Code Linting:

Linting is the automatic analysis of code to check for programmatic and stylistic errors.

The linting configuration is specified in the .eslintrc.json file:

{
  "extends": "@serverless/eslint-config/node",
  "parserOptions": {
    "ecmaVersion": 2020
  },
  "env": {
    "node": true
  },
  "rules": {
    "no-console": "off",
    "no-unused-vars": ["error", { "argsIgnorePattern": "^_" }],
    "strict": "off"
  }
}

You can also enforce linting in a pre-commit hook, so a commit only goes through when the code is free from errors, by specifying this config in package.json:

"lint-staged": {
"*.js": [
"npm run lint",
"prettier --write"
],
"*.json": [
"prettier --write",
"git add"
]
},
"husky": {
"hooks": {
"pre-commit": "lint-staged"
}
},

For this to work you need husky and lint-staged installed as dev dependencies.

What if the application keeps growing?

How do we maintain the folder structure and best practices? If you keep adding functions to a single YAML file, it becomes very difficult to read. On top of that, a CloudFormation stack was limited to 200 resources (the limit has since been raised to 500). The Serverless Framework has a nice article explaining the issue and how to tackle it. My approach is to break the single serverless.yml config into separate ones, based on functionality.

For that I have created a sample example project in this repo.

As stated in the repository, it exists just to show the folder structure. I took the example of a multi-tenant e-commerce application; even though it is not code-complete, the repo will give you an idea of how such an application can be managed. From a deployment perspective, you can configure your CI/CD pipeline to deploy each individual stack separately.

I will explain a bit about it here:

The serverless.common.yml file declares the common routes and the common resources required by the entire application. All the functions declared in that file are defined in the common folder.

Each route can have a schema definition to validate the request payload; these schemas live in the schema folder as JSON files. DynamoDB helpers and other project-related helper functions go in the helper folder. IAM role statements for DynamoDB, or SQS resource definitions, can be written in separate YAML files and imported into any other stack (serverless.yml file). The services folder stores the different service stacks. For example, in an e-commerce application the services would be order management, inventory management, etc., and each service has its own serverless.yml file representing a different stack.
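A sketch of one such service stack importing shared resource files (all paths and names here are hypothetical, not taken from the repo):

```yaml
# services/orders/serverless.yml -- one stack per service
service: ecommerce-orders

provider:
  name: aws
  runtime: nodejs14.x

functions:
  createOrder:
    handler: handler.create
    events:
      - http:
          path: orders
          method: post

resources:
  # shared resource definitions kept in separate files, imported per stack
  - ${file(../../resources/dynamodb.yml)}
  - ${file(../../resources/sqs.yml)}
```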

There is another option

Serverless also provides a way to deploy an Express.js application as a serverless REST API. You need an extra package, serverless-http, along with Express.

Only one function route needs to be declared in the serverless.yml file.

Any IAM role statements (permissions) for DynamoDB or SQS can be attached to this single function. The serverless-http package is not tied to the Express framework; it supports many other packages as well. The complete article can be found here. You can even turn an existing Express application into a serverless REST API the same way, using the package mentioned above, as described in the same article.
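The single catch-all function might be declared roughly like this (handler and table names are hypothetical; the function-level iamRoleStatements assume the serverless-iam-roles-per-function plugin mentioned earlier):

```yaml
functions:
  app:
    handler: app.handler       # e.g. module.exports.handler = serverlessHttp(expressApp)
    events:
      - http: ANY /
      - http: ANY /{proxy+}    # routes everything to the Express app
    iamRoleStatements:         # requires serverless-iam-roles-per-function
      - Effect: Allow
        Action:
          - dynamodb:Query
        Resource: arn:aws:dynamodb:${self:provider.region}:*:table/orders-${self:provider.stage}
```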

We are a business accelerator working with startups and entrepreneurs to build products and launch companies.
