Serverless Ninja: Reporting Errors via ChatOps using AWS Lambda & NodeJS

This article explains how to build an error reporting module using ChatOps and how it helps organizations react faster to issues arising from applications running inside AWS Lambda functions. In this demo, I will be using NodeJS and Slack to deliver alarms to important stakeholders:

  • Application Developers
  • Software Managers
  • Software Testers
  • Security Response Team
  • Whoever needs to be awake when something is broken

Benefits

Before we start coding, I think it's necessary to understand the good things that ChatOps brings to the table, so that we can explain to our software development teams why the effort of building these alarm mechanisms is worth it.

Early Detection of Software Issues

Catching bugs in the development environment is the best outcome a developer can hope for after making a mistake in the code. It allows you to peacefully google the application error and find the right way to fix it.

Rapid Response to Production Issues

Raising alarms in messaging apps helps software engineering teams gain real-time visibility of issues that reach production. This benefits the organization by reducing the downtime and business impact of application issues.

It Saves You from Silent Failures

Without ChatOps, background workers fail silently, without having the decency to inform you that something went wrong in production. I find these background-worker failures the nastiest of all, as they can easily put your job security on shaky ground. Installing proper error handling and alarm mechanisms saves you from delivering broken work to production.

Stimulates Collaboration and Learning

Raising issues in a channel where an entire team actively listens benefits organizations in a subtle way: it raises awareness, which trains your development team on the following topics:

  • Why the issue was raised
  • Where the issue can be fixed
  • How to actually fix the issue
  • How to prevent it from happening again

Setting up a Slack Hook URL

For the implementation to fully function, you will need to set up a Slack Incoming Webhook URL.
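Once Slack has generated the webhook URL for you, it's worth sending a one-off test message before wiring it into Lambda. Here is a minimal sketch, assuming Node 18+ (for global `fetch`) and a placeholder webhook URL:

```javascript
// Placeholder -- substitute the webhook URL Slack generated for your channel.
const WEBHOOK_URL = 'https://hooks.slack.com/services/T000/B000/XXXX';

// Incoming webhooks accept a JSON body with a "text" field.
const payload = (text) => JSON.stringify({ text });

// POST a test message; Slack responds with HTTP 200 on success.
async function verifyWebhook(url) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: payload(':white_check_mark: webhook is alive'),
  });
  return res.status;
}

// verifyWebhook(WEBHOOK_URL).then(console.log);
```

If the call fails, double-check that the Incoming Webhooks app is enabled for your workspace and pointed at the right channel.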

Our Solution Architecture

The diagram above explains our desired solution. To break it down into digestible pieces:

  • We are accepting HTTP requests to pull ninja and weapon objects
  • We are using an API Gateway to handle public traffic and offload it to the appropriate lambda-based API.
  • We are using a Lambda layer to share common libraries and functional code between lambda APIs
  • We are intentionally not provisioning DynamoDB tables to simulate a connectivity issue.
  • We are going to gracefully handle errors and build markdown messages based on their metadata
  • We then send the markdown-based message to the Slack Hook URLs
  • Stakeholders then react to the notifications

NodeJS Implementation Pre-requisites

You will need the following resources to work on this project.

  • AWS CLI
  • AWS SAM CLI
  • AWS Account
  • Shell script execution environment (Linux, macOS, WSL)
  • Named profile for your AWS CLI
  • S3 Bucket for SAM artifact storage
  • NodeJS
  • Slack Channel with Incoming Webhooks
  • Clone this GitHub repository

Deploying the Application

  1. If you are a Linux or Mac user, you need to grant execute permission to the shell scripts inside the root of this folder. If you are a Windows user, go ahead and skip this step.

  2. You will need to run the 001_install_dependencies.sh script to install node modules for the base layer and the API folders.

  3. You will have to copy the base-layer\nodejs\slack.config.sample.json file to base-layer\nodejs\slack.config.json and provide your own Slack webhook URLs and desired channel name. Without this configuration, you will get our dreaded silent-failure issue, which can only be diagnosed via CloudWatch logs. Check the code snippet below, which shows the contents of the file.
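The original snippet isn't reproduced here, but based on the description above (webhook URLs plus a channel name), the file plausibly looks something like the following — the exact key names are assumptions, so follow whatever keys appear in slack.config.sample.json:

```json
{
  "webhookUrls": [
    "https://hooks.slack.com/services/T000/B000/XXXX"
  ],
  "channel": "#dev-ninja-alarms"
}
```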

  4. After running the shell script, you will have to release the APIs' CloudFormation stack by executing the 002_release_apis.sh script. To verify that this step worked, visit your AWS account's CloudFormation portal, where you should see a CloudFormation stack named dev-ninja-alarms. It will contain the AWS resources listed below:

Testing the Alarms

To ease testing, you can navigate to the API Gateway section of your AWS console, find the gateway named dev-ninja-alarms, and view its resource list. Trigger any endpoint using the test command visible below:
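You can also trigger the endpoints from outside the console. A sketch follows, assuming Node 18+ (global `fetch`); the API id, region, and resource names (`ninjas`, `weapons`) are placeholders you should replace with the values from your own deployment:

```javascript
// Build the public URL that API Gateway exposes for a deployed stage.
const endpointFor = (apiId, region, stage, resource) =>
  `https://${apiId}.execute-api.${region}.amazonaws.com/${stage}/${resource}`;

// Hit an endpoint and log the HTTP status. While the DynamoDB tables are
// intentionally missing, the Lambda should fail and raise the Slack alarm.
async function trigger(url) {
  const res = await fetch(url);
  console.log(url, '->', res.status);
}

// trigger(endpointFor('abc123', 'ap-southeast-1', 'dev', 'ninjas'));
```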

View the Generated Alarms

You should get the following error message on the Slack channel that you linked with the incoming webhook URL.

How Did it Work?

  1. We created an awesome module named `base-layer\nodejs\slack-alarm.js` inside the shared lambda layer folder with the following code:

  2. Last, we configured both Lambda-based APIs to catch any form of exception that can arise from the Lambda's execution scope.
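The handler code isn't shown, but the catch-all pattern can be sketched generically: wrap the real handler so any thrown exception triggers the alarm function (whatever posts to Slack in your shared layer) and still returns a clean HTTP 500 to the caller. The `withAlarm` name is my own, not the repository's:

```javascript
// Wrap a Lambda handler so uncaught exceptions raise an alarm instead of
// failing silently. `alarm` is any async function(error, functionName).
function withAlarm(handler, alarm) {
  return async (event, context) => {
    try {
      return await handler(event, context);
    } catch (err) {
      // Notify the Slack channel, then return a graceful 500 response.
      await alarm(err, context && context.functionName);
      return {
        statusCode: 500,
        body: JSON.stringify({ message: 'Internal server error' }),
      };
    }
  };
}

module.exports = { withAlarm };
```

Usage would look something like `exports.handler = withAlarm(getNinjas, slackAlarm)`, where both names stand in for your own handler and alarm functions.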

Decommissioning the CloudFormation Stack

I know that Lambda is super cheap; you can even leave your APIs running and they won't cost you anything significant. However, to keep your personal AWS account from getting cluttered, I do recommend decommissioning the CloudFormation stack for this sample using the 003_decommission_apis.sh script.

I'm writing a book about serverless!

If you are interested in learning more about serverless architectures, you can visit my GitHub repository and read about topics that I haven't published on this blog yet. You can also find interesting sample code and projects that can help you get started with serverless architectures.

Relevant Links


