This server-side application is a solution to Peakon's webhook exercise, which is no longer in use. The app is built with the Nest framework using TypeScript. Data is not persisted, as this is a proof of concept and not meant for actual production use.
The application listens for HTTP requests on port 9876.
There are two endpoints:
POST /webhooks: Create a webhook

```json
{
  "url": "a valid url",
  "token": "a string"
}
```

POST /webhooks/test: Trigger all the webhooks

```json
{
  "payload": ["any", { "valid": "JSON" }]
}
```

The app is hosted here: webhook-server-pipsen.herokuapp.com
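For illustration, the two endpoints can be exercised from a small client script. This is only a sketch: the base URL, helper names, and use of the global `fetch` (Node 18+) are my assumptions, not part of the app itself.

```typescript
// Base URL is an assumption for local development (the app listens on 9876).
const BASE_URL = "http://localhost:9876";

// Body for POST /webhooks: registers a webhook receiver.
function createWebhookBody(url: string, token: string): string {
  return JSON.stringify({ url, token });
}

// Body for POST /webhooks/test: fans the payload out to every registered webhook.
function triggerWebhooksBody(payload: unknown[]): string {
  return JSON.stringify({ payload });
}

// Example usage (network calls commented out so the sketch stays self-contained):
// await fetch(`${BASE_URL}/webhooks`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: createWebhookBody("https://example.com/hook", "secret-token"),
// });
// await fetch(`${BASE_URL}/webhooks/test`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: triggerWebhooksBody(["any", { valid: "JSON" }]),
// });

console.log(createWebhookBody("https://example.com/hook", "secret-token"));
```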
Nest provides an out-of-the-box application architecture, which makes it easy to create highly testable, scalable, loosely coupled, and easily maintainable applications.
Personally, this is the first project in which I have tried the framework; I usually build server-side applications with Express. However, for this exercise I decided to have some fun and try out a couple of new things (TypeScript included).
In retrospect, I enjoyed the framework a lot. It gives the developer a lot of flexibility and has easy-to-implement solutions for things such as testing, validation, and events.
I am not sure I fully agree with the modular architecture; it quickly results in many files with very different purposes in the same directory. Luckily, Nest gives developers the freedom to structure a project however they want. For this exercise, I found it important to follow the standards defined by the framework. Next time, I might deviate a little if it benefits the project.
When the /webhooks/test endpoint is triggered, all the created webhooks are added to the webhooks queue.
This queue exists in Redis and is implemented with Bull, which is a popular, well supported, high performance queue system.
A dedicated processor picks up jobs from the queue and executes them. This is done with Nest's HTTP module, which is essentially a wrapper around Axios.
If a webhook fails and cannot be successfully delivered, it is retried up to 9 more times. We use a custom backoff strategy that exponentially increases the delay between retries: the 1st retry happens after 20 seconds, the 2nd after 40 seconds, the 3rd after 80 seconds, and so on.
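As a sketch, the delay schedule described above boils down to a single function. The function name is mine; Bull lets you register such a function as a custom backoff strategy via the queue's `settings.backoffStrategies` option.

```typescript
// Exponential backoff: 20s, 40s, 80s, ... doubling on every retry.
// attemptsMade is 1 for the first retry, matching the shape of Bull's
// custom backoff-strategy callback, which returns a delay in milliseconds.
function webhookBackoff(attemptsMade: number): number {
  return 20_000 * 2 ** (attemptsMade - 1);
}

// With 9 retries the full schedule (in seconds) looks like this:
const scheduleSeconds = Array.from({ length: 9 }, (_, i) => webhookBackoff(i + 1) / 1000);
console.log(scheduleSeconds); // [20, 40, 80, 160, 320, 640, 1280, 2560, 5120]
```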
There are many benefits to handling the webhooks in a queue. It smooths out processing peaks, and it is highly scalable because multiple queue processors can be added. Imagine a scenario where we need to send thousands of webhooks in a matter of seconds: we simply add more servers running queue processors.
- Node.js (>= 10.13.0, except for v13)
- Redis
```bash
$ npm install
```

```bash
# development
$ npm run start

# watch mode
$ npm run start:dev

# production mode
$ npm run start:prod
```

The application has automated tests in the shape of unit and e2e (end-to-end) tests. These run through Jest and Supertest.
Tests can be run with the following commands:
```bash
# unit tests
$ npm run test

# e2e tests
$ npm run test:e2e

# test coverage
$ npm run test:cov
```

This conversation is like opening a can of worms. Ever heard the phrase "Good code is self-documenting"? Well, there is certainly some truth to that. Good naming conventions are essential to writing clean, easy-to-understand code. The scope of this project is so small that I decided writing comments was of little use. Take these snippets as examples:
```typescript
@OnQueueFailed()
onFailed(job: Job, error: Error) {
  this.logger.log(`Job ${job.name}:${job.id} failed: ${error}`);
}
```

```typescript
@IsUrl()
url: string;

@IsString()
@IsNotEmpty()
token: string;
```
Pretty easy to understand what is happening, right? That is not to say that I never write comments. I just prefer to write them when they will actually benefit the reader, instead of being just another thing the developer has to maintain.