Mongo

MongoDB >= 3.2

`RateLimiterMongo` creates a unique collection for each rate limiter `keyPrefix`. It supports both the native `mongodb` driver and the `mongoose` package.
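For example, limiters with different `keyPrefix` values store their points in separate collections. A minimal sketch (the prefix names and limits are illustrative; `mongoConn` is created as shown in the examples below):

```js
// Two independent limiters, each backed by its own collection
const loginLimiter = new RateLimiterMongo({
  storeClient: mongoConn,
  keyPrefix: 'login', // collection for login attempts
  points: 5,
  duration: 60,
});
const apiLimiter = new RateLimiterMongo({
  storeClient: mongoConn,
  keyPrefix: 'api', // collection for API calls
  points: 100,
  duration: 60,
});
```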
```js
const { RateLimiterMongo } = require('rate-limiter-flexible');
const mongoose = require('mongoose');

const mongoOpts = {
  reconnectTries: Number.MAX_VALUE, // Never stop trying to reconnect
  reconnectInterval: 100, // Reconnect every 100ms
};

mongoose.connect('mongodb://127.0.0.1:27017/' + RateLimiterMongo.getDbName(), mongoOpts)
  .catch((err) => {});
const mongoConn = mongoose.connection;
// Or:
// const mongoConn = mongoose.createConnection('mongodb://127.0.0.1:27017/' + RateLimiterMongo.getDbName(), mongoOpts);

const opts = {
  storeClient: mongoConn,
  points: 10, // Number of points
  duration: 1, // Per second(s)
};

const rateLimiterMongo = new RateLimiterMongo(opts);

rateLimiterMongo.consume(remoteAddress, 2) // consume 2 points
  .then((rateLimiterRes) => {
    // 2 points consumed
  })
  .catch((rateLimiterRes) => {
    // Not enough points to consume
  });
```
```js
/* --- Or with native mongodb package --- */
const { MongoClient } = require('mongodb');

const mongoOpts = {
  useNewUrlParser: true,
  reconnectTries: Number.MAX_VALUE, // Never stop trying to reconnect
  reconnectInterval: 100, // Reconnect every 100ms
};

// MongoClient.connect returns a Promise;
// RateLimiterMongo accepts it directly as storeClient
const mongoConn = MongoClient.connect(
  'mongodb://localhost:27017',
  mongoOpts
);

const opts = {
  storeClient: mongoConn,
  dbName: 'somedb',
  points: 10, // Number of points
  duration: 1, // Per second(s)
};

const rateLimiterMongo = new RateLimiterMongo(opts);

rateLimiterMongo.consume(remoteAddress, 2) // consume 2 points
  .then((rateLimiterRes) => {
    // 2 points consumed
  })
  .catch((rateLimiterRes) => {
    // Not enough points to consume
  });
```
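Either limiter is typically wired up as HTTP middleware. A minimal sketch with Express (the `express` app, `req.ip` as the key and the 429 response are assumptions, not part of the library):

```js
const express = require('express');
const app = express();

app.use((req, res, next) => {
  rateLimiterMongo.consume(req.ip) // consume 1 point per request
    .then(() => next())
    .catch(() => res.status(429).send('Too Many Requests'));
});

app.listen(3000);
```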
Connecting to Mongo takes some milliseconds, so any rate limiter method is rejected with an Error until the connection is established. An `insuranceLimiter` can be set up to avoid these errors; note, however, that changes made on the `insuranceLimiter` are not written back to `RateLimiterMongo` once the connection is established.
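A minimal sketch of that setup, using an in-memory limiter as insurance (the limits here mirror the example above):

```js
const { RateLimiterMongo, RateLimiterMemory } = require('rate-limiter-flexible');

const rateLimiterMongoInsured = new RateLimiterMongo({
  storeClient: mongoConn,
  points: 10,
  duration: 1,
  // Used while the Mongo connection is unavailable; points consumed
  // here are not synced back to Mongo after it reconnects
  insuranceLimiter: new RateLimiterMemory({
    points: 10,
    duration: 1,
  }),
});
```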
Benchmark

The endpoint is a pure Node.js endpoint launched in `node:10.5.0-jessie` and `mongo:3.6.5-jessie` Docker containers with 4 workers. The endpoint is limited by `RateLimiterMongo` with this config:
```js
new RateLimiterMongo({
  mongo: mongo,
  points: 20, // Number of points
  duration: 1, // Per second(s)
});
```
Load is generated by `bombardier -c 1000 -l -d 30s -r 2000 -t 5s http://127.0.0.1:3000`: a test with 1000 concurrent connections at a maximum of 2000 requests per second, for 30 seconds.
```
Statistics        Avg      Stdev        Max
  Reqs/sec      1997.87    429.40    3869.98
  Latency        4.75ms     3.32ms   68.21ms
  Latency Distribution
     50%     4.15ms
     75%     5.43ms
     90%     6.95ms
     95%     8.79ms
     99%    18.96ms
  HTTP codes:
    1xx - 0, 2xx - 15000, 3xx - 0, 4xx - 45014, 5xx - 0
```
MongoDB saves snapshots to disk with fsync and journals by default, which results in extra disk I/O. If you already use MongoDB as a data store and have high traffic, around 1000 req/sec or more, you may find it useful to launch a second MongoDB instance with these options:
- `--syncdelay 0`: disable snapshots to disk
- `--nojournal`: disable the journal
- `--wiredTigerCacheSizeGB 0.25`: set minimum memory
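The rate limiter then points at that dedicated instance instead of the main one. A minimal sketch, assuming the second instance listens on port 27018 (the port and limits are illustrative):

```js
const { MongoClient } = require('mongodb');
const { RateLimiterMongo } = require('rate-limiter-flexible');

// Assumed: a second mongod started with
// --syncdelay 0 --nojournal --wiredTigerCacheSizeGB 0.25 on port 27018
const dedicatedConn = MongoClient.connect('mongodb://127.0.0.1:27018');

const rateLimiter = new RateLimiterMongo({
  storeClient: dedicatedConn,
  points: 20, // Number of points
  duration: 1, // Per second(s)
});
```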
The built-in TTL index automatically deletes expired documents. A document for one key is 68 bytes in size, so MongoDB stores information for about 4 million keys in 256 MB (256 × 1024 × 1024 / 68 ≈ 3.95 million).
Here is a small test of MongoDB with different options. It processes 10k, 100k and 250k writes over 10k random keys, first with default options and then with fsync and journaling disabled:

| Writes | Default options | `--syncdelay 0 --nojournal` |
|-------:|----------------:|----------------------------:|
| 10k    | 926 ms          | 900 ms                      |
| 100k   | 4475 ms         | 4323 ms                     |
| 250k   | 13254 ms        | 12407 ms                    |

Disabling fsync and journaling makes it about 5% faster, and avoiding the extra disk I/O is worth it on a loaded database.
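A rough sketch of such a write test (the original measurement methodology isn't shown here; points and duration are set high so `consume()` never rejects):

```js
const { MongoClient } = require('mongodb');
const { RateLimiterMongo } = require('rate-limiter-flexible');

async function run(totalWrites) {
  const client = await MongoClient.connect('mongodb://127.0.0.1:27017');
  const limiter = new RateLimiterMongo({
    storeClient: client,
    points: Number.MAX_SAFE_INTEGER, // never reject, we only measure writes
    duration: 3600,
  });

  const start = Date.now();
  const jobs = [];
  for (let i = 0; i < totalWrites; i += 1) {
    // 10k random keys, as in the test above
    jobs.push(limiter.consume('key' + Math.floor(Math.random() * 10000)));
  }
  await Promise.all(jobs);
  console.log(totalWrites + ' writes in ' + (Date.now() - start) + 'ms');

  await client.close();
}

run(10000).catch(console.error);
```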