
This is the second blog post in our series on deploying NGINX Open Source and NGINX Plus as an API gateway.

Note: Except as noted, all information in this post applies to both NGINX Open Source and NGINX Plus. For ease of reading, the rest of the blog refers simply to “NGINX”.

Rate Limiting

Unlike browser‑based clients, individual API clients can place huge loads on your APIs, even to the extent of consuming so much system resource that other API clients are effectively locked out. It's not only malicious clients that pose this threat: a misbehaving or buggy API client might enter a loop that overwhelms the backend. To protect against this, we apply a rate limit to ensure fair use by each client and to protect the resources of the backend services.

NGINX can apply rate limits based on any attribute of the request. The client IP address is typically used, but when authentication is enabled for the API, the authenticated client ID is a more reliable and accurate attribute.

Rate limits themselves are defined in the top‑level API gateway configuration file and can then be applied globally, on a per‑API basis, or even per URI.
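
The complete configuration files are available in our GitHub Gist repo; a minimal sketch of two such definitions, using illustrative zone names and shared‑memory sizes, looks like this:

limit_req_zone $binary_remote_addr zone=client_ip_10rs:1m rate=10r/s;
limit_req_zone $http_apikey        zone=apikey_200rs:1m   rate=200r/s;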

In this example, the first limit_req_zone directive defines a rate limit of 10 requests per second for each client IP address ($binary_remote_addr), and the second defines a limit of 200 requests per second for each authenticated client ID ($http_apikey). This illustrates how we can define multiple rate limits independently of where they are applied. An API may apply multiple rate limits at the same time, or apply different rate limits for different resources.

Then in the following configuration snippet we use the limit_req directive to apply the first rate limit in the policy section of the “Warehouse API” described in Part 1. By default, NGINX sends the 503 (Service Unavailable) response when the rate limit has been exceeded. However, it is helpful for API clients to know explicitly that they have exceeded their rate limit, so that they can modify their behavior. To this end we use the limit_req_status directive to send the 429 (Too Many Requests) response instead.
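
A sketch of that policy section, reusing the zone name from the sketch above:

location /api/warehouse/ {
    # Apply the per-client-IP rate limit defined in the top-level configuration
    limit_req zone=client_ip_10rs;
    # Report 429 (Too Many Requests) instead of the default 503
    limit_req_status 429;
}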

You can use additional parameters to the limit_req directive to fine‑tune how NGINX enforces rate limits. For example, it is possible to queue requests instead of rejecting them outright when the limit is exceeded, allowing time for the rate of requests to fall under the defined limit. For more information about fine‑tuning rate limits, see Rate Limiting with NGINX and NGINX Plus on our blog.
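
As a brief illustration (the queue depth of 10 here is arbitrary), the burst parameter queues excess requests rather than rejecting them immediately:

location /api/warehouse/ {
    # Queue up to 10 excess requests; reject only beyond that depth
    limit_req zone=client_ip_10rs burst=10;
    limit_req_status 429;
}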

Enforcing Specific Request Methods

With RESTful APIs, the HTTP method (or verb) is an important part of each API call and very significant to the API definition. Take the pricing service of our Warehouse API as an example:

  • GET /api/warehouse/pricing/item001    returns the price of item001
  • PATCH /api/warehouse/pricing/item001  changes the price of item001

We can update the URI‑routing definitions in the Warehouse API to accept only these two HTTP methods in requests to the pricing service (and only the GET method in requests to the inventory service).
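
A sketch of those routing definitions, assuming illustrative upstream group names and a named location @405 (such as the JSON error responses defined in Part 1) that rewrites the 403 produced by deny all:

location /api/warehouse/inventory {
    limit_except GET {  # Allowing GET also allows HEAD
        deny all;
    }
    error_page 403 = @405;  # Report a denied method as 405, not 403
    proxy_pass http://warehouse_inventory;
}

location /api/warehouse/pricing {
    limit_except GET PATCH {
        deny all;
    }
    error_page 403 = @405;
    proxy_pass http://warehouse_pricing;
}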

With this configuration in place, requests to the pricing service that use methods other than GET and PATCH (and requests to the inventory service that use methods other than GET) are rejected and are not passed to the backend services. NGINX sends the 405 (Method Not Allowed) response to inform the API client of the precise nature of the error, as shown in the following console trace. Where a minimum‑disclosure security policy is required, the error_page directive can be used to convert this response into a less informative error instead, for example 400 (Bad Request).

$ curl https://api.example.com/api/warehouse/pricing/item001
{"sku":"item001","price":179.99}
$ curl -X DELETE https://api.example.com/api/warehouse/pricing/item001
{"status":405,"message":"Method not allowed"}

Applying Fine-Grained Access Control

Part 1 in this series described how to protect APIs from unauthorized access by enabling authentication options such as API keys and JSON Web Tokens (JWTs). We can use the authenticated ID, or attributes of the authenticated ID, to perform fine‑grained access control.

Here we show two such examples.

Of course, other authentication methods are applicable to these sample use cases, such as HTTP Basic authentication and OAuth 2.0 token introspection.

Controlling Access to Specific Resources

Let’s say we want to allow only “infrastructure clients” to access the audit resource of the Warehouse API inventory service. With API key authentication enabled, we use a map block to create an allowlist of infrastructure client names so that the variable $is_infrastructure evaluates to 1 when a corresponding API key is used.
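
A sketch of such an allowlist, assuming the $api_client_name variable set by the API key map from Part 1, with illustrative client names:

map $api_client_name $is_infrastructure {
    default      0;  # Clients are not infrastructure unless listed below

    "client_one" 1;
    "client_six" 1;
}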

In the definition of the Warehouse API, we add a location block for the inventory audit resource. The if block ensures that only infrastructure clients can access the resource.
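
A sketch of that block (the upstream group name is illustrative):

location = /api/warehouse/inventory/audit {
    if ($is_infrastructure = 0) {
        return 403; # Forbidden: client is not on the infrastructure allowlist
    }
    proxy_pass http://warehouse_inventory;
}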

Note that the location directive uses the = (equals sign) modifier to make an exact match on the audit resource. Exact matches take precedence over the default path‑prefix definitions used for the other resources. The following trace shows how with this configuration in place a client that isn’t on the allowlist is unable to access the inventory audit resource. The API key shown belongs to client_two (as defined in Part 1).

$ curl -H "apikey: QzVV6y1EmQFbbxOfRCwyJs35" https://api.example.com/api/warehouse/inventory/audit
{"status":403,"message":"Forbidden"}

Controlling Access to Specific Methods

As defined above, the pricing service accepts the GET and PATCH methods, which respectively enable clients to obtain and modify the price of a specific item. (We could also choose to allow the POST and DELETE methods, to provide full lifecycle management of pricing data.) In this section, we expand that use case to control which methods specific users can issue. With JWT authentication enabled for the Warehouse API, the permissions for each client are encoded as custom claims. The JWTs issued to administrators who are authorized to make changes to pricing data include the claim "admin":true. We now extend our access control logic so that only administrators can make changes.
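
A sketch, relying on the $jwt_claim_admin variable with which NGINX Plus exposes the admin claim of a validated JWT:

map $request_method $admin_permitted_method {
    "GET"     1;   # Read-only methods are always permitted
    "HEAD"    1;
    "OPTIONS" 1;
    default   $jwt_claim_admin;  # Write methods require "admin":true
}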

This map block, added to the bottom of api_gateway.conf, takes the request method ($request_method) as input and produces a new variable, $admin_permitted_method. Read‑only methods are always permitted, but access to write operations depends on the value of the admin claim in the JWT. We now extend our Warehouse API configuration to ensure that only administrators can make pricing changes.
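
A sketch of the extended configuration, nesting the pricing route inside the policy section so that it inherits the JWT requirement (the key file path and upstream name are illustrative):

location /api/warehouse/ {
    # Policy section: all clients must present a valid JWT
    auth_jwt "Warehouse API";
    auth_jwt_key_file /etc/nginx/idp_jwk.json;

    # URI routing
    location /api/warehouse/pricing {
        limit_except GET PATCH {
            deny all;
        }
        if ($admin_permitted_method != 1) {
            return 403; # Write operations require the admin claim
        }
        proxy_pass http://warehouse_pricing;
    }
}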

The Warehouse API requires all clients to present a valid JWT. We also check that write operations are permitted by evaluating the $admin_permitted_method variable. Note again that JWT authentication is exclusive to NGINX Plus.

Controlling Request Sizes

HTTP APIs commonly use the request body to contain instructions and data for the backend API service to process. This is true of XML/SOAP APIs as well as JSON/REST APIs. Consequently, the request body can pose an attack vector to the backend API services, which may be vulnerable to buffer overflow attacks when processing very large request bodies.

By default, NGINX rejects requests with bodies larger than 1 MB. This can be increased for APIs that specifically deal with large payloads such as image processing, but for most APIs we set a lower value.
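
A sketch of the policy, with an illustrative 16 KB limit:

location /api/warehouse/ {
    client_max_body_size 16k;  # Default is 1m; larger bodies are rejected with 413
}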

The client_max_body_size directive limits the size of the request body. With this configuration in place, we can compare the behavior of the API gateway upon receiving two different PATCH requests to the pricing service. The first curl command sends a small piece of JSON data, whereas the second command attempts to send the contents of a large file (/etc/services).

$ curl -iX PATCH -d '{"price":199.99}' https://api.example.com/api/warehouse/pricing/item001
HTTP/1.1 204 No Content
Server: nginx/1.19.5
Connection: keep-alive

$ curl -iX PATCH -d@/etc/services https://api.example.com/api/warehouse/pricing/item001
HTTP/1.1 413 Request Entity Too Large
Server: nginx/1.19.5
Content-Type: application/json
Content-Length: 45
Connection: close

{"status":413,"message":"Payload too large"}

Validating Request Bodies

[Editor – The following use case is one of several for the NGINX JavaScript module. For a complete list, see Use Cases for the NGINX JavaScript Module.]

In addition to being vulnerable to buffer overflow attacks with large request bodies, backend API services can be susceptible to bodies that contain invalid or unexpected data. For applications that require correctly formatted JSON in the request body, we can use the NGINX JavaScript module to verify that JSON data is parsed without error before proxying it to the backend API service.

With the JavaScript module installed, we use the js_import directive to reference the file containing the JavaScript code for the function that validates JSON data.
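
A sketch of the relevant directives in the http context, assuming the code lives in a file named json_validation.js:

js_import json_validation.js;
js_set $json_validated json_validation.parseRequestBody;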

The js_set directive defines a new variable, $json_validated, which is evaluated by calling the parseRequestBody function.
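
A sketch of that function, assuming each route sets an $upstream variable naming its intended upstream group (as in the routing sketch below):

export default { parseRequestBody };

function parseRequestBody(r) {
    try {
        if (r.variables.request_body) {
            JSON.parse(r.variables.request_body); // Throws if the body is not valid JSON
        }
        return r.variables.upstream;  // Route to the intended upstream group
    } catch (e) {
        r.error('JSON.parse exception');
        return '127.0.0.1:10415';     // Address of the error-response virtual server
    }
}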

The parseRequestBody function attempts to parse the request body using the JSON.parse method. If parsing succeeds, the name of the intended upstream group for this request is returned. If the request body cannot be parsed (causing an exception), a local server address is returned. The function’s return value populates the $json_validated variable, which we can then use to determine where to send the request.

In the URI routing section of the Warehouse API, we modify the proxy_pass directive. It passes the request to the backend API service as in the Warehouse API configurations discussed in previous sections, but now uses the $json_validated variable as the destination address. If the client body was successfully parsed as JSON, then we proxy to the intended upstream group. If, however, there was an exception, we use the returned value of 127.0.0.1:10415 to send an error response to the client.
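
A sketch of the modified routing section, with illustrative names:

location /api/warehouse/pricing {
    set $upstream warehouse_pricing;  # Intended upstream group for valid requests
    mirror /_get_request_body;        # Force an early read of the request body
    client_body_in_single_buffer on;  # Keep the body in a single memory buffer
    client_body_buffer_size 16k;      # Match client_max_body_size

    proxy_pass http://$json_validated$request_uri;
}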

Requests that fail validation are proxied to a special‑purpose virtual server listening on that address, which sends the 415 (Unsupported Media Type) response to the client. A minimal sketch:
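
server {
    listen 127.0.0.1:10415; # Destination returned by parseRequestBody on error
    return 415;             # Unsupported Media Type
}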

With this complete configuration in place, NGINX proxies requests to the backend API service only if they have correctly formatted JSON bodies.

$ curl -iX POST -d '{"sku":"item002","price":85.00}' https://api.example.com/api/warehouse/pricing
HTTP/1.1 201 Created
Server: nginx/1.19.5
Location: /api/warehouse/pricing/item002

$ curl -X POST -d 'item002=85.00' https://api.example.com/api/warehouse/pricing
{"status":415,"message":"Unsupported media type"}

A Note about the $request_body Variable

The JavaScript function parseRequestBody uses the $request_body variable to perform JSON parsing. However, NGINX does not populate this variable by default, and simply streams the request body to the backend without making intermediate copies. By using the mirror directive inside the URI routing section we create a copy of the client request, and consequently populate the $request_body variable.

The client_body_in_single_buffer and client_body_buffer_size directives control how NGINX handles the request body internally. We set client_body_buffer_size to the same size as client_max_body_size so that the request body is not written to disk. This improves overall performance by minimizing disk I/O operations, but at the expense of additional memory utilization. For most API gateway use cases with small request bodies this is a good compromise.

As mentioned, the mirror directive creates a copy of the client request. Other than populating $request_body, we have no need for this copy so we send it to a “dead end” location (/_get_request_body) that we define in the server block in the top‑level API gateway configuration.
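
A minimal sketch:

location /_get_request_body {
    return 204; # Absorb the mirrored copy of the request
}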

This location does nothing more than send the 204 (No Content) response. Because this response is related to a mirrored request, it is ignored and so adds negligible overhead to the processing of the original client request.

Summary

In this second blog post of our series about deploying NGINX Open Source and NGINX Plus as an API gateway, we focused on the challenge of protecting backend API services in a production environment from malicious and misbehaving clients. NGINX uses the same technology for managing API traffic that is used to power and protect the busiest sites on the Internet today.

Check out the other posts in this series:

  • Part 1 explains how to configure NGINX in some essential API gateway use cases.
  • Part 3 explains how to deploy NGINX as an API gateway for gRPC services.

To try NGINX Plus as an API gateway, start your free 30-day trial today or contact us to discuss your use cases. During your trial, use the complete set of configuration files from our GitHub Gist repo.




About The Author

Liam Crilly

Sr Director, Product Management
