
Over its relatively short history, the programming language Rust has garnered exceptional accolades along with a rich and mature ecosystem. Both Rust and Cargo (its build system, toolchain interface, and package manager) are among the most admired and desired technologies in the landscape, with Rust holding a stable position in the top 20 of RedMonk’s programming language rankings. Furthermore, projects that adopt Rust often see fewer stability and security-related programming errors (Android developers, for example, tell a compelling story of marked improvement).

F5 has been watching these developments around Rust and its community of Rustaceans with excitement for some time. We’ve taken notice, and we actively advocate for the language, its toolchain, and its continued adoption.

At NGINX, we’re now putting some skin in the game to satisfy developer wants and needs in an increasingly digital and security-conscious world. We’re excited to announce the ngx-rust project – a new way to write NGINX modules with the Rust language. Rustaceans, this one’s for you!

A Quick History of NGINX and Rust

Close followers of NGINX and our GitHub might realize this isn’t our first incarnation of Rust-based modules. In the initial years of Kubernetes and the early days of service mesh, we did some work with Rust that laid the groundwork for the ngx-rust project.

Originally, ngx-rust acted as a way to accelerate the development of an Istio-compatible service mesh product with NGINX. After development of the initial prototype, this project was left unchanged for many years. During that time, many community members forked the repository or created projects inspired by the original Rust bindings examples provided in ngx-rust.

Fast forward and our F5 Distributed Cloud Bot Defense team needed to integrate NGINX proxies into its protection services. This required building a new module.

We also wanted to keep expanding our Rust portfolio while improving the developer experience and satisfying customers’ evolving needs. So, we leveraged our internal innovation sponsorships and worked with the original ngx-rust author to develop a new and improved Rust bindings project. After a long hiatus, we restarted the publishing of ngx-rust crates with enhanced documentation and improvements to build ergonomics for community use.

What Does This Mean for NGINX?

Modules are the core building blocks of NGINX, implementing most of its functionality. Modules are also the most powerful way NGINX users can customize that functionality and build support for specific use cases.

NGINX has traditionally only supported modules written in C (as a project written in C, supporting module bindings in the host language was a clear and easy choice). However, advancements in computer science and programming language theory have improved on past paradigms, especially with respect to memory safety and correctness. This has paved the way for languages like Rust, which can now be made available for NGINX module development.

How to Get Started with ngx-rust

Now with some of the history of NGINX and Rust covered, let’s start building a module. You’re free to build from source and develop your module locally, pull ngx-rust source and help build better bindings, or simply pull the crate from crates.io.

The ngx-rust README covers contributing guidelines and local build requirements to get started. The project is still in its initial development, but we aim to improve quality and add features with community support. In this tutorial, we focus on creating a simple, independent module. You can also look at the ngx-rust examples for more complex lessons.

The bindings are organized into two crates:

  • nginx-sys is the crate that generates bindings from the NGINX source code. Its build script downloads the NGINX source code and its dependencies, then uses bindgen to generate the foreign function interface (FFI) bindings.
  • ngx is the main crate. It implements the Rust glue code and APIs and re-exports nginx-sys, so module writers import and interact with NGINX through the ngx crate’s symbols without having to depend on nginx-sys explicitly (see the import sketch below the list).
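
Because of that re-export, a module’s imports can stay within the ngx crate. The following is an illustrative (not exhaustive) sketch of the kinds of imports the module in this tutorial relies on:


use ngx::ffi::*;        // raw NGINX types via the re-exported nginx-sys bindings
use ngx::{core, http};  // higher-level Rust glue: core::Status, http::HTTPModule, http::Request, etc.
// Macros such as ngx_string!, ngx_modules!, ngx_null_command!, and
// http_request_handler! are also exported from the ngx crate and appear
// throughout the example below.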

The instructions below initialize a skeleton workspace. Begin by creating a working directory and initializing the Rust project:


cd $YOUR_DEV_ARENA 
mkdir ngx-rust-howto 
cd ngx-rust-howto 
cargo init --lib

Next, open the Cargo.toml file and add the following section:


[lib] 
crate-type = ["cdylib"] 

[dependencies] 
ngx = "0.3.0-beta"

Alternatively, if you want to see the completed module while reading along, it can be cloned from Git:


cd $YOUR_DEV_ARENA 
git clone git@github.com:f5yacobucci/ngx-rust-howto.git

And with that, you’re ready to start developing your first NGINX Rust module. The structure, semantics, and general approach to constructing a module won’t look very different from what’s necessary when using C. For now, we’ve set out to offer NGINX bindings in an iterative approach to get the bindings generated, usable, and in developers’ hands to create their inventive offerings. In the future, we’ll work to build a better and more idiomatic Rust experience.

This means your first step is to construct your module along with the directives, context, and other pieces required to install and run it in NGINX. Your module will be a simple handler that accepts or denies a request based on its HTTP method, and it will add a new directive that takes a single argument. We’ll walk through this in steps, but you can refer to the complete code in the ngx-rust-howto repo on GitHub.

Note: This blog focuses on outlining the Rust specifics, rather than how to build NGINX modules in general. If you’re interested in building other NGINX modules, please refer to the many superb discussions out in the community. These discussions will also give you a more fundamental explanation of how to extend NGINX (see more in the Resources section below).

Module Registration

You can create your Rust module by implementing the HTTPModule trait, which defines all the NGINX entry points (postconfiguration, preconfiguration, create_main_conf, and so on). A module writer only needs to implement the functions necessary for the module’s task. This module implements the postconfiguration method to install its request handler.

Note: If you haven’t cloned the ngx-rust-howto repo, you can begin editing the src/lib.rs file created by cargo init.


struct Module; 

impl http::HTTPModule for Module { 
    type MainConf = (); 
    type SrvConf = (); 
    type LocConf = ModuleConfig; 

    unsafe extern "C" fn postconfiguration(cf: *mut ngx_conf_t) -> ngx_int_t { 
        let htcf = http::ngx_http_conf_get_module_main_conf(cf, &ngx_http_core_module); 

        let h = ngx_array_push( 
            &mut (*htcf).phases[ngx_http_phases_NGX_HTTP_ACCESS_PHASE as usize].handlers, 
        ) as *mut ngx_http_handler_pt; 
        if h.is_null() { 
            return core::Status::NGX_ERROR.into(); 
        } 

        // set an Access phase handler 
        *h = Some(howto_access_handler); 
        core::Status::NGX_OK.into() 
    } 
} 

The Rust module only needs the postconfiguration hook, which it uses to register a handler at the access phase (NGX_HTTP_ACCESS_PHASE). Modules can register handlers for various phases of the HTTP request; for more information, see the details in the development guide.

You’ll see the phase handler howto_access_handler added just before the function returns. We’ll come back to this later. For now, just note that it’s the function that will perform the handling logic during the request chain.

Depending on your module type and its needs, these are the available registration hooks (a minimal sketch follows the list):

  • preconfiguration
  • postconfiguration
  • create_main_conf
  • init_main_conf
  • create_srv_conf
  • merge_srv_conf
  • create_loc_conf
  • merge_loc_conf
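
To make the shape of the trait concrete, here is a minimal, illustrative sketch (not part of the howto module) of a module that only overrides preconfiguration and leaves every other entry point at its default. It assumes preconfiguration shares the signature of the postconfiguration method shown above:


struct MinimalModule;

impl http::HTTPModule for MinimalModule {
    // No configuration storage is needed, so all three types are the unit type.
    type MainConf = ();
    type SrvConf = ();
    type LocConf = ();

    unsafe extern "C" fn preconfiguration(_cf: *mut ngx_conf_t) -> ngx_int_t {
        // Nothing to do here; report success back to NGINX.
        core::Status::NGX_OK.into()
    }
}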

Configuration State

Now it’s time to create storage for your module. This data includes any configuration parameters the module requires and the internal state used to process requests or alter behavior. Essentially, whatever information the module needs to persist can be put in structures and saved. This Rust module uses a ModuleConfig structure at the location configuration level. The configuration storage must implement the Merge and Default traits.

When defining your module in the step above, you set the types for your main, server, and location configurations. The Rust module you’re developing here only uses location configuration, so only the LocConf type is set to ModuleConfig (the others remain the unit type).

To create state and configuration storage for your module, define a structure and implement the Merge trait:


#[derive(Debug, Default)] 
struct ModuleConfig { 
    enabled: bool, 
    method: String, 
} 

impl http::Merge for ModuleConfig { 
    fn merge(&mut self, prev: &ModuleConfig) -> Result<(), MergeConfigError> { 
        if prev.enabled { 
            self.enabled = true; 
        } 

        if self.method.is_empty() { 
            self.method = String::from(if !prev.method.is_empty() { 
                &prev.method 
            } else { 
                "" 
            }); 
        } 

        if self.enabled && self.method.is_empty() { 
            return Err(MergeConfigError::NoValue); 
        } 
        Ok(()) 
    } 
} 

ModuleConfig stores an on/off state in the enabled field, along with an HTTP request method. The handler will check against this method and either allow or forbid requests.
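
As a quick illustration of those merge semantics (a hypothetical snippet, assuming http::Merge is in scope as it is in the module source), a child location that hasn’t set a method inherits the parent’s:


let parent = ModuleConfig { enabled: true, method: String::from("GET") };
let mut child = ModuleConfig::default(); // enabled: false, method: ""

// Merging pulls the missing method down from the parent configuration.
child.merge(&parent).expect("merge should succeed");
assert!(child.enabled);
assert_eq!(child.method, "GET");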

Once storage is defined, your module can create directives and configuration rules for users to set themselves. NGINX uses the ngx_command_t type and an array of commands to register module-defined directives with the core system.

Through the FFI bindings, Rust module writers have access to the ngx_command_t type and can register directives as they would in C. The ngx-rust-howto module defines a howto directive that accepts a string value. For this case, we define one command, implement a setter function, and then (in the next section) hook those commands into the core system. Remember to terminate your command array with the provided ngx_null_command! macro.

Here is how to create a simple directive using NGINX commands:


#[no_mangle] 
static mut ngx_http_howto_commands: [ngx_command_t; 2] = [ 
    ngx_command_t { 
        name: ngx_string!("howto"), 
        type_: (NGX_HTTP_LOC_CONF | NGX_CONF_TAKE1) as ngx_uint_t, 
        set: Some(ngx_http_howto_commands_set_method), 
        conf: NGX_RS_HTTP_LOC_CONF_OFFSET, 
        offset: 0, 
        post: std::ptr::null_mut(), 
    }, 
    ngx_null_command!(), 
]; 

#[no_mangle] 
extern "C" fn ngx_http_howto_commands_set_method( 
    cf: *mut ngx_conf_t, 
    _cmd: *mut ngx_command_t, 
    conf: *mut c_void, 
) -> *mut c_char { 
    unsafe { 
        let conf = &mut *(conf as *mut ModuleConfig); 
        let args = (*(*cf).args).elts as *mut ngx_str_t; 
        conf.enabled = true; 
        conf.method = (*args.add(1)).to_string(); 
    }; 

    std::ptr::null_mut() 
} 

Hooking in the Module

Now that you have a registration function, phase handler, and commands for configuration, you can hook everything together and expose the functions to the core system. Create a static ngx_module_t structure with references to your registration function(s), phase handlers, and directive commands. Every module must contain a global variable of type ngx_module_t.

Then create a context and static module type, and expose them with the ngx_modules! macro. In the example below, you can see how the commands are set in the commands field and the context referencing the module’s registration functions is set in the ctx field. For this module, all other fields are effectively defaults.


#[no_mangle] 
static ngx_http_howto_module_ctx: ngx_http_module_t = ngx_http_module_t { 
    preconfiguration: Some(Module::preconfiguration), 
    postconfiguration: Some(Module::postconfiguration), 
    create_main_conf: Some(Module::create_main_conf), 
    init_main_conf: Some(Module::init_main_conf), 
    create_srv_conf: Some(Module::create_srv_conf), 
    merge_srv_conf: Some(Module::merge_srv_conf), 
    create_loc_conf: Some(Module::create_loc_conf), 
    merge_loc_conf: Some(Module::merge_loc_conf), 
}; 

ngx_modules!(ngx_http_howto_module); 

#[no_mangle] 
pub static mut ngx_http_howto_module: ngx_module_t = ngx_module_t { 
    ctx_index: ngx_uint_t::max_value(), 
    index: ngx_uint_t::max_value(), 
    name: std::ptr::null_mut(), 
    spare0: 0, 
    spare1: 0, 
    version: nginx_version as ngx_uint_t, 
    signature: NGX_RS_MODULE_SIGNATURE.as_ptr() as *const c_char, 

    ctx: &ngx_http_howto_module_ctx as *const _ as *mut _, 
    commands: unsafe { &ngx_http_howto_commands[0] as *const _ as *mut _ }, 
    type_: NGX_HTTP_MODULE as ngx_uint_t, 

    init_master: None, 
    init_module: None, 
    init_process: None, 
    init_thread: None, 
    exit_thread: None, 
    exit_process: None, 
    exit_master: None, 

    spare_hook0: 0, 
    spare_hook1: 0, 
    spare_hook2: 0, 
    spare_hook3: 0, 
    spare_hook4: 0, 
    spare_hook5: 0, 
    spare_hook6: 0, 
    spare_hook7: 0, 
}; 

After this, you’ve practically completed the steps necessary to set up and register a new Rust module. That said, you still need to implement the phase handler (howto_access_handler) that was set in the postconfiguration hook.

Handlers

Handlers are called for each incoming request and perform most of the work of your module. Request handlers have been the ngx-rust team’s focus and are where the majority of initial ergonomic improvements have been made. While the previous setup steps require writing Rust in a C-like style, ngx-rust provides more convenience and utilities for request handlers.

As seen in the example below, ngx-rust provides the http_request_handler! macro, which accepts a Rust closure that is called with a Request instance. It also provides utilities to get and set configuration and variables, and to access memory, other NGINX primitives, and APIs.

To create a handler, invoke the macro and provide your business logic as a Rust closure. For the ngx-rust-howto module, the handler checks the request’s method to decide whether the request may continue processing.


http_request_handler!(howto_access_handler, |request: &mut http::Request| { 
    let co = unsafe { request.get_module_loc_conf::<ModuleConfig>(&ngx_http_howto_module) }; 
    let co = co.expect("module config is none"); 

    ngx_log_debug_http!(request, "howto module enabled called"); 
    match co.enabled { 
        true => { 
            let method = request.method(); 
            if method.as_str() == co.method { 
                return core::Status::NGX_OK; 
            } 
            http::HTTPStatus::FORBIDDEN.into() 
        } 
        false => core::Status::NGX_OK, 
    } 
}); 

With that, you’ve completed your first Rust module!

The ngx-rust-howto repo on GitHub contains an NGINX configuration file in the conf directory. You can build the module (with cargo build), reference the resulting binary from a load_module directive in a local nginx.conf, and run it with an instance of NGINX. In writing this tutorial, we used NGINX v1.23.3, the default NGINX_VERSION supported by ngx-rust. When building and running dynamic modules, be sure the NGINX_VERSION used for the ngx-rust build matches the version of the NGINX instance running on your machine.

Conclusion

NGINX is a mature software system with years of features and use cases built into it. It is a capable proxy, load balancer, and a world-class web server. Its presence in the market is certain for years to come, which feeds our motivation to build on its capabilities and give our users new methods to interact with it. With Rust’s popularity among developers and its improved safety constraints, we’re excited to provide the option to use Rust alongside the best web server in the world.

However, NGINX’s maturity and feature-rich ecosystem also create a large API surface area, and ngx-rust has only scratched the surface. The project aims to improve and expand by adding more idiomatic Rust interfaces, building additional reference modules, and advancing the ergonomics of writing modules.

This is where you come in! The ngx-rust project is open to all and available on GitHub. We’re eager to work with the NGINX community to keep improving the bindings’ capabilities and ease of use. Check the project out and experiment with the bindings yourself! And please reach out, file issues or PRs, and engage with us on the NGINX Community Slack channel.

Resources
