What Are We Doing?
Instead, we’re creating a very simple runtime matched to our requirements:
- Architecture – The single‑threaded bytecode VM is designed for quick initialization and disposal. A VM is allocated per request. Startup is extremely fast because there is no complex state and no helpers to initialize. Memory is accumulated in pools during execution and released at completion by freeing the pools. This memory management scheme eliminates the need to track and free individual objects or to use a garbage collector.
- Helper functions – Built‑in operations are implemented natively in NGINX. For example, complex mathematical operations such as hash functions evaluate much more quickly when run natively, and nginScript’s interfaces to NGINX are native as well. You can regard nginScript as a programmatic way to drive the native operations of NGINX.
- Pre‑emption – NGINX’s event‑driven model schedules the execution of individual nginScript VMs. When an nginScript rule performs a blocking operation (such as reading network data or issuing an external subrequest), NGINX transparently suspends execution of that VM and reschedules it when the event completes. This means that you can write rules in a simple, linear fashion and NGINX will schedule them without internal blocking.
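To make the scheduling model concrete, here is a minimal sketch in plain JavaScript (not nginScript itself, whose API is not yet fixed). The `subrequest` function is a hypothetical stand‑in for a blocking NGINX operation; the point is that the rule reads top to bottom, while the host suspends it at each blocking call and resumes it when the event completes.

```javascript
// Hypothetical stand-in for a blocking NGINX operation such as an
// external subrequest; here it simply completes on the next
// event-loop tick instead of doing real network I/O.
function subrequest(uri) {
  return new Promise(resolve =>
    setImmediate(() => resolve({ uri, status: 200 }))
  );
}

// The rule is written linearly; each `await` marks a point where
// the host could park this VM and run other requests meanwhile.
async function rule(requestUri) {
  const upstream = await subrequest('/auth' + requestUri); // suspend/resume
  return upstream.status === 200 ? 'allow' : 'deny';
}

rule('/index.html').then(decision => console.log(decision)); // prints "allow"
```

The rule author never writes callbacks or yields explicitly; the suspension points are implied by the blocking operations themselves, which is exactly the linear style described above.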
Finally, with our own VM, we are not susceptible to changing APIs and standards, and we can ensure that our VM supports the wide range of platforms we target for NGINX.
It’s too early to talk about performance in detail, and we’re concentrating on building the functionality first. nginScript compiles to internal bytecode and executes in a register‑based VM; we’d like to add JIT compilation at some point, but that may limit the range of supported platforms. Our tests indicate that the nginScript VM offers performance similar to other interpreted languages (PHP, Ruby, etc.) but is not as fast as JIT implementations.
Remember that the expected use case for nginScript is to execute short rules that drive internal, native operations in NGINX. We don’t intend for anyone to implement compute‑intensive operations directly in nginScript, because internal operations and C‑based modules already address that need. We’ll always strive to optimize performance where sensible, but a simple comparison with a JIT language gives a skewed picture.
We’ll also focus on integration – how are nginScript rules declared or referenced in the NGINX configuration, and what are the integration points where nginScript can access and control NGINX internals? How can we share data between scripts, and across a cluster? How can users instrument and debug nginScript rules?
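As one illustration of what that integration could look like, the fragment below sketches a rule wired into the NGINX configuration. The directive and variable names (`js_set`, `$routed_upstream`, `chooseUpstream`) are assumptions for the sake of the example, not a committed syntax.

```nginx
# Illustrative sketch only: directive and names here are assumptions
# about how nginScript rules might be declared, not final syntax.
http {
    # A rule computes the value of a variable on demand.
    js_set $routed_upstream chooseUpstream;

    server {
        listen 80;
        location / {
            # The variable computed by the rule drives a native
            # NGINX operation - here, upstream selection.
            proxy_pass http://$routed_upstream;
        }
    }
}
```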
For long‑running scripts (such as those associated with WebSocket), we might even add a garbage collector, but that’s a long way in the future.
The full path for nginScript is not mapped out yet. We’d love to get your feedback on use cases and gaps. Please share your ideas, rules and insights on our mailing list and we’ll develop this feature together. Thank you.