Compile Time Feature Flags in Rust: Why, How, When?

The ability to pick compile time features in Rust lets you improve the performance, size, maintainability, security, and portability of your code. Below are a few arguments for why you should use features proactively when consuming dependencies, and why you should offer them to other users in your libraries.

Performance

Using feature flags in Rust can improve the performance of the resulting code. By including only the code a specific application needs, you avoid the overhead of unused or unnecessary code. The compiler's dead-code elimination catches some of this, but trimming features still tends to produce faster, leaner programs (and makes the compiler's life easier).

Size

The overall size of the resulting binary is influenced by which dependencies you include and how you use them. Feature selection can help the resulting binary be smaller, which can be beneficial for applications that need to be distributed or deployed to resource-constrained environments.

Maintainability

I recently hit a breaking upstream dependency, and was lucky that the breaking code sat behind a feature flag, for a feature I was not using. While waiting for the upstream library to update, I simply disabled the feature in my project and it built fine again. Allowing developers to selectively include or exclude specific functionality improves the maintainability of Rust code in exactly this way.

Security

Statistically speaking, the more code you depend on, the higher the chance of a security issue. Depending only on the features you need lowers those odds; this is security-by-design thinking, and a crate that offers itself “in chunks” helps make it happen.

There are also ways to select different implementations of the same functionality, based on how comfortable you are with the safety of each one. For example, you might prefer a Rust-native TLS implementation over a C-based one, because Rust is a memory-safe language, and some crates like reqwest offer a selection of TLS backends to use.
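As a concrete sketch (the feature names here follow reqwest's documented TLS backend features; double-check them against the version you actually use), picking the rustls backend could look like:

```toml
[dependencies]
# Disable the default native-tls backend and opt into rustls instead.
reqwest = { version = "0.11", default-features = false, features = ["rustls-tls"] }
```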

Portability

As a compiled language, an important aspect of feature flags is to improve the portability of your code. By selectively including or excluding specific functionality, you can make your code more portable across different platforms and environments.

How does C/C++ compare?

C and C++ have historically been the archetypes of portable compiled code, deployed to a massive range of platforms and CPU architectures. Neither has a built-in mechanism directly equivalent to Rust's compile time features. However, both have preprocessor directives (#ifdef, #if defined(...)) that can be used to selectively include or exclude code at compile time.

This can provide some of the same benefits as feature flags in Rust, but it’s messy, and hard to discover — both as a programmer looking to build into an existing codebase, and as a consumer looking to enable or disable features.

Feature flags: the building blocks

To enable specific feature flags for a dependency, you can use the default-features = false and features keys in your crate's Cargo.toml file. For example:

[dependencies]
my-crate = { version = "1", default-features = false, features = ["my-feature"] }
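On the library side, the features themselves are declared in a [features] table in the library's own Cargo.toml. A hypothetical sketch (the dep: optional-dependency syntax requires Rust 1.60+; all names here are illustrative):

```toml
[features]
# Features enabled when consumers don't pass default-features = false.
default = ["my-feature"]
# A plain feature with no extra dependencies.
my-feature = []
# A feature can enable other features and optional dependencies.
extra = ["my-feature", "dep:serde"]

[dependencies]
serde = { version = "1", optional = true }
```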

To enable a feature flag for a specific piece of code, you can use the #[cfg(feature = "my-feature")] attribute. For example:

#[cfg(feature = "my-feature")]
fn my_function() {
    // Code that is only included when the "my-feature" flag is enabled
}

To enable a feature flag for a specific module, you can use the #[cfg(feature = "my-feature")] attribute on the mod declaration. For example:

#[cfg(feature = "my-feature")]
mod my_module {
    // Code that is only included when the "my-feature" flag is enabled
}

To conditionally derive traits for a specific struct or enum, you can use the #[cfg_attr(feature = "my-feature", derive(...))] attribute. Note that only the derive is conditional; the type itself is always compiled in. For example:

#[cfg_attr(feature = "my-feature", derive(Debug, PartialEq))]
struct MyStruct {
    // Fields are always included; Debug and PartialEq are only
    // derived when the "my-feature" flag is enabled
}

Enabling or disabling support for a specific platform:

#[cfg(target_os = "linux")]
mod linux_specific_code {
    // Linux-specific code goes here...
}

Enabling or disabling a specific implementation of a trait:

#[cfg(feature = "special_case")]
impl MyTrait for MyType {
    // Implementation of trait for special case goes here...
}

Enabling or disabling a specific test case:

#[cfg(feature = "expensive_tests")]
#[test]
fn test_expensive_computation() {
    // Test that performs expensive computation goes here...
}

Enabling or disabling a specific benchmark (note that #[bench] and Bencher come from the nightly-only test crate):

#[cfg(feature = "long_benchmarks")]
#[bench]
fn bench_long_running_operation(b: &mut Bencher) {
    // Benchmark for a long-running operation goes here...
}

To include code only when multiple flags are set, you can use the #[cfg(all(...))] attribute with several feature = "..." predicates. For example, to include my_function() only when both the my_feature1 and my_feature2 flags are set:

#[cfg(all(feature = "my_feature1", feature = "my_feature2"))]
fn my_function() {
    // code for my_function
}

To include code when at least one of several flags is set, you can use the #[cfg(any(...))] attribute. For example, to include my_function() when either the my_feature1 or my_feature2 flag is set:

#[cfg(any(feature = "my_feature1", feature = "my_feature2"))]
fn my_function() {
    // code for my_function
}

Feature flags illustrated

Same module name, but pointing to a different path for the implementation; a function is then pulled out of that module and exposed with pub use:

//! Signal monitor
#[cfg(unix)]
#[path = "unix.rs"]
mod imp;

#[cfg(windows)]
#[path = "windows.rs"]
mod imp;

#[cfg(not(any(windows, unix)))]
#[path = "other.rs"]
mod imp;

pub use self::imp::create_signal_monitor;

see: https://github.com/shadowsocks/shadowsocks-rust/blob/master/src/monitor/mod.rs

When all the alternatives share the same interface, you can offer everything under the sun without any disadvantage, because only the selected features get compiled in.

The tradeoff is that now you have a bigger test matrix, which grows combinatorially with every new alternative.

In this example, the library lets you pick any allocator you can think of, because allocators have a well-defined interface and swapping them requires no work on your part:

//! Memory allocator
#[cfg(feature = "jemalloc")]
#[global_allocator]
static ALLOC: jemallocator::Jemalloc = jemallocator::Jemalloc;

#[cfg(feature = "tcmalloc")]
#[global_allocator]
static ALLOC: tcmalloc::TCMalloc = tcmalloc::TCMalloc;

#[cfg(feature = "mimalloc")]
#[global_allocator]
static ALLOC: mimalloc::MiMalloc = mimalloc::MiMalloc;

#[cfg(feature = "snmalloc")]
#[global_allocator]
static ALLOC: snmalloc_rs::SnMalloc = snmalloc_rs::SnMalloc;

#[cfg(feature = "rpmalloc")]
#[global_allocator]
static ALLOC: rpmalloc::RpMalloc = rpmalloc::RpMalloc;

In this example, you see how to let your users “layer in” the functionality they need, picking how deep they want to go:

//! Service launchers
pub mod genkey;
#[cfg(feature = "local")]
pub mod local;
#[cfg(feature = "manager")]
pub mod manager;
#[cfg(feature = "server")]
pub mod server;
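Behind module gates like these there is usually a [features] table. A purely hypothetical sketch, with names mirroring the modules above and the dependency between them invented for illustration:

```toml
[features]
# A sensible out-of-the-box subset.
default = ["local", "server"]
# Each launcher is its own feature; genkey ships unconditionally.
local = []
server = []
# A feature can build on another: the manager needs the server code.
manager = ["server"]
```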

In the example below, you can use blocks to “artificially” scope entire pieces of code under a feature:

#[cfg(feature = "local-tunnel")]
{
    app = app.arg(
        Arg::new("FORWARD_ADDR")
            .short('f')
            .long("forward-addr")
            .num_args(1)
            .action(ArgAction::Set)
            .requires("LOCAL_ADDR")
            .value_parser(vparser::parse_address)
            .required_if_eq("PROTOCOL", "tunnel")
            .help("Forwarding data directly to this address (for tunnel)"),
    );
}

In this example, the empty implementations are marked #[inline]: why pay the price of a function call when the body always returns a trivial value like Ok(())?

#[cfg(all(not(windows), not(unix)))]
#[inline]
fn set_common_sockopt_after_connect_sys(_: &tokio::net::TcpStream, _: &ConnectOpts) -> io::Result<()> {
    Ok(())
}

Last but not Least: What’s the tradeoff?

If features are so powerful, and leave behind much of C/C++'s primitive approach to conditional compilation, why not use them everywhere, always? Here are a few things you should consider.

  • Using too many features is a real problem. In the extreme, imaginary case, picture a feature on every module and every function: consumers would face a very hard puzzle just to understand how to compose your library from its discrete features. This is the danger of features. Be modest with the number of features you offer, to reduce cognitive load, and make sure each one gates something people actually care about adding or removing.

  • Testing is another big deal with features. You never know which combination of features your users will select, and every combination selects a different set of code. Those pieces have to interoperate smoothly both in compilation (build successfully) and in logic (introduce no bugs), so strictly you would need to test every feature in combination with every other feature: a powerset of features!

  • You can automate that with xtaskops::powerset; see more here: https://github.com/jondot/xtaskops