
Why We Chose SwiftNIO Over Network.framework

6 min read

Two networking options on macOS

When you're writing a networked application on macOS, Apple gives you two first-party options. Network.framework is the modern choice -- introduced in 2018, built around NWConnection and NWListener, with native support for TLS 1.3, Happy Eyeballs v2, QUIC, and connection migration. URLSession sits a level higher, purpose-built for HTTP request/response patterns. Both are excellent APIs for their intended use cases.

Then there's the open-source option: SwiftNIO, Apple's server-side networking framework. It's a low-level, event-driven, non-blocking I/O library modeled after Netty. It powers Vapor, Hummingbird, and a significant chunk of Swift-on-server infrastructure.

For most macOS apps -- a REST client, a chat application, a file sync tool -- Network.framework is the right call. It handles the hard parts automatically: TLS negotiation, IPv4/IPv6 dual-stack racing, connection coalescing, proxy configuration. You get correct behavior by default.

For a man-in-the-middle proxy, it's the wrong choice entirely. Here's why.

What a MITM proxy actually needs

A MITM proxy is not a normal network application. It doesn't originate requests or serve content of its own. It sits between two parties -- a client and a server -- and intercepts their conversation. This is a fundamentally different relationship with the network stack, and it imposes a specific set of requirements:

  • Raw TCP access. The proxy must accept a TCP connection from the client and open a separate TCP connection to the server. It reads and writes raw bytes on both sides.
  • Independent TLS termination. The proxy terminates TLS with the client using a generated certificate, and separately initiates TLS with the upstream server. These are two distinct TLS sessions with different certificates, different negotiation parameters, and different lifecycle states.
  • HTTP CONNECT interception. Before TLS begins, the client sends an HTTP CONNECT request in plaintext. The proxy must read this request, extract the target hostname, establish the upstream connection, and only then start TLS. This means the proxy needs to read HTTP from a connection that will become TLS -- a state transition that happens mid-stream.
  • Byte-level modification. The proxy needs to inject headers, rewrite URLs, modify response bodies, delay packets, or drop connections entirely -- all while maintaining valid HTTP framing on both sides.
  • High concurrency. A developer's machine easily generates hundreds of simultaneous connections. A proxy must handle all of them without blocking, without thread explosion, and without dropping data.

Every one of these requirements points to the same underlying need: full control over the connection pipeline, from raw TCP accept to TLS handshake to HTTP parsing to byte forwarding.

Network.framework: great abstraction, wrong level

Network.framework is designed to make networking correct by default. That's a genuine virtue for application developers. But for a proxy, "correct by default" means "doing things we need to prevent."

The core problem is TLS. When you create an NWConnection with TLS parameters, the framework handles the entire handshake internally. Your code receives plaintext data after negotiation completes. You never see the ClientHello. You never see the ServerHello. You can't substitute your own certificate into the handshake because you don't control the handshake -- the framework does.

This is exactly what you want when you're building a chat app. It's exactly what you don't want when you're building a proxy that needs to present a dynamically generated certificate to the client while simultaneously negotiating a real certificate with the upstream server.

The second problem is the CONNECT method. When a client connects to an HTTP proxy, it first sends a plaintext CONNECT example.com:443 request. The proxy reads this, opens a TCP connection to the target, responds with 200 Connection Established, and then both sides upgrade to TLS. With NWConnection, there's no way to accept a connection as raw TCP, read some HTTP, and then dynamically add TLS to that same connection. The TLS configuration is set at connection creation time. You can't change the rules mid-stream.
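On the wire, that initial exchange looks like this (a representative trace, not captured output):

```
CONNECT example.com:443 HTTP/1.1
Host: example.com:443
Proxy-Connection: keep-alive

HTTP/1.1 200 Connection Established

[TLS ClientHello and the rest of the handshake follow as raw bytes]
```

Everything before the blank-line-terminated 200 response is plaintext HTTP; everything after it is TLS. The proxy has to straddle that boundary on a single TCP connection.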

Network.framework also handles HTTP/2 multiplexing and QUIC streams internally. A proxy needs to see individual HTTP/2 frames, modify HEADERS frames, and manage stream priorities directly. The framework gives you multiplexed streams, but it won't let you reach into the framing layer below them.

The short version: Network.framework hides exactly the bytes we need to see. It's a client/server framework, not a middleware framework.

SwiftNIO: channel handlers all the way down

SwiftNIO takes the opposite approach. Instead of hiding the connection pipeline, it is the connection pipeline. The core abstraction is a ChannelPipeline: an ordered chain of ChannelHandler objects that process inbound and outbound I/O events. Each handler receives data, transforms it, and passes it to the next handler. You build the pipeline yourself, and you can modify it at any point during the connection's lifetime.
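To give a flavor of the model, here's a minimal handler sketch (our own illustration, not Rockxy code): an inbound handler that counts the bytes flowing through it and passes them along unchanged.

```swift
import NIOCore

// A minimal inbound ChannelHandler: observes the bytes that pass through
// and forwards them unchanged to the next handler in the pipeline.
final class ByteCountingHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    typealias InboundOut = ByteBuffer

    private(set) var bytesSeen = 0

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        let buffer = self.unwrapInboundIn(data)
        bytesSeen += buffer.readableBytes
        context.fireChannelRead(data)  // hand off to the next handler
    }
}
```

Every stage of a proxy -- TLS, HTTP parsing, inspection, forwarding -- is a handler with this same shape, which is why they compose into one pipeline.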

This is exactly the model a MITM proxy needs. In Rockxy, the channel pipeline for a proxied HTTPS connection looks like this:

  1. TCP accept handler -- accepts the incoming connection from the client on the proxy's listening port.
  2. HTTP decoder -- parses the initial plaintext bytes as HTTP. At this point, the client hasn't started TLS yet; it's sending a CONNECT request.
  3. CONNECT handler -- reads the target hostname from the CONNECT request, opens a new TCP connection to the upstream server, and sends 200 Connection Established back to the client.
  4. TLS handler (server side) -- negotiates TLS with the upstream server using the real server's certificate chain. We inspect the server's leaf certificate to extract the Common Name and Subject Alternative Names.
  5. Certificate generator -- takes the server's certificate details and generates a matching leaf certificate signed by Rockxy's root CA. This certificate has the same CN and SANs as the real one, so the client sees the correct hostname.
  6. TLS handler (client side) -- negotiates TLS with the client, presenting the generated certificate. From the client's perspective, it's talking to the real server (assuming Rockxy's root CA is trusted).
  7. HTTP decoder (post-TLS) -- now reads plaintext HTTP from the decrypted client stream. At this point, the full request -- headers, body, everything -- is visible.
  8. Inspector/modifier handler -- this is where Rockxy's debugging logic lives. Log the request. Check it against rules. Apply breakpoints. Modify headers. Rewrite the body. Delay the response. This handler is the reason the entire proxy exists.
  9. HTTP encoder + forwarder -- re-encodes the (possibly modified) request and forwards it through the upstream TLS connection to the real server.

The critical detail: this pipeline is not defined at connection creation time. It's built dynamically. The HTTP decoder is added first, then removed after the CONNECT is processed. The TLS handlers are added mid-connection, after the proxy has established the upstream connection and generated the right certificate. A second HTTP decoder is added after TLS negotiation completes. SwiftNIO lets us restructure the pipeline at every stage of the connection lifecycle.
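The mechanism behind that restructuring is SwiftNIO's ability to mutate a live pipeline from inside a handler. A toy illustration (the handler name is ours): a handler that removes itself after its first read, the same primitive a proxy uses to swap plaintext-HTTP stages for TLS stages mid-stream.

```swift
import NIOCore

// A handler that removes itself from the pipeline after the first read.
// The same pipeline-mutation primitive lets a proxy swap HTTP and TLS
// stages mid-connection.
final class OneShotHandler: ChannelInboundHandler, RemovableChannelHandler {
    typealias InboundIn = ByteBuffer
    typealias InboundOut = ByteBuffer

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        context.fireChannelRead(data)
        // A real proxy would add TLS handlers and a fresh HTTP decoder here;
        // this sketch simply removes itself.
        context.pipeline.removeHandler(self).whenComplete { _ in }
    }
}
```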

This is impossible with Network.framework. Steps 2 through 6 all require raw byte access at specific, different points in time. The framework doesn't expose those interception points because it was never designed to.

Event-driven, non-blocking I/O

SwiftNIO uses an event loop model similar to Netty (Java) or libuv (Node.js). A small number of threads -- typically one per CPU core -- run tight loops that process I/O events for all connections assigned to that loop. There's no thread-per-connection overhead. No context switching between thousands of threads. No lock contention on shared state.

For a proxy, this matters more than usual. Every proxied connection involves two sockets: client-to-proxy and proxy-to-server. A build triggering 500 HTTP requests means 1,000 open sockets. With thread-per-connection, that's 1,000 threads. With SwiftNIO, it's 8 threads on an M-series Mac handling all 1,000 sockets.

Network.framework also avoids thread-per-connection -- it uses GCD dispatch queues internally. But SwiftNIO gives us explicit control over which event loop handles which connection. In Rockxy, we pin the client-side channel and the upstream-side channel for the same proxied connection to the same EventLoop. This means data flowing from client to server and back never crosses thread boundaries, eliminating synchronization overhead entirely. We don't need locks, atomics, or dispatch queues to coordinate the two halves of a proxied connection. They run on the same thread, in the same run loop, with sequential access to shared state.
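In code, the pinning trick is small: when dialing upstream, pass the accepted channel's own event loop as the bootstrap's group (a sketch of the approach, not Rockxy's exact code -- an `EventLoop` is itself an `EventLoopGroup` of one).

```swift
import NIOCore
import NIOPosix

// Dial the upstream server on the *same* event loop as the client-facing
// channel, so both halves of the proxied connection run on one thread.
func connectUpstream(for clientChannel: Channel,
                     host: String, port: Int) -> EventLoopFuture<Channel> {
    ClientBootstrap(group: clientChannel.eventLoop)
        .connect(host: host, port: port)
}
```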

The ecosystem advantage

SwiftNIO isn't a single library -- it's an ecosystem of packages that all speak the same ChannelHandler protocol. Rockxy uses several of them:

  • swift-nio-ssl -- TLS support with full control over certificate presentation, verification callbacks, and ALPN negotiation. We use it to present generated certificates to clients and to connect to upstream servers with custom trust settings.
  • swift-nio-http2 -- HTTP/2 frame-level access. Not "HTTP/2 streams as an abstraction" but actual HTTP2Frame values that we can inspect, modify, and forward. This is how Rockxy handles HTTP/2 traffic without collapsing multiplexed streams into opaque data.
  • swift-nio-extras -- utility handlers like ByteToMessageDecoder and FixedLengthFrameDecoder for protocol parsing.
  • swift-certificates -- programmatic X.509 certificate creation. We generate a root CA at first launch and create per-host leaf certificates on the fly during TLS interception.
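For a sense of the certificate-generation piece, here's a minimal swift-certificates sketch of minting a leaf certificate signed by a CA key. The hostname, lifetime, and key type are placeholder choices for illustration, not Rockxy's actual policy.

```swift
import Crypto
import Foundation
import X509

// Generate a throwaway CA key and a leaf certificate for one hostname.
let caKey = Certificate.PrivateKey(P256.Signing.PrivateKey())
let leafKey = Certificate.PrivateKey(P256.Signing.PrivateKey())

let caName = try DistinguishedName { CommonName("Example Root CA") }
let leafName = try DistinguishedName { CommonName("example.com") }

let leaf = try Certificate(
    version: .v3,
    serialNumber: Certificate.SerialNumber(),
    publicKey: leafKey.publicKey,
    notValidBefore: Date(),
    notValidAfter: Date().addingTimeInterval(60 * 60 * 24 * 90),  // 90 days
    issuer: caName,
    subject: leafName,
    signatureAlgorithm: .ecdsaWithSHA256,
    extensions: try Certificate.Extensions {
        Critical(BasicConstraints.notCertificateAuthority)
        SubjectAlternativeNames([.dnsName("example.com")])
    },
    issuerPrivateKey: caKey)
```

In the interception flow, the CN and SAN values would be copied from the real server's leaf certificate rather than hard-coded.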

These aren't wrappers around system libraries. They're native SwiftNIO ChannelHandler implementations that plug directly into the pipeline. Adding NIOSSLServerHandler to a pipeline makes it part of the same handler chain as our HTTP decoder and inspector. Data flows through without copying or crossing API boundaries.
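As an example of the level of control swift-nio-ssl exposes, here's roughly what configuring the proxy's two TLS sessions looks like (a sketch under assumed defaults; the function names are ours):

```swift
import NIOSSL

// Client-facing TLS: act as a server, presenting the generated leaf cert.
func makeClientFacingContext(cert: NIOSSLCertificate,
                             key: NIOSSLPrivateKey) throws -> NIOSSLContext {
    var config = TLSConfiguration.makeServerConfiguration(
        certificateChain: [.certificate(cert)],
        privateKey: .privateKey(key))
    config.applicationProtocols = ["h2", "http/1.1"]  // ALPN offer
    return try NIOSSLContext(configuration: config)
}

// Upstream TLS: connect to the real server as an ordinary TLS client.
func makeUpstreamContext() throws -> NIOSSLContext {
    var config = TLSConfiguration.makeClientConfiguration()
    config.applicationProtocols = ["h2", "http/1.1"]
    return try NIOSSLContext(configuration: config)
}
```

Each context then backs a handler in the pipeline: `NIOSSLServerHandler` on the client-facing channel, `NIOSSLClientHandler` on the upstream channel.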

Network.framework has no equivalent extensibility. You can't write a custom protocol handler and insert it into the framework's internal pipeline. The pipeline is a black box by design.

Trade-offs we accepted

SwiftNIO is not the easy path. We chose it because it's the only path that works for our use case, but it comes with real costs.

  • More code. Network.framework gives you a TLS connection in 10 lines. SwiftNIO requires building the pipeline handler by handler, managing ByteBuffer allocations, handling backpressure, and propagating errors through the handler chain. Our proxy engine is several thousand lines of Swift. The equivalent Network.framework code (if it could do what we needed) would be a fraction of that.
  • No automatic Happy Eyeballs. Network.framework implements RFC 8305 (Happy Eyeballs v2) automatically, racing IPv4 and IPv6 connections and picking the winner. SwiftNIO doesn't. We implement our own dual-stack connection logic in Rockxy, which adds code and complexity.
  • No native QUIC. Network.framework has built-in QUIC support. SwiftNIO doesn't (as of early 2026). Rockxy currently handles QUIC as opaque UDP traffic -- we can see that it's happening, but we can't inspect the contents. Full QUIC interception is on the roadmap, but it will require either a SwiftNIO QUIC implementation or a custom integration with quiche or msquic.
  • No free macOS integration. Network.framework automatically respects system proxy settings, VPN routing, and network diagnostics. SwiftNIO operates below those abstractions. We handle system proxy configuration ourselves through a privileged helper tool and SCNetworkConfiguration.

These trade-offs are real. But the alternative isn't "use Network.framework and have fewer problems." The alternative is "use Network.framework and can't build a proxy at all." When the higher-level API can't express what you need, you drop down a level.

When to use what

This isn't a post arguing that SwiftNIO is better than Network.framework. For most applications, Network.framework is genuinely the better choice. Here's our rough decision tree:

  • Use Network.framework when you're building a client that talks to servers, a server that talks to clients, or any application where you want the OS to handle TLS, routing, and connection management. If you don't need to modify the TLS pipeline, don't take on the responsibility of managing it.
  • Use SwiftNIO when you need byte-level control over the connection lifecycle. Proxies, protocol analyzers, custom protocol servers, network testing tools, anything where you need to see (or change) what's on the wire before it reaches the application layer.

Both are open source. Both are maintained by Apple. SwiftNIO lives on GitHub with an Apache 2.0 license. Network.framework ships with macOS and is documented in Apple's developer documentation. They serve different audiences, and choosing between them is a question of what your application does to the network, not just with it.

Rockxy's proxy engine wouldn't exist without SwiftNIO. If you want to see how we use it -- the pipeline setup, the dynamic handler insertion, the event loop pinning -- the source is on GitHub.