Virtual-Scrolling 100k Network Requests in SwiftUI

The problem: a proxy can capture 100k requests in minutes

Leave a macOS machine running for an hour with a browser, Slack, Xcode, a couple of Homebrew background services, and whatever else developers keep open. Point a debugging proxy at it. You will see thousands of HTTP requests per minute. Not because any single app is chatty -- because everything talks over HTTP now. Analytics pings, API polling, health checks, WebSocket heartbeats, certificate revocation lookups. It adds up fast.

Rockxy is an HTTP/HTTPS debugging proxy. Its primary UI is a scrollable list of captured network requests. The list needs to display every request in real time, let users scroll back through history, and stay responsive while new entries arrive at the bottom. At 500 requests per second during a burst, the list grows to 100,000 entries in under four minutes.

This is the classic virtual scrolling problem. You cannot allocate 100,000 view objects simultaneously. You cannot lay out 100,000 rows. The display shows maybe 40 rows at a time on a typical screen. The solution is to render only what's visible, reuse view objects as the user scrolls, and keep everything else as lightweight data.

SwiftUI's List was our first attempt. We shipped it, profiled it, and replaced it within two weeks. Here's why, and what we built instead.

Why SwiftUI List fails at scale

SwiftUI's List works well for most apps. Contacts, settings screens, to-do lists -- anything up to a few thousand rows with simple cell views scrolls fine. Apple clearly optimized for that range, and for those use cases it's the right choice.

The problems start at around 10,000 items. We profiled List with Instruments at various row counts to get hard numbers:

  • 5,000 rows: 58 FPS during fast scroll, ~120 MB memory. Acceptable.
  • 10,000 rows: 45 FPS, ~220 MB. Noticeable hitches when flinging the scroll view.
  • 50,000 rows: 31 FPS, ~540 MB. Clearly stuttering. Users would notice.
  • 100,000 rows: 22 FPS, ~800 MB. Unusable for a tool that developers keep open all day.

The core issue is that List on macOS uses NSCollectionView under the hood, not NSTableView. While NSCollectionView does lazy loading, its view recycling behavior differs significantly from NSTableView's. It creates views lazily as they scroll into the viewport, but it doesn't reclaim them as aggressively when they scroll out. The result: memory grows roughly linearly with the total row count, not with the visible row count.

We also tried LazyVStack inside a ScrollView. This was worse. LazyVStack creates views on demand but performs no recycling at all. Every row that has ever been visible stays in memory. At 100k rows, we measured over 1.2 GB and sub-20 FPS.

The comparison that sealed the decision: NSTableView at 100,000 rows used 45 MB and held a constant 60 FPS during scroll. Not close.

NSTableView: the old solution that still works

NSTableView has existed on macOS since the NeXT era -- over thirty years. It uses a view reuse pool that works identically to UITableView's cell recycling on iOS. When a row scrolls off screen, its view goes back into the pool. When a new row scrolls into view, the table dequeues a recycled view and reconfigures it with new data. The number of live view objects equals the number of visible rows plus a small buffer -- typically 40 to 60 total, regardless of whether the data source has 1,000 rows or 1,000,000.

This is the same mechanism that powers Finder's list view. Apple trusts it to display entire file system hierarchies. It was designed for exactly this kind of scale.

For Rockxy, this means at 100,000 captured requests, only about 50 NSView instances exist at any time. Each one is a lightweight row showing the method badge, URL, status code, response size, and duration. When the user scrolls, views are recycled and rebound. Memory stays flat. Frame rate stays at 60.

The data source protocol is straightforward. You implement numberOfRows(in:) to return the count and tableView(_:viewFor:row:) to configure each cell. The table calls these methods only for visible rows. Everything else is just an integer index into your data array.
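A minimal sketch of that data source, with NetworkRequest, the requests array, and the cell identifier as illustrative stand-ins for Rockxy's real types:

```swift
import AppKit

// Illustrative model type -- not Rockxy's actual request object.
struct NetworkRequest {
    let method: String
    let url: String
}

final class RequestDataSource: NSObject, NSTableViewDataSource, NSTableViewDelegate {
    var requests: [NetworkRequest] = []

    // Called per layout pass; returns just the count, no views.
    func numberOfRows(in tableView: NSTableView) -> Int {
        requests.count
    }

    // Called only for rows entering the viewport. Dequeue a recycled
    // view if the pool has one, otherwise create a fresh cell.
    func tableView(_ tableView: NSTableView, viewFor tableColumn: NSTableColumn?, row: Int) -> NSView? {
        let id = NSUserInterfaceItemIdentifier("RequestCell")
        let cell = tableView.makeView(withIdentifier: id, owner: nil) as? NSTableCellView
            ?? makeCell(identifier: id)
        cell.textField?.stringValue = "\(requests[row].method) \(requests[row].url)"
        return cell
    }

    private func makeCell(identifier: NSUserInterfaceItemIdentifier) -> NSTableCellView {
        let cell = NSTableCellView()
        cell.identifier = identifier
        let label = NSTextField(labelWithString: "")
        cell.addSubview(label)
        cell.textField = label
        return cell
    }
}
```

The reconfiguration in the viewFor method is the only per-row work the table ever does, which is why it stays cheap at any data size.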

The NSViewRepresentable bridge pattern

Rockxy's UI is primarily SwiftUI. The sidebar, the request inspector, the settings panels, the rules editor -- all SwiftUI. We didn't want to rewrite the entire app in AppKit just because the main list needed NSTableView. The solution is NSViewRepresentable, Apple's bridge protocol for embedding AppKit views in SwiftUI.

The architecture looks like this: a SwiftUI parent view holds the @Observable model that owns the request data. It passes this model to a RequestListView struct conforming to NSViewRepresentable. Inside makeNSView, we create and configure an NSScrollView wrapping an NSTableView. A Coordinator class serves as both NSTableViewDataSource and NSTableViewDelegate. The entire bridge is about 200 lines of code.
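Stripped to its skeleton, the bridge looks roughly like this. RequestStore and the single string column are illustrative placeholders, not Rockxy's real types:

```swift
import SwiftUI
import AppKit

// Stand-in for the app's real @Observable model.
@Observable final class RequestStore {
    var requests: [String] = []
}

struct RequestListView: NSViewRepresentable {
    let store: RequestStore

    func makeCoordinator() -> Coordinator { Coordinator(store: store) }

    func makeNSView(context: Context) -> NSScrollView {
        let table = NSTableView()
        table.addTableColumn(NSTableColumn(identifier: .init("url")))
        table.dataSource = context.coordinator
        table.delegate = context.coordinator

        let scroll = NSScrollView()
        scroll.documentView = table
        scroll.hasVerticalScroller = true
        context.coordinator.table = table
        return scroll
    }

    func updateNSView(_ nsView: NSScrollView, context: Context) {
        // Incremental update: tell the table the row count changed
        // rather than reloading everything.
        context.coordinator.table?.noteNumberOfRowsChanged()
    }

    // A class, so it persists across SwiftUI updates and can conform
    // to the AppKit data source and delegate protocols.
    final class Coordinator: NSObject, NSTableViewDataSource, NSTableViewDelegate {
        let store: RequestStore
        weak var table: NSTableView?
        init(store: RequestStore) { self.store = store }

        func numberOfRows(in tableView: NSTableView) -> Int { store.requests.count }

        func tableView(_ tableView: NSTableView, viewFor tableColumn: NSTableColumn?, row: Int) -> NSView? {
            let id = NSUserInterfaceItemIdentifier("cell")
            let cell = tableView.makeView(withIdentifier: id, owner: nil) as? NSTextField
                ?? { let field = NSTextField(labelWithString: ""); field.identifier = id; return field }()
            cell.stringValue = store.requests[row]
            return cell
        }
    }
}
```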

The challenges are in the details:

  • Data synchronization: The @Observable model changes on the main actor. The Coordinator must know when rows are added or removed. We use withObservationTracking to detect changes and call noteNumberOfRowsChanged on the table, not reloadData. A full reload resets scroll position and selection -- both unacceptable.
  • Scroll position preservation: New requests arrive while the user is scrolling through older ones. If the user is scrolled to the bottom, new rows should auto-scroll into view. If they've scrolled up to inspect something, the view should stay put. We track this with an isAtBottom flag updated from the scroll view's NSScrollView.didLiveScrollNotification.
  • Selection bridging: When the user clicks a row in the NSTableView, the SwiftUI inspector panel needs to update. The Coordinator writes to a @Binding<NetworkRequest?> from tableViewSelectionDidChange. Going the other direction -- selecting a row programmatically from SwiftUI -- we call selectRowIndexes(_:byExtendingSelection:) in updateNSView.
  • Column sorting: Clicking a column header should sort by that field -- method, URL, status, duration, size. We implement tableView(_:sortDescriptorsDidChange:) in the data source, apply the sort to the model's data array, and call reloadData only in this case since the entire row ordering changes.

The Coordinator pattern is the key to making this work. NSViewRepresentable.Coordinator is a class (not a struct), which means it can hold mutable state, conform to delegate protocols, and persist across SwiftUI view updates. It's the natural place to put all the bridging logic.

Batching updates with a background actor

Even with NSTableView handling the rendering, there's another bottleneck: update frequency. The proxy engine captures requests on background threads. If every incoming request immediately triggers a main-thread UI update, the main thread gets saturated at high traffic volumes.

We measured this directly. At 500 requests per second -- which happens during an npm install or when a browser opens a page with dozens of resources -- dispatching each request individually to the main thread and calling insertRows(at:) consumed 100% of one CPU core on the main thread. The UI was frozen during these bursts.

The fix is batching. We use a Swift actor -- RequestBatchActor -- that sits between the proxy engine and the UI. Incoming requests are appended to a buffer inside the actor. A recurring timer fires every 100 milliseconds. On each tick, the actor grabs the current buffer contents, clears the buffer, and publishes the batch to the main actor.

On the main thread, the update is a single operation: append the batch to the data array, then call noteNumberOfRowsChanged followed by insertRows(at:withAnimation:) for the new index range. One main-thread dispatch per 100ms instead of 500 per second.
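A sketch of that batching layer. CapturedRequest, add, run, and the deliver callback are illustrative names under the assumptions above, not Rockxy's actual API:

```swift
// Stand-in for the real captured-request value type.
struct CapturedRequest: Sendable {
    let url: String
}

actor RequestBatchActor {
    private var buffer: [CapturedRequest] = []
    private let interval: Duration
    private let deliver: @MainActor ([CapturedRequest]) -> Void

    init(interval: Duration = .milliseconds(100),
         deliver: @escaping @MainActor ([CapturedRequest]) -> Void) {
        self.interval = interval
        self.deliver = deliver
    }

    // Called from the proxy engine on any thread; actor isolation
    // serializes appends, preserving capture order.
    func add(_ request: CapturedRequest) {
        buffer.append(request)
    }

    // Recurring tick: drain the buffer and hand one batch to the UI.
    func run() async {
        while !Task.isCancelled {
            try? await Task.sleep(for: interval)
            guard !buffer.isEmpty else { continue }
            let batch = buffer
            buffer.removeAll(keepingCapacity: true)
            await deliver(batch)
        }
    }
}
```

The proxy engine would call add(_:) once per captured request, and a Task spawned at startup would drive run(); the deliver closure is where the main-thread append-and-insertRows step from the paragraph above would live.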

The results:

  • Without batching at 500 req/s: 100% main-thread CPU, visible UI freezes, dropped frames.
  • With 100ms batching at 500 req/s: 4% main-thread CPU, 60 FPS constant, each batch inserts ~50 rows.
  • Latency cost: A request appears in the list at most 100ms after capture. For a debugging tool, this is imperceptible. Users see "real-time" updates.

The 100ms interval is configurable. During testing we tried 50ms (lower latency, slightly more CPU) and 200ms (less CPU, noticeable delay). 100ms is the sweet spot for Rockxy's use case.

One subtlety: the actor must be careful about ordering. Requests must appear in the list in capture order, not in the order they complete. The proxy engine timestamps each request at capture time, and the actor's buffer preserves insertion order. No sorting needed at the batch level.

The ring buffer for memory control

A debugging proxy can run for hours. A developer might leave Rockxy open all day, capturing traffic across multiple debugging sessions. If we kept every request object in memory indefinitely, memory would grow without bound. At an average of 2 KB per request object (URL string, headers dictionary, timing data, metadata), 1 million requests would consume 2 GB of heap just for the model objects -- before considering any response body data.

Rockxy uses a ring buffer (circular buffer) with a default capacity of 50,000 entries. The data structure is a fixed-size array with a head index and a tail index. New requests are written at the tail. When the buffer is full, the tail wraps around and overwrites the oldest entry at the head. The count of valid entries is always min(totalWritten, capacity).
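A minimal generic sketch of a ring buffer with those semantics (fixed capacity, overwrite-oldest, count = min(totalWritten, capacity)):

```swift
struct RingBuffer<Element> {
    private var storage: [Element?]
    private var tail = 0            // next write position
    private var totalWritten = 0
    let capacity: Int

    init(capacity: Int) {
        self.capacity = capacity
        storage = Array(repeating: nil, count: capacity)
    }

    var count: Int { min(totalWritten, capacity) }

    // Write at the tail; once full, this overwrites the oldest entry.
    mutating func append(_ element: Element) {
        storage[tail] = element
        tail = (tail + 1) % capacity
        totalWritten += 1
    }

    // Index 0 is the oldest surviving entry.
    subscript(index: Int) -> Element {
        precondition(index >= 0 && index < count)
        let head = totalWritten < capacity ? 0 : tail
        return storage[(head + index) % capacity]!
    }
}
```

In a version with eviction handling, the append method is where an overwritten element would be handed off (for example, to the SQLite cold store described below) before being replaced.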

This gives predictable memory usage. Whether Rockxy has been running for five minutes or five hours, the request list holds at most 50,000 entries. Memory for the model layer stays under 100 MB regardless of session duration.

When an entry is evicted from the ring buffer, it isn't simply discarded. If the user has enabled session recording, the evicted request is serialized to a SQLite database on disk. The database stores the full request and response data, keyed by a monotonic sequence number. If the user scrolls back far enough or searches for an old request, Rockxy can load it from disk on demand. The in-memory ring buffer is the hot cache; SQLite is the cold store.

The ring buffer implementation is optimized for the single-writer pattern. The proxy engine (via the batch actor) is the only writer. The UI thread is a reader. We use Swift's Atomic<Int> from the Synchronization framework for the head and tail indices. The writer atomically advances the tail; the reader reads the head and tail to determine the valid range. No locks, no contention. Reads never block writes.
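The index publication could look roughly like this, assuming the Swift 6 Synchronization module; SharedIndices and its method names are illustrative. The writer stores the element first and then publishes the new tail with release ordering, so a reader that loads the tail with acquire ordering is guaranteed to see the element write:

```swift
import Synchronization

final class SharedIndices: Sendable {
    let head = Atomic<Int>(0)
    let tail = Atomic<Int>(0)

    // Writer side: the element must already be written to the backing
    // array before the index is published with release ordering.
    func publishWrite(newTail: Int) {
        tail.store(newTail, ordering: .releasing)
    }

    // Reader side: snapshot the valid range without taking a lock.
    func validRange() -> Range<Int> {
        let h = head.load(ordering: .acquiring)
        let t = tail.load(ordering: .acquiring)
        return h..<t
    }
}
```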

The capacity is user-configurable in Rockxy's settings. Some users set it to 10,000 for lower memory use on constrained machines. Others bump it to 200,000 if they have RAM to spare and want to keep more history in the fast path. The table view doesn't care -- it only ever sees the current count and asks for rows by index.

Results and trade-offs

Here are the final numbers from our profiling, measured on an M1 MacBook Pro with 16 GB RAM, running macOS 15:

  • 100,000 rows in list: 45 MB memory for the view layer, constant 60 FPS during fast scroll.
  • CPU during scroll: under 5% of one core. The main thread spends its time configuring recycled views -- string assignments and layout, nothing expensive.
  • CPU during heavy traffic (500 req/s): 4% main-thread, 8% total (including the background actor and proxy engine).
  • Time to first row visible on launch: under 16ms. The table view doesn't wait for all data to load -- it renders whatever's in the buffer immediately.

There are real trade-offs to this approach:

  • No SwiftUI animations on the list. You lose .animation() and .transition() modifiers. NSTableView has its own animation system via insertRows(at:withAnimation:), but it's less flexible than SwiftUI's. We use .slideDown for new row insertion and that's it.
  • Column layout is imperative AppKit code. Adding a column means creating an NSTableColumn, setting width constraints, defining the sort descriptor, and wiring up the cell view in the delegate. It's not hard, but it's more code than adding a Text() in a List row.
  • Accessibility needs explicit wiring. SwiftUI views get a lot of accessibility support for free. With NSTableView, you need to implement NSAccessibility methods on your cell views to expose the right labels, roles, and values to VoiceOver. We did this -- it took an extra day of work -- but it's something SwiftUI handles automatically.
  • Two UI paradigms in one app. The inspector panel is SwiftUI. The request list is AppKit. State flows between them through bindings and the Coordinator. It works, but new contributors need to understand both frameworks to work on the main screen.

Every one of these trade-offs was worth it. Users don't see the implementation. They see a list that loads instantly, scrolls without stuttering, and doesn't eat their RAM. That's the whole point.

Takeaways for SwiftUI developers

SwiftUI is the right choice for most of Rockxy's UI. Detail views, forms, inspector panels, popovers, settings windows -- all SwiftUI, and they're better for it. The declarative syntax, the automatic dark mode support, the built-in accessibility, the animation system. For views that display moderate amounts of data with rich interactivity, SwiftUI is excellent.

But SwiftUI's List is not a general-purpose virtual scrolling container. It's a convenience wrapper optimized for typical app content -- hundreds to low thousands of rows. If you're building something that displays large datasets in a scrollable list -- log viewers, network monitors, database browsers, file managers, analytics dashboards -- you will hit the wall.

When you hit that wall, NSTableView via NSViewRepresentable is still the right answer in 2026. The bridge code is straightforward. The Coordinator pattern handles the state synchronization. The performance difference isn't marginal -- it's 45 MB versus 800 MB, 60 FPS versus 22 FPS. That's not an optimization; it's a different architecture.

Pair it with actor-based batching and a ring buffer, and you get a system that handles sustained high throughput without breaking a sweat. Rockxy's request list can ingest 500 requests per second, display 100,000 rows, and use less memory than a single Chrome tab.

The full implementation is in Rockxy's source code on GitHub. If you're facing a similar problem, take a look at how we structured the NSViewRepresentable bridge -- it's a pattern you can adapt for any AppKit view you need to embed in SwiftUI.