perf(nbd): zero-alloc, zero-copy synchronous dispatch #2395
ValentaTomas wants to merge 5 commits into main from
Conversation
PR Summary — Medium Risk. Reviewed by Cursor Bugbot for commit f2ca90c. Bugbot is set up for automated code reviews on this repo.
Force-pushed 32ef3b7 to 8018ef2
```go
	c.setIsCached(off, end-off)
}
	...
	c.mu.Unlock()
	...
// done(false) releases the write lock without marking dirty (use on write failure).
func (c *Cache) WriteSlice(off, length int64) ([]byte, func(bool), error) {
	c.mu.Lock()
```
Cache mutex held during socket I/O. c.mu.Lock() is acquired here and not released until done() is called in dispatch.go after the entire fp.Read loop completes (line 195). The mutex is held for the full duration of reading the write payload off the socket. With FlagCanMulticonn, multiple Dispatch goroutines share the same Provider. While one goroutine is blocked on slow socket data holding c.mu, every other connection's reads and writes stall on this lock. The old design held the lock only briefly inside WriteAt, after socket data was already buffered locally. The fix is to read socket data into a temp buffer first, then acquire the lock only to copy into and mark the mmap.
```go
func (o *Overlay) ReadSlices(ctx context.Context, off, length int64, dest [][]byte) ([][]byte, error) {
	dest = dest[:0]
	numBlocks := length / o.blockSize
```
Silent truncation for non-blockSize-aligned lengths. numBlocks := length / o.blockSize discards trailing bytes when length is not a multiple of blockSize, with no error. If a non-aligned length is passed, writeReadResponse sends fewer bytes than requested and the kernel NBD driver treats this as an I/O error. Kernel NBD requests are block-aligned in practice, but a guard returning an error for non-aligned lengths would surface any future misuse immediately.
Force-pushed 8018ef2 to 6e73f66
Force-pushed 6e73f66 to c8fb5c2
Replace async goroutine-per-request NBD dispatch with a synchronous loop using mmap slices directly. Reads: Overlay.ReadSlices returns per-block mmap slices, written to the NBD socket via writev (net.Buffers) — zero user-space copies. Writes: socket data is read into a reusable buffer, then copied into the mmap under a brief lock via Cache.WriteSlice. Removes per-request: goroutines, channels, sync.Pool, write lock, WaitGroup, and shutting-down machinery.
…ignment
Restore the 32 MB dispatchMaxWriteBufferSize check removed in the zero-alloc rewrite — a corrupted NBD header could cause an unbounded allocation and OOM. Add offset alignment validation in Overlay.ReadSlices to match the existing length check.
ReadSlices calls device.Slice instead of ReadAt, so the SlowDevice test wrapper must delay Slice as well to trigger the kernel NBD I/O timeout in TestSlowBackend_ShortTimeout.
Force-pushed 7a492d2 to eef5a81
Copy mmap slice data into a reusable contiguous buffer before writing the NBD read response, matching the old ReadAt behavior. This avoids potential issues with sending mmap references directly while the underlying data could be modified by concurrent operations.
Force-pushed 723a308 to 054fbf2
Force-pushed 5c06f2c to f0ec082
Replace per-request buffer allocations in NBD dispatch with zero-copy mmap slice access. Reads use ReadSlices to get per-block mmap references and copy into a pooled buffer; writes use WriteSlice to get a writable mmap reference and copy socket data directly in. Keeps the existing goroutine-per-request dispatch structure unchanged.
Force-pushed f0ec082 to f2ca90c
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```go
	}
	...
	return dest, nil
}
```
Mmap slices read/written without lock protection cause data race
Medium Severity
ReadSlices calls cache.Slice() without holding any lock, and the returned mmap slice references are later written to the socket by writeSlices. Previously, Overlay.ReadAt went through Cache.ReadAt which held c.mu.RLock(), providing mutual exclusion with writes via RWMutex. Now, a concurrent WriteFrom on another connection (multiconn) can modify the same mmap pages while writeSlices is reading them, producing torn data in the NBD response.
```go
		c.mu.Unlock()
	}, nil
}
```
Exclusive cache lock held during entire socket read
Medium Severity
Cache.WriteSlice acquires the exclusive write lock (c.mu.Lock()), and Overlay.WriteFrom calls io.ReadFull from a network socket before releasing it via the done callback. For writes up to 32 MB, this holds the exclusive lock for the entire duration of the socket read. The old code read socket data into a temporary buffer first, then held the lock only for a brief memcpy. This serializes all lock-dependent cache operations (Close, ExportToDiff, other WriteAt calls) across all multiconn connections during potentially slow I/O.


Summary
- `Overlay.ReadSlices` returns per-block mmap slices (overlay cache or template cache), written to the NBD socket via `writev` (`net.Buffers`) — zero user-space copies, zero allocations after warmup
- `Cache.WriteSlice` returns a writable mmap reference; socket data is read directly into the mmap page — zero user-space copies, zero allocations
- Removes per-request goroutines, channels, `sync.Pool`, write lock, WaitGroup, and shutting-down machinery

Profile context
From profiling on n2-highmem-80 nodes, NBD dispatch was the dominant allocator after the multipart upload fix (#2394):

- `Dispatch.Handle` (write buffers): `make([]byte, request.Length)` per write
- `cmdRead.func1` (read buffers): `make([]byte, length)` per read

This PR eliminates both — zero allocations after the first request at each size.
How it works
Reads: Instead of allocating a buffer, copying mmap data into it, then writing to the socket — we collect per-block mmap slice references and send them via `writev` in a single syscall.

Writes: Instead of allocating a buffer, reading socket data into it, then copying into the mmap — we get a writable reference to the overlay cache mmap and read socket data directly into it.
Parallelism is preserved via `FlagCanMulticonn` (multiple NBD socket connections per device), rather than per-request goroutines within a connection.

Test plan

- `go build ./...` passes
- `go vet` passes