Skip to content

⚡️ Speed up method RecordParser.parseRecord by 17%#92

Open
codeflash-ai[bot] wants to merge 1 commit intofix/add-mockito-test-dependencyfrom
codeflash/optimize-RecordParser.parseRecord-mmbid0it
Open

⚡️ Speed up method RecordParser.parseRecord by 17%#92
codeflash-ai[bot] wants to merge 1 commit intofix/add-mockito-test-dependencyfrom
codeflash/optimize-RecordParser.parseRecord-mmbid0it

Conversation

@codeflash-ai
Copy link
Copy Markdown

@codeflash-ai codeflash-ai bot commented Mar 4, 2026

📄 17% (0.17x) speedup for RecordParser.parseRecord in client/src/com/aerospike/client/command/RecordParser.java

⏱️ Runtime : 1.61 microseconds 1.37 microseconds (best of 179 runs)

📝 Explanation and details

The parseRecord path is 17% faster overall (1.61 μs -> 1.37 μs) thanks to hot-path optimizations in the tight loop. Concretely, the change caches dataBuffer/opCount/dataOffset into local variables, replaces the containsKey+get pattern with a single get and conditional handling, pre-sizes the LinkedHashMap to avoid rehashing, and defers writing dataOffset back to the object until after the loop. These changes eliminate repeated field and array accesses and redundant hash lookups, reduce hash table resizes and temporary allocations, and minimize writes to the parser object, which together lower CPU cycles in the hot path. The trade-off is a slightly larger minimum hash table allocation for very small op counts (due to the chosen initial capacity), which is a small memory/alloc cost relative to the consistent runtime improvement for typical workloads.

Correctness verification report:

Test Status
⚙️ Existing Unit Tests 🔘 None Found
🌀 Generated Regression Tests 6 Passed
⏪ Replay Tests 🔘 None Found
🔎 Concolic Coverage Tests 🔘 None Found
📊 Tests Coverage 0.0%
🌀 Click to see Generated Regression Tests
package com.aerospike.client.command;

import org.junit.Test;
import org.junit.Before;
import static org.junit.Assert.*;

import com.aerospike.client.AerospikeException;
import com.aerospike.client.command.RecordParser;
import com.aerospike.client.Record;
// Performance comparison:
// RecordParserTest.testLargeBuffer_WithOpCountZero_ConstructsEfficiently#5: 0.000ms -> 0.000ms (13.7% faster)
// RecordParserTest.testAsyncConstructor_OpCountZero_ParseRecordReturnsNullBins#1: 0.000ms -> 0.000ms (7.3% faster)
// RecordParserTest.testAsyncConstructor_WithNonZeroOffset_ParsesHeaderCorrectly#4: 0.000ms -> 0.000ms (3.2% faster)
// RecordParserTest.testParseRecord_NoOps_ReturnsRecordWithNullBins#3: 0.000ms -> 0.000ms (7.4% faster)
// RecordParserTest.testParseRecord_IsOperationFlag_NoOpsStillReturnsNullBins#5: 0.000ms -> 0.000ms (29.7% faster)
// RecordParserTest.testParseRecord_IsOperationFlag_NoOpsStillReturnsNullBins#6: 0.000ms -> 0.000ms (26.4% faster)

/**
 * Unit tests for com.aerospike.client.command.RecordParser
 *
 * NOTE:
 * - These tests focus on the async constructor and parseRecord behavior for the
 *   no-op (opCount == 0) case and header parsing validation.
 * - The sync constructor requires a live Connection and is not exercised here.
 */
public class RecordParserTest {
	private byte[] buffer;

	@Before
	public void setUp() {
		// Allocate a buffer large enough for header + some padding.
		buffer = new byte[128];
	}

	/**
	 * Helper to write a 4-byte big-endian int into a byte array.
	 */
	private void putInt(byte[] b, int index, int value) {
		b[index]     = (byte)((value >> 24) & 0xFF);
		b[index + 1] = (byte)((value >> 16) & 0xFF);
		b[index + 2] = (byte)((value >> 8) & 0xFF);
		b[index + 3] = (byte)(value & 0xFF);
	}

	/**
	 * Helper to write a 2-byte big-endian short into a byte array.
	 */
	private void putShort(byte[] b, int index, int value) {
		b[index]     = (byte)((value >> 8) & 0xFF);
		b[index + 1] = (byte)(value & 0xFF);
	}

	@Test(expected = AerospikeException.Parse.class)
	public void testAsyncConstructor_InvalidReceiveSize_ThrowsParseException() {
		// Use a very small receiveSize to trigger the check against Command.MSG_REMAINING_HEADER_SIZE.
		// The actual numeric threshold is defined in Command.MSG_REMAINING_HEADER_SIZE in the library.
		// Using 1 should be less than that and produce AerospikeException.Parse.
		new RecordParser(buffer, 0, 1);
	}

	@Test
	public void testAsyncConstructor_ParsesHeaderFields_CorrectValues() {
		// Arrange: construct buffer to match parser expectations.
		// Constructor does: offset += 5; resultCode at buffer[offset]; generation at offset+1..+4; expiration at next 4 bytes,
		// then after a gap the fieldCount and opCount are read. Based on RecordParser implementation the mapping is:
		// resultCode -> buffer[5]
		// generation  -> buffer[6..9]
		// expiration  -> buffer[10..13]
		// fieldCount  -> buffer[18..19]
		// opCount     -> buffer[20..21]
		int offset = 0;
		int resultCodeIndex = offset + 5;
		buffer[resultCodeIndex] = (byte)0x05; // arbitrary result code

		// generation at index 6..9
		int generationValue = 123456789;
		putInt(buffer, 6, generationValue);

		// expiration at index 10..13
		int expirationValue = 987654321;
		putInt(buffer, 10, expirationValue);

		// fieldCount at index 18..19
		short fieldCountValue = 2;
		putShort(buffer, 18, fieldCountValue);

		// opCount at index 20..21 (set to 0 for simple test)
		short opCountValue = 0;
		putShort(buffer, 20, opCountValue);

		// The receiveSize must be at least Command.MSG_REMAINING_HEADER_SIZE.
		// Provide a reasonable value (the library constant will be checked internally).
		int receiveSize = 64;

		RecordParser parser = new RecordParser(buffer, offset, receiveSize);

		assertEquals("Result code parsed incorrectly", 0x05, parser.resultCode);
		assertEquals("Generation parsed incorrectly", generationValue, parser.generation);
		assertEquals("Expiration parsed incorrectly", expirationValue, parser.expiration);
		assertEquals("Field count parsed incorrectly", fieldCountValue, parser.fieldCount);
		assertEquals("Op count parsed incorrectly", opCountValue, parser.opCount);

		// Verify that dataOffset matches expected location (22 as per constructor logic).
		assertEquals("Data offset should point after header fields", 22, parser.dataOffset);
		// Data buffer should reference original buffer.
		assertSame("dataBuffer should reference the original buffer", buffer, parser.dataBuffer);
	}

	@Test
	public void testParseRecord_NoOps_ReturnsRecordWithNullBins() {
		// Build header with opCount == 0 so parseRecord returns Record with null bins.
		int offset = 0;
		buffer[offset + 5] = (byte)0x00; // result code

		int generationValue = -42; // negative value test
		putInt(buffer, 6, generationValue);

		int expirationValue = 314159; 
		putInt(buffer, 10, expirationValue);

		putShort(buffer, 18, (short)0); // fieldCount
		putShort(buffer, 20, (short)0);  // opCount == 0

		int receiveSize = 32;
		RecordParser parser = new RecordParser(buffer, offset, receiveSize);

		Record rec = parser.parseRecord(false);
		// With opCount == 0 RecordParser returns new Record(null, generation, expiration)
		assertNull("Bins should be null when no operations are present", rec.bins);
		assertEquals("Generation should be preserved in returned record", generationValue, rec.generation);
		assertEquals("Expiration should be preserved in returned record", expirationValue, rec.expiration);
	}

	@Test
	public void testAsyncConstructor_LargeReceiveSize_DoesNotThrow() {
		// Ensure the async constructor tolerates a large receiveSize value as long as the buffer contains the needed header bytes.
		int offset = 0;
		buffer[offset + 5] = (byte)0xFF; // result code sentinel

		putInt(buffer, 6, 1);   // generation
		putInt(buffer, 10, 2);  // expiration
		putShort(buffer, 18, (short)0);
		putShort(buffer, 20, (short)0);

		// Large receive size (but buffer itself is large enough for header parsing).
		int receiveSize = 10_000_000;
		RecordParser parser = new RecordParser(buffer, offset, receiveSize);

		assertEquals("Large receive size should not affect parsed result code", 0xFF, parser.resultCode);
	}

	@Test
	public void testParseRecord_IsOperationFlag_NoOpsStillReturnsNullBins() {
		// Ensure that parseRecord behavior for no operations is identical regardless of isOperation flag.
		int offset = 0;
		buffer[offset + 5] = (byte)0x01;
		putInt(buffer, 6, 7);
		putInt(buffer, 10, 8);
		putShort(buffer, 18, (short)0);
		putShort(buffer, 20, (short)0);

		int receiveSize = 32;
		RecordParser parser = new RecordParser(buffer, offset, receiveSize);

		Record r1 = parser.parseRecord(true);
		assertNull("Bins should be null for isOperation=true when opCount==0", r1.bins);
		Record r2 = parser.parseRecord(false);
		assertNull("Bins should be null for isOperation=false when opCount==0", r2.bins);
	}
}

To edit these changes git checkout codeflash/optimize-RecordParser.parseRecord-mmbid0it and push.

Codeflash Static Badge

The parseRecord path is 17% faster overall (1.61 μs -> 1.37 μs) thanks to hot-path optimizations in the tight loop. Concretely, the change caches dataBuffer/opCount/dataOffset into local variables, replaces the containsKey+get pattern with a single get and conditional handling, pre-sizes the LinkedHashMap to avoid rehashing, and defers writing dataOffset back to the object until after the loop. These changes eliminate repeated field and array accesses and redundant hash lookups, reduce hash table resizes and temporary allocations, and minimize writes to the parser object, which together lower CPU cycles in the hot path. The trade-off is a slightly larger minimum hash table allocation for very small op counts (due to the chosen initial capacity), which is a small memory/alloc cost relative to the consistent runtime improvement for typical workloads.
@codeflash-ai codeflash-ai bot requested a review from misrasaurabh1 March 4, 2026 03:59
@codeflash-ai codeflash-ai bot added ⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: Medium Optimization Quality according to Codeflash labels Mar 4, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: Medium Optimization Quality according to Codeflash

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant