feat(nfc): add FM175XX NFC reader driver #22
Conversation
Add platform-independent FM175XX/MFRC522/PN512 NFC reader driver with:
- lib/trait/addr_io: unified register R/W interface for I2C/SPI
- lib/hal: I2cDevice and SpiDevice adapters (bus -> addr_io)
- lib/nfc: common NFC definitions (card types, protocols, NDEF)
- lib/nfc/drivers/fm175xx: full driver implementation
  - ISO14443A (REQA, anticollision, select, NTAG read/write)
  - ISO14443B (REQB, ATTRIB)
- Register definitions and chip documentation

Pending hardware testing with actual FM175XX module.

Co-authored-by: Cursor <cursoragent@cursor.com>
Summary of Changes

Hello @idy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands the project's capabilities by introducing a robust and modular NFC communication stack. It provides a new hardware driver for common NFC reader ICs, abstracts hardware communication through a generic I/O trait, and includes essential protocol and data format handling. This foundational work enables future features requiring NFC interaction, such as reading and writing NFC tags.
Code Review
This pull request introduces a comprehensive, platform-independent NFC stack for the FM175XX reader, featuring a well-designed architecture and good documentation. However, two significant security vulnerabilities were identified: a buffer overflow in the ISO14443A UID parsing logic, and a flawed NDEF record size calculation that leads to parser desynchronization; both are critical when handling untrusted card data. Additionally, a critical bug was found in the I2C adapter's large-write implementation, along with a significant stack-memory issue in the NDEF parser on embedded targets, and several medium-severity suggestions to improve efficiency and robustness.
    // For larger writes, write register first then data
    // Note: This may not work for all I2C devices that require
    // contiguous write. In that case, use a heap allocator.
    self.i2c.write(self.addr, &.{reg}) catch |err| {
        return mapError(err);
    };
    self.i2c.write(self.addr, data) catch |err| {
        return mapError(err);
    };
}
The implementation for writes larger than 32 bytes is incorrect for most I2C devices. It performs two separate I2C write transactions: one for the register address and another for the data. This results in [START][ADDR+W][REG][STOP][START][ADDR+W][DATA...][STOP], which will fail for devices that expect a single, contiguous write transaction for register writes.
The comment correctly notes that a heap allocator should be used, but the implemented fallback is buggy. It's better to explicitly disallow this operation until a correct implementation with an allocator is in place, rather than failing silently or unpredictably on some hardware.
// For larger writes, this implementation is incorrect as it issues two separate
// I2C transactions. This will not work for most I2C devices that expect a
// contiguous write. A heap allocator should be used to create a single buffer.
// Until then, this path should return an error to prevent silent failures.
// TODO: Pass an allocator and use it for writes > 32 bytes.
return error.TransportError;
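Alternatively, writes up to a fixed cap can be assembled into a single stack buffer so the transaction stays contiguous without a heap allocator. A minimal sketch, assuming a hypothetical `MAX_WRITE` cap and `frameWrite` helper (neither exists in the PR):

```zig
const std = @import("std");

// Hypothetical cap on buffered writes; bounds stack usage of the adapter.
const MAX_WRITE = 64;

/// Assemble [reg, data...] into `buf` and return the slice to transmit, so a
/// single I2C transaction [START][ADDR+W][REG][DATA...][STOP] can be issued.
fn frameWrite(reg: u8, data: []const u8, buf: []u8) []u8 {
    buf[0] = reg;
    @memcpy(buf[1..][0..data.len], data);
    return buf[0 .. data.len + 1];
}

// Sketch of use inside the adapter's write path:
//   if (data.len > MAX_WRITE) return error.TransportError; // or take an allocator
//   var buf: [MAX_WRITE + 1]u8 = undefined;
//   self.i2c.write(self.addr, frameWrite(reg, data, &buf)) catch |err| {
//       return mapError(err);
//   };
```

This keeps small and medium writes correct on devices that require contiguous register writes, while still failing loudly (or deferring to an allocator) beyond the cap.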
pub const Message = struct {
    records: []Record,
    record_count: usize = 0,

    _records_buf: [16]Record = undefined,
The Message struct's design leads to a very large stack allocation. It contains an array of 16 Records, and each Record has large internal buffers (e.g., _payload_buf: [1024]u8). When a Message object is created on the stack, this results in an allocation of approximately 24 KB (16 * (256 + 256 + 1024)), which will cause a stack overflow on most embedded systems.
To fix this, you should either significantly reduce the size and number of these buffers or, for a more flexible solution, refactor the parsing logic to use a dynamic allocator.
_records_buf: [4]Record = undefined, // Reduced from 16
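The 24 KB figure can be verified mechanically. A standalone sketch, using hypothetical field names that mirror the buffer sizes quoted above (the PR's actual definitions may differ), with a comptime budget check of the kind that would catch this class of regression at build time:

```zig
const std = @import("std");

// Hypothetical mirror of the buffer sizes quoted in the review comment.
const Record = struct {
    _type_buf: [256]u8 = undefined,
    _id_buf: [256]u8 = undefined,
    _payload_buf: [1024]u8 = undefined,
};

const Message = struct {
    _records_buf: [16]Record = undefined,
};

// 16 * (256 + 256 + 1024) = 24_576 bytes (~24 KB), far too large to place on
// the stack of most embedded targets. A comptime guard makes the budget explicit:
comptime {
    if (@sizeOf(Message) > 32 * 1024) @compileError("Message exceeds stack budget");
}
```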
// Store UID bytes (skip cascade tag 0x88 if present)
if (uid_part[0] == TypeA.CASCADE_TAG and level < card.cascade_level - 1) {
    // Cascade tag present, copy only 3 bytes
    @memcpy(card.uid[uid_offset..][0..3], uid_part[1..4]);
    uid_offset += 3;
} else {
    // No cascade tag, copy all 4 bytes
    @memcpy(card.uid[uid_offset..][0..4], uid_part[0..4]);
    uid_offset += 4;
}
The driver is vulnerable to a buffer overflow (or safety-checked panic) when parsing the UID of an ISO14443A card. If a malicious card claims to be triple-size (cascade level 3) in its ATQA response but does not provide the expected cascade tags (0x88) during the anticollision phase, uid_offset will increment by 4 at each level. On the third level, uid_offset will be 8, and the code will attempt to copy 4 bytes into card.uid[8..]. Since card.uid has a fixed size of 10 bytes, this will result in an out-of-bounds write or a panic.
Recommendation: Validate that uid_offset + 4 <= card.uid.len before copying, and strictly enforce the presence of cascade tags for multi-level UIDs.
// Store UID bytes (skip cascade tag 0x88 if present)
if (uid_part[0] == TypeA.CASCADE_TAG and level < card.cascade_level - 1) {
// Cascade tag present, copy only 3 bytes
if (uid_offset + 3 > card.uid.len) return error.BufferOverflow;
@memcpy(card.uid[uid_offset..][0..3], uid_part[1..4]);
uid_offset += 3;
} else {
// No cascade tag, copy all 4 bytes
if (uid_offset + 4 > card.uid.len) return error.BufferOverflow;
@memcpy(card.uid[uid_offset..][0..4], uid_part[0..4]);
uid_offset += 4;
}
fn csLow(self: *Self) void {
    self.cs.setLow() catch {};
}
The csLow function ignores potential errors from self.cs.setLow() by using catch {}. If asserting the chip select line fails (e.g., due to a GPIO error), subsequent SPI operations will either fail silently or, in a multi-device scenario, communicate with the wrong device. This is a significant risk and the error should be propagated.
fn csLow(self: *Self) Error!void {
self.cs.setLow() catch return error.TransportError;
}
fn getRecordSize(data: []const u8) ?usize {
    if (data.len < 3) return null;

    const flags: RecordFlags = @bitCast(data[0]);
    const type_len = data[1];

    var size: usize = 2;

    // Payload length field
    if (flags.sr) {
        if (data.len < size + 1) return null;
        size += 1 + data[size]; // 1 byte length + payload
    } else {
        if (data.len < size + 4) return null;
        const payload_len = std.mem.readInt(u32, data[size..][0..4], .big);
        size += 4 + payload_len;
    }

    // ID length field
    if (flags.il) {
        if (data.len < size + 1) return null;
        size += 1 + data[size - 1]; // Already counted, adjust
    }

    size += type_len;

    return size;
}
The getRecordSize function contains incorrect logic for calculating the size of NDEF records. It incorrectly calculates the offset of the ID length field and adds the payload length before accounting for other header fields (Type, ID). This leads to parser desynchronization, where the parser may misinterpret payload data as subsequent record headers. This can be exploited to hide malicious records or bypass security checks that rely on NDEF parsing.
Recommendation: Rewrite getRecordSize to correctly follow the NDEF record structure, or modify parseRecord to return the number of bytes consumed and use that value to advance the offset.
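For reference, a corrected size calculation would walk the NDEF record layout in order: header byte, TYPE LENGTH, PAYLOAD LENGTH (1 byte for short records, else 4 bytes big-endian), optional ID LENGTH, then the TYPE, ID, and PAYLOAD fields. A sketch follows; the `RecordFlags` packed-struct layout is an assumption matching the standard NDEF header bit order (TNF in the low bits, MB in the high bit), not necessarily the PR's definition:

```zig
const std = @import("std");

// Assumed NDEF header bit layout, LSB first: TNF, IL, SR, CF, ME, MB.
const RecordFlags = packed struct(u8) {
    tnf: u3,
    il: bool,
    sr: bool,
    cf: bool,
    me: bool,
    mb: bool,
};

/// Total size = header byte + TYPE LENGTH byte + PAYLOAD LENGTH field
/// (1 or 4 bytes) + optional ID LENGTH byte + TYPE + ID + PAYLOAD.
/// Callers should still verify the returned size fits within `data`.
fn getRecordSize(data: []const u8) ?usize {
    if (data.len < 3) return null;
    const flags: RecordFlags = @bitCast(data[0]);
    const type_len: usize = data[1];
    var offset: usize = 2;

    var payload_len: usize = 0;
    if (flags.sr) {
        payload_len = data[offset]; // data.len >= 3 guarantees this read
        offset += 1;
    } else {
        if (data.len < offset + 4) return null;
        payload_len = std.mem.readInt(u32, data[offset..][0..4], .big);
        offset += 4;
    }

    var id_len: usize = 0;
    if (flags.il) {
        if (data.len < offset + 1) return null;
        id_len = data[offset];
        offset += 1;
    }

    // The header fields are followed by TYPE, ID, then PAYLOAD, in that order.
    return offset + type_len + id_len + payload_len;
}
```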
var offset: usize = 0;
while (offset < buf.len) {
    const chunk = @min(buf.len - offset, 63);
    tx_buf[0] = addr_byte;
    for (1..chunk + 1) |i| {
        // Continue reading same register (FIFO auto-increment)
        tx_buf[i] = if (i < chunk) addr_byte else 0x00;
    }
    self.spi.transfer(tx_buf[0 .. chunk + 1], rx_buf[0 .. chunk + 1]) catch {
        return error.TransportError;
    };
    @memcpy(buf[offset..][0..chunk], rx_buf[1..][0..chunk]);
    offset += chunk;
}
The implementation of continuous read for has_transfer SPI appears inefficient and is likely incorrect for standard continuous read protocols. It re-sends the read address for every byte within a chunk, which is highly unusual. Most SPI devices, when reading from a FIFO or auto-incrementing register, expect the address only once at the beginning of the transaction.
A more standard and efficient approach is to send the address byte once, followed by dummy bytes to clock in the data. This can be done in chunks to avoid large stack allocations.
var offset: usize = 0;
while (offset < buf.len) {
const chunk = @min(buf.len - offset, 63);
tx_buf[0] = addr_byte;
    @memset(tx_buf[1 .. chunk + 1], 0x00); // dummy bytes clock in the data
self.spi.transfer(tx_buf[0 .. chunk + 1], rx_buf[0 .. chunk + 1]) catch {
return error.TransportError;
};
@memcpy(buf[offset..][0..chunk], rx_buf[1..][0..chunk]);
offset += chunk;
}
switch (mode) {
    .off => try self.clearBits(.tx_control, 0x03),
    .tx1 => {
        try self.clearBits(.tx_control, 0x02);
        try self.setBits(.tx_control, 0x01);
    },
    .tx2 => {
        try self.clearBits(.tx_control, 0x01);
        try self.setBits(.tx_control, 0x02);
    },
    .both => try self.setBits(.tx_control, 0x03),
}
The setRf function can be made more efficient. It currently performs up to two register modifications for modes .tx1 and .tx2. This can be consolidated into a single read-modify-write operation, which is more atomic and reduces bus traffic.
var value = try self.readReg(.tx_control);
value &= ~@as(u8, 0x03); // Clear TX1 and TX2 enable bits
value |= @intFromEnum(mode);
try self.writeReg(.tx_control, value);
while (offset < data.len and record_idx < msg._records_buf.len) {
    const record = try parseRecord(data[offset..], &msg._records_buf[record_idx]);
    offset += getRecordSize(data[offset..]) orelse return error.InvalidFormat;
The Message.parse function calls getRecordSize immediately after parseRecord. This is inefficient because getRecordSize re-parses the record header just to determine its length, work that parseRecord has already done.
To improve performance, parseRecord should return the number of bytes it consumed. This avoids redundant processing and makes the parsing loop more efficient.
const consumed = try parseRecord(data[offset..], &msg._records_buf[record_idx]);
offset += consumed;
Summary

- addr_io trait for unified I2C/SPI register operations

Architecture

Files Added

- lib/trait/src/addr_io.zig
- lib/hal/src/i2c_device.zig
- lib/hal/src/spi_device.zig
- lib/nfc/

Test Plan

- zig build test (for hal, nfc, fm175xx)

Made with Cursor