
Conversation

@idy idy (Contributor) commented Feb 2, 2026

Summary

  • Add platform-independent FM175XX/MFRC522/PN512 NFC reader driver
  • Introduce addr_io trait for unified I2C/SPI register operations
  • Add I2cDevice and SpiDevice adapters in HAL layer
  • Implement ISO14443A/B protocols with NTAG/Mifare Ultralight support
  • Include NDEF parsing/encoding utilities

Architecture

FM175XX Driver (addr_io)
        │
   ┌────┴────┐
   │         │
I2cDevice  SpiDevice
   │         │
i2c trait  spi + gpio
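
For illustration, here is a minimal sketch of what a vtable-style addr_io interface could look like; the actual trait in lib/trait/src/addr_io.zig may use different names, error sets, or a comptime-generic approach.

// Illustrative sketch only; not the real definition in lib/trait/src/addr_io.zig.
pub const AddrIo = struct {
    pub const Error = error{TransportError};

    ctx: *anyopaque,
    readFn: *const fn (ctx: *anyopaque, reg: u8, buf: []u8) Error!void,
    writeFn: *const fn (ctx: *anyopaque, reg: u8, data: []const u8) Error!void,

    // Read buf.len bytes starting at register reg.
    pub fn read(self: AddrIo, reg: u8, buf: []u8) Error!void {
        return self.readFn(self.ctx, reg, buf);
    }

    // Write data to register reg.
    pub fn write(self: AddrIo, reg: u8, data: []const u8) Error!void {
        return self.writeFn(self.ctx, reg, data);
    }
};

With a shape like this, I2cDevice and SpiDevice only need to supply the two function pointers plus a context pointer, and the FM175XX driver stays transport-agnostic.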

Files Added

Path                          Description
lib/trait/src/addr_io.zig     Unified register R/W interface
lib/hal/src/i2c_device.zig    I2C to addr_io adapter
lib/hal/src/spi_device.zig    SPI + CS GPIO to addr_io adapter
lib/nfc/                      Common NFC definitions and FM175XX driver

Test Plan

  • Unit tests pass (zig build test for hal, nfc, fm175xx)
  • Hardware test with FM175XX module (pending - no hardware available)


Add platform-independent FM175XX/MFRC522/PN512 NFC reader driver with:

- lib/trait/addr_io: unified register R/W interface for I2C/SPI
- lib/hal: I2cDevice and SpiDevice adapters (bus -> addr_io)
- lib/nfc: common NFC definitions (card types, protocols, NDEF)
- lib/nfc/drivers/fm175xx: full driver implementation
  - ISO14443A (REQA, anticollision, select, NTAG read/write)
  - ISO14443B (REQB, ATTRIB)
  - Register definitions and chip documentation

Pending hardware testing with actual FM175XX module.

Co-authored-by: Cursor <cursoragent@cursor.com>
@gemini-code-assist

Summary of Changes

Hello @idy, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the project's capabilities by introducing a robust and modular NFC communication stack. It provides a new hardware driver for common NFC reader ICs, abstracts hardware communication through a generic I/O trait, and includes essential protocol and data format handling. This foundational work enables future features requiring NFC interaction, such as reading and writing NFC tags.

Highlights

  • New FM175XX NFC Reader Driver: A new platform-independent driver has been added for the FM175XX NFC reader IC, which is compatible with MFRC522 and PN512 chips. This driver provides core functionality for NFC communication.
  • Unified Address-based I/O Trait: A new addr_io trait has been introduced to provide a unified interface for register-based communication, abstracting away the underlying I2C or SPI transport details.
  • HAL Device Adapters: I2cDevice and SpiDevice adapters have been added to the Hardware Abstraction Layer (HAL), allowing existing I2C and SPI bus implementations to conform to the new addr_io trait for seamless integration with the NFC driver.
  • ISO14443A/B Protocol Support: The driver includes support for ISO14443A and ISO14443B protocols, enabling interaction with a wide range of NFC cards, including NTAG and Mifare Ultralight.
  • NDEF Parsing and Encoding Utilities: Utilities for parsing and encoding NFC Data Exchange Format (NDEF) messages and records have been added, facilitating the handling of structured NFC data.
  • Comprehensive Documentation: Extensive documentation for the FM175XX chip, including its register map, commands, and protocol specifics, has been added to aid future development and understanding.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature                Command               Description
Code Review            /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary   /gemini summary       Provides a summary of the current pull request in its current state.
Comment                @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                   /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist bot left a comment

Code Review

This pull request introduces a comprehensive, platform-independent NFC stack for the FM175XX reader, with a well-designed architecture and good documentation. However, two significant security vulnerabilities were identified: a buffer overflow in the ISO14443A UID parsing logic and a flawed NDEF record size calculation that leads to parser desynchronization; both matter because this code handles untrusted card data. In addition, a critical bug in the I2C adapter's large-write implementation and a significant stack memory issue in the NDEF parser on embedded targets were found, alongside several medium-severity suggestions to improve efficiency and robustness.

Comment on lines +90 to +99
// For larger writes, write register first then data
// Note: This may not work for all I2C devices that require
// contiguous write. In that case, use a heap allocator.
self.i2c.write(self.addr, &.{reg}) catch |err| {
return mapError(err);
};
self.i2c.write(self.addr, data) catch |err| {
return mapError(err);
};
}


critical

The implementation for writes larger than 32 bytes is incorrect for most I2C devices. It performs two separate I2C write transactions: one for the register address and another for the data. This results in [START][ADDR+W][REG][STOP][START][ADDR+W][DATA...][STOP], which will fail for devices that expect a single, contiguous write transaction for register writes.

The comment correctly notes that a heap allocator should be used, but the implemented fallback is buggy. It's better to explicitly disallow this operation until a correct implementation with an allocator is in place, rather than failing silently or unpredictably on some hardware.

            // For larger writes, this implementation is incorrect as it issues two separate
            // I2C transactions. This will not work for most I2C devices that expect a
            // contiguous write. A heap allocator should be used to create a single buffer.
            // Until then, this path should return an error to prevent silent failures.
            // TODO: Pass an allocator and use it for writes > 32 bytes.
            return error.TransportError;
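
For reference, a possible allocator-based fix is sketched below. It assumes an allocator field (e.g. allocator: std.mem.Allocator) is added to I2cDevice, which the current code does not have; it builds one contiguous buffer so the device sees a single write transaction.

// Sketch only: requires adding an allocator field to I2cDevice.
fn writeRegLarge(self: *Self, reg: u8, data: []const u8) Error!void {
    // Build [reg, data...] in one heap buffer and send it as a single transaction.
    const buf = self.allocator.alloc(u8, data.len + 1) catch return error.TransportError;
    defer self.allocator.free(buf);
    buf[0] = reg;
    @memcpy(buf[1..], data);
    self.i2c.write(self.addr, buf) catch |err| return mapError(err);
}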

Comment on lines +174 to +178
pub const Message = struct {
records: []Record,
record_count: usize = 0,

_records_buf: [16]Record = undefined,


critical

The Message struct's design leads to a very large stack allocation. It contains an array of 16 Records, and each Record has large internal buffers (e.g., _payload_buf: [1024]u8). When a Message object is created on the stack, this results in an allocation of approximately 24 KB (16 * (256 + 256 + 1024)), which will cause a stack overflow on most embedded systems.

To fix this, you should either significantly reduce the size and number of these buffers or, for a more flexible solution, refactor the parsing logic to use a dynamic allocator.

    _records_buf: [4]Record = undefined, // Reduced from 16
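
One possible direction is an allocator-backed parse entry point; the rough sketch below assumes the existing parseRecord/getRecordSize helpers and a caller-supplied allocator and record limit (neither is part of the current API).

// Sketch only: heap-allocated record storage instead of a 16-entry in-struct array.
// The caller owns msg.records and frees it with the same allocator.
pub fn parseAlloc(allocator: std.mem.Allocator, data: []const u8, max_records: usize) !Message {
    const recs = try allocator.alloc(Record, max_records);
    errdefer allocator.free(recs);

    var count: usize = 0;
    var offset: usize = 0;
    while (offset < data.len and count < recs.len) {
        _ = try parseRecord(data[offset..], &recs[count]);
        offset += getRecordSize(data[offset..]) orelse return error.InvalidFormat;
        count += 1;
    }
    return .{ .records = recs, .record_count = count };
}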

Comment on lines +51 to +60
// Store UID bytes (skip cascade tag 0x88 if present)
if (uid_part[0] == TypeA.CASCADE_TAG and level < card.cascade_level - 1) {
// Cascade tag present, copy only 3 bytes
@memcpy(card.uid[uid_offset..][0..3], uid_part[1..4]);
uid_offset += 3;
} else {
// No cascade tag, copy all 4 bytes
@memcpy(card.uid[uid_offset..][0..4], uid_part[0..4]);
uid_offset += 4;
}


security-high

The driver is vulnerable to a buffer overflow (or safety-checked panic) when parsing the UID of an ISO14443A card. If a malicious card claims to be triple-size (cascade level 3) in its ATQA response but does not provide the expected cascade tags (0x88) during the anticollision phase, uid_offset will increment by 4 at each level. On the third level, uid_offset will be 8, and the code will attempt to copy 4 bytes into card.uid[8..]. Since card.uid has a fixed size of 10 bytes, this will result in an out-of-bounds write or a panic.

Recommendation: Validate that uid_offset + 4 <= card.uid.len before copying, and strictly enforce the presence of cascade tags for multi-level UIDs.

                // Store UID bytes (skip cascade tag 0x88 if present)
                if (uid_part[0] == TypeA.CASCADE_TAG and level < card.cascade_level - 1) {
                    // Cascade tag present, copy only 3 bytes
                    if (uid_offset + 3 > card.uid.len) return error.BufferOverflow;
                    @memcpy(card.uid[uid_offset..][0..3], uid_part[1..4]);
                    uid_offset += 3;
                } else {
                    // No cascade tag, copy all 4 bytes
                    if (uid_offset + 4 > card.uid.len) return error.BufferOverflow;
                    @memcpy(card.uid[uid_offset..][0..4], uid_part[0..4]);
                    uid_offset += 4;
                }

Comment on lines +94 to +96
fn csLow(self: *Self) void {
self.cs.setLow() catch {};
}


high

The csLow function ignores potential errors from self.cs.setLow() by using catch {}. If asserting the chip select line fails (e.g., due to a GPIO error), subsequent SPI operations will either fail silently or, in a multi-device scenario, communicate with the wrong device. This is a significant risk and the error should be propagated.

        fn csLow(self: *Self) Error!void {
            self.cs.setLow() catch return error.TransportError;
        }

Comment on lines +279 to +306
fn getRecordSize(data: []const u8) ?usize {
if (data.len < 3) return null;

const flags: RecordFlags = @bitCast(data[0]);
const type_len = data[1];

var size: usize = 2;

// Payload length field
if (flags.sr) {
if (data.len < size + 1) return null;
size += 1 + data[size]; // 1 byte length + payload
} else {
if (data.len < size + 4) return null;
const payload_len = std.mem.readInt(u32, data[size..][0..4], .big);
size += 4 + payload_len;
}

// ID length field
if (flags.il) {
if (data.len < size + 1) return null;
size += 1 + data[size - 1]; // Already counted, adjust
}

size += type_len;

return size;
}


security-medium

The getRecordSize function contains incorrect logic for calculating the size of NDEF records. It incorrectly calculates the offset of the ID length field and adds the payload length before accounting for other header fields (Type, ID). This leads to parser desynchronization, where the parser may misinterpret payload data as subsequent record headers. This can be exploited to hide malicious records or bypass security checks that rely on NDEF parsing.

Recommendation: Rewrite getRecordSize to correctly follow the NDEF record structure, or modify parseRecord to return the number of bytes consumed and use that value to advance the offset.
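
For orientation, the sketch below walks the actual record layout (flags, type length, payload length of 1 or 4 bytes, optional ID length, then the type, ID, and payload bytes). It reuses the RecordFlags type from above and is a sketch rather than a drop-in patch.

// Sketch of a size calculation that follows the NDEF record layout.
fn getRecordSize(data: []const u8) ?usize {
    if (data.len < 2) return null;
    const flags: RecordFlags = @bitCast(data[0]);
    const type_len: usize = data[1];

    var offset: usize = 2;

    // Payload length field: 1 byte for short records (SR), 4 bytes otherwise.
    var payload_len: usize = 0;
    if (flags.sr) {
        if (data.len < offset + 1) return null;
        payload_len = data[offset];
        offset += 1;
    } else {
        if (data.len < offset + 4) return null;
        payload_len = std.mem.readInt(u32, data[offset..][0..4], .big);
        offset += 4;
    }

    // Optional ID length field (IL flag).
    var id_len: usize = 0;
    if (flags.il) {
        if (data.len < offset + 1) return null;
        id_len = data[offset];
        offset += 1;
    }

    // The header is followed by the type, ID, and payload bytes.
    const total = offset + type_len + id_len + payload_len;
    return if (data.len < total) null else total;
}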

Comment on lines +167 to +180
var offset: usize = 0;
while (offset < buf.len) {
const chunk = @min(buf.len - offset, 63);
tx_buf[0] = addr_byte;
for (1..chunk + 1) |i| {
// Continue reading same register (FIFO auto-increment)
tx_buf[i] = if (i < chunk) addr_byte else 0x00;
}
self.spi.transfer(tx_buf[0 .. chunk + 1], rx_buf[0 .. chunk + 1]) catch {
return error.TransportError;
};
@memcpy(buf[offset..][0..chunk], rx_buf[1..][0..chunk]);
offset += chunk;
}


medium

The continuous-read implementation for the has_transfer SPI path appears inefficient and is likely incorrect for standard continuous-read protocols. It re-sends the read address for every byte within a chunk, which is highly unusual. Most SPI devices, when reading from a FIFO or auto-incrementing register, expect the address only once at the beginning of the transaction.

A more standard and efficient approach is to send the address byte once, followed by dummy bytes to clock in the data. This can be done in chunks to avoid large stack allocations.

                var offset: usize = 0;
                while (offset < buf.len) {
                    const chunk = @min(buf.len - offset, 63);
                    tx_buf[0] = addr_byte;
                    @memset(tx_buf[1 .. chunk + 1], 0x00);

                    self.spi.transfer(tx_buf[0 .. chunk + 1], rx_buf[0 .. chunk + 1]) catch {
                        return error.TransportError;
                    };
                    @memcpy(buf[offset..][0..chunk], rx_buf[1..][0..chunk]);
                    offset += chunk;
                }

Comment on lines +164 to +175
switch (mode) {
.off => try self.clearBits(.tx_control, 0x03),
.tx1 => {
try self.clearBits(.tx_control, 0x02);
try self.setBits(.tx_control, 0x01);
},
.tx2 => {
try self.clearBits(.tx_control, 0x01);
try self.setBits(.tx_control, 0x02);
},
.both => try self.setBits(.tx_control, 0x03),
}


medium

The setRf function can be made more efficient. It currently performs up to two register modifications for modes .tx1 and .tx2. This can be consolidated into a single read-modify-write operation, which is more atomic and reduces bus traffic.

            var value = try self.readReg(.tx_control);
            value &= ~@as(u8, 0x03); // Clear TX1 and TX2 enable bits
            value |= @intFromEnum(mode);
            try self.writeReg(.tx_control, value);
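
Note that this single read-modify-write assumes the mode enum values map directly onto the TX1/TX2 enable bits; a hypothetical enum definition that satisfies that assumption (the driver's actual values are not shown in this diff) would be:

// Hypothetical: explicit values chosen to match the tx_control enable bits.
pub const RfMode = enum(u8) {
    off = 0x00, // both antenna drivers disabled
    tx1 = 0x01, // TX1 driver only
    tx2 = 0x02, // TX2 driver only
    both = 0x03, // TX1 and TX2 drivers enabled
};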


while (offset < data.len and record_idx < msg._records_buf.len) {
const record = try parseRecord(data[offset..], &msg._records_buf[record_idx]);
offset += getRecordSize(data[offset..]) orelse return error.InvalidFormat;


medium

The Message.parse function calls getRecordSize immediately after parseRecord. This is inefficient because getRecordSize re-parses the record header just to determine its length, work that parseRecord has already done.

To improve performance, parseRecord should return the number of bytes it consumed. This avoids redundant processing and makes the parsing loop more efficient.

            const consumed = try parseRecord(data[offset..], &msg._records_buf[record_idx]);
            offset += consumed;
