
Responses are unconditionally cached for EXCHANGE_LIFETIME #586

Open
hasheddan opened this issue Oct 29, 2024 · 1 comment
Labels
enhancement New feature or request

@hasheddan
Contributor

When go-coap processes a response, it caches responses to both CON and NON messages for 247 seconds (EXCHANGE_LIFETIME):

err := cc.addResponseToCache(w.Message())

cc.responseMsgCache.LoadOrStore(cc.responseMsgCacheID(resp.MessageID()), cache.NewElement(cacheMsg, time.Now().Add(ExchangeLifetime), nil))

const ExchangeLifetime = 247 * time.Second

This is due to the language in Section 4.5 of RFC 7252:

A recipient might receive the same Confirmable message (as indicated
by the Message ID and source endpoint) multiple times within the
EXCHANGE_LIFETIME (Section 4.8.2), for example, when its
Acknowledgement went missing or didn't reach the original sender
before the first timeout. The recipient SHOULD acknowledge each
duplicate copy of a Confirmable message using the same
Acknowledgement or Reset message but SHOULD process any request or
response in the message only once.
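To illustrate the rule above, here is a minimal, hypothetical sketch of Message-ID deduplication (the dedupCache type is invented for illustration; go-coap's actual cache additionally keys on the connection, since Message IDs are only unique per source endpoint):

```go
package main

import (
	"fmt"
	"time"
)

// cachedResponse holds a previously sent response and its expiration,
// mirroring the EXCHANGE_LIFETIME-based expiry quoted above.
type cachedResponse struct {
	payload []byte
	expires time.Time
}

// dedupCache is a hypothetical sketch of RFC 7252 Section 4.5 deduplication:
// replay the cached response for a retransmitted Message ID instead of
// processing the request a second time.
type dedupCache struct {
	entries map[uint16]cachedResponse
}

func newDedupCache() *dedupCache {
	return &dedupCache{entries: make(map[uint16]cachedResponse)}
}

// handle returns the cached response for a duplicate Message ID, or invokes
// process exactly once and caches its result for EXCHANGE_LIFETIME (247s).
func (c *dedupCache) handle(mid uint16, process func() []byte) []byte {
	if e, ok := c.entries[mid]; ok && time.Now().Before(e.expires) {
		return e.payload // duplicate: acknowledge with same response, do not reprocess
	}
	resp := process()
	c.entries[mid] = cachedResponse{payload: resp, expires: time.Now().Add(247 * time.Second)}
	return resp
}

func main() {
	c := newDedupCache()
	calls := 0
	proc := func() []byte { calls++; return []byte("ok") }
	c.handle(0x1234, proc)
	c.handle(0x1234, proc) // retransmission: served from cache
	fmt.Println(calls)     // 1
}
```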

Each Conn maintains its own responseMsgCache:

responseMsgCache *cache.Cache[string, []byte]

The cache is checked for expiration whenever the Conn expirations are checked:

cc.responseMsgCache.CheckExpirations(now)

This allows a recipient to respond to duplicate requests without processing them multiple times. However, it also means that endpoints sending large volumes of data can accumulate very large caches. go-coap currently provides no mechanism to modify the EXCHANGE_LIFETIME, and the responseMsgCache is unbounded in size. This is particularly an issue for blockwise transfers, where a single large object may be transmitted over many messages: with the current implementation, that entire large object could end up in the cache. Furthermore, if multiple connections are all fetching the same large object, many copies of it may end up cached (i.e. one in each Conn responseMsgCache).

The same section of RFC 7252 allows this caching behavior to be relaxed:

A server might relax the requirement to answer all retransmissions
of an idempotent request with the same response (Section 4.2), so
that it does not have to maintain state for Message IDs. For
example, an implementation might want to process duplicate
transmissions of a GET, PUT, or DELETE request as separate
requests if the effort incurred by duplicate processing is less
expensive than keeping track of previous responses would be.

There are a few different mechanisms that could be introduced (potentially alongside each other) to address this behavior:

  • Make the Conn responseMsgCache size bounded so that it cannot grow without limit. An LRU eviction strategy could be a simple way to ensure that the most recently sent responses are the ones most likely to remain in the cache.
  • Support specifying messages that are not to be cached.
  • Support configuring the EXCHANGE_LIFETIME.
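As a rough sketch of the first option, a size-bounded LRU response cache might look like the following (the lruResponseCache type and its API are hypothetical, not part of go-coap; evicting an entry just means a retransmitted duplicate may be reprocessed, which RFC 7252 Section 4.5 explicitly permits):

```go
package main

import (
	"container/list"
	"fmt"
)

// entry pairs a cache key with its stored response bytes.
type entry struct {
	key  string
	resp []byte
}

// lruResponseCache bounds the response cache: when capacity is reached,
// the least recently used response is evicted.
type lruResponseCache struct {
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> element holding *entry
}

func newLRUResponseCache(capacity int) *lruResponseCache {
	return &lruResponseCache{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

// Store inserts or refreshes a response, evicting the LRU entry if full.
func (c *lruResponseCache) Store(key string, resp []byte) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		el.Value.(*entry).resp = resp
		return
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key: key, resp: resp})
}

// Load returns a cached response and marks it as recently used.
func (c *lruResponseCache) Load(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).resp, true
}

func main() {
	c := newLRUResponseCache(2)
	c.Store("a", []byte("1"))
	c.Store("b", []byte("2"))
	c.Store("c", []byte("3")) // evicts "a"
	_, ok := c.Load("a")
	fmt.Println(ok) // false
}
```

An eviction callback could be added so go-coap's expiration bookkeeping stays consistent, but the core idea is just to trade a bounded reprocessing cost for a bounded memory footprint.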

I would be happy to work on an implementation if any of these features are of interest. Thanks!

Member

jkralik commented Oct 30, 2024

@hasheddan Thank you so much for your interest in contributing this feature! Let me know if there’s anything I can clarify or assist with.
