
feat!: message pickup live mode support #1638

Merged
merged 19 commits into from
Jan 31, 2024

Conversation

Contributor

@genaris genaris commented Nov 15, 2023

As promised, here is some progress on supporting missing features from RFC 0685, most notably "Live Mode", in a way that should be compatible with a multi-instance architecture as discussed in #1625.

Conceptually, it consists mostly of changes to the Mediator role, where we can now choose a strategy for handling an incoming Forward message:

  • We can simply queue it in the MessagePickupRepository, which will then be in charge of manually triggering delivery of queued messages. This is possible because MessagePickupModule now emits events when Live Mode sessions are opened and closed (so any consumer of an AFJ instance can subscribe to a central Message Pickup Repository and know whether a newly queued message belongs to one of its connected clients), and MessagePickupApi exposes a method to deliver any queued message using the V2 protocol.
  • We can try to deliver it immediately using the V2 protocol, after first adding it to the queue. This is mostly useful in single-instance AFJ scenarios, while still taking advantage of the protocol-level acknowledgements Pickup V2 provides.
  • We can try to deliver the encrypted message immediately, without any protocol-level encapsulation. This is what AFJ does now, in a kind of 'implicit mode' (not recommended, as it is less reliable). I left it as the default value for now, just to keep the change safer.
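The three strategies above can be sketched as a simple decision table. This is an illustrative sketch only: DirectDelivery and QueueAndLiveModeDelivery appear in this PR's diff, but the name of the first member (QueueOnly here) and the exact shape of the logic are assumptions.

```typescript
// Hedged sketch of the three forwarding strategies described above.
// 'QueueOnly' is an assumed name; the other two members appear in the PR diff.
enum MessageForwardingStrategy {
  QueueOnly = 'QueueOnly',
  QueueAndLiveModeDelivery = 'QueueAndLiveModeDelivery',
  DirectDelivery = 'DirectDelivery',
}

interface ForwardActions {
  queue: boolean // add the message to the MessagePickupRepository
  liveDeliver: boolean // attempt Pickup V2 delivery over an open live session
  directSend: boolean // send the raw encrypted message, no Pickup encapsulation
}

function actionsFor(strategy: MessageForwardingStrategy): ForwardActions {
  switch (strategy) {
    case MessageForwardingStrategy.QueueOnly:
      // External consumer triggers delivery later, reacting to session events
      return { queue: true, liveDeliver: false, directSend: false }
    case MessageForwardingStrategy.QueueAndLiveModeDelivery:
      // Queue first, then try immediate Pickup V2 delivery (explicit acks)
      return { queue: true, liveDeliver: true, directSend: false }
    case MessageForwardingStrategy.DirectDelivery:
      // Current 'implicit mode' behaviour, kept as the default for now
      return { queue: false, liveDeliver: false, directSend: true }
  }
}
```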

On the mediatee side, there is a new MediatorPickupStrategy called PickUpV2LiveMode: once the mediatee connects to the Mediator WebSocket, it sets Live Delivery mode on (retrieving, as usual, any pending messages). The current PickUpV2 mode will work in polling mode (sending a status-request message every mediatorPollingInterval milliseconds). This represents a breaking change in terms of settings, as previously it just sent a Trust Ping and expected the mediator to send messages implicitly.
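A minimal sketch of the two mediatee-side modes described above. PickUpV2, PickUpV2LiveMode and mediatorPollingInterval are named in this PR; the config interface itself is simplified for illustration and is not the exact framework API.

```typescript
// Illustrative sketch only: the enum members and the polling-interval option
// are named in this PR, but the surrounding config shape is an assumption.
enum MediatorPickupStrategy {
  PickUpV2 = 'PickUpV2', // polling mode: send status-request on an interval
  PickUpV2LiveMode = 'PickUpV2LiveMode', // set Live Delivery on over the WebSocket
}

interface RecipientConfigSketch {
  mediatorPickupStrategy: MediatorPickupStrategy
  mediatorPollingInterval?: number // milliseconds; only relevant when polling
}

// Live Mode: pending messages are retrieved on connect, then pushed live
const liveModeConfig: RecipientConfigSketch = {
  mediatorPickupStrategy: MediatorPickupStrategy.PickUpV2LiveMode,
}

// Polling mode: a status-request is sent every mediatorPollingInterval ms
const pollingConfig: RecipientConfigSketch = {
  mediatorPickupStrategy: MediatorPickupStrategy.PickUpV2,
  mediatorPollingInterval: 5000,
}
```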

Some changes done:

  • MediatorModule now has a config parameter to select its message forwarding strategy
  • MessagePickupApi includes a Live Mode session service that keeps track of all clients connected using the V2 protocol in Live Mode, and exposes a method to deliver any queued message to them. It emits events when sessions are added or removed
  • MessageRepository was renamed to MessagePickupRepository (any better naming is more than welcome!) and its methods were changed to allow removal of messages by id and filtering by connectionId/recipientKey
  • MessagePickupRepository is now entirely the responsibility of MessagePickupModule: a default in-memory implementation will be instantiated if none is specified in its config or registered externally before instantiating the Agent
  • TransportService now emits events when sessions are created and removed. This was done mainly to be consumed by MessagePickupSessionService, but they can also be useful externally
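The session events mentioned in the list above could be consumed roughly like this. The event names and payload shape here are hypothetical placeholders for the subscribe-then-track flow, not the PR's actual identifiers.

```typescript
import { EventEmitter } from 'node:events'

// Hypothetical event payload: the real events in the PR may carry more fields
interface LiveSessionEvent {
  connectionId: string
}

const agentEvents = new EventEmitter()
const trackedSessions = new Set<string>()

// When a mediatee opens a Pickup V2 Live Mode session, remember it so this
// instance can e.g. subscribe to repository notifications for that connection
agentEvents.on('LiveSessionSaved', (event: LiveSessionEvent) => {
  trackedSessions.add(event.connectionId)
})

// When the session closes, stop watching that mediatee's queue
agentEvents.on('LiveSessionRemoved', (event: LiveSessionEvent) => {
  trackedSessions.delete(event.connectionId)
})
```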

There are several TODOs that I would like to handle in further PRs and probably move to 0.5.x or 0.6.0 so we can have more time to fix and improve the architecture:

  • Add tests for all the different settings, as currently we only have some simple E2E tests
  • Remove MessageSender's dependency on messagePickupRepository, e.g. by emitting an event when a packed message should be added to the pickup queue (which will be caught by MessagePickupApi). This will allow us to remove the 'MessagePickupRepository' InjectionSymbol.
  • Add somewhere a parameter to use only Queue Transport for mediatees or specific connections (see Only queue messages that have a mediator record associated #1199). A simple way could be to add a field to connection records, but this would certainly introduce a storage change, so it is better left to 0.6 (where we expect other storage changes).

Some issues related to the spec that we'll need to double check:

  • Filtering by recipientKey doesn't sound that hard to implement, but the Pickup V2 protocol is open enough to allow a client to create a dedicated Live Mode session for a single recipient key
  • The spec does not explicitly say so, but the protocol assumes that delivery messages always come as a response to a delivery-request. Otherwise, how can the mediator know the limit of messages it can batch (or whether it should set the recipient_key parameter)?


codecov-commenter commented Nov 15, 2023

Codecov Report

Attention: 88 lines in your changes are missing coverage. Please review.

Comparison is base (4c08179) 84.83% compared to head (9e7b88a) 74.85%.

Files Patch % Lines
...sage-pickup/protocol/v2/V2MessagePickupProtocol.ts 34.21% 24 Missing and 1 partial ⚠️
...sage-pickup/protocol/v1/V1MessagePickupProtocol.ts 20.00% 11 Missing and 1 partial ⚠️
...age-pickup/services/MessagePickupSessionService.ts 67.64% 11 Missing ⚠️
.../core/src/modules/routing/MediationRecipientApi.ts 23.07% 10 Missing ⚠️
...ore/src/modules/message-pickup/MessagePickupApi.ts 65.38% 8 Missing and 1 partial ⚠️
...rotocol/v2/messages/V2LiveDeliveryChangeMessage.ts 56.25% 6 Missing and 1 partial ⚠️
...re/src/modules/routing/services/MediatorService.ts 78.26% 5 Missing ⚠️
packages/core/src/agent/MessageSender.ts 57.14% 3 Missing ⚠️
...rotocol/v2/handlers/V2LiveDeliveryChangeHandler.ts 62.50% 3 Missing ⚠️
packages/core/src/modules/routing/MediatorApi.ts 33.33% 2 Missing ⚠️
... and 1 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1638      +/-   ##
==========================================
- Coverage   84.83%   74.85%   -9.98%     
==========================================
  Files         972      869     -103     
  Lines       23620    21239    -2381     
  Branches     4168     3739     -429     
==========================================
- Hits        20038    15899    -4139     
- Misses       3376     4971    +1595     
- Partials      206      369     +163     


Contributor

@TimoGlastra TimoGlastra left a comment


Very nice @genaris :)

@@ -48,15 +48,15 @@ export class MessageSender {
public constructor(
envelopeService: EnvelopeService,
transportService: TransportService,
@inject(InjectionSymbols.MessageRepository) messageRepository: MessageRepository,
@inject(InjectionSymbols.MessagePickupRepository) messagePickupRepository: MessagePickupRepository,
Contributor


I think we should rename the repository, as it conflicts with all the other repositories that are based on the storage service.

Maybe we can use messagePickupQueue? Or messagePickupStorage?

Or maybe I should change my thinking on what a repository can be, as 'repository' does make sense here.

Contributor Author


At the beginning I named it MessagePickupQueue, but when discussing it with my colleagues we noticed that we are not using it as a queue, in the sense that we enqueue data into it but don't exactly dequeue data when retrieving it (we must store messages until they are actually acknowledged by the mediatee). Also, the data in it is somewhat organized (i.e. by connectionId, recipientKey, state, etc.), so the term 'Repository' feels more appropriate to me than 'Storage', which is usually used to save data generically.

Having said this, I'm not happy about the unfortunate name clash with the Repository interface, so I'm more than open to opinions on how to improve the naming!

Contributor


I think MessageRepository is best, i.e. it's the place where you store and fetch messages. I think xxxQueue would tend to incorrectly imply the underlying technology, when any persistent storage may actually work. Pickup is just a fragment of the interface (we're not 'picking up' messages received that cannot be delivered; rather, we're queuing them for later pickup), so I would just drop it to avoid narrowing the exhibited responsibility.

Contributor Author


The problem with MessageRepository is that it collides with the Repository classes used within the framework, most notably DidCommMessageRepository, so leaving such a generic name would be more confusing. The 'Pickup' word is meant to specify that it is used for the message pickup protocol.

It seems it is still not fully clear, though, as this MessagePickupRepository is an interface rather than an implementation like the other Repository classes.

@@ -176,7 +176,7 @@ export class MessageSender {
// If the other party shared a queue service endpoint in their did doc we queue the message
if (queueService) {
Contributor


Should we at some point change this to only queue when it is allowed? So a connection must have queueing enabled?

Contributor Author


Ah yes, I remember this from #1199. I think it will be important to find a way to configure this. I'm trying to find the best place to update the code to support it, as I don't want to make MessageSender dependent on the Mediator or Pickup module APIs (we'd get some circular dependencies).

packages/core/src/agent/MessageSender.ts (outdated; resolved)
@@ -1,5 +1,5 @@
export const InjectionSymbols = {
MessageRepository: Symbol('MessageRepository'),
MessagePickupRepository: Symbol('MessagePickupRepository'),
Contributor


If we make this a module config option, and expose the config on the new API, we don't need the injection symbol anymore (as you can inject the api and then do api.config.messagePickupRepository)

Contributor Author


I agree. However, I think I'll need to update the logic a bit in order to avoid dependency cycles between MessageSender and MessagePickupApi, probably by adding some parameter to sendMessage or even triggering an internal event that can be listened to in Message Pickup.

* Deliver messages in the Message Pickup Queue for a given connection and key (if specified).
*
* Note that this is only available when there is an active session with the recipient. Message
* Pickup protocol messages themselves are not added to pickup queue
Contributor


Smart! :)

* Pickup protocol messages themselves are not added to pickup queue
*
*/
public async deliverQueuedMessages(options: DeliverQueuedMessagesOptions) {
Contributor


Should this take a sessionId instead, so you won't have the case where there's no session? Maybe that'll complicate it without benefit.

I think this should at least return an object that e.g. optionally contains the message, plus a state indicating how the delivery went (e.g. no open session, or delivered). Also, will this deliver all messages? Or should you call this in a loop?

Contributor Author


In order to make it more flexible, I've added an API method to get live sessions and made this one use a sessionId instead. This way, either it proceeds with the delivery (i.e. the delivery message is properly submitted) or an error happens (MessageSendingError or similar), in which case it will throw and can be handled by the caller.

This method, now called deliverMessagesFromQueue, will retrieve any pending messages from the queue and send an initial batch of batchSize messages, starting a pickup loop that finishes once all messages are delivered (the client will automatically request more messages when the status shows there are pending messages). The initial batchSize is passed as an input parameter or taken from configuration.

There is also another method, called deliverMessages, which is similar but intended to send specific messages; it will not retrieve other messages from the queue. This can be more appropriate when several messages for a given mediatee are received within a short time frame, as there is no need to query the queue multiple times.
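The batching behaviour described above can be sketched like this (illustrative names only, not the actual framework API): the initial delivery takes up to batchSize messages, and the reported remaining count is what drives the client to keep requesting until the queue is drained.

```typescript
// Hedged sketch of the initial-batch step of deliverMessagesFromQueue.
// Names and shapes are illustrative; messages are not removed here because
// they stay queued until the mediatee acknowledges them.
interface QueuedMessage {
  id: string
  encryptedPayload: string
}

function takeInitialBatch(queue: QueuedMessage[], batchSize: number) {
  const batch = queue.slice(0, batchSize)
  // The remaining count would be surfaced via a status message, prompting the
  // client to request further deliveries until the queue is empty
  const remaining = queue.length - batch.length
  return { batch, remaining }
}
```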

@ericvergnaud
Contributor

Hi, as for the live mode, I think the proposed approach will fix the problems we observed with websockets, where messages were irremediably lost after being enqueued without being acknowledged. So thanks for that, looking forward to it!

@ericvergnaud
Contributor

I'm less convinced that the "central Message Pickup Repository" will address #1625.

The "central" wording suggests that this repo will run in a separate instance.
If that's the case, then what happens when that instance is down? Am I correct in suspecting that having 2 pickup instances would exhibit the exact same problem, hence we would have just pushed the problem to a separate component?
If that's not the case, then the problem is that each mediator instance will be required to know every other mediator instance, which is not viable: in an auto-scaling scenario, the number of mediator instances may vary dynamically from 1 to 3 or even 100.
I may misunderstand what is being proposed, but I suspect it's trying to solve the problem completely internally, which I'm not sure can address disaster scenarios.

IMHO, the only robust solution is to rely on a notification mechanism that exists outside the mediator itself. Then each mediator instance subscribes to that notification platform (for example AWS EventBridge), and the responsibility of sending notifications lies entirely with the storage layer for queued messages.

In short, the API should simply make it possible to configure a notification listener that will invoke the appropriate AFJ message pickup API. Also worth noting, sending those notifications should only happen after storage, not before, and should also not be a responsibility of AFJ because that's easily achievable using standard middleware (Redis or a CDC can do that).

In even shorter terms: AFJ should handle notifications, not provide an internalised notification platform. All I need is an entry point that I can call from my app to tell it that a message for recipient x has been stored. That entry point will check whether the corresponding mediatee is connected and, if so, forward the message to them (as described in the original PR comment); if not, it does nothing.

Sorry if that sounds a bit negative, I might simply be misunderstanding what is being proposed. Happy to help with testing.

@TimoGlastra
Contributor

Hey @ericvergnaud,

I think you're correct that this PR doesn't directly address the issue of multiple mediator instances, but it does add methods to build a plugin on top of AFJ that does this.

There are new events that let you know when a message has been queued. The idea is that a) you hook into this event and emit some kind of notification that will be received by other mediators (we use Redis pub/sub), and b) in the other mediator, you call the deliver messages method to deliver the messages to the recipient when a notification is received.
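The a) and b) steps above can be sketched with an in-memory stand-in for the notification channel (in production this would be e.g. Redis pub/sub; every name here is illustrative, not a real framework or Redis API):

```typescript
// Tiny in-memory pub/sub stand-in for the cross-instance notification flow
type Handler = (connectionId: string) => void
const channels = new Map<string, Handler[]>()

const subscribe = (channel: string, handler: Handler) =>
  channels.set(channel, [...(channels.get(channel) ?? []), handler])

const publish = (channel: string, connectionId: string) =>
  (channels.get(channel) ?? []).forEach((handler) => handler(connectionId))

// Instance B: knows which mediatees hold an open live session with it
const liveSessions = new Set(['conn-1'])
const delivered: string[] = []

// Step b): on notification, deliver queued messages for a connected mediatee.
// In a real mediator this would call something like
// agent.messagePickup.deliverMessagesFromQueue(...) for that session.
subscribe('message-queued', (connectionId) => {
  if (liveSessions.has(connectionId)) delivered.push(connectionId)
})

// Step a): instance A, after queueing a message for conn-1, notifies the rest
publish('message-queued', 'conn-1')
```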

A potential next step would be to provide some default plugin implementations for notifications, so it's easier to set up a mediator.

Does that resolve your uncertainties with this PR, or are there things we should do differently?

@ericvergnaud
Contributor

ericvergnaud commented Jan 2, 2024

> Hey @ericvergnaud,
>
> I think you're correct this PR doesn't directly address the issue of multiple mediator instances, but it does add methods to build w plugin on top of AFJ that does this.
>
> There's new events added that allow you to know when a message has been queued. The idea is that a) you hook into this event and emit some kind of notification that will be received by other mediators (we use redis pub/sub) and b) in the other mediator you call deliver messages method to deliver the messages to the recipient when a notification is received.
>
> A potential next step would be to make some default implementations plugins for notification, so it's easier to set up a mediator.
>
> Does that resolve your uncertainties with this PR, or are there things we should do differently?

@TimoGlastra thanks for the clarification. I guess the current wording is slightly misleading then...

One thing I remain concerned about is putting the responsibility of sending notifications on the mediator. Although it seems convenient, it won't be 100% reliable unless the mediator implements 2-phase commit:

  • If the event is broadcast before the message is stored, then it could be processed before the message is available from the store (not to speak of scenarios where it is not stored at all due to some failure)
  • If the event is broadcast after the message is stored, then anything can happen between the moment it is stored and the moment it gets broadcast, including a crash of the mediator, in which case no notification will happen.

I don't think the mediator should implement 2-phase commit. Rather, I recommend delegating that task to the storage itself. PostgreSQL comes with CDC (change data capture) that is guaranteed to run for every data change, even after a crash, because it relies on the journal. Most database engines provide the same, including SQLite.

That said, the proposed mechanism can be very useful for testing, so I would keep it, but not advertise it as the standard way of syncing instances in a mediator cluster; rather, provide examples of a correct end-to-end implementation.

@TimoGlastra
Contributor

That makes sense, thank you for the explanation.

I think CDC is a great solution to this. Is there anything we can do in AFJ to make this easier to set up?

CC @swcurran @dbluhm I think ACA-Py kafka / redis plugin would be susceptible to the same problem that @ericvergnaud describes here.

@ericvergnaud
Contributor

ericvergnaud commented Jan 2, 2024

> That makes sense, thank you for the explanation.
>
> I think CDC is a great solution to this. Is there anything we can do in AFJ to make this more easy to set up?
>
> CC @swcurran @dbluhm I think ACA-Py kafka / redis plugin would be susceptible to the same problem that @ericvergnaud describes here.

I guess some useful things that could be done:

  • provide a sample plugin that listens to Redis pub/sub
  • provide a sample pgsql cdc that writes to Redis pub/sub

(Since sqlite cannot act as a server, I'm not sure it makes sense to provide a CDC sample for it.)
Adapting these for Kafka and AWS EventBridge would also be useful (we'll be using EventBridge, so I can help with that).

Another useful thing would be updating the documentation to describe a 'supported' cluster setup.

For NR testing:

  • provide a sample http plugin that listens to http notifications
  • provide a sample event listener that invokes the above http endpoint
  • write a test that:
    • runs 2 mediators with the above plugins and a mediatee listening to instance A
    • sends a forward message M to instance B
    • ensures that the mediatee receives message M

@genaris
Contributor Author

genaris commented Jan 2, 2024

> The "central" wording suggests that this repo will run in a separate instance. If that's the case, then what happens when that instance is down ? Am I correct in suspecting that having 2 pickup instances will exhibit the exact same problem hence we would have just pushed the problem to a separate component ? If that's not the case, then the problem is that each mediator instance will be required to know every other mediator instance, which is not viable because in an auto-scaling scenario, the number of mediator instances may vary dynamically from 1 to 3 or even 100. I may misunderstand what is being proposed, but I suspect it's trying to solve the problem completely internally, which I'm not sure can address disaster scenarios.

In this PR description, when I referred to a 'central message pickup repository' I meant common storage shared among all AFJ instances (e.g. a PostgreSQL or Mongo database). That does not mean it will reside in one of them.

Through the MessagePickupRepository interface, each mediator instance can enqueue and retrieve messages from it, just as it does right now.

What is different from the previous implementation is that the framework now gives each mediator instance a way to push messages to a certain mediatee on demand: if it receives a notification that a new message has been added for that mediatee, it can send it immediately (by calling agent.messagePickup.deliverQueuedMessages()).

Also, there are new events triggered when Live Mode sessions are opened and closed. This allows each mediator instance to keep track of all the mediatees connected to it, so it can e.g. subscribe to receive notifications only when there are new messages for them, instead of listening to every change in the message pickup database.

Whether you want to use CDC to notify about changes, or create a pub/sub mechanism where each AFJ instance publishes a message to a channel and the other instances listen to it, is up to each implementation. It is not something enforced by this PR.

> IMHO, the only robust solution is to rely on a notification mechanism that exists outside the mediator itself. Then each mediator instance subscribes to that notification platform (for example AWS EventBridge), and the responsibility of sending notifications lies entirely with the storage layer for queued messages.
>
> In short, the API should simply make it possible to configure a notification listener that will invoke the appropriate AFJ message pickup API. Also worth noting, sending those notifications should only happen after storage, not before, and should also not be a responsibility of AFJ because that's easily achievable using standard middleware (Redis or a CDC can do that).

I agree with this. I hope it is clear that the goal of this PR is to make it possible to build an architecture like the one you are describing here.

@genaris
Contributor Author

genaris commented Jan 2, 2024

> Hey @ericvergnaud,
>
> I think you're correct this PR doesn't directly address the issue of multiple mediator instances, but it does add methods to build w plugin on top of AFJ that does this.
>
> There's new events added that allow you to know when a message has been queued. The idea is that a) you hook into this event and emit some kind of notification that will be received by other mediators (we use redis pub/sub) and b) in the other mediator you call deliver messages method to deliver the messages to the recipient when a notification is received.
>
> A potential next step would be to make some default implementations plugins for notification, so it's easier to set up a mediator.
>
> Does that resolve your uncertainties with this PR, or are there things we should do differently?

Something I want to clarify is that the new events are more related to session opening/closing than to individual message queuing. You can of course hook into either the AgentMessageSent event or MessagePickupRepository.addMessage() to implement a pub/sub mechanism as @TimoGlastra describes, but this is not a practice enforced by this PR.

I agree that this should be clearly documented, along with some reference plug-ins showing a full implementation.

I'll make some more changes to this PR based on these comments and on some experience I had recently while testing it, and then submit it for review.

@ericvergnaud
Contributor

@genaris I've provided more samples, as a PR to your branch, see genaris#2

genaris and others added 9 commits January 29, 2024 16:42
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
…ation#1644)

Signed-off-by: Berend Sliedrecht <sliedrecht@berend.io>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
)

Signed-off-by: Ariel Gentile <gentilester@gmail.com>
…oundation#1648)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

feat: deliver messages individually, not fetching from the queue every time

Signed-off-by: Ariel Gentile <gentilester@gmail.com>

chore: revert to free runners (openwallet-foundation#1662)

Signed-off-by: Ry Jones <ry@linux.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

chore: create settings.yml (openwallet-foundation#1663)

Signed-off-by: Ry Jones <ry@linux.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

chore: fix ci and add note to readme (openwallet-foundation#1669)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

docs: update active maintainers (openwallet-foundation#1664)

Signed-off-by: Karim Stekelenburg <karim@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

feat: did:peer:2 and did:peer:4 support in DID Exchange (openwallet-foundation#1550)

Signed-off-by: Ariel Gentile <gentilester@gmail.com>

feat(presentation-exchange): added PresentationExchangeService (openwallet-foundation#1672)

Signed-off-by: Berend Sliedrecht <sliedrecht@berend.io>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

chore: removed jan as maintainer (openwallet-foundation#1678)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

ci: change secret (openwallet-foundation#1679)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

chore: add meta to rxjs timeout errors (openwallet-foundation#1683)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

build(deps): bump ws and @types/ws (openwallet-foundation#1686)

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

build(deps): bump follow-redirects from 1.15.2 to 1.15.4 (openwallet-foundation#1694)

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

chore: update shared components libraries (openwallet-foundation#1691)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

fix: properly print key class (openwallet-foundation#1684)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

feat(present-proof): add support for aries RFC 510 (openwallet-foundation#1676)

Signed-off-by: Berend Sliedrecht <sliedrecht@berend.io>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

fix(present-proof): isolated tests (openwallet-foundation#1696)

Signed-off-by: Ariel Gentile <gentilester@gmail.com>

feat(indy-vdr): register revocation registry definitions and status list (openwallet-foundation#1693)

Signed-off-by: Ariel Gentile <gentilester@gmail.com>

chore: rename to credo-ts (openwallet-foundation#1703)

Signed-off-by: Ry Jones <ry@linux.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

ci: fix git checkout path and update setup-node (openwallet-foundation#1709)

Signed-off-by: Ariel Gentile <gentilester@gmail.com>

fix: remove check for DifPresentationExchangeService dependency (openwallet-foundation#1702)

Signed-off-by: Sai Ranjit Tummalapalli <sairanjit.tummalapalli@ayanworks.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

docs: update zoom meeting link (openwallet-foundation#1706)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

refactor(oob)!: make label optional (openwallet-foundation#1680)

Signed-off-by: Timo Glastra <timo@animo.id>
Co-authored-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

feat: support short legacy connectionless invitations (openwallet-foundation#1705)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

feat(dids)!: did caching (openwallet-foundation#1710)

feat: did caching

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

fix: jsonld document loader node 18 (openwallet-foundation#1454)

Signed-off-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

build(deps): bump amannn/action-semantic-pull-request from 5.3.0 to 5.4.0 (openwallet-foundation#1656)

build(deps): bump amannn/action-semantic-pull-request

Bumps [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/amannn/action-semantic-pull-request/releases)
- [Changelog](https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md)
- [Commits](amannn/action-semantic-pull-request@v5.3.0...v5.4.0)

---
updated-dependencies:
- dependency-name: amannn/action-semantic-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Timo Glastra <timo@animo.id>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>

feat: did rotate (openwallet-foundation#1699)

Signed-off-by: Ariel Gentile <gentilester@gmail.com>

refactor: pickup protocol method names

Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
Signed-off-by: Ariel Gentile <gentilester@gmail.com>
@genaris genaris marked this pull request as ready for review January 31, 2024 13:14
Contributor

@TimoGlastra TimoGlastra left a comment


Nice work @genaris! I have some nits, but those can also be addressed later.

packages/core/src/agent/MessageSender.ts (outdated; resolved)
* - `MessageForwardingStrategy.DirectDelivery` - Deliver message directly. Do not add into queue (it might be manually added after, e.g. in case of failure)
*
* @default MessageForwardingStrategy.DirectDelivery
* @todo Update default to QueueAndLiveModeDelivery
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

When do we want to change the default?

Isn't this something that can be determined automatically? E.g. first try direct delivery (in case the mediatee has an endpoint); if that fails, try live delivery (if there's an open session); if that fails, add it to the queue.

I think you'd often want a combination of 2 and 3, right? Or is direct delivery only direct live delivery? (I'm assuming it means sending directly to an endpoint, but maybe that's wrong. If so, maybe we should call it DirectLiveDelivery.)

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This is more about selecting whether we want to encapsulate forwarded messages in Message Pickup's Delivery message (so we can have explicit acknowledgement) or simply send them to either the mediatee endpoint or an existing session with them. Do you have better wording to reflect what this setting does?

I think this way of sending forwarded messages should be deprecated at some point, but the encapsulated one should only be used when the other party supports Message Pickup V2, and that's something that can probably be determined automatically if we add some smarter logic to the agent (e.g. we can query the other party for protocol support, or mark it as supported as soon as we receive the first Pickup V2 protocol loop from them).

@genaris genaris merged commit 1b70d24 into openwallet-foundation:main Jan 31, 2024
7 checks passed