
[Accepted with Revisions] SDL 0075 - OEM specific Human Interface Device support as Plug-in architecture in SDL proxy #219

Closed
theresalech opened this issue Jul 12, 2017 · 18 comments

Comments

@theresalech (Contributor)

theresalech commented Jul 12, 2017

Hello SDL community,

The review of "OEM specific Human Interface Device support as Plug-in architecture in SDL proxy" begins now and runs through July 18, 2017. The proposal is available here:

https://github.com/smartdevicelink/sdl_evolution/blob/master/proposals/0075-HID-Support-Plug-in.md

Reviews are an important part of the SDL evolution process. All reviews should be sent to the associated Github issue at:

#219

What goes into a review?

The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of SDL. When writing your review, here are some questions you might want to answer:

  • Is the problem being addressed significant enough to warrant a change to SDL?
  • Does this proposal fit well with the feel and direction of SDL?
  • If you have used competitors with a similar feature, how do you feel that this proposal compares to those?
  • How much effort did you put into your review? A glance, a quick reading, or an in-depth study?
    Please state explicitly whether you believe that the proposal should be accepted into SDL.

More information about the SDL evolution process is available at

https://github.com/smartdevicelink/sdl_evolution/blob/master/process.md

Thank you,
Theresa Lech

Program Manager - Livio
theresa@livio.io

@brandon-salahat-tm (Contributor)

@theresalech @joeljfischer @joeygrover @jhodges55 @Toyota-Sbetts

This is a big proposal, but the changes to core are small. Since we are coming up on the deadline for 4.4, would it be possible to evaluate the core changes now, and defer evaluating the proxy changes until more details can be provided?

Toyota prefers using an RPC along the lines of alternative 2 to avoid introducing proprietary blobs to SDL, but will accept any solution that introduces a new RPC for haptic/human interface support.

Additionally it appears that the graphics for alternative 1 and alternative 2 are reversed.

@joeljfischer (Contributor)

@Toyota-BSalahat I'm not sure about that. Saying that we will create a number of core changes without the corresponding proxy changes even being agreed upon seems fairly dangerous to me, although that would depend on the amount of tie-in between the proxy changes and core changes. But we wouldn't even be able to really test that the system works as designed, because only half of the system would be designed and implemented.

@joeygrover (Member)

@Toyota-BSalahat @joeljfischer I think what we could evaluate are the RPC changes for both sides as well as the Core changes needed to support the feature. The first pass would be to implement the communication piece to the proxy, but wait to refine the actual translation in the proxies.

@joeygrover (Member)

joeygrover commented Jul 13, 2017

This proposal uses a plug-in architecture to represent the OEM controls as a HID device with corresponding device support module. OEMs can provide their support module quickly using this architecture and ISVs do not need to do OEM-specific integration work, even for video streaming.

I believe this is the wrong approach to take. It would be on the app developers to implement each OEM's variant of HID device control. The whole spirit of SDL is that developers develop once for SDL and work with all OEMs; this basically undermines that idea. It would also create a burden on OEMs to create and implement their own HID control libraries that they would have to distribute, inform developers about, and test with every proxy update. The effort from OEMs and developers then becomes unscalable. Will the HID control libraries remain the same for each OEM forever? For every model? I don't believe that to be the case. I would prepare for different HID specs not only across years and models, but across regions. That would ask app developers to bloat their projects with more and more libraries that are essentially the same thing. Lastly, who would be responsible for testing? The proxy libraries would have to make sure they work with a defined interface, which is doable, but what about app developers? When they implement these new OEM-specific plug-ins they would have to test with each module from every OEM for each different HID library. This approach is unscalable, misses the mark of SDL, and introduces a nightmare testing scenario; therefore I advise rejecting it as it stands.

SDLCarWindow will need to be implemented into the iOS proxy
VirtualDisplayEncoder will need to be implemented into the Android proxy

This (and all alternatives) would force developers to use the provided video encoding techniques or face a real challenge implementing one themselves, for example if an app uses OpenGL to render its UI instead.

VirtualDisplayEncoder will need to be implemented into the Android proxy

This was already an accepted proposal and will be included in the next release of the Android Proxy.

Alternatives

Alternative 1:

Plugin architecture is replaced with a more defined RPC interface. This RPC would be the standardized SDL interface for haptic events, and would be interpreted by OEM's on the HMI side.

This seems like a much better option than the one being proposed. It at least standardizes the communication piece as well as the proxy integration aspect. It seems to be pretty complicated to take an app's provided layout for the proxy video encoders and determine the location of each element. We would have to ensure that app developers do not use complicated layouts if we wanted to handle it automatically.

Suggested Alternatives
Define HID Types

Would it not be possible to better classify HID types and develop RPCs around that? For example, just thinking of a control wheel:

<function name="SendHIDUpdate" functionID="SendHIDUpdateID" messagetype="request">
	<description>Send updates when a HID device is activated and requests the app to handle it</description>
	<param name="hidType" type="HIDType" mandatory = "false"/>
	<param name="hidAction" type="HIDAction" mandatory = "false"/>
	...
</function>

<function name="SendHIDUpdate" functionID="SendHIDUpdateID" messagetype="response">
	<description>See Result</description>
	<param name="position" type="Integer" mandatory="false"/>
   <element name="SUCCESS"/>
   <element name="GENERIC_ERROR"/>
	<element name="OUT_OF_BOUNDS"/> 
    </param>
</function>


<enum name="HIDAction">
	<element name="ADVANCE" />
	<element name="RETREAT" />
	<element name="SELECT" />
</enum>

<enum name="HIDType">
	<element name="CENTRAL_CONTROL_WHEEL"/>
	...
</enum>

<function name="onItemSelected" functionID="OnItemSelectedID" messagetype="notification">
	<description> This notification is sent when an app's onscreen element has been selected.</description>
</function>

This would be the module telling the app that an event is happening from the HID. The app would take and consume the event, and return whether it was successful or not. It would also respond if there are no further elements to be traversed, so the module would know whether it needs to "move" its selection back to onboard elements. If it received a SUCCESS it could perform its haptic feedback. This would need to be expanded, but it's a high-level idea.

Additionally, we could add a new notification like OnItemSelected that would be sent from the proxy to the module when an on-screen item has been selected.
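To make that flow concrete, here is a minimal app-side sketch of how the SendHIDUpdate / OnItemSelected idea above might be consumed. The RPC names come from the XML sketch; FocusManager, the result enum, and the highlight styling are purely illustrative and not part of any existing SDL proxy API.

import UIKit

// Illustrative only: mirrors the HIDAction / SendHIDUpdate sketch above.
enum HIDAction { case advance, retreat, select }

enum HIDUpdateResult {
    case success       // event consumed; module may fire its haptic feedback
    case outOfBounds   // no further element in that direction
}

final class FocusManager {
    private let focusableControls: [UIControl]
    private var focusedIndex = 0

    init(focusableControls: [UIControl]) {
        self.focusableControls = focusableControls
    }

    // Called when the proxy receives a SendHIDUpdate request from the module.
    func handle(_ action: HIDAction) -> HIDUpdateResult {
        switch action {
        case .advance:
            guard focusedIndex + 1 < focusableControls.count else { return .outOfBounds }
            focusedIndex += 1
        case .retreat:
            guard focusedIndex > 0 else { return .outOfBounds }
            focusedIndex -= 1
        case .select:
            // Fire the control's action; the proxy could also send the
            // OnItemSelected notification back to the module here.
            focusableControls[focusedIndex].sendActions(for: .touchUpInside)
        }
        highlight(focusableControls[focusedIndex])
        return .success
    }

    private func highlight(_ control: UIControl) {
        // App-native focus styling, so the highlight matches the app's branding.
        focusableControls.forEach { $0.layer.borderWidth = 0 }
        control.layer.borderWidth = 2
        control.layer.borderColor = UIColor.blue.cgColor
    }
}

An OUT_OF_BOUNDS result is what would let the module move its selection back to onboard elements, as described above.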

Limit HID Scope Increase

What other HID types beyond voice, steering wheel controls, touch screen, and control wheel are we trying to solve for? The first three are already covered in SDL; if we just need to solve for the last, we can consider it a Button. This is a bit hacky, but simplifies things a bit.

<enum name="ButtonName">
	...
	<element name="ADVANCE" />
	<element name="RETREAT" />
	<element name="SELECT" />
</enum>

Then possibly add the OnItemSelected RPC as well, from the previous section.

@brandon-salahat-tm (Contributor)

This seems like a much better option than the one being proposed. It at least standardizes the communication piece as well as the proxy integration aspect. It seems to be pretty complicated to take an app's provided layout for the proxy video encoders and determine the location of each element. We would have to ensure that app developers do not use complicated layouts if we wanted to handle it automatically.

@joeygrover, I think the idea would be to traverse the view hierarchy for UIControls that have already been marked as highlight-able by the application, to conform to the built-in user accessibility functions. I can only speak to iOS, but on iOS this would not be very complicated, and in theory it should hook into something app developers already configure (it shouldn't require SDL-specific work).

Of course, this does hinge on the app utilizing UIKit to render its views, and would have problems if something like OpenGL was rendering everything. I think to cover this case, the proxy could expose the new RPC for the app to call directly, and they could set an array of spatial data to support the interface if they are not using UIKit. (I can't speak to Android as well, but I believe it has similar built-in concepts.)
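As a rough illustration of that traversal, assuming UIKit and reusing the accessibility/enabled flags apps already set; SpatialRect is a placeholder name, not the struct defined in the proposal:

import UIKit

// Placeholder type; the real proposal defines its own spatial data struct.
struct SpatialRect {
    let x: CGFloat
    let y: CGFloat
    let width: CGFloat
    let height: CGFloat
}

// Walk the view hierarchy and collect window-space frames for anything the
// app has already marked as focusable/accessible.
func collectSpatialData(from root: UIView) -> [SpatialRect] {
    var rects: [SpatialRect] = []

    func visit(_ view: UIView) {
        guard !view.isHidden, view.alpha > 0 else { return }   // skip invisible subtrees

        let isFocusable = view.isAccessibilityElement
            || ((view as? UIControl)?.isEnabled ?? false)

        if isFocusable {
            let frame = view.convert(view.bounds, to: nil)      // window coordinates
            rects.append(SpatialRect(x: frame.origin.x,
                                     y: frame.origin.y,
                                     width: frame.width,
                                     height: frame.height))
        }
        view.subviews.forEach(visit)
    }

    visit(root)
    return rects
}

An app rendering with OpenGL would skip the traversal entirely and instead hand the proxy an equivalent array built from its own scene graph, as suggested above.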

What other HID types beyond voice, steering wheel controls, touch screen, and control wheel are we trying to solve for? The first three are already covered in SDL; if we just need to solve for the last, we can consider it a Button. This is a bit hacky, but simplifies things a bit.

Lexus vehicles use a touchpad interface, but I can't speak to what other possibilities might arise in the future. The goal of this proposal was to introduce an SDL interface that could power the haptic feedback engines that already exist on OEM head units.

I think your suggested alternative would be a fantastic interface for allowing an application to control haptic feedback events, and I do think that is something SDL should support (whether now or in the future). I think that we would be open to any interface that is reasonably painless for app developers and allows use of the existing OEM haptic feedback engine.

@joeljfischer (Contributor)

I agree with Joey that the current proposal is simply too complex and will not scale well. We could end up in a situation where developers need to integrate 3 libs from a single OEM to support different interface types on different product lines, not to mention if we want to scale this widely across OEMs. In addition, the work required on the OEM side to keep their libs up to date, and for devs to update numerous libs whenever a proxy updates, will discourage them from updating their SDL proxy version.

I think alternative 1 is pretty solid. It would require the OEM to implement the highlighting / focus of elements on their end based on the rects provided, which may look non-native to the app (e.g. if blue highlighting is used in Spotify's generally green & black colored app), but we probably also don't necessarily want to put the focus UI burden on the app developer either (and this would require an additional RPC to tell the app which element to focus).

I think alternative 1 is a better option than Joey's proposal precisely because of something like the touchpad vs. wheel interface differences. It puts more burden on the OEM to decide which element should be highlighted based on the position of the focus rects, but this will ensure consistency and compatibility with any focus-style input devices that may come up in the future (e.g. eye-tracking, hand-waving, voodoo magic).

@joeygrover (Member)

joeygrover commented Jul 14, 2017

I think alternative 1 would be ok if the layouts were static, but I can already see apps switching between different menus/screens. That would mean we'd have to parse out the new layout every time that happens and send over an array of SpatialStruct data (since we can't ensure that these layouts will always be the same). I can see that taking a decent amount of time between traversing the layout, sending the RPC, and then having the module parse that data and translate HID actions.

Also, shouldn't we allow an app to define the order of focus for its elements? Simply giving an array of elements to the module puts that burden on the module. We could at least add a focusPriority param.

It would require the OEM to implement the highlighting / focus of elements on their end based on the rects provided, which may look non-native to the app (e.g. if blue highlighting is used in Spotify's generally green & black colored app), but we probably also don't necessarily want to put the focus UI burden on the app developer either (and this would require an additional RPC to tell the app which element to focus).

This is going to look messy in my mind. What if elements are very close together? Will we have to define how thick these highlight lines can be? Or just let them overlay on top of other elements? Are we going to take into consideration that some elements might be placed over a background that actually extends their visual representation but not their clickable region? This is another concern that is easily addressed by my alternative. By drilling down the HID action we can use common UI commands to move from one element to another and maintain the app's highlighting.

I believe touchpads could essentially send onTouchEvents. We could extend the RPC to make it robust enough if necessary. Are there scenarios that are not believed to be covered by using that RPC? Same with other types of HIDs mentioned. They are all essentially just moving an invisible cursor across the screen. If we wanted to capture gestures at the module level and send them as HIDActions that's possible too.

If we use onTouchEvents the work is nearly complete in the Android Proxy. See here.

@joeljfischer (Contributor)

I can see that taking a decent amount of time between traversing the layout, sending the RPC, and then having the module parse that data and translate HID actions.

At least on the phone side (iOS), I can't see it taking more than 200-300ms total (traverse the view hierarchy, assemble the RPC, send it), and I'd bet it's closer to 100ms.

Also, shouldn't we allow an app to define the order of focus for its elements? Simply giving an array of elements to the module puts that burden on the module.

I don't think so, no. Because then:

  1. We'd have to translate the variety of gestures / inputs into spatial directions. This puts a large burden on the app developer to implement focus logic for all the different possible input types, instead of on the OEM for their individual input type.
  2. Apps would likely translate them differently than the OEM might on their native UI (e.g. going top-down, then left-right, instead of left-right then top-down, using an advance / retreat wheel).
  3. A focusPriority might work, but I'm not sure why it may be needed. I understand that it's used on the web for tabbing priority, but shouldn't spatial directions work better here? Additionally, this continues to have the problems of (2).

It would require the OEM to implement the highlighting / focus of elements on their end based on the rects provided, which may look non-native to the app (e.g. if blue highlighting is used in Spotify's generally green & black colored app), but we probably also don't necessarily want to put the focus UI burden on the app developer either (and this would require an additional RPC to tell the app which element to focus).

This is going to look messy in my mind. What if elements are very close together? Will we have to define how thick these highlight lines can be? Or just let them overlay on top of other elements? Are we going to take into consideration that some elements might be placed over a background that actually extends their visual representation but not their clickable region? This is another concern that is easily addressed by my alternative. By drilling down the HID action we can use common UI commands to move from one element to another and maintain the app's highlighting.

Yes, it could. That's why I brought it up as a definite concern. Even still, I don't think your solution accomplishes anything that my solution / Alt 1 wouldn't. We would just have to have an id that the app sends for each focus region, then the HU sends a focus button event (or something similar), and the app would highlight the region. I was trying to alleviate some app dev burden, but this would likely not be too difficult for them, and would look better with their own UI since video streaming already takes us far afield from OEM UI.

I believe touchpads could essentially send onTouchEvents. We could extend the RPC to make it robust enough if necessary. Are there scenarios that are not believed to be covered by using that RPC? Same with other types of HIDs mentioned. They are all essentially just moving an invisible cursor across the screen. If we wanted to capture gestures at the module level and send them as HIDActions that's possible too.

I'm not quite sure what you're saying, but no, I don't see how HID events or touchpads would be captured by onTouchEvents. First, we don't have a cursor implemented, but second, we'd probably want gestures to move between elements, in my mind. This would be the equivalent of a wheel "advance / retreat" but allow for all cardinal directions and diagonals.

I'm still fairly strongly in favor of a tweaked alternative 1.

@joeygrover (Member)

After giving it some time, here are a few more thoughts:

  • Touchpads are essentially touchscreens. Depending on their size they might have their touch events translated, or even simply used for gestures. Both of which I believe can be accounted for by passing the events to the proxy.
  • Android has a very nice way of passing in MotionEvents that should be considered during this feature discussion.
  • Alternative 1 does not address dynamic screens very well. For example, how are scrollable views handled? Assuming there is a list of selectable items that extends beyond the screen, if the user starts scrolling down the list how are the elements sent over? Are they updated continuously? Are they only updated when the scrolling stops? I'd assume on scroll start we would have to clear out all rects, even the scrollbar if visible, wait until the scroll stops, and send over new rects. However, I believe it will be difficult to create a smooth interface this way and the user will experience a good amount of 'jank'.
  • The number of packets that have to be sent should be considered, as we are already pushing whatever transport's MTU to its limit to achieve decent video streaming quality. Any extra chatter that we add will likely reduce the quality of the streaming experience. This becomes relevant in more complicated app integrations obviously; the more times an app's screen changes, the more packets we would have to send.
  • If we are already sending our layout through Alternative 1, why don't we just send all of our layout including text, colors, etc so the head unit can render the whole thing? We could include a background layer that allows for the continuing of video streaming, but controls can be sent via the rect arrays. I don't really like this approach, but it seems worth mentioning.
  • I would really appreciate hearing from the proposal's authors on their thoughts. It is incredibly important to have an in-depth dialogue during the week of review rather than trying to defend it during our short weekly call. Are we missing anything? Are you 100% behind your original proposal or were you gunning for one of the provided solutions and you don't care which? Do any of the suggestions sound reasonable?

@Toyota-Sbetts (Contributor)

@joeygrover

Touchpads are essentially touchscreens. Depending on their size they might have their touch events translated, or even simply used for gestures. Both of which I believe can be accounted for by passing the events to the proxy.

Just to clarify, the touchpad is very different than a touchscreen. Here's a demo of the Lexus Remote Touchpad: https://www.youtube.com/watch?v=HP3K0PObTBw
If we treat touchpads just as a touchscreen, the HU cannot know when or where to properly draw the cursor and boxes. This means we cannot supply the OEM look and feel, which I believe is what is trying to be preserved here. Additionally, some HID devices provide haptic feedback when highlighting and selecting buttons, which is not available if we treat touchpads as touchscreens. While this is a viable option, it completely removes OEM branding and control from the equation, which is generally one of the strong points of SDL. This will also leave templates behaving drastically differently than VPM apps when using these devices.

@brandon-salahat-tm (Contributor)

Alternative 1 does not address dynamic screens very well. For example, how are scrollable views handled? Assuming there is a list of selectable items that extends beyond the screen, if the user starts scrolling down the list how are the elements sent over? Are they updated continuously? Are they only updated when the scrolling stops? I'd assume on scroll start we would have to clear out all rects, even the scrollbar if visible, wait until the scroll stops, and send over new rects. However, I believe it will be difficult to create a smooth interface this way and the user will experience a good amount of 'jank'.

I don't think this would be a major issue, as this is already how tvOS and accessibility work on iOS and there is not a noticeable impact to scroll performance. The only additional issue with SDL would be the slight additional latency of communicating with SDLCore.

I think the proxy would just need to update the haptic layout once the scroll is completed. I think your example would be an important issue for a touch/swipe-based user interaction scenario, but it should not occur with a wheel/touchpad-based interface. The user would have to select a scroll-down button/control on the screen and essentially be clicking it to initiate scrolling, and realistically the layout should have time to be updated before the user could navigate to the new items. The worst case would be that the haptic box would draw slightly before the scrolling item had arrived at that location.

I guess my point is that scrolling with these controls is paginated scrolling vs free/bouncy scrolling you would do with touch controls. Paginated scrolling should not cause issues with alternative 1. Free scrolling like what you see on a mobile device would not work with this solution, but I am not sure this solution is intended to support that kind of behavior.
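A minimal sketch of that "update once the scroll settles" behavior on iOS, reusing the collectSpatialData helper sketched earlier; sendSpatialData stands in for whatever call the final RPC ends up exposing in the proxy:

import UIKit

final class HapticLayoutUpdater: NSObject, UIScrollViewDelegate {
    // Placeholder for the proxy call that ships the rect array over the RPC.
    var sendSpatialData: (([SpatialRect]) -> Void)?

    // Paginated / programmatic scrolls (wheel or touchpad driven) end here.
    func scrollViewDidEndScrollingAnimation(_ scrollView: UIScrollView) {
        refresh(from: scrollView)
    }

    // Free scrolling, if an app ever allows it on these interfaces, ends here.
    func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
        refresh(from: scrollView)
    }

    private func refresh(from scrollView: UIScrollView) {
        guard let window = scrollView.window else { return }
        // Replace the previously sent rects with the now-visible layout.
        sendSpatialData?(collectSpatialData(from: window))
    }
}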

If we are already sending our layout through Alternative 1, why don't we just send all of our layout including text, colors, etc so the head unit can render the whole thing? We could include a background layer that allows for the continuing of video streaming, but controls can be sent via the rect arrays. I don't really like this approach, but it seems worth mentioning.

I see your point with this, but the point of this proposal is to allow the existing OEM haptic feedback engines to interface with SDL. I don't think the intention was ever to have the head units render the controls for video streaming. I agree that would be the wrong direction to move in.

I would really appreciate hearing from the proposal's authors on their thoughts. It is incredibly important to have an in-depth dialogue during the week of review rather than trying to defend it during our short weekly call. Are we missing anything? Are you 100% behind your original proposal or were you gunning for one of the provided solutions and you don't care which? Do any of the suggestions sound reasonable?

I can't speak for Xevo on the original proposal, but I did draft the alternative 1 approach that most of the discussion has been around.

@joeygrover (Member)

I guess my point is that scrolling with these controls is paginated scrolling vs free/bouncy scrolling you would do with touch controls. Paginated scrolling should not cause issues with alternative 1. Free scrolling like what you see on a mobile device would not work with this solution, but I am not sure this solution is intended to support that kind of behavior.

I think "how robust is this" is something to consider when introducing a feature. I want to make sure any viable scenario is covered, or at least understand the challenges in getting ourselves to a position where we are covered. I can easily see free/bouncy scrolling views introduced in VPM apps.

Touchpad events can be translated into touchscreen events pretty easily. Just plug in a mouse to an Android phone that supports OTG and you can see how the translation happens without apps having to make changes. All we would have to do is add better support for gestures and a "hover state" for a pointer.
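As a back-of-the-envelope sketch of the translation being described, assuming the module reports relative touchpad deltas that get accumulated into an absolute cursor position clamped to the streamed resolution (gesture detection and the "hover state" would layer on top; all names are illustrative):

import CoreGraphics

// Illustrative only: converts relative touchpad motion into an absolute
// on-screen cursor position that could drive onTouchEvent-style input.
struct TouchpadCursor {
    private(set) var position: CGPoint
    let screenSize: CGSize

    init(screenSize: CGSize) {
        self.screenSize = screenSize
        self.position = CGPoint(x: screenSize.width / 2, y: screenSize.height / 2)
    }

    mutating func move(deltaX: CGFloat, deltaY: CGFloat, sensitivity: CGFloat = 2.0) {
        position.x = min(max(position.x + deltaX * sensitivity, 0), screenSize.width - 1)
        position.y = min(max(position.y + deltaY * sensitivity, 0), screenSize.height - 1)
    }
}

// Example: a small rightward swipe on the pad nudges the cursor right.
var cursor = TouchpadCursor(screenSize: CGSize(width: 800, height: 480))
cursor.move(deltaX: 12, deltaY: -3)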

After watching that video I think I am even more against what is being proposed. How are OEMs going to use these rects to display current selection? By splashes of color gradients? Are we going to enforce how a hover selection looks during VPM apps?

[Image: Lexus RC remote touchpad controller, from the YouTube video linked above]

While it might allow the OEMs to continue their branding, visually I'm not sure I see it looking very good at all. Taking the referenced picture above, imagine a music app's color scheme of green and black. However, the OEM's color scheme is red and white, and they use red as their accent color over the current selection. The combination is neither OEM nor app branding, but a mix of both that isn't either.

Or even imagine the image/text in the current selection was red from the app and the OEM's selection overlay was red as well. Not all OEMs' color branding will be the same, so it's not enforceable to avoid certain colors (unless we said only black/white). This also goes back to dealing with the shape of the selection. Are we going to take into consideration that some elements might be placed over a background that actually extends their visual representation but not their clickable region?

IMO, VPM apps will never be conducive to allowing a lot of OEM branding. I think by forcing something like this for OEM branding we are compromising the feature for little to no net gain. If you have example renderings to share that might help us understand how Xevo/Toyota plans on it looking and possibly gain more confidence in the feature.

@brandon-salahat-tm (Contributor)

@joeygrover

I think "how robust is this" is something to consider when introducing a feature. I want to make sure any viable scenario is covered, or at least understand the challenges in getting ourselves to a position where we are covered. I can easily see free/bouncy scrolling views introduced in VPM apps.

VPM apps may very well support this type of scrolling through a touch interface, but will always be limited on haptic platforms to the experience that the hardware control supports. CarPlay on BMW is an excellent example of this. I believe this solution is robust enough to support the haptic interfaces it was designed to support. Even with free scrolling, I am not convinced the latency to update the layout and render the highlighting would necessarily be noticeable to the user, but this would need to be studied and benchmarked.

After watching that video I think I am even more against what is being proposed. How are OEMs going to use these rects to display current selection? By splashes of color gradients? Are we going to enforce how a hover selection looks during VPM apps?

The implementations I have seen highlight the bounding rectangle of the control, much like Android or iOS do for accessibility users. I do not see an issue with this user experience.

This also goes back to dealing with the shape of the selection.

This was handled in a similar fashion to what Google and Apple do. The bounding rectangle is highlighted around the control, independent of how the view is actually drawn (circle, triangle, irregular shape, etc). This is a standard approach.

IMO, VPM apps will never be conducive to allowing a lot of OEM branding. I think by forcing something like this for OEM branding we are compromising the feature for little to no net gain. If you have example renderings to share that might help us understand how Xevo/Toyota plans on it looking and possibly gain more confidence in the feature.

I agree with @Toyota-Sbetts' previous comment on this point. I do not think it will be a good experience if the haptic behavior is vastly different between template and VPM applications. The goal of this proposal is to allow a consistent user experience.

@brandon-salahat-tm (Contributor)

brandon-salahat-tm commented Jul 18, 2017

Here are some images of what BMW and Enform do for highlighting.

https://i.ytimg.com/vi/eHr5ZGxBCHY/maxresdefault.jpg

http://images.hgmsites.net/lrg/lexus-enform-app-suite_100392186_l.jpg

@Toyota-Sbetts (Contributor)

@joeygrover

Touchpad events can be translated into touchscreen events pretty easily. Just plug in a mouse to an Android phone that supports OTG and you can see how the translation happens without apps having to make changes. All we would have to do is add better support for gestures and a "hover state" for a pointer.

This still doesn't account for haptic feedback. Based on our testing and benchmarking, free cursor support similar to what you are suggesting is very inaccurate, and users have a tough time lining up their finger movement with the movement on screen. When pressing in to select, this can cause additional movement and mis-selection. While we could add a notification from the app to the HU when the current cursor position is in a "hover state", I think this would be slow and increase the chatter even more.

I think your example of a mouse explains why it's not a viable solution. The OTG implementation only works if the mouse is reporting 1:1 movement to the app/system, which over SDL would be extremely taxing and slow; I don't think it's an option. Selection location and gestures alone are not accurate enough for a touchpad.

@theresalech changed the title from "[In Review] SDL 0075 - OEM specific Human Interface Device support as Plug-in architecture in SDL proxy" to "[Accepted with Revisions] SDL 0075 - OEM specific Human Interface Device support as Plug-in architecture in SDL proxy" on Jul 19, 2017
@theresalech (Contributor, Author)

The Steering Committee has agreed to move forward with a revised version of Alternative 1 for this proposal, focusing on RPC changes only. While this would typically warrant "returning for revisions," the Steering Committee has decided to make an exception and "accept with revisions" so that the feature can meet the deadline to be included in the Core 4.4 release.

This proposal has been accepted with the following revisions, to be made by the Xevo team:

  • Alternative 1 will be the planned approach, with focus only on RPC changes - a separate proposal describing a fuller structure around proxy implementation should be entered at a later date
  • Add ID to SpatialStruct
  • Change HapticSpatialData array from mandatory to optional
  • Increase the maximum size of the HapticSpatialData array (to at least 100, possibly 999)
  • Add the following to the description of SendHapticData RPC: when a request is sent, if successful, it will replace all spatial data previously sent through RPC
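Taken together, those revisions imply a request shape roughly like the following sketch; the type and field names are illustrative stand-ins for the definitions in the revised proposal:

import CoreGraphics

// Illustrative stand-in for the revised SpatialStruct: each rect now carries an ID.
struct HapticSpatialRect {
    let id: UInt          // added per the revisions above
    let x: CGFloat
    let y: CGFloat
    let width: CGFloat
    let height: CGFloat
}

// Illustrative stand-in for the SendHapticData request.
struct SendHapticDataRequest {
    // Optional per the revisions; when present, a successful request replaces
    // all spatial data previously sent through this RPC. Array size is capped
    // (at least 100, possibly 999).
    let hapticSpatialData: [HapticSpatialRect]?
}

// Example: publish two focus regions, then later replace them with a new layout.
let initial = SendHapticDataRequest(hapticSpatialData: [
    HapticSpatialRect(id: 1, x: 20, y: 20, width: 200, height: 80),
    HapticSpatialRect(id: 2, x: 20, y: 120, width: 200, height: 80)
])
let updated = SendHapticDataRequest(hapticSpatialData: [
    HapticSpatialRect(id: 3, x: 240, y: 20, width: 200, height: 80)
])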

@smartdevicelink locked and limited conversation to collaborators Jul 19, 2017
@theresalech (Contributor, Author)

@jhodges55 @shoamano83 @Toyota-BSalahat please advise when a new PR has been entered to update the proposal to reflect the agreed-upon revisions. I'll then merge the PR so the proposal is up to date, and enter issues in the respective repositories for implementation. Thanks!

@theresalech (Contributor, Author)

theresalech commented Jul 26, 2017

The PR with revisions has been merged into the proposal .md file.

Issues have been entered in the Core, iOS, Android, and RPC repositories.
