[ASCornerRounding] Is it possible to use a lower-than-contentsScale resolution when precompositing corners? #1152
base: master
Conversation
|
This is an interesting discussion, but I don't think we can change this codepath without affecting the visual appearance of the corners.
```diff
@@ -323,7 +323,7 @@ - (void)__didDisplayNodeContentWithRenderingContext:(CGContextRef)context image:
   bounds.size.height *= contentsScale;
   CGFloat white = 0.0f, alpha = 0.0f;
   [backgroundColor getWhite:&white alpha:&alpha];
-  ASGraphicsBeginImageContextWithOptions(bounds.size, (alpha == 1.0f), contentsScale);
+  ASGraphicsBeginImageContextWithOptions(bounds.size, (alpha == 1.0f), (*image).scale);
```
Particularly for corner rounding, I don't think this is correct. For the corners to have crisp edges, the backing store has to be as high-res as the screen scale to capture the fine detail of the corner's curve. Otherwise the corners will be up-scaled when presented.
This does mean increased memory usage, in which case the Clip Corner mode may be a better fit (if this significantly affects your app).
@appleguy Thanks for the information. I understand the backing store has to have as many pixels as there are on the screen, but the problem is that (please refer to #1151 and the demo project in it) we are creating a backing store with 9 times as many pixels (on a 3x device) as are actually on the screen, which is a huge waste of memory.
Per my understanding, this function is trying to precomposite rounded corners without changing the number of pixels used for the layer's backing store. For example, if we set a 300px × 300px image on a 100pt × 100pt imageNode on a 3x device, the backing image could use either a 3x-scale/100pt or a 1x-scale/300pt combination. Neither causes any up-scaling, since we've provided exactly the number of pixels needed on the screen. Please correct me if I'm wrong.
But the code here both multiplies the size by the scale and sets the context scale to 3x, creating a 3x-scale/300pt context. That results in a 900px × 900px backing store for an imageNode that only occupies 300px × 300px of the screen, which seems quite unreasonable.
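The memory arithmetic above can be checked with a quick sketch (plain Python, assuming a 32-bit BGRA backing store, i.e. 4 bytes per pixel; the function name is ours, not Texture's):

```python
BYTES_PER_PIXEL = 4  # assumed 32-bit BGRA backing store


def backing_store_bytes(points: float, scale: float) -> int:
    """Bytes used by a square backing store of points x points at the given scale."""
    pixels_per_side = int(points * scale)
    return pixels_per_side * pixels_per_side * BYTES_PER_PIXEL


# A 100pt x 100pt node on a 3x device needs 300px x 300px on screen:
needed = backing_store_bytes(100, 3)       # 300 * 300 * 4 = 360,000 bytes

# Current code: bounds are already multiplied by contentsScale (-> 300pt),
# then the context is created at 3x again, yielding 900px x 900px:
current = backing_store_bytes(100 * 3, 3)  # 900 * 900 * 4 = 3,240,000 bytes

print(current // needed)  # 9, i.e. 9x the required memory on a 3x device
```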
PS: We've tried two possible fixes for the issue:
- changing the context scale to the image's scale, yielding a 1x/300pt backing store
- removing all contentsScale multiplication in this function, yielding a 3x/100pt backing store
Both work great, and we don't see any loss of image or corner-rounding detail, even when zooming into a screenshot.
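Both fixes yield the same pixel count as the screen requires; only the point/scale bookkeeping differs. A minimal sketch of that equivalence (illustrative Python, names are ours):

```python
def pixel_width(points: float, scale: float) -> float:
    """On-screen pixel width of a backing store: points times scale."""
    return points * scale


SCREEN_SCALE = 3     # 3x device
NODE_POINTS = 100    # 100pt-wide imageNode

on_screen = pixel_width(NODE_POINTS, SCREEN_SCALE)              # 300px needed

# Fix 1: 1x-scale context over contentsScale-multiplied (300pt) bounds
fix_1 = pixel_width(NODE_POINTS * SCREEN_SCALE, 1)              # 300px

# Fix 2: 3x-scale context over the original 100pt bounds
fix_2 = pixel_width(NODE_POINTS, SCREEN_SCALE)                  # 300px

assert fix_1 == fix_2 == on_screen  # neither up-scales nor over-allocates
```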
@appleguy Hope you have time to review this, since it really saves a lot of memory. We've been applying the fix in our own fork since last year, and everything looks great :)
@Adlai-Holler @nguyenhuy Please help if possible. The demo project in #1151 should be quite self-explanatory.
@appleguy Can you take another look at this PR? Thanks!
Try to fix #1151
In __didDisplayNodeContentWithRenderingContext, with cornerRoundingType set to precomposited, the image before and after processing should share the same scale, so just use the input image's scale when creating the context.