Conversation
When maxImagePixelCount > 0 and the image exceeds the threshold:
- Display uses an ImageIO-downsampled image to prevent the UIImageView black screen
- Crop uses a memory-efficient CIImage pipeline (CIPerspectiveCorrection) that only decodes the needed source pixels via GPU tiling
- Full original resolution is preserved in the crop output
- Default (maxImagePixelCount = 0) leaves all existing code paths untouched

Changes:
- CropViewConfig: add maxImagePixelCount property
- UIImageExtensions: add exifOrientation, downsampledIfNeeded (public), cropWithCIImage lazy pipeline supporting all transforms
- Mantis.swift: downsample display image in buildCropView
- CropView: add isLargeImageMode, branch crop/asyncCrop/update paths
- CropViewController+CropAPI: branch crop(by:) for large images
- DemoViewController: show downsampled preview, pass original to crop

Co-Authored-By: Claude <noreply@anthropic.com>
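The CIImage crop path named in this commit can be illustrated with a minimal sketch. This is not the PR's actual implementation — the function name and corner parameters are illustrative — but `CIPerspectiveCorrection` is the Core Image filter the commit cites:

```swift
import CoreImage
import UIKit

// Sketch only: applies CIPerspectiveCorrection to extract a quad from a large
// image. Core Image evaluates lazily, so the full bitmap need not be decoded
// into a CGContext up front; rendering is deferred to a CIContext.
func perspectiveCrop(_ image: CIImage,
                     topLeft: CGPoint, topRight: CGPoint,
                     bottomLeft: CGPoint, bottomRight: CGPoint) -> CIImage? {
    guard let filter = CIFilter(name: "CIPerspectiveCorrection") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgPoint: topLeft), forKey: "inputTopLeft")
    filter.setValue(CIVector(cgPoint: topRight), forKey: "inputTopRight")
    filter.setValue(CIVector(cgPoint: bottomLeft), forKey: "inputBottomLeft")
    filter.setValue(CIVector(cgPoint: bottomRight), forKey: "inputBottomRight")
    return filter.outputImage
}
```

The actual pipeline in the PR also has to handle rotation, flipping, and scaling; this only shows the perspective step.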
Add a UISwitch on the left side of the navigation bar to toggle between sunflower.jpg (default) and large.jpg for testing large-image support.

Co-Authored-By: Claude <noreply@anthropic.com>
📝 Walkthrough

Adds configurable large-image support: images may be downsampled for display while a CIImage-based crop path preserves full-resolution output; the demo app gets a Large Image toggle.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant DemoVC as Demo ViewController
    participant UIImageExt as UIImage Extensions
    participant CropView
    participant CI as CoreImage
    User->>DemoVC: select image / toggle Large Image
    DemoVC->>UIImageExt: downsampledIfNeeded(maxPixelCount)
    UIImageExt-->>DemoVC: cached display image (possibly downsampled)
    DemoVC->>CropView: update(display image)
    User->>DemoVC: request crop (cropInfo)
    DemoVC->>CropView: crop(originalImage, cropInfo)
    CropView->>CropView: evaluate isLargeImageMode (maxImagePixelCount)
    alt Large Image Mode
        CropView->>UIImageExt: cropWithCIImage(cropInfo)
        UIImageExt->>CI: build & run CI pipeline (perspective/transform)
        CI-->>UIImageExt: cropped full-res image
    else Normal Mode
        CropView->>UIImageExt: crop(by: cropInfo)
    end
    UIImageExt-->>CropView: cropped UIImage
    CropView-->>DemoVC: CropOutput (full-res)
    DemoVC-->>User: deliver cropped image
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (4)
Sources/Mantis/CropViewController/CropViewController+CropAPI.swift (1)
14-22: Code duplication with `CropView.isLargeImageMode`.

The pixel count check duplicates the logic in `CropView.isLargeImageMode` (lines 62-66 in CropView.swift). Consider reusing the existing computed property to ensure consistency:

♻️ Proposed refactor to reuse isLargeImageMode
```diff
 public func crop(by cropInfo: CropInfo) {
-    let croppedImage: UIImage?
-    let maxPixels = config.cropViewConfig.maxImagePixelCount
-    let pixelW = Int(cropView.image.size.width * cropView.image.scale)
-    let pixelH = Int(cropView.image.size.height * cropView.image.scale)
-    if maxPixels > 0 && pixelW * pixelH > maxPixels {
-        croppedImage = cropView.image.cropWithCIImage(by: cropInfo)
-    } else {
-        croppedImage = cropView.image.crop(by: cropInfo)
-    }
+    let croppedImage = cropView.isLargeImageMode
+        ? cropView.image.cropWithCIImage(by: cropInfo)
+        : cropView.image.crop(by: cropInfo)
```

Note: This would require exposing `isLargeImageMode` as `internal` or higher visibility on `CropView`, or adding a method on `CropViewProtocol`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `Sources/Mantis/CropViewController/CropViewController+CropAPI.swift` around lines 14-22: the pixel-count branching duplicates CropView.isLargeImageMode; replace the manual maxPixels/pixelW*pixelH check by using cropView.isLargeImageMode (or add an isLargeImageMode accessor to CropViewProtocol) to decide between cropWithCIImage(by:) and crop(by:); if visibility prevents direct use, change CropView.isLargeImageMode to internal (or add a protocol method like isLargeImageMode()) and update the call site to use that instead so the logic is centralized in CropView.

Sources/Mantis/Extensions/UIImageExtensions.swift (2)
379-384: Fix SwiftLint colon spacing warnings.

Static analysis flagged inconsistent colon spacing in the array literal.
🔧 Proposed fix for colon spacing

```diff
 let screenCorners = [
-    CGPoint(x:  -halfCropW, y:  -halfCropH),
-    CGPoint(x:  halfCropW, y:  -halfCropH),
-    CGPoint(x:  halfCropW, y:  halfCropH),
-    CGPoint(x:  -halfCropW, y:  halfCropH)
+    CGPoint(x: -halfCropW, y: -halfCropH),
+    CGPoint(x: halfCropW, y: -halfCropH),
+    CGPoint(x: halfCropW, y: halfCropH),
+    CGPoint(x: -halfCropW, y: halfCropH)
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `Sources/Mantis/Extensions/UIImageExtensions.swift` around lines 379-384: the array literal assigned to screenCorners contains inconsistent spacing around colons in the CGPoint initializers; update each CGPoint initializer to use a single space after each colon (e.g., "x: value, y: value") so all four entries are consistently formatted. Locate the screenCorners constant in UIImageExtensions (the CGPoint(...) entries) and normalize the colon spacing throughout the array literal to satisfy SwiftLint.
313-343: Memory consideration in the downsampling approach.

The method converts the UIImage to JPEG/PNG data before creating the thumbnail. This temporarily requires the full image data in memory before ImageIO can create the thumbnail. For extremely large images, this could still trigger memory pressure.
For future optimization, consider accepting a file URL or Data parameter directly when the source is a file, allowing ImageIO to stream from disk without fully loading the UIImage first. However, this is acceptable for the current opt-in implementation.
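A URL-based variant along the lines this comment suggests could look like the following sketch (the helper name is hypothetical; `kCGImageSourceThumbnailMaxPixelSize` caps the longest side in pixels, which differs from the PR's total-pixel-count threshold):

```swift
import ImageIO
import UIKit

// Sketch: let ImageIO stream from disk instead of serializing a UIImage first.
// `maxPixelSize` is the longest-side cap, not a total pixel count.
func downsampledImage(at url: URL, maxPixelSize: Int) -> UIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,  // bake in EXIF orientation
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

Because ImageIO reads from the file incrementally, peak memory is bounded by the thumbnail size rather than the full decoded bitmap.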
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `Sources/Mantis/Extensions/UIImageExtensions.swift` around lines 313-343: the downsampledIfNeeded(maxPixelCount:) implementation currently calls jpegData(compressionQuality:) and pngData() on UIImage, which forces the full image into memory before ImageIO can create a thumbnail; update the API and implementation to accept an optional Data or URL input so callers can provide on-disk image data (or a file URL) to CGImageSourceCreateWithData/CGImageSourceCreateWithURL instead of always serializing the UIImage, and when UIImage-only input is used keep the existing fallback but document the memory caveat. Locate the method downsampledIfNeeded(maxPixelCount:) and the code paths that call jpegData/pngData and replace them with branching that uses the provided Data/URL (using CGImageSourceCreateWithURL when a URL is present) while preserving existing behavior when no external source is supplied.

Sources/Mantis/CropView/CropView+Crop.swift (1)
12-30: Consider moving the crop operation to the background thread for large images.

The cropping logic (lines 14-20) runs on the main thread before dispatching to the background. For large images, `cropWithCIImage` may cause UI blocking when `CIContext.createCGImage()` renders the final output. Consider wrapping the entire crop operation in the background dispatch:

♻️ Proposed refactor to move cropping to background

```diff
 func asyncCrop(_ image: UIImage, completion: @escaping (CropOutput) -> Void) {
     let cropInfo = getCropInfo()
-    let croppedImage: UIImage?
-    if isLargeImageMode {
-        croppedImage = image.cropWithCIImage(by: cropInfo)
-    } else {
-        croppedImage = image.crop(by: cropInfo)
-    }
-    let cropOutput = (croppedImage, makeTransformation(), cropInfo)
-
+    let transformation = makeTransformation()
+    let isLarge = isLargeImageMode
+    DispatchQueue.global(qos: .userInteractive).async {
+        let croppedImage: UIImage?
+        if isLarge {
+            croppedImage = image.cropWithCIImage(by: cropInfo)
+        } else {
+            croppedImage = image.crop(by: cropInfo)
+        }
+        let cropOutput = (croppedImage, transformation, cropInfo)
         let maskedCropOutput = self.addImageMask(to: cropOutput)
         DispatchQueue.main.async {
             self.activityIndicator.stopAnimating()
             self.activityIndicator.isHidden = true
             completion(maskedCropOutput)
         }
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `Sources/Mantis/CropView/CropView+Crop.swift` around lines 12-30: the crop work is happening on the main thread inside asyncCrop; move the heavy cropping into the background block to avoid UI jank. In asyncCrop, call getCropInfo() (cheap) on the main thread if needed, but perform the image cropping (use image.cropWithCIImage(by:) when isLargeImageMode is true, otherwise image.crop(by:)) inside DispatchQueue.global(qos: .userInteractive).async, then build cropOutput, call addImageMask(to:) on that background thread, and only dispatch back to DispatchQueue.main to stop the activityIndicator and invoke completion(maskedCropOutput). Ensure any UI access (activityIndicator) remains on the main thread and do not touch UI from the background.
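Several of these comments hinge on `CropView.isLargeImageMode`. Based on the pixel-count check the review quotes, the property is presumably equivalent to something like this sketch (the real implementation lives in CropView.swift, lines 62-66, and may differ in detail):

```swift
// Sketch reconstructed from the review's description, not the actual source.
var isLargeImageMode: Bool {
    let maxPixels = cropViewConfig.maxImagePixelCount
    guard maxPixels > 0 else { return false }  // 0 (the default) disables the feature
    // Pixel dimensions: point size multiplied by the image's scale factor.
    let pixelWidth = Int(image.size.width * image.scale)
    let pixelHeight = Int(image.size.height * image.scale)
    return pixelWidth * pixelHeight > maxPixels
}
```

Centralizing this check is exactly what the first nitpick asks for, so the crop branches in CropView+Crop.swift and CropViewController+CropAPI.swift stay in sync.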
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Example/large.jpg` is excluded by `!**/*.jpg`
📒 Files selected for processing (8)
- Example/DemoViewController.swift
- Example/MantisExample.xcodeproj/project.pbxproj
- Sources/Mantis/CropView/CropView+Crop.swift
- Sources/Mantis/CropView/CropView.swift
- Sources/Mantis/CropViewConfig.swift
- Sources/Mantis/CropViewController/CropViewController+CropAPI.swift
- Sources/Mantis/Extensions/UIImageExtensions.swift
- Sources/Mantis/Mantis.swift
Pull request overview
Adds a “large image mode” to Mantis to avoid UIImageView black rendering / CGContext allocation failures when working with very large images (Fixes #95).
Changes:
- Introduces `CropViewConfig.maxImagePixelCount` to enable downsampling for display and a CIImage-based cropping path for oversized inputs.
- Updates crop flows to switch between the existing CGContext-based crop and the new CIImage pipeline based on pixel count.
- Updates the example app to include a large-image demo asset and a UI toggle.
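For callers, opting in would presumably look like this sketch (the `maxImagePixelCount` property comes from this PR; `Mantis.cropViewController(image:config:)` is the library's existing factory, and the threshold value here is an arbitrary example):

```swift
import Mantis

var config = Mantis.Config()
// Downsample for display and switch to the CIImage crop path once the
// source exceeds ~16 MP; 0 (the default) leaves all existing paths untouched.
config.cropViewConfig.maxImagePixelCount = 4096 * 4096

let cropViewController = Mantis.cropViewController(image: originalImage, config: config)
cropViewController.delegate = self
present(cropViewController, animated: true)
```

The crop delegate callback still receives the full-resolution output, since only the display image is downsampled.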
Reviewed changes
Copilot reviewed 8 out of 9 changed files in this pull request and generated 7 comments.
Show a summary per file
| File | Description |
|---|---|
| Sources/Mantis/Mantis.swift | Uses a downsampled image for the display ImageContainer when configured. |
| Sources/Mantis/Extensions/UIImageExtensions.swift | Adds downsampling helper + CIImage crop implementation for large images. |
| Sources/Mantis/CropViewController/CropViewController+CropAPI.swift | Switches crop pipeline based on configured max pixel threshold. |
| Sources/Mantis/CropViewConfig.swift | Adds maxImagePixelCount configuration knob and documentation. |
| Sources/Mantis/CropView/CropView.swift | Switches crop output pipeline and downsample-on-update display behavior. |
| Sources/Mantis/CropView/CropView+Crop.swift | Updates async crop to use CIImage pipeline for large images. |
| Example/MantisExample.xcodeproj/project.pbxproj | Adds large.jpg to the example project resources. |
| Example/DemoViewController.swift | Adds UI toggle and uses downsampled display image in the example. |
Code context:

```swift
let scaleFactor = sqrt(ratio)
let maxDimension = max(CGFloat(pixelWidth), CGFloat(pixelHeight)) * scaleFactor
// …
guard let imageData = jpegData(compressionQuality: 1.0) ?? pngData(),
```
downsampledIfNeeded() builds the CGImageSource from jpegData(...) ?? pngData(). For images with alpha, jpegData will succeed but discard transparency (flattening pixels), which can change how transparent PNGs render in the crop UI. Prefer preserving alpha by choosing PNG when the source has an alpha channel (or avoid JPEG altogether for this helper).
```diff
-guard let imageData = jpegData(compressionQuality: 1.0) ?? pngData(),
+let imageData: Data?
+if let cgImage = self.cgImage {
+    let alphaInfo = cgImage.alphaInfo
+    let hasAlpha: Bool = (
+        alphaInfo == .first ||
+        alphaInfo == .last ||
+        alphaInfo == .premultipliedFirst ||
+        alphaInfo == .premultipliedLast
+    )
+    if hasAlpha {
+        imageData = self.pngData()
+    } else {
+        imageData = self.jpegData(compressionQuality: 1.0)
+    }
+} else {
+    // Fallback when cgImage is unavailable: prefer PNG to preserve potential transparency.
+    imageData = self.pngData() ?? self.jpegData(compressionQuality: 1.0)
+}
+guard let imageData,
```
Use CGImage.dataProvider (zero-copy for file-backed images) instead of jpegData()/pngData() which forces a full decode then re-encode. Also add optional sourceData parameter for callers that have original compressed data available. Re-encoding is now only a last-resort fallback. Co-Authored-By: Claude <noreply@anthropic.com>
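The zero-copy path this commit describes might be sketched as follows (the helper name is hypothetical; note the caveat raised in a later commit that `dataProvider` can hold decoded bitmap bytes rather than an encoded container):

```swift
import ImageIO
import UIKit

// Sketch: prefer already-available encoded bytes before re-encoding.
func makeImageSource(for image: UIImage, sourceData: Data? = nil) -> CGImageSource? {
    // 1. Caller-provided original compressed data (best case).
    if let data = sourceData,
       let src = CGImageSourceCreateWithData(data as CFData, nil) {
        return src
    }
    // 2. Backing store of the CGImage — zero-copy for file-backed images,
    //    but may be raw pixels that ImageIO cannot parse as a container.
    if let providerData = image.cgImage?.dataProvider?.data,
       let src = CGImageSourceCreateWithData(providerData, nil) {
        return src
    }
    // 3. Last resort: re-encode the UIImage.
    if let encoded = image.pngData() ?? image.jpegData(compressionQuality: 1.0),
       let src = CGImageSourceCreateWithData(encoded as CFData, nil) {
        return src
    }
    return nil
}
```

As the follow-up commit notes, source creation can succeed on raw pixel data while thumbnail creation later fails, so a robust version would validate each branch end-to-end before falling through to the next.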
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Sources/Mantis/Extensions/UIImageExtensions.swift`:
- Around line 397-400: The CGPoint literals inside screenCorners have extra
spaces after the parameter labels (e.g., "x: halfCropW") causing SwiftLint
colon spacing violations; update the four CGPoint initializers in the
screenCorners code to use a single space after each colon for both x: and y:
(e.g., "x: -halfCropW, y: -halfCropH") so the colon spacing conforms to
SwiftLint rules.
- Around line 333-344: The current else-if chain sets imageSource using
sourceData or providerData and skips the jpegData()/pngData() fallback if
provider-based CGImageSourceCreateWithData(providerData, ...) returns nil;
change the logic in UIImageExtensions (around the imageSource initialization) to
try each approach independently: first attempt CGImageSourceCreateWithData for
sourceData, if that yields nil then attempt CGImageSourceCreateWithData for
providerData (from cgImage.dataProvider.data), and if that also yields nil then
fall back to creating encoded data via jpegData(compressionQuality:) ??
pngData() and pass that to CGImageSourceCreateWithData; ensure imageSource is
assigned the first non-nil CGImageSource result so the re-encode fallback always
runs when provider-based creation fails.
Change displayImage from a computed property to a cached stored property that updates via didSet when image changes, preventing redundant downsampledIfNeeded calls on every access. Co-Authored-By: Claude <noreply@anthropic.com>
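The caching change described here might look like the following sketch (property names inferred from the commit message and review text, not the actual diff):

```swift
// Sketch: recompute the downsampled display image only when the source image
// changes, instead of re-running downsampledIfNeeded on every access of a
// computed property.
var image: UIImage {
    didSet {
        displayImage = image.downsampledIfNeeded(maxPixelCount: cropViewConfig.maxImagePixelCount)
    }
}

private(set) var displayImage: UIImage
```

`didSet` fires on every assignment after initialization, so the cache stays consistent with the source image without callers having to invalidate it manually.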
Actionable comments posted: 1
🧹 Nitpick comments (1)
Example/DemoViewController.swift (1)
93-97: Consider localizing and labeling the Large Image toggle.

At line 93, the text is hardcoded. If this demo is user-facing, prefer `NSLocalizedString` and set an explicit accessibility label/value on the switch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `Example/DemoViewController.swift` around lines 93-97: replace the hardcoded label text and add accessibility metadata: use NSLocalizedString for switchLabel.text (e.g., NSLocalizedString("Large Image", comment: "toggle label for large image demo")) instead of the literal string in DemoViewController, and set explicit accessibility properties on largeImageSwitch such as accessibilityLabel (matching the localized string) and accessibilityValue/state so screen readers can convey the toggle state; update references to switchLabel and largeImageSwitch accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Example/DemoViewController.swift`:
- Around line 16-18: The synchronous call to
image?.downsampledIfNeeded(maxPixelCount: maxImagePixelCount) inside the image
didSet blocks the main thread; change didSet to perform downsampling on a
background queue (e.g., DispatchQueue.global), capture the current image/local
maxImagePixelCount at task start, then when the downsampled result is ready
dispatch back to the main queue to assign cachedDisplayImage only if the source
image hasn’t changed (stale-result guard comparing the captured image or using
an incrementing token). Ensure all UI/state writes (cachedDisplayImage) happen
on the main thread and avoid retaining long-lived work for cancelled/stale
images.
…der yields raw pixel data

CGImage.dataProvider.data can return decoded bitmap data rather than an encoded image format. The previous else-if chain would accept the data but fail at thumbnail creation, skipping the jpegData/pngData fallback. Restructure as independent if-blocks so each approach is tried end-to-end before moving to the next.

Co-Authored-By: Claude <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
Sources/Mantis/Extensions/UIImageExtensions.swift (1)
443-444: Reuse a shared `CIContext` for repeated crops.

Creating a new `CIContext` per crop is avoidable overhead; a shared context improves throughput for repeated operations.

Proposed refactor
```diff
 extension UIImage {
+    private static let sharedCIContext = CIContext(options: [.useSoftwareRenderer: false])
+
     func cgImageWithFixedOrientation() -> CGImage? {
         if imageOrientation == .up {
             return cgImage
@@
-        let context = CIContext(options: [.useSoftwareRenderer: false])
+        let context = Self.sharedCIContext
         guard let resultCG = context.createCGImage(scaledImage, from: scaledImage.extent) else {
             return nil
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Sources/Mantis/Extensions/UIImageExtensions.swift` around lines 443 - 444, The code currently allocates a new CIContext for each crop (the local let context = CIContext(...) used before calling createCGImage), which is expensive; refactor by adding a shared static CIContext (e.g., a private static let sharedCIContext = CIContext(options: [.useSoftwareRenderer: false]) on the UIImageExtensions/UIImage extension) and replace the per-call context with that shared context when calling createCGImage(scaledImage, from: ...), ensuring any references to the local variable 'context' are updated to use the shared property (e.g., Self.sharedCIContext or UIImageExtensions.sharedCIContext).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Sources/Mantis/Extensions/UIImageExtensions.swift`:
- Around line 357-359: The fallback data creation currently prefers
jpegData(compressionQuality: 1.0) over pngData(), which can strip alpha; change
the selection to prefer pngData() first (i.e., use pngData() ??
jpegData(compressionQuality: 1.0)) so transparent images retain their alpha
before passing to
CGImageSourceCreateWithData/CGImageSourceCreateThumbnailAtIndex; update the
variable/condition that sets fallbackData accordingly in UIImageExtensions.swift
where fallbackData is created.
…annel Co-Authored-By: Claude <noreply@anthropic.com>
Fixes #95