⚡️ Speed up method InferenceModelsDepthAnythingV2Adapter.predict by 403% in PR #1959 (feature/the-great-unification-of-inference) #1973
Open
codeflash-ai[bot] wants to merge 1 commit into feature/the-great-unification-of-inference from codeflash/optimize-pr1959-2026-02-04T22.37.23
Conversation
The optimization replaces matplotlib's colormap application with OpenCV's `cv2.applyColorMap`, achieving a **4x speedup** (403% faster, from 218ms to 43.4ms).
## Key Changes
**What was changed:**
- Removed `matplotlib.pyplot` import and `plt.get_cmap("viridis")` call
- Added `cv2` import
- Replaced the line `colored_depth = (cmap(depth_for_viz)[:, :, :3] * 255).astype(np.uint8)` with two OpenCV operations (sketched below):
  - `cv2.applyColorMap(depth_for_viz, cv2.COLORMAP_VIRIDIS)`
  - `cv2.cvtColor(colored_depth, cv2.COLOR_BGR2RGB)` (to convert OpenCV's BGR output to RGB)
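A minimal sketch of the replacement, assuming `depth_for_viz` is already normalized to 0–255 and cast to `uint8` as in the original code (the standalone function form is illustrative; in the PR the change lives inside `predict`):

```python
import cv2
import numpy as np


def colorize_depth(depth_for_viz: np.ndarray) -> np.ndarray:
    """Turn a normalized (H, W) uint8 depth map into an RGB visualization."""
    # Old path (matplotlib), kept here only for reference:
    #   cmap = plt.get_cmap("viridis")
    #   colored_depth = (cmap(depth_for_viz)[:, :, :3] * 255).astype(np.uint8)

    # New path: apply the viridis colormap via OpenCV's precomputed lookup table.
    colored_depth = cv2.applyColorMap(depth_for_viz, cv2.COLORMAP_VIRIDIS)
    # OpenCV colormaps produce BGR; convert to RGB to keep the original output format.
    return cv2.cvtColor(colored_depth, cv2.COLOR_BGR2RGB)
```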
## Why This is Faster
**Original bottleneck:** The line profiler shows that matplotlib's colormap application consumed **84% of total runtime** (200ms out of 238ms). Matplotlib's `cmap(array)` allocates a full float RGBA array and performs masking, clipping, and dtype conversions, and the result still has to be rescaled and cast back to `uint8`, which is slow when done per pixel on large images.
**OpenCV's advantage:** `cv2.applyColorMap` is a compiled C++ function optimized for image processing. It directly maps uint8 values to color values using a pre-computed lookup table, avoiding Python overhead entirely. The additional `cvtColor` operation is also highly optimized and adds minimal overhead (4.1ms, only 7.1% of new total time).
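To illustrate the lookup-table behaviour (illustrative, not code from the PR): for 8-bit single-channel input, `cv2.applyColorMap` is equivalent to indexing a 256-entry BGR table, which is why there is no per-pixel Python work:

```python
import cv2
import numpy as np

depth = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)

# Build the viridis LUT by colormapping the values 0..255 once: shape (256, 3), BGR.
lut = cv2.applyColorMap(np.arange(256, dtype=np.uint8).reshape(1, 256), cv2.COLORMAP_VIRIDIS)[0]

# Applying the colormap is the same as a vectorized table lookup.
assert np.array_equal(cv2.applyColorMap(depth, cv2.COLORMAP_VIRIDIS), lut[depth])
```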
**Performance breakdown:** In the optimized version, the colormap operations now take only ~20ms combined (26.7% + 7.1%), compared to 200ms previously—a **10x improvement** in the visualization step itself.
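A rough single-frame microbenchmark sketch for reproducing the gap (not the project's benchmark harness; the 600×600 size and iteration count are illustrative assumptions):

```python
import timeit

import cv2
import matplotlib.pyplot as plt
import numpy as np

depth = np.random.randint(0, 256, size=(600, 600), dtype=np.uint8)
cmap = plt.get_cmap("viridis")

# Time the old matplotlib path and the new OpenCV path over 50 calls each.
mpl = timeit.timeit(lambda: (cmap(depth)[:, :, :3] * 255).astype(np.uint8), number=50)
ocv = timeit.timeit(
    lambda: cv2.cvtColor(cv2.applyColorMap(depth, cv2.COLORMAP_VIRIDIS), cv2.COLOR_BGR2RGB),
    number=50,
)
print(f"matplotlib colormap:            {mpl / 50 * 1e3:.2f} ms/frame")
print(f"OpenCV applyColorMap + cvtColor: {ocv / 50 * 1e3:.2f} ms/frame")
```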
## Impact on Workloads
This optimization is particularly valuable for:
- **High-throughput depth estimation pipelines** where the `predict` method is called repeatedly
- **Real-time applications** that need fast depth map visualization
- **Large-scale batch processing** - as shown in the test with 600x600 images where runtime improved from 4.48ms to 840μs (434% faster)
The optimization preserves all functionality including normalization behavior and output format. All test cases show either speedups or minimal variations within measurement noise, confirming correctness while delivering consistent performance improvements across different input sizes and depth value ranges.
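As a hedged sanity check of the "same output format" claim (illustrative only, not one of the generated regression tests), the two paths can be compared on shape, dtype, and per-pixel difference; the two libraries build their viridis tables independently, so small rounding differences per channel are possible:

```python
import cv2
import matplotlib.pyplot as plt
import numpy as np

depth = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

cmap = plt.get_cmap("viridis")
old = (cmap(depth)[:, :, :3] * 255).astype(np.uint8)
new = cv2.cvtColor(cv2.applyColorMap(depth, cv2.COLORMAP_VIRIDIS), cv2.COLOR_BGR2RGB)

# Same shape and dtype as before; report the worst-case per-channel difference.
assert old.shape == new.shape and old.dtype == new.dtype
print("max abs channel difference:", np.abs(old.astype(int) - new.astype(int)).max())
```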
**aseembits93** (Contributor) reviewed on Feb 4, 2026

Comment on lines +83 to +84, with a suggested change touching the line `# Convert numpy array to WorkflowImageData`.
⚡️ This pull request contains optimizations for PR #1959

If you approve this dependent PR, these changes will be merged into the original PR branch feature/the-great-unification-of-inference.

📄 403% (4.03x) speedup for `InferenceModelsDepthAnythingV2Adapter.predict` in `inference/models/depth_anything_v2/depth_anything_v2_inference_models.py`

⏱️ Runtime: 218 milliseconds → 43.4 milliseconds (best of 21 runs)
✅ Correctness verification report:
🌀 Generated Regression Tests
To edit these changes, run `git checkout codeflash/optimize-pr1959-2026-02-04T22.37.23` and push.