
Conversation

@hoolioh
Contributor

@hoolioh hoolioh commented May 30, 2025

What does this PR do?

Prevent panics from unwinding into the host language so we can avoid undefined behavior (UB).

Motivation

Currently there is no guarantee that the trace exporter methods are panic-free, so a panic unwinding into the host language could cause UB.

Additional Notes

The wrapper is gated behind the "catch_unwind" feature, which is enabled by default. The feature flag provides a mechanism to disable the wrapper if the performance penalty turns out to be higher than expected.
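Conceptually, the wrapper routes each public exporter call through `std::panic::catch_unwind` and maps a caught panic to an error value instead of letting it unwind across the FFI boundary. A minimal sketch (the `guarded` helper and `ExporterError` type are illustrative, not the crate's actual API):

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

/// Hypothetical error type; the real crate uses its own error enum.
#[derive(Debug, PartialEq)]
enum ExporterError {
    Panic,
}

/// Run an exporter operation, converting a panic into an error value
/// instead of letting it unwind into the caller (UB across `extern "C"`).
fn guarded<T>(op: impl FnOnce() -> T) -> Result<T, ExporterError> {
    catch_unwind(AssertUnwindSafe(op)).map_err(|_| ExporterError::Panic)
}

fn main() {
    // Normal results pass through untouched.
    assert_eq!(guarded(|| 1 + 1), Ok(2));
    // A panic is caught and surfaced as an error code instead.
    assert_eq!(guarded::<i32>(|| panic!("boom")), Err(ExporterError::Panic));
}
```

Disabling the feature would compile the call sites without the `catch_unwind` indirection, which is where the benchmarked overhead comes from.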

@hoolioh hoolioh force-pushed the julio/catch-panics-trace-exporter branch from 9b2b792 to 762337a Compare May 30, 2025 12:46
@pr-commenter

pr-commenter bot commented May 30, 2025

Benchmarks

Comparison

Benchmark execution time: 2025-06-26 11:20:17

Comparing candidate commit 39b17b6 in PR branch julio/catch-panics-trace-exporter with baseline commit 3804290 in branch main.

Found 0 performance improvements and 1 performance regression. Performance is the same for 51 metrics; 2 metrics are unstable.

scenario:tags/replace_trace_tags

  • 🟥 execution_time [+159.923ns; +165.110ns] or [+6.603%; +6.817%]

Candidate

Candidate benchmark details

Group 1

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
credit_card/is_card_number/ execution_time 3.891µs 3.912µs ± 0.003µs 3.911µs ± 0.002µs 3.913µs 3.916µs 3.919µs 3.921µs 0.24% -1.157 11.231 0.07% 0.000µs 1 200
credit_card/is_card_number/ throughput 255050157.703op/s 255654877.548op/s ± 192201.179op/s 255671607.672op/s ± 101899.026op/s 255765968.439op/s 255876317.975op/s 255975592.799op/s 257005433.398op/s 0.52% 1.184 11.423 0.07% 13590.676op/s 1 200
credit_card/is_card_number/ 3782-8224-6310-005 execution_time 77.248µs 77.836µs ± 0.815µs 77.463µs ± 0.146µs 77.921µs 79.742µs 81.067µs 81.396µs 5.08% 2.648 7.169 1.04% 0.058µs 1 200
credit_card/is_card_number/ 3782-8224-6310-005 throughput 12285577.389op/s 12848841.599op/s ± 130995.748op/s 12909422.623op/s ± 24311.795op/s 12924010.875op/s 12934217.278op/s 12939453.209op/s 12945391.722op/s 0.28% -2.581 6.791 1.02% 9262.798op/s 1 200
credit_card/is_card_number/ 378282246310005 execution_time 70.878µs 71.420µs ± 0.362µs 71.328µs ± 0.209µs 71.595µs 72.158µs 72.516µs 72.639µs 1.84% 1.054 0.985 0.51% 0.026µs 1 200
credit_card/is_card_number/ 378282246310005 throughput 13766626.766op/s 14002056.599op/s ± 70573.935op/s 14019666.106op/s ± 40932.651op/s 14052049.502op/s 14090288.601op/s 14106751.560op/s 14108785.793op/s 0.64% -1.026 0.907 0.50% 4990.331op/s 1 200
credit_card/is_card_number/37828224631 execution_time 3.893µs 3.911µs ± 0.002µs 3.911µs ± 0.001µs 3.912µs 3.914µs 3.915µs 3.915µs 0.12% -2.131 16.687 0.06% 0.000µs 1 200
credit_card/is_card_number/37828224631 throughput 255401257.150op/s 255721434.203op/s ± 146146.800op/s 255716608.794op/s ± 95427.665op/s 255813504.453op/s 255906374.416op/s 255973538.699op/s 256860315.101op/s 0.45% 2.156 16.912 0.06% 10334.139op/s 1 200
credit_card/is_card_number/378282246310005 execution_time 67.770µs 68.140µs ± 0.452µs 67.948µs ± 0.126µs 68.250µs 69.028µs 69.938µs 70.202µs 3.32% 2.326 5.921 0.66% 0.032µs 1 200
credit_card/is_card_number/378282246310005 throughput 14244616.698op/s 14676396.118op/s ± 95919.732op/s 14717072.577op/s ± 27382.064op/s 14736527.431op/s 14748602.915op/s 14754645.512op/s 14755858.302op/s 0.26% -2.277 5.635 0.65% 6782.549op/s 1 200
credit_card/is_card_number/37828224631000521389798 execution_time 44.563µs 45.064µs ± 0.176µs 45.065µs ± 0.121µs 45.188µs 45.346µs 45.400µs 45.474µs 0.91% -0.146 -0.441 0.39% 0.012µs 1 200
credit_card/is_card_number/37828224631000521389798 throughput 21990433.369op/s 22191035.920op/s ± 86776.509op/s 22190162.045op/s ± 59660.652op/s 22246613.889op/s 22328572.869op/s 22394562.188op/s 22440267.949op/s 1.13% 0.164 -0.427 0.39% 6136.026op/s 1 200
credit_card/is_card_number/x371413321323331 execution_time 6.026µs 6.035µs ± 0.004µs 6.034µs ± 0.003µs 6.037µs 6.043µs 6.047µs 6.048µs 0.23% 0.609 0.003 0.07% 0.000µs 1 200
credit_card/is_card_number/x371413321323331 throughput 165333626.905op/s 165705396.297op/s ± 123466.435op/s 165720909.698op/s ± 75036.591op/s 165792153.892op/s 165888835.966op/s 165907085.061op/s 165936648.830op/s 0.13% -0.606 -0.003 0.07% 8730.395op/s 1 200
credit_card/is_card_number_no_luhn/ execution_time 3.894µs 3.912µs ± 0.003µs 3.911µs ± 0.002µs 3.913µs 3.918µs 3.920µs 3.920µs 0.23% -0.302 5.228 0.08% 0.000µs 1 200
credit_card/is_card_number_no_luhn/ throughput 255069595.495op/s 255639948.856op/s ± 202825.984op/s 255660704.992op/s ± 112781.560op/s 255762278.115op/s 255908304.243op/s 255939161.866op/s 256818882.827op/s 0.45% 0.319 5.310 0.08% 14341.963op/s 1 200
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 execution_time 65.149µs 65.435µs ± 0.049µs 65.430µs ± 0.022µs 65.458µs 65.511µs 65.576µs 65.588µs 0.24% -0.844 7.924 0.07% 0.003µs 1 200
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 throughput 15246673.236op/s 15282290.458op/s ± 11371.966op/s 15283577.032op/s ± 5138.367op/s 15288223.100op/s 15294217.333op/s 15313325.838op/s 15349383.572op/s 0.43% 0.864 8.016 0.07% 804.119op/s 1 200
credit_card/is_card_number_no_luhn/ 378282246310005 execution_time 52.943µs 53.006µs ± 0.036µs 53.002µs ± 0.020µs 53.026µs 53.054µs 53.091µs 53.291µs 0.54% 2.617 18.301 0.07% 0.003µs 1 200
credit_card/is_card_number_no_luhn/ 378282246310005 throughput 18764864.826op/s 18865629.128op/s ± 12750.824op/s 18867087.311op/s ± 7211.628op/s 18873636.832op/s 18882313.883op/s 18885515.282op/s 18888245.845op/s 0.11% -2.589 18.030 0.07% 901.619op/s 1 200
credit_card/is_card_number_no_luhn/37828224631 execution_time 3.889µs 3.912µs ± 0.003µs 3.911µs ± 0.002µs 3.914µs 3.917µs 3.920µs 3.923µs 0.29% -1.146 11.459 0.08% 0.000µs 1 200
credit_card/is_card_number_no_luhn/37828224631 throughput 254933440.637op/s 255643042.899op/s ± 214758.013op/s 255662852.188op/s ± 123143.066op/s 255769076.839op/s 255911645.332op/s 255957056.137op/s 257158189.799op/s 0.58% 1.177 11.677 0.08% 15185.685op/s 1 200
credit_card/is_card_number_no_luhn/378282246310005 execution_time 49.736µs 49.797µs ± 0.032µs 49.794µs ± 0.020µs 49.815µs 49.862µs 49.884µs 49.911µs 0.23% 0.709 0.553 0.06% 0.002µs 1 200
credit_card/is_card_number_no_luhn/378282246310005 throughput 20035681.455op/s 20081366.448op/s ± 12856.623op/s 20082543.249op/s ± 8234.680op/s 20090559.948op/s 20098237.688op/s 20104089.977op/s 20106295.281op/s 0.12% -0.705 0.544 0.06% 909.101op/s 1 200
credit_card/is_card_number_no_luhn/37828224631000521389798 execution_time 44.596µs 45.071µs ± 0.169µs 45.088µs ± 0.122µs 45.200µs 45.316µs 45.392µs 45.403µs 0.70% -0.397 -0.331 0.38% 0.012µs 1 200
credit_card/is_card_number_no_luhn/37828224631000521389798 throughput 22024749.072op/s 22187397.831op/s ± 83566.319op/s 22178902.319op/s ± 60043.635op/s 22242821.955op/s 22341486.109op/s 22374217.208op/s 22423766.769op/s 1.10% 0.414 -0.314 0.38% 5909.031op/s 1 200
credit_card/is_card_number_no_luhn/x371413321323331 execution_time 6.027µs 6.034µs ± 0.012µs 6.033µs ± 0.002µs 6.035µs 6.040µs 6.084µs 6.170µs 2.28% 8.675 87.404 0.20% 0.001µs 1 200
credit_card/is_card_number_no_luhn/x371413321323331 throughput 162066200.277op/s 165724588.078op/s ± 324686.210op/s 165759565.701op/s ± 65964.209op/s 165829529.447op/s 165888196.842op/s 165918250.198op/s 165931939.665op/s 0.10% -8.591 85.912 0.20% 22958.782op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
credit_card/is_card_number/ execution_time [3.911µs; 3.912µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/ throughput [255628240.313op/s; 255681514.783op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/ 3782-8224-6310-005 execution_time [77.723µs; 77.949µs] or [-0.145%; +0.145%] None None None
credit_card/is_card_number/ 3782-8224-6310-005 throughput [12830686.848op/s; 12866996.350op/s] or [-0.141%; +0.141%] None None None
credit_card/is_card_number/ 378282246310005 execution_time [71.370µs; 71.470µs] or [-0.070%; +0.070%] None None None
credit_card/is_card_number/ 378282246310005 throughput [13992275.730op/s; 14011837.468op/s] or [-0.070%; +0.070%] None None None
credit_card/is_card_number/37828224631 execution_time [3.910µs; 3.911µs] or [-0.008%; +0.008%] None None None
credit_card/is_card_number/37828224631 throughput [255701179.662op/s; 255741688.744op/s] or [-0.008%; +0.008%] None None None
credit_card/is_card_number/378282246310005 execution_time [68.077µs; 68.202µs] or [-0.092%; +0.092%] None None None
credit_card/is_card_number/378282246310005 throughput [14663102.565op/s; 14689689.670op/s] or [-0.091%; +0.091%] None None None
credit_card/is_card_number/37828224631000521389798 execution_time [45.040µs; 45.088µs] or [-0.054%; +0.054%] None None None
credit_card/is_card_number/37828224631000521389798 throughput [22179009.530op/s; 22203062.309op/s] or [-0.054%; +0.054%] None None None
credit_card/is_card_number/x371413321323331 execution_time [6.034µs; 6.035µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/x371413321323331 throughput [165688285.036op/s; 165722507.557op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ execution_time [3.911µs; 3.912µs] or [-0.011%; +0.011%] None None None
credit_card/is_card_number_no_luhn/ throughput [255611839.125op/s; 255668058.587op/s] or [-0.011%; +0.011%] None None None
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 execution_time [65.429µs; 65.442µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 throughput [15280714.413op/s; 15283866.503op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ 378282246310005 execution_time [53.001µs; 53.011µs] or [-0.009%; +0.009%] None None None
credit_card/is_card_number_no_luhn/ 378282246310005 throughput [18863861.987op/s; 18867396.270op/s] or [-0.009%; +0.009%] None None None
credit_card/is_card_number_no_luhn/37828224631 execution_time [3.911µs; 3.912µs] or [-0.012%; +0.012%] None None None
credit_card/is_card_number_no_luhn/37828224631 throughput [255613279.504op/s; 255672806.294op/s] or [-0.012%; +0.012%] None None None
credit_card/is_card_number_no_luhn/378282246310005 execution_time [49.793µs; 49.802µs] or [-0.009%; +0.009%] None None None
credit_card/is_card_number_no_luhn/378282246310005 throughput [20079584.644op/s; 20083148.252op/s] or [-0.009%; +0.009%] None None None
credit_card/is_card_number_no_luhn/37828224631000521389798 execution_time [45.048µs; 45.095µs] or [-0.052%; +0.052%] None None None
credit_card/is_card_number_no_luhn/37828224631000521389798 throughput [22175816.343op/s; 22198979.319op/s] or [-0.052%; +0.052%] None None None
credit_card/is_card_number_no_luhn/x371413321323331 execution_time [6.032µs; 6.036µs] or [-0.028%; +0.028%] None None None
credit_card/is_card_number_no_luhn/x371413321323331 throughput [165679589.693op/s; 165769586.464op/s] or [-0.027%; +0.027%] None None None

Group 2

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
concentrator/add_spans_to_concentrator execution_time 8.167ms 8.202ms ± 0.043ms 8.184ms ± 0.009ms 8.197ms 8.299ms 8.310ms 8.322ms 1.70% 1.604 0.887 0.53% 0.003ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
concentrator/add_spans_to_concentrator execution_time [8.196ms; 8.208ms] or [-0.073%; +0.073%] None None None

Group 3

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
tags/replace_trace_tags execution_time 2.540µs 2.585µs ± 0.015µs 2.583µs ± 0.003µs 2.587µs 2.625µs 2.630µs 2.634µs 1.98% 0.801 3.634 0.60% 0.001µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
tags/replace_trace_tags execution_time [2.582µs; 2.587µs] or [-0.083%; +0.083%] None None None

Group 4

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
two way interface execution_time 17.768µs 26.005µs ± 10.447µs 18.021µs ± 0.196µs 35.367µs 43.452µs 45.856µs 73.309µs 306.80% 1.124 1.441 40.07% 0.739µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
two way interface execution_time [24.557µs; 27.453µs] or [-5.568%; +5.568%] None None None

Group 5

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
write only interface execution_time 1.172µs 3.248µs ± 1.419µs 3.036µs ± 0.024µs 3.061µs 3.704µs 13.914µs 14.881µs 390.20% 7.318 54.870 43.58% 0.100µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
write only interface execution_time [3.052µs; 3.445µs] or [-6.055%; +6.055%] None None None

Group 6

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
sql/obfuscate_sql_string execution_time 86.448µs 86.684µs ± 0.262µs 86.651µs ± 0.050µs 86.711µs 86.800µs 87.121µs 90.082µs 3.96% 11.121 139.563 0.30% 0.019µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
sql/obfuscate_sql_string execution_time [86.648µs; 86.721µs] or [-0.042%; +0.042%] None None None

Group 7

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... execution_time 205.130µs 205.708µs ± 0.464µs 205.657µs ± 0.187µs 205.851µs 206.198µs 208.237µs 209.330µs 1.79% 4.351 27.887 0.22% 0.033µs 1 200
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... throughput 4777136.881op/s 4861282.528op/s ± 10857.823op/s 4862462.297op/s ± 4410.628op/s 4866684.735op/s 4872605.499op/s 4873914.423op/s 4874945.783op/s 0.26% -4.278 27.196 0.22% 767.764op/s 1 200
normalization/normalize_name/normalize_name/bad-name execution_time 18.214µs 18.317µs ± 0.107µs 18.284µs ± 0.028µs 18.335µs 18.506µs 18.666µs 19.285µs 5.47% 4.564 33.094 0.58% 0.008µs 1 200
normalization/normalize_name/normalize_name/bad-name throughput 51855116.804op/s 54595184.838op/s ± 311225.095op/s 54691398.031op/s ± 83776.902op/s 54758019.442op/s 54817758.605op/s 54858843.604op/s 54904175.928op/s 0.39% -4.326 29.990 0.57% 22006.937op/s 1 200
normalization/normalize_name/normalize_name/good execution_time 10.582µs 10.785µs ± 0.087µs 10.780µs ± 0.067µs 10.848µs 10.919µs 10.982µs 11.006µs 2.10% 0.054 -0.619 0.80% 0.006µs 1 200
normalization/normalize_name/normalize_name/good throughput 90858333.825op/s 92726797.344op/s ± 747568.203op/s 92765659.571op/s ± 578275.547op/s 93336869.828op/s 93991318.274op/s 94208271.218op/s 94496277.135op/s 1.87% -0.021 -0.628 0.80% 52861.055op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... execution_time [205.644µs; 205.772µs] or [-0.031%; +0.031%] None None None
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... throughput [4859777.738op/s; 4862787.318op/s] or [-0.031%; +0.031%] None None None
normalization/normalize_name/normalize_name/bad-name execution_time [18.302µs; 18.332µs] or [-0.081%; +0.081%] None None None
normalization/normalize_name/normalize_name/bad-name throughput [54552052.034op/s; 54638317.643op/s] or [-0.079%; +0.079%] None None None
normalization/normalize_name/normalize_name/good execution_time [10.773µs; 10.797µs] or [-0.112%; +0.112%] None None None
normalization/normalize_name/normalize_name/good throughput [92623191.581op/s; 92830403.107op/s] or [-0.112%; +0.112%] None None None

Group 8

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_trace/test_trace execution_time 243.961ns 254.199ns ± 14.135ns 247.469ns ± 2.229ns 253.619ns 283.410ns 298.407ns 303.720ns 22.73% 1.732 1.889 5.55% 1.000ns 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_trace/test_trace execution_time [252.240ns; 256.158ns] or [-0.771%; +0.771%] None None None

Group 9

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching string interning on wordpress profile execution_time 151.574µs 152.158µs ± 0.264µs 152.129µs ± 0.143µs 152.279µs 152.595µs 152.960µs 153.109µs 0.64% 0.826 1.365 0.17% 0.019µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching string interning on wordpress profile execution_time [152.121µs; 152.194µs] or [-0.024%; +0.024%] None None None

Group 10

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... execution_time 535.225µs 536.267µs ± 0.701µs 536.184µs ± 0.294µs 536.482µs 537.098µs 537.469µs 543.673µs 1.40% 5.970 60.484 0.13% 0.050µs 1 200
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... throughput 1839339.932op/s 1864746.747op/s ± 2419.026op/s 1865032.021op/s ± 1022.175op/s 1865990.849op/s 1867503.525op/s 1868237.233op/s 1868374.346op/s 0.18% -5.866 59.031 0.13% 171.051op/s 1 200
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて execution_time 381.916µs 383.134µs ± 0.903µs 383.063µs ± 0.350µs 383.376µs 384.157µs 384.500µs 393.396µs 2.70% 7.435 81.677 0.24% 0.064µs 1 200
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて throughput 2541965.834op/s 2610066.997op/s ± 6045.859op/s 2610538.507op/s ± 2379.641op/s 2613118.717op/s 2615983.493op/s 2616906.086op/s 2618373.388op/s 0.30% -7.237 78.674 0.23% 427.507op/s 1 200
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters execution_time 195.138µs 195.745µs ± 0.309µs 195.753µs ± 0.199µs 195.908µs 196.150µs 196.529µs 197.403µs 0.84% 1.443 6.437 0.16% 0.022µs 1 200
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters throughput 5065783.194op/s 5108701.641op/s ± 8043.480op/s 5108487.470op/s ± 5184.531op/s 5114147.778op/s 5119442.667op/s 5123880.968op/s 5124568.828op/s 0.31% -1.413 6.263 0.16% 568.760op/s 1 200
normalization/normalize_service/normalize_service/[empty string] execution_time 38.102µs 38.226µs ± 0.098µs 38.219µs ± 0.026µs 38.244µs 38.288µs 38.372µs 39.480µs 3.30% 10.542 132.225 0.26% 0.007µs 1 200
normalization/normalize_service/normalize_service/[empty string] throughput 25329230.706op/s 26160151.442op/s ± 65343.568op/s 26165060.010op/s ± 17728.120op/s 26183337.928op/s 26202766.221op/s 26223380.050op/s 26245588.934op/s 0.31% -10.364 129.201 0.25% 4620.488op/s 1 200
normalization/normalize_service/normalize_service/test_ASCII execution_time 45.965µs 46.681µs ± 0.432µs 46.682µs ± 0.448µs 47.117µs 47.274µs 47.425µs 47.868µs 2.54% -0.026 -1.081 0.92% 0.031µs 1 200
normalization/normalize_service/normalize_service/test_ASCII throughput 20890724.797op/s 21423673.211op/s ± 198428.775op/s 21421512.531op/s ± 206843.165op/s 21642832.922op/s 21710234.585op/s 21735119.413op/s 21755562.716op/s 1.56% 0.051 -1.099 0.92% 14031.033op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... execution_time [536.170µs; 536.364µs] or [-0.018%; +0.018%] None None None
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... throughput [1864411.493op/s; 1865082.000op/s] or [-0.018%; +0.018%] None None None
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて execution_time [383.009µs; 383.259µs] or [-0.033%; +0.033%] None None None
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて throughput [2609229.100op/s; 2610904.895op/s] or [-0.032%; +0.032%] None None None
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters execution_time [195.702µs; 195.788µs] or [-0.022%; +0.022%] None None None
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters throughput [5107586.892op/s; 5109816.390op/s] or [-0.022%; +0.022%] None None None
normalization/normalize_service/normalize_service/[empty string] execution_time [38.213µs; 38.240µs] or [-0.036%; +0.036%] None None None
normalization/normalize_service/normalize_service/[empty string] throughput [26151095.452op/s; 26169207.432op/s] or [-0.035%; +0.035%] None None None
normalization/normalize_service/normalize_service/test_ASCII execution_time [46.621µs; 46.741µs] or [-0.128%; +0.128%] None None None
normalization/normalize_service/normalize_service/test_ASCII throughput [21396172.891op/s; 21451173.531op/s] or [-0.128%; +0.128%] None None None

Group 11

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
ip_address/quantize_peer_ip_address_benchmark execution_time 4.986µs 5.050µs ± 0.039µs 5.040µs ± 0.032µs 5.086µs 5.111µs 5.120µs 5.122µs 1.63% 0.243 -1.221 0.76% 0.003µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
ip_address/quantize_peer_ip_address_benchmark execution_time [5.044µs; 5.055µs] or [-0.106%; +0.106%] None None None

Group 12

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
redis/obfuscate_redis_string execution_time 31.792µs 32.803µs ± 1.449µs 31.888µs ± 0.069µs 34.853µs 35.095µs 35.581µs 35.976µs 12.82% 0.922 -1.086 4.41% 0.102µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
redis/obfuscate_redis_string execution_time [32.602µs; 33.004µs] or [-0.612%; +0.612%] None None None

Group 13

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 39b17b6 1750936134 julio/catch-panics-trace-exporter
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching deserializing traces from msgpack to their internal representation execution_time 72.073ms 72.456ms ± 0.374ms 72.418ms ± 0.061ms 72.497ms 72.762ms 73.372ms 77.141ms 6.52% 10.038 121.525 0.52% 0.026ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching deserializing traces from msgpack to their internal representation execution_time [72.405ms; 72.508ms] or [-0.072%; +0.072%] None None None

Baseline

Omitted due to size.

@codecov-commenter

codecov-commenter commented May 30, 2025

Codecov Report

Attention: Patch coverage is 75.00000% with 42 lines in your changes missing coverage. Please review.

Project coverage is 71.20%. Comparing base (3804290) to head (39b17b6).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1083      +/-   ##
==========================================
- Coverage   71.21%   71.20%   -0.02%     
==========================================
  Files         340      340              
  Lines       51654    51692      +38     
==========================================
+ Hits        36785    36805      +20     
- Misses      14869    14887      +18     
Components Coverage Δ
datadog-crashtracker 44.33% <ø> (ø)
datadog-crashtracker-ffi 6.02% <ø> (ø)
datadog-alloc 98.73% <ø> (ø)
data-pipeline 88.96% <75.00%> (-0.27%) ⬇️
data-pipeline-ffi 87.35% <75.00%> (-1.59%) ⬇️
ddcommon 82.31% <ø> (ø)
ddcommon-ffi 67.87% <ø> (ø)
ddtelemetry 60.15% <ø> (ø)
ddtelemetry-ffi 21.32% <ø> (ø)
dogstatsd-client 83.26% <ø> (ø)
datadog-ipc 82.58% <ø> (ø)
datadog-profiling 77.17% <ø> (ø)
datadog-profiling-ffi 62.12% <ø> (ø)
datadog-sidecar 40.82% <ø> (ø)
datdog-sidecar-ffi 0.14% <ø> (ø)
spawn-worker 55.35% <ø> (ø)
tinybytes 90.96% <ø> (ø)
datadog-trace-normalization 98.24% <ø> (ø)
datadog-trace-obfuscation 94.17% <ø> (ø)
datadog-trace-protobuf 77.10% <ø> (ø)
datadog-trace-utils 89.07% <ø> (ø)
datadog-tracer-flare 60.47% <ø> (ø)
datadog-log 76.31% <ø> (ø)

@r1viollet
Contributor

r1viollet commented May 30, 2025

Artifact Size Benchmark Report

aarch64-alpine-linux-musl
Artifact Baseline Commit Change
/aarch64-alpine-linux-musl/lib/libdatadog_profiling.so.debug 22.00 MB 22.00 MB -0% (-64 B) 👌
/aarch64-alpine-linux-musl/lib/libdatadog_profiling.a 69.98 MB 69.98 MB +0% (+606 B) 👌
/aarch64-alpine-linux-musl/lib/libdatadog_profiling.so 9.16 MB 9.16 MB -0% (-8 B) 👌
aarch64-unknown-linux-gnu
Artifact Baseline Commit Change
/aarch64-unknown-linux-gnu/lib/libdatadog_profiling.so 9.08 MB 9.08 MB 0% (0 B) 👌
/aarch64-unknown-linux-gnu/lib/libdatadog_profiling.so.debug 26.05 MB 26.05 MB -0% (-16 B) 👌
/aarch64-unknown-linux-gnu/lib/libdatadog_profiling.a 82.19 MB 82.19 MB +0% (+8 B) 👌
libdatadog-x64-windows
Artifact Baseline Commit Change
/libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.dll 18.31 MB 18.31 MB 0% (0 B) 👌
/libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.lib 63.93 KB 63.93 KB 0% (0 B) 👌
/libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.pdb 124.23 MB 124.21 MB --.01% (-24.00 KB) 💪
/libdatadog-x64-windows/debug/static/datadog_profiling_ffi.lib 641.73 MB 641.73 MB +0% (+3.88 KB) 👌
/libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.dll 5.85 MB 5.85 MB 0% (0 B) 👌
/libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.lib 63.93 KB 63.93 KB 0% (0 B) 👌
/libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.pdb 17.29 MB 17.29 MB 0% (0 B) 👌
/libdatadog-x64-windows/release/static/datadog_profiling_ffi.lib 32.04 MB 32.04 MB -0% (-20 B) 👌
libdatadog-x86-windows
Artifact Baseline Commit Change
/libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.dll 15.60 MB 15.60 MB 0% (0 B) 👌
/libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.lib 64.91 KB 64.91 KB 0% (0 B) 👌
/libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.pdb 126.54 MB 126.52 MB --.01% (-16.00 KB) 💪
/libdatadog-x86-windows/debug/static/datadog_profiling_ffi.lib 631.37 MB 631.37 MB +0% (+3.80 KB) 👌
/libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.dll 4.46 MB 4.46 MB 0% (0 B) 👌
/libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.lib 64.91 KB 64.91 KB 0% (0 B) 👌
/libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.pdb 18.41 MB 18.41 MB 0% (0 B) 👌
/libdatadog-x86-windows/release/static/datadog_profiling_ffi.lib 30.09 MB 30.09 MB -0% (-68 B) 👌
x86_64-alpine-linux-musl
Artifact Baseline Commit Change
/x86_64-alpine-linux-musl/lib/libdatadog_profiling.a 62.74 MB 62.74 MB -0% (-608 B) 👌
/x86_64-alpine-linux-musl/lib/libdatadog_profiling.so 9.81 MB 9.81 MB +0% (+8 B) 👌
/x86_64-alpine-linux-musl/lib/libdatadog_profiling.so.debug 20.86 MB 20.86 MB -0% (-8 B) 👌
x86_64-unknown-linux-gnu
Artifact Baseline Commit Change
/x86_64-unknown-linux-gnu/lib/libdatadog_profiling.a 77.18 MB 77.18 MB +0% (+144 B) 👌
/x86_64-unknown-linux-gnu/lib/libdatadog_profiling.so 9.71 MB 9.71 MB 0% (0 B) 👌
/x86_64-unknown-linux-gnu/lib/libdatadog_profiling.so.debug 23.99 MB 23.99 MB +0% (+24 B) 👌

@hoolioh hoolioh force-pushed the julio/catch-panics-trace-exporter branch 2 times, most recently from a4e42c9 to 67006bc Compare June 3, 2025 07:16
@hoolioh hoolioh marked this pull request as ready for review June 3, 2025 07:16
@hoolioh hoolioh requested a review from a team as a code owner June 3, 2025 07:16
Self::Serde => write!(f, "Serialization/Deserialization error"),
Self::TimedOut => write!(f, "Operation timed out"),
#[cfg(feature = "catch_panic")]
Self::Panic => write!(f, "Operation panicked"),
Contributor

Should we be returning more information here? Something that would be helpful when these errors wind up in telemetry logs?

}

#[cfg(feature = "catch_panic")]
macro_rules! catch_panic {
Contributor

non-blocking: should this eventually be a proc macro? So we can just do:

#[catch_unwind]
fn foo() {
}
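For context, such an attribute would expand to roughly what a declarative macro can already express: wrap the function body in `std::panic::catch_unwind` and substitute a fallback value on panic. A standalone sketch (the macro name matches the PR's `catch_panic!`, but the shape and fallback value are illustrative):

```rust
// Declarative wrapper: evaluate `$body`, returning `$fallback` if it panics.
macro_rules! catch_panic {
    ($fallback:expr, $body:expr) => {
        std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| $body))
            .unwrap_or($fallback)
    };
}

// An attribute proc macro like `#[catch_unwind]` could rewrite a function
// into this shape automatically instead of wrapping each body by hand.
fn divide(a: i32, b: i32) -> i32 {
    catch_panic!(-1, a / b) // integer division by zero panics
}

fn main() {
    assert_eq!(divide(10, 2), 5);
    assert_eq!(divide(10, 0), -1);
}
```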


+1

Contributor

@ekump ekump left a comment

LGTM. A couple of ideas that we can talk more about for follow-up, but this is a good start.

@gleocadie
Contributor

gleocadie commented Jun 5, 2025

Hey, nice that we're moving forward this way. One question: do we plan to have this for all the FFI APIs?
I remember bringing up this idea months back, but at the time it didn't seem like a good idea.

@hoolioh
Contributor Author

hoolioh commented Jun 26, 2025

Hey, nice that we're moving forward this way. One question: do we plan to have this for all the FFI APIs? I remember bringing up this idea months back, but at the time it didn't seem like a good idea.

At the moment we're unsure about the performance impact of this integration, so we thought it would be a good idea to test it in some of our current integrations first. If it doesn't add any overhead, we'll probably turn it into a proc macro so it can be used across different crates.

@hoolioh hoolioh force-pushed the julio/catch-panics-trace-exporter branch from 67006bc to ea6f479 Compare June 26, 2025 11:08
@hoolioh hoolioh requested a review from a team as a code owner June 26, 2025 11:08
@hoolioh hoolioh force-pushed the julio/catch-panics-trace-exporter branch from ea6f479 to 39b17b6 Compare June 26, 2025 11:08
@hoolioh
Contributor Author

hoolioh commented Jun 26, 2025

/merge

@dd-devflow

dd-devflow bot commented Jun 26, 2025

View all feedbacks in Devflow UI.

2025-06-26 11:10:45 UTC ℹ️ Start processing command /merge


2025-06-26 11:10:51 UTC ℹ️ MergeQueue: waiting for PR to be ready

This merge request is not mergeable yet, because of pending checks/missing approvals. It will be added to the queue as soon as checks pass and/or get approvals.
Note: if you pushed new commits since the last approval, you may need additional approval.
You can remove it from the waiting list with /remove command.


2025-06-26 11:40:17 UTC ℹ️ MergeQueue: merge request added to the queue

The expected merge time in main is approximately 1h (p90).


2025-06-26 12:08:06 UTC ℹ️ MergeQueue: This merge request was merged

@dd-mergequeue dd-mergequeue bot deleted the julio/catch-panics-trace-exporter branch June 26, 2025 12:08
hoolioh added a commit to DataDog/dd-trace-dotnet that referenced this pull request Jul 21, 2025
## Summary of changes
Bump libdatadog from 19.0.1 to 19.1.0.
## Reason for change
The new libdatadog version integrates new features:

API changes:
- ddog_trace_exporter_config_set_connection_timeout: aimed at solving span duplications in the test environment, since the current timeout is very short.
- ddog_trace_exporter_config_set_rates_payload_version: avoids caching the agent response, which was used to check for changes in the sample rates.

Improvements:
- Prevent panics from unwinding in the host language so we can avoid
undefined behavior
([#1083](DataDog/libdatadog#1083)).

## Other details
Libdd release
[page](https://github.com/DataDog/libdatadog/releases/tag/v19.1.0).
ivoanjo added a commit that referenced this pull request Nov 11, 2025
…h_void_ffi_result`

**What does this PR do?**

This PR updates the `wrap_with_ffi_result` and
`wrap_with_void_ffi_result` macros to catch any panics that happen
inside them, returning them as errors.

The error handling is made in such a way (see `handle_panic_error`
for details) that it should be able to report back an error even if we
fail to do any allocations.

Important note: because only the macros have been changed, FFI APIs
that don't use the macros are of course not affected and can still
trigger panics. If we like this approach, I'll follow up with a
separate PR to update other APIs to use the new macros.

**Motivation:**

In <https://docs.google.com/document/d/1weMu9P03KKhPQ-gh9BMqRrEzpa1BnnY0LaSRGJbfc7A/edit?usp=sharing>
(Datadog-only link, sorry!) we saw `ddog_prof_Exporter_send`
crashing due to what can be summed up as

`ddog_prof_Exporter_send` (report a profile) ->
  hyper-util tries to do dns resolution in a separate thread pool ->
    tokio failed to create a new thread ->
      panic and we tear down the app because we can't report a profile

This is not good at all, and this PR solves this inspired by
earlier work in #815 and #1083.

**Additional Notes:**

While I don't predict that this will happen very often, callers that
want to opt out of the catch-unwind behavior can still use the
`..._no_catch` variants of the macros.

**How to test the change?**

This change includes test coverage. I've also separately tried to
sprinkle a few `panic!` calls manually and tested that it works as
expected.
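The commit message above describes two macro flavors: a default that catches panics and an opt-out `_no_catch` variant. The following is an illustrative sketch of that split under simplified names and signatures; the real `wrap_with_ffi_result` / `wrap_with_void_ffi_result` macros in libdatadog differ in detail (the actual error type, `handle_panic_error`, is designed to work even when allocation fails):

```rust
use std::panic::{self, AssertUnwindSafe};

// Default flavor: evaluate the fallible body inside `catch_unwind`,
// folding a panic into the `Err` branch instead of unwinding into
// the host language.
macro_rules! wrap_with_result {
    ($body:expr) => {
        match panic::catch_unwind(AssertUnwindSafe(|| $body)) {
            Ok(r) => r,
            Err(_) => Err(String::from("panicked")),
        }
    };
}

// Opt-out flavor: run the body directly, letting panics unwind.
macro_rules! wrap_with_result_no_catch {
    ($body:expr) => {
        $body
    };
}

// A stand-in for an FFI entry point like `ddog_prof_Exporter_send`:
// a panic in the body (e.g. thread-spawn failure) becomes an error.
fn send(fail: bool) -> Result<u32, String> {
    wrap_with_result!({
        if fail {
            panic!("tokio failed to create a new thread");
        }
        Ok(200)
    })
}

fn send_no_catch() -> Result<u32, String> {
    wrap_with_result_no_catch!(Ok(200))
}
```

In the real crate the error must cross the C ABI, which is why the commit message stresses being able to report it without allocating; the `String` error here is only for illustration.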
dd-mergequeue bot pushed a commit that referenced this pull request Nov 28, 2025
…h_void_ffi_result` (#1334)

[PROF-12853] Catch panics inside `wrap_with_ffi_result` and `wrap_with_void_ffi_result`

**What does this PR do?**

This PR updates the `wrap_with_ffi_result` and
`wrap_with_void_ffi_result` macros to catch any panics that happen
inside them, returning them as errors.

The error handling is made in such a way (see `handle_panic_error`
for details) that it should be able to report back an error even if we
fail to do any allocations.

Important note: because only the macros have been changed, FFI APIs
that don't use the macros are of course not affected and can still
trigger panics. If we like this approach, I'll follow up with a
separate PR to update other APIs to use the new macros.

**Motivation:**

In <https://docs.google.com/document/d/1weMu9P03KKhPQ-gh9BMqRrEzpa1BnnY0LaSRGJbfc7A/edit?usp=sharing>
(Datadog-only link, sorry!) we saw `ddog_prof_Exporter_send`
crashing due to what can be summed up as

`ddog_prof_Exporter_send` (report a profile) ->
  hyper-util tries to do dns resolution in a separate thread pool ->
    tokio failed to create a new thread ->
      panic and we tear down the app because we can't report a profile

This is not good at all, and this PR solves this inspired by
earlier work in #815 and #1083.

**Additional Notes:**

While I don't predict that this will happen very often, callers that
want to opt out of the catch-unwind behavior can still use the
`..._no_catch` variants of the macros.

The return type change in `ddog_crasht_CrashInfoBuilder_build`
does change the tag enum entries, which unfortunately is a
breaking change.

Ideas on how to work around this? This makes the following
enum entries change:

* `DDOG_CRASHT_CRASH_INFO_NEW_RESULT_OK` =>
  `DDOG_CRASHT_RESULT_HANDLE_CRASH_INFO_OK_HANDLE_CRASH_INFO`
* `DDOG_CRASHT_CRASH_INFO_NEW_RESULT_ERR` =>
  `DDOG_CRASHT_RESULT_HANDLE_CRASH_INFO_ERR_HANDLE_CRASH_INFO`

**How to test the change?**

This change includes test coverage. I've also separately tried to
sprinkle a few `panic!` calls manually and tested that it works as
expected.

Improve documentation around empty vec not allocating

Merge branch 'main' into ivoanjo/crash-handling-experiments

Fix off-by-one (including terminator in length)

I suspect in practice, since this is a static string, it doesn't make
a difference but let's fix it still.

Remove leftover comment

Ooops!

Clarify that failed allocation is the only expected source of an empty error

Linting fixes

Co-authored-by: taegyunkim <taegyun.kim@datadoghq.com>
Co-authored-by: ivo.anjo <ivo.anjo@datadoghq.com>