Add Qfire compiler-aware quantum benchmarking submission#5

Open
ajiteshbankulaa wants to merge 1 commit into qBraid:main from ajiteshbankulaa:qfire-submission

Conversation

@ajiteshbankulaa

qBraid Challenge Submission: Compiler-Aware Benchmarking for Reduced Wildfire Intervention QAOA

Video link: https://app.screencastify.com/watch/a7bJiBXEZm4uxbrKmFb5?checkOrg=1fb0f585-cfef-4b1b-8133-b84c1cf4701c

Summary

This PR submits Qfire / QuantumProj for the qBraid Challenge: Compiler-Aware Quantum Benchmarking.

Our project builds a portable quantum optimization workflow in Python around a reduced wildfire intervention QAOA workload derived from a 10x10 wildfire resilience scenario. The workflow starts from a Qiskit QuantumCircuit source representation, uses qBraid centrally in the transpilation / transformation pipeline, compares two compilation strategies, runs across multiple execution environments, and evaluates the tradeoff between:

  • output quality
  • compiled resource cost

The motivating application is wildfire resilience planning, where the full planning workflow operates on a 10x10 grid with a strict budget of 10 interventions, matching the wildfire challenge framing.


Technical Question

How well does a reduced wildfire intervention QAOA workload survive compilation across strategies and execution environments?

More specifically, we study which compilation path best preserves useful optimization behavior under realistic execution constraints.


Algorithm / Workload

We implement a nontrivial quantum algorithmic workload:

  • Algorithm: QAOA
  • Application context: reduced wildfire intervention planning
  • Source problem: adjacency-driven wildfire spread mitigation on a 10x10 spatial grid
  • Reduced benchmark workload: a critical subgraph / shortlisted intervention graph derived from the full wildfire scenario

This is not a toy state-preparation demo or a single circuit primitive. It is a reduced optimization workflow tied to a real application problem.


Source Representation

The benchmark begins from a framework-level source representation in Python:

  • Framework: Qiskit
  • Source object: QuantumCircuit
  • Language: Python

This satisfies the requirement to start from a framework qBraid can transpile.


How qBraid Is Used

qBraid is central to the workflow.

We use qBraid to transform / transpile the workload through distinct compilation paths rather than relying only on native framework transpilation. The benchmark uses qBraid in the circuit preparation and transformation pipeline to compare how compilation strategy changes both:

  • algorithmic behavior
  • compiled circuit cost

Examples of qBraid-centered usage in this project include:

  • qbraid.transpile(...)
  • qBraid-based framework / representation conversion
  • qBraid-assisted target-preparation workflow
  • qBraid compilation-path comparison

Compilation Strategies Compared

We compare two compilation strategies:

  1. Portable OpenQASM 2 bridge

    • prioritizes portability and simpler bridge structure
    • useful as a baseline compiler-aware path
  2. Target-aware OpenQASM 3 bridge

    • more target-aware path
    • allows comparison against a more execution-conscious transformation flow

These strategies are compared to study how compilation choices affect both quality and cost.


Execution Environments

We run the compiled workload across at least two execution environments:

  • ideal simulator
  • noisy simulator

When available, we also support:

  • IBM hardware / constrained hardware execution

This allows us to compare unconstrained behavior against more realistic or noisy execution conditions.
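The qualitative gap between the ideal and noisy environments can be sketched without any backend by applying a simple per-bit readout-flip channel to an ideal counts histogram. The flip probability and counts below are illustrative, not measured values from the benchmark.

```python
import random


def apply_readout_flips(counts, p_flip, rng):
    """Resample a counts histogram with independent per-bit readout flips."""
    noisy = {}
    for bitstring, n in counts.items():
        for _ in range(n):
            flipped = "".join(
                bit if rng.random() > p_flip else ("1" if bit == "0" else "0")
                for bit in bitstring
            )
            noisy[flipped] = noisy.get(flipped, 0) + 1
    return noisy


rng = random.Random(7)                            # fixed seed for reproducibility
ideal_counts = {"0101": 600, "1010": 400}         # illustrative ideal output
noisy_counts = apply_readout_flips(ideal_counts, p_flip=0.05, rng=rng)
```

Even this toy channel shows why quality metrics must be compared per environment: probability mass leaks from the optimal bitstrings into their neighbors, degrading approximation ratio and success probability.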


Metrics Collected

Output-quality metrics

We report:

  • approximation ratio
  • success probability
  • expected cost (or equivalent optimization-quality metric)

Compiled-resource metrics

We report:

  • circuit depth
  • 2-qubit gate count
  • circuit width
  • total gate count
  • shot count

This directly addresses the qBraid challenge requirement to compare quality against compiled resource cost.
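The output-quality metrics can be computed directly from a measurement-counts histogram. A sketch assuming a MaxCut-style cost (cut edges counted per bitstring); the graph, counts, and optimal value are illustrative, and bitstrings are indexed left-to-right (adjust for Qiskit's little-endian counts as needed).

```python
def cut_value(bitstring, edges):
    """Number of edges cut by the assignment encoded in the bitstring."""
    return sum(1 for i, j in edges if bitstring[i] != bitstring[j])


def quality_metrics(counts, edges, optimal_value):
    """Expected cost, approximation ratio, and success probability from counts."""
    shots = sum(counts.values())
    expected = sum(cut_value(b, edges) * n for b, n in counts.items()) / shots
    success = sum(
        n for b, n in counts.items() if cut_value(b, edges) == optimal_value
    ) / shots
    return {
        "expected_cost": expected,
        "approximation_ratio": expected / optimal_value,
        "success_probability": success,
    }


edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # illustrative 4-cycle
counts = {"0101": 700, "1010": 200, "0011": 100}  # illustrative counts
m = quality_metrics(counts, edges, optimal_value=4)
```

The compiled-resource side (depth, 2-qubit gate count, width, total gates) comes straight off the compiled circuit object, so pairing it with these histogram-derived quality numbers gives the full quality-vs-cost comparison.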


Key Result / Conclusion

Our main finding is that compilation strategy materially changes the usefulness of the workload after compilation.

In our benchmark:

  • one strategy better preserves optimization quality under constrained or noisy execution
  • the other can reduce compiled resource cost more aggressively
  • therefore, the best strategy is not simply the one that compiles successfully, but the one that provides the strongest quality vs cost tradeoff

Best tradeoff conclusion: [replace with your exact result, for example:
“the target-aware OpenQASM 3 bridge preserved approximation quality better under noisy execution, while the portable OpenQASM 2 bridge reduced compilation complexity but degraded output quality more noticeably.”]


What This Repository Includes

  • Python source code
  • full-stack application context for wildfire planning
  • runnable benchmark workflow
  • benchmark service / scripts for qBraid comparison
  • setup instructions
  • README documentation
  • saved example benchmark outputs
  • demo / presentation materials

Required Questions — Direct Answers

What algorithm did you implement?

A reduced QAOA optimization workload derived from wildfire intervention planning.

What was your source representation?

A Qiskit QuantumCircuit implemented in Python.

How did qBraid transform the workload?

qBraid was used centrally to transpile / transform the workload through distinct compiler-aware paths before execution and evaluation.

What two compilation strategies did you compare?

A portable OpenQASM 2 bridge and a target-aware OpenQASM 3 bridge.

What changed in the compiled programs?

The compiled programs differed in resource metrics such as depth, 2-qubit gate count, total gates, and sometimes in how well they preserved useful optimization behavior.

Which strategy best preserved algorithm performance?

[replace with your exact result]

What was the cost of that strategy in compiled resources?

[replace with your exact result, for example: higher depth, higher 2Q count, or other measured tradeoff]


Getting Started

1. Clone the repo

git clone [REPO_URL]
cd [REPO_NAME]
