feat: load a model entirely from an onnx file and build circuit at runtime #25

Merged · 35 commits · Oct 4, 2022
Changes from 1 commit
Commits
1832411  add layout printing to example (jasonmorton, Sep 23, 2022)
97608c0  constraints not satisfied (jasonmorton, Sep 25, 2022)
d83a4f2  Purely dynamic load from onnx file (jasonmorton, Sep 25, 2022)
8fac9b5  rm cmt (jasonmorton, Sep 25, 2022)
18c87fa  load example (jasonmorton, Sep 25, 2022)
e77e957  Cleanup warnings (jasonmorton, Sep 26, 2022)
ee45844  cleanup (jasonmorton, Sep 26, 2022)
3f8826f  track last known state on onnx configure (alexander-camuto, Sep 26, 2022)
10a5544  examples that break (alexander-camuto, Sep 26, 2022)
2eb553b  fix examples (alexander-camuto, Sep 26, 2022)
8180d99  fix after rebase (alexander-camuto, Sep 30, 2022)
55abff3  change BITS to dynamic in OnnxModel (jasonmorton, Oct 1, 2022)
2cbf878  to quant (jasonmorton, Oct 2, 2022)
e0e5466  to quant (jasonmorton, Oct 2, 2022)
88a6e2e  2D padding and stride (alexander-camuto, Oct 2, 2022)
b868862  ops formatting (alexander-camuto, Oct 2, 2022)
15795f0  named tuple inputs (alexander-camuto, Oct 2, 2022)
fdc7c4a  conv with bias and consistent affine interface (alexander-camuto, Oct 2, 2022)
44807ce  cnvrl automatic type casting in layout (alexander-camuto, Oct 2, 2022)
2402752  basic auto quantization (jasonmorton, Oct 3, 2022)
5a03223  cleanup, correct scale-based rescaling (jasonmorton, Oct 3, 2022)
1927b2b  cleanup (jasonmorton, Oct 3, 2022)
53e870f  parameter extractor helper function (alexander-camuto, Oct 3, 2022)
9b17f65  arbitrary length input extractor (alexander-camuto, Oct 3, 2022)
d9d874f  conv layout function (alexander-camuto, Oct 3, 2022)
47ea1e6  correct opkind for pytorch Conv2D (alexander-camuto, Oct 3, 2022)
b1cde21  Create 1lcnvrl.onnx (alexander-camuto, Oct 3, 2022)
d2111b4  start of conv configuration (alexander-camuto, Oct 3, 2022)
85d1f98  simplified affine (alexander-camuto, Oct 3, 2022)
16ad2b5  shape, quantize, configure, layout from convolution onnx (jasonmorton, Oct 4, 2022)
40444b1  correct output for conv example (jasonmorton, Oct 4, 2022)
95ee663  ezkl cli (alexander-camuto, Oct 4, 2022)
bcad075  Update Cargo.toml (alexander-camuto, Oct 4, 2022)
535f741  cleanup readme (alexander-camuto, Oct 4, 2022)
2fdcd2a  rm smallonnx (alexander-camuto, Oct 4, 2022)
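The loading itself goes through tract-onnx, whose types appear in the diff below. As a minimal sketch of the first step, parsing an onnx file into tract's inference graph, something like the following works; the path and the printing loop are illustrative, not code from this PR:

use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // Parse the onnx file into an InferenceModel (a graph whose shapes
    // and ops are not yet fully resolved). "model.onnx" is a placeholder.
    let model = tract_onnx::onnx().model_for_path("model.onnx")?;
    // Walk the graph; each node carries the op that the circuit builder
    // must translate into constraints.
    for node in &model.nodes {
        println!("{}: {}", node.id, node.op().name());
    }
    Ok(())
}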
start of conv configuration
alexander-camuto committed Oct 3, 2022
commit d2111b483efa3f691de96c4e606a478cd3ea7273
53 changes: 43 additions & 10 deletions src/onnx/onnxmodel.rs
@@ -1,3 +1,4 @@
use super::utilities::{ndarray_to_quantized, node_output_shapes};
use crate::nn::affine::Affine1dConfig;
use crate::nn::cnvrl::ConvConfig;
use crate::nn::eltwise::{EltwiseConfig, ReLu, ReLu128, ReLu64, Sigmoid};
@@ -8,16 +9,20 @@ use anyhow::{Context, Result};
use halo2_proofs::{
arithmetic::FieldExt,
circuit::{Layouter, Value},
plonk::{Column, ConstraintSystem, Fixed, Instance},
plonk::{Column, ConstraintSystem, Instance},
};
use std::env;
use std::path::Path;
use tract_onnx;
use tract_onnx::prelude::{Framework, Graph, InferenceFact, Node, OutletId};
use tract_onnx::tract_hir::{infer::Factoid, internal::InferenceOp};

use super::utilities::{ndarray_to_quantized, node_output_shapes};

use tract_onnx::tract_hir::{
infer::Factoid,
internal::InferenceOp,
ops::cnn::Conv,
ops::expandable::Expansion,
ops::nn::DataFormat,
tract_core::ops::cnn::{conv::KernelFormat, PaddingSpec},
};
// Initially, some of these OpKinds will be folded into others (for example, Const nodes that
// contain parameters will be handled at the consuming node).
// Eventually, though, we probably want to keep them and treat them directly (layouting and configuring
@@ -59,9 +64,9 @@ pub struct OnnxModelConfig<F: FieldExt + TensorType> {
/// opkind: OpKind is our op enum.
/// output_max is an inferred maximum value that can appear in the output tensor given previous quantization choices.
/// in_scale and out_scale track the denominator in the fixed point representation. Tensors of differing scales should not be combined.
/// input_shapes and output_shapes are Option<Vec<Option<Vec<usize>>>>. These are the inferred shapes for input and output tensors. The first coordinate is the Onnx "slot" and the second is the tensor. The input_shape includes all the parameters, not just the activations that will flow into the node.
/// None indicates unknown, so input_shapes = Some(vec![None, Some(vec![3,4])]) indicates that we
/// know something, there are two slots, and the first tensor has unknown shape, while the second has shape [3,4].
/// input_shapes and output_shapes are of type `Option<Vec<Option<Vec<usize>>>>`. These are the inferred shapes for input and output tensors. The first coordinate is the Onnx "slot" and the second is the tensor. The input_shape includes all the parameters, not just the activations that will flow into the node.
/// None indicates unknown, so `input_shapes = Some(vec![None, Some(vec![3,4])])` indicates that we
/// know something, there are two slots, and the first tensor has unknown shape, while the second has shape `[3,4]`.
/// in_dims and out_dims are the shape of the activations only which enter and leave the node.
#[derive(Clone, Debug)]
pub struct OnnxNode {
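As an aside, the slot/shape convention in the doc comment above can be exercised directly. A minimal standalone sketch, illustration only and not part of the diff:

fn main() {
    // Two slots: slot 0 has an unknown shape, slot 1 has shape [3, 4].
    let input_shapes: Option<Vec<Option<Vec<usize>>>> =
        Some(vec![None, Some(vec![3, 4])]);

    if let Some(slots) = &input_shapes {
        for (slot, shape) in slots.iter().enumerate() {
            match shape {
                Some(dims) => println!("slot {}: shape {:?}", slot, dims),
                None => println!("slot {}: shape unknown", slot),
            }
        }
    }
}

This prints "slot 0: shape unknown" and "slot 1: shape [3, 4]", matching the reading given in the doc comment.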
@@ -270,8 +275,12 @@ impl OnnxModel {
fixeds: VarTensor, // Should use fixeds, but currently buggy
) -> Result<OnnxNodeConfig<F>> {
let node = &self.onnx_nodes[node_idx];

println!("Configuring Node {}, a {:?}", node_idx, node.opkind);

println!(
"Configuring Node {}, a {:?}",
node_idx,
node.node.op().name()
);

// Figure out, find, and load the params
match node.opkind {
@@ -306,6 +315,30 @@
Ok(OnnxNodeConfig::Affine(conf))
}
OpKind::Convolution => {
let inputs = self.extract_node_inputs(node);
let (input_node, weight_node, bias_node) = (inputs[0], inputs[1], inputs[2]);

let op = Box::new(node.node.op());

let conv_node: &Conv = match op.downcast_ref::<Box<dyn Expansion>>() {
Some(b) => match (*b).as_any().downcast_ref() {
Some(b) => b,
None => panic!("not a conv!"),
},
None => panic!("op isn't an Expansion!"),
};

// only support pytorch type formatting for now
assert_eq!(conv_node.data_format, DataFormat::NCHW);
assert_eq!(conv_node.kernel_fmt, KernelFormat::OIHW);

let stride = conv_node.strides.clone().unwrap();
let padding = match &conv_node.padding {
PaddingSpec::Explicit(p, _, _) => p,
_ => panic!("padding is not explicitly specified"),
};
// let padding = conv_node.padding.clone().unwrap();

todo!()
}
OpKind::ReLU => {
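Two asides on the new Convolution arm above. First, the downcast chain recovers the concrete Conv op from behind tract's trait objects. A standalone sketch of that pattern, using simplified stand-in types rather than tract's real ones:

use std::any::Any;

// Simplified stand-ins for tract's trait objects (illustration only).
trait Expansion: Any {
    fn as_any(&self) -> &dyn Any;
}

struct Conv {
    strides: Vec<usize>,
}

impl Expansion for Conv {
    fn as_any(&self) -> &dyn Any {
        self
    }
}

fn main() {
    let op: Box<dyn Expansion> = Box::new(Conv { strides: vec![1, 1] });
    // Recover the concrete type from the trait object, panicking (as the
    // diff does) when the op is not actually a convolution.
    let conv: &Conv = op.as_any().downcast_ref::<Conv>().expect("not a conv!");
    assert_eq!(conv.strides, vec![1, 1]);
}

Second, with NCHW/OIHW and explicit padding asserted, the extracted stride and padding fully determine the output spatial dimensions by the standard formula out = (in + pad_before + pad_after - kernel) / stride + 1. A hypothetical helper, not part of this PR, makes that concrete:

// Hypothetical helper (not in this PR): spatial output size of a 2D
// convolution with explicit padding.
fn conv_out_dims(
    input: [usize; 2],     // (H, W) of the activations; NCHW assumed
    kernel: [usize; 2],    // (kH, kW); OIHW kernel format assumed
    stride: [usize; 2],
    pad_before: [usize; 2],
    pad_after: [usize; 2],
) -> [usize; 2] {
    let mut out = [0usize; 2];
    for i in 0..2 {
        out[i] = (input[i] + pad_before[i] + pad_after[i] - kernel[i]) / stride[i] + 1;
    }
    out
}

fn main() {
    // A 28x28 input with a 3x3 kernel, stride 1 and padding 1 stays 28x28.
    assert_eq!(
        conv_out_dims([28, 28], [3, 3], [1, 1], [1, 1], [1, 1]),
        [28, 28]
    );
}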
12 changes: 9 additions & 3 deletions src/tensor/val.rs
@@ -79,17 +79,23 @@ impl<F: FieldExt + TensorType> ValTensor<F> {
pub fn reshape(&mut self, new_dims: &[usize]) {
match self {
ValTensor::Value { inner: v, dims: d } => {
assert!(d.iter().product::<usize>() == new_dims.iter().product());
assert_eq!(
d.iter().product::<usize>(),
new_dims.iter().product::<usize>()
);
v.reshape(new_dims);
*d = v.dims().to_vec();
}
ValTensor::AssignedValue { inner: v, dims: d } => {
assert!(d.iter().product::<usize>() == new_dims.iter().product());
assert_eq!(
d.iter().product::<usize>(),
new_dims.iter().product::<usize>()
);
v.reshape(new_dims);
*d = v.dims().to_vec();
}
ValTensor::PrevAssigned { inner: v, dims: d } => {
assert!(d.iter().product::<usize>() == new_dims.iter().product());
assert_eq!(d.iter().product::<usize>(), new_dims.iter().product::<usize>());
v.reshape(new_dims);
*d = v.dims().to_vec();
}
4 changes: 2 additions & 2 deletions src/tensor/var.rs
@@ -74,11 +74,11 @@ impl VarTensor {
pub fn reshape(&mut self, new_dims: &[usize]) {
match self {
VarTensor::Advice { inner: _, dims: d } => {
assert!(d.iter().product::<usize>() == new_dims.iter().product());
assert_eq!(d.iter().product::<usize>(), new_dims.iter().product::<usize>());
*d = new_dims.to_vec();
}
VarTensor::Fixed { inner: _, dims: d } => {
assert!(d.iter().product::<usize>() == new_dims.iter().product());
assert_eq!(d.iter().product::<usize>(), new_dims.iter().product::<usize>());
*d = new_dims.to_vec();
}
}
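Both reshape implementations above enforce the same invariant: a reshape is only legal when it preserves the total element count. A minimal standalone check, illustration only and not the crate's tensor types:

fn can_reshape(old_dims: &[usize], new_dims: &[usize]) -> bool {
    // The number of elements must be unchanged by a reshape.
    old_dims.iter().product::<usize>() == new_dims.iter().product::<usize>()
}

fn main() {
    assert!(can_reshape(&[2, 6], &[3, 4]));  // 12 elements == 12 elements
    assert!(!can_reshape(&[2, 6], &[5, 3])); // 12 != 15: reshape would panic
}

The switch from assert! to assert_eq! in this commit does not change the invariant; it only makes the panic message print both products when they differ.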