
Conversation

@gussmith23 (Contributor) commented Jun 15, 2020

Expands the functionality of the Bring Your Own Datatypes framework, originally introduced in #2900.

  • Enables the conversion of large workloads (e.g. whole models) to use a custom datatype
  • Expands API for registering custom datatypes and the lowering functions which implement them
  • Adds tests
  • Adds tutorial

Old task list:

  • Put PR up for review Monday 9/7

  • Keep resolving TODOs in the code -> Thu 9/3

  • (@hypercubestart) Notebook example -> Friday 9/4

    • (@gussmith23 @hypercubestart) Add note about SimplifyInference somewhere.
      • As I understand it, SimplifyInference is currently a mandatory pass in TVM, as un-simplified batch norms aren't implemented. However, the pass was also seemingly mandatory for the BYOD framework to work; if it wasn't run over workloads, workloads would break.
  • (@gussmith23 @hypercubestart) Draft PR message -> Thu 9/3

  • (@gussmith23 @hypercubestart) Better documentation -> Thu 9/3

    • Docstrings of functions --> May already be done
    • Docstrings of files
  • Consider changing convert_ndarray and change_dtype API

  • Document convert_ndarray and change_dtype

  • Find a place for the main BYODT documentation/tutorial (maybe developer tutorials?)

    • @hogepodge says: Just put the PR up with the tutorial and we'll figure it out then.
  • Rebase onto current TVM

  • (@hypercubestart) Pass sanity check

  • Check whether batch norms work

  • More unit tests

  • (@hypercubestart and @gussmith23) Refactor unit test file

    • Goal: more readable, more thorough, and more extensible
    • More readable: Currently it's a mess, with lots of duplicated code. Get rid of that, make it cleaner, and add code comments.
    • More thorough: Have a more principled way of testing a bunch of different Relay programs. Cover all ops, combinations of ops, and so on. Maybe error out if a Relay construct is not tested, so that the test keeps getting updated in the future.
    • More extensible: Make it easy to add more tests in the future.
  • (@gussmith23) Go through TODOs in code, put them in this list

  • Settle on a design for exactly how bitwidths work

    • Lowering of custom datatypes currently dispatches based only on the result type's name. That is, an add with result type custom[posit]16 and an add with result type custom[posit]32 will actually use the same lowering function.
      This is alright if the user's lowering function is aware of it and lowers differently based on bitwidth. However, the lowering function generated by create_lower_func() is not aware of this, and does not lower to different functions based on bitwidth. This means that the easiest way to dispatch at the moment is to use a different datatype name for each bitwidth you'd like to implement, e.g. custom[posit16]16 and custom[posit32]32. This is unnatural and clunky, though.
      There are many potential ways to fix this (see the sketch after this list).
      • A solution that makes deeper changes to the framework could register different lowering functions for different bitwidths. Effectively, this would mean changing the format of the strings under which lowering functions are registered in the registry, adding fields for bitwidth. This would change the register_op API -- not only would it include datatype, target, and op, but also bitwidth (at the very least -- potentially vector lanes too?).
      • A simpler (but not necessarily better) solution would be to change create_lower_func() so that it can create a lowering function that lowers to different function calls based on bitwidth. This could work fine -- the user would just have to provide the function names for all supported bitwidths at once.
  • Rebase and package up follow-on work

    • Branch which allows datatype definitions in pure Python
    • Branch enabling LLVM link-time optimizations over datatype libraries
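To make the second option above concrete, here is a rough sketch of what a bitwidth-aware create_lower_func() registration could look like, extending the posites2 cast registration that appears later in this thread. The dict-keyed signature and the Posit*es2ToFloat extern names are illustrative assumptions, not the final API:

# Sketch only: one lowering function that dispatches on bitwidth.
# Assumes create_lower_func accepts a map from (src_bits, dst_bits)
# to extern function names, and that those externs exist in the
# posit library linked into TVM.
from tvm.target.datatype import create_lower_func, register_op

register_op(
    create_lower_func({
        (8, 32): "Posit8es2ToFloat",
        (16, 32): "Posit16es2ToFloat",
        (32, 32): "Posit32es2ToFloat",
    }),
    "Cast", "llvm", "posites2", "float")

With this shape, custom[posites2]8 and custom[posites2]32 casts share one registration but lower to different extern calls, so the user no longer needs a separate datatype name per bitwidth.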

@Ravenwater commented:

@gussmith23 the posit standard is moving towards posit<8,2>, posit<16,2>, posit<32,2>, posit<64,2> specifically to make conversions better, faster, cheaper.

@gussmith23 (Contributor Author) commented Jul 30, 2020 via email

@gussmith23 (Contributor Author) commented:

@tqchen looking for your input on testing (or feel free to point us to someone else who could advise us!)

@hypercubestart is working on making the BYODT tests more thorough. Yesterday we had a discussion about whether it's better to test entire networks (i.e. run resnet with posits, run resnet with floats, compare the results) or test operator-by-operator, or both. Do you have an opinion?
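For concreteness, here is a hedged sketch of the operator-by-operator style (the compare() helper mirrors the one in this PR's test file; its exact signature is an assumption): build a tiny Relay program per op, run it in float32 and in the custom datatype, and check the outputs agree.

# Sketch of per-op testing, assuming a compare() helper like the one
# in this PR's test file: it converts the module to the custom dtype,
# executes both versions, and asserts the outputs match within tolerance.
import numpy as np
import tvm
from tvm import relay

def run_unary_op(op, shape=(3, 3)):
    x = relay.var("x", shape=shape, dtype="float32")
    module = tvm.IRModule.from_expr(relay.Function([x], op(x)))
    input = np.random.rand(*shape).astype("float32")
    compare(module, (input,), "float32", "custom[posites2]32",
            rtol=1e-4, atol=1e-4)

run_unary_op(relay.tanh)  # repeat for exp, sqrt, sigmoid, ...

Whole-network runs (resnet with posits vs. floats) would then act as integration tests on top of this.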

Comment on lines 349 to 351
# Simplifying inference is essential right now, as batch norms (which get
# removed) are broken with custom datatypes.
#expr = relay.ir_pass.simplify_inference(expr)
Contributor:

@gussmith23 why is this here?

Contributor Author:

For a while (not sure if still true -- we should set up a test for it) batch norms were broken with custom datatypes. It was a very odd bug that I didn't end up figuring out. It seemed to go pretty deep, and be related to the code being generated. However, the batch norm operator can generally be optimized out when you're running a network for inference (and not training), which is what simplify_inference did. So we'd run simplify_inference to remove the batch norms and things would be hunky-dory.
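For reference, a minimal sketch of running the pass explicitly (relay.transform.SimplifyInference is the current spelling of the older relay.ir_pass.simplify_inference call quoted above):

# SimplifyInference rewrites inference-time batch norms into plain
# elementwise arithmetic, so the custom-datatype lowering never sees
# a batch_norm op.
from tvm import relay

module = relay.transform.SimplifyInference()(module)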

Makefile Outdated

# Test scripts
pyunittest:
./tests/scripts/task_python_unittest.sh
Contributor Author:

Is this just a convenience thing?

Contributor Author:

I might have added this. We can probably remove it.


# #expr = relay.ir_pass.simplify_inference(expr)

def run_conv2d(src_dtype, dst_dtype):
def run_test_conv2d(src_dtype,
Contributor Author:

Should we be using your compare() here?

# 'float32',
# 'custom[posites2]32',
# num_classes=10)
# run_model(get_resnet, (3, 32, 32),
@hypercubestart (Contributor) commented Aug 13, 2020:

resnet (x is posits, y is floats):

Max absolute difference: 0.01120126
Max relative difference: 0.10072963
 x: array([[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]], dtype=float32)
 y: array([[0.097013, 0.097102, 0.105917, 0.099155, 0.095107, 0.096329,
        0.097079, 0.111201, 0.102866, 0.09823 ]], dtype=float32)

@gussmith23 should we simply increase atol/rtol or look more into why posits are less accurate?

Contributor Author:

This isn't right! Something's going wrong. This sounds dumb, but I'm going to just try and apply SimplifyInference and see if it fixes things...

@gussmith23 (Contributor Author) commented Aug 14, 2020:

We should debug small Resnet. Something is going wrong. The easiest way to debug is to return early from the Resnet constructor:

modified   python/tvm/relay/testing/resnet.py
@@ -162,6 +162,7 @@ def resnet(units,
     data = relay.var("data", shape=data_shape, dtype=dtype)
     data = layers.batch_norm_infer(data=data, epsilon=2e-5, scale=False, name='bn_data')
     (_, _, height, _) = data_shape
+    return relay.Function(relay.analysis.free_vars(data), data)
     if height <= 32:            # such as cifar10
         body = layers.conv2d(
             data=data, channels=filter_list[0], kernel_size=(3, 3),
@@ -194,7 +195,6 @@ def resnet(units,
     flat = relay.nn.batch_flatten(data=pool1)
     fc1 = layers.dense_add_bias(data=flat, units=num_classes, name='fc1')
     net = relay.nn.softmax(data=fc1)
-    return relay.Function(relay.analysis.free_vars(net), net)
 
 
 def get_net(batch_size,

This should work well with the current test file.
You can move the early return up and down to see where things diverge.
In my limited testing, it looks like posits are going to NaN after this first batch norm! Running the above version of resnet produces NaNs, but if we move the early return one layer up, then things are fine:

modified   python/tvm/relay/testing/resnet.py
@@ -160,6 +160,7 @@ def resnet(units,
     num_unit = len(units)
     assert num_unit == num_stages
     data = relay.var("data", shape=data_shape, dtype=dtype)
+    return relay.Function(relay.analysis.free_vars(data), data)
     data = layers.batch_norm_infer(data=data, epsilon=2e-5, scale=False, name='bn_data')
     (_, _, height, _) = data_shape
     if height <= 32:            # such as cifar10
@@ -194,7 +195,6 @@ def resnet(units,
     flat = relay.nn.batch_flatten(data=pool1)
     fc1 = layers.dense_add_bias(data=flat, units=num_classes, name='fc1')
     net = relay.nn.softmax(data=fc1)
-    return relay.Function(relay.analysis.free_vars(net), net)
 
 
 def get_net(batch_size,

This version of Resnet literally doesn't do anything, and just returns the input. But we know it works, and we can see that things go wrong after that first batch norm.

My first thought was "oh, we need to run SimplifyInference!" So I did:

modified   tests/python/unittest/test_custom_datatypes.py
@@ -49,6 +49,8 @@ def change_dtype(src, dst, module, params):
     return module, params
 
 def compare(module, input, src_dtype, dst_dtype, rtol, atol, params = {}):
+    module = relay.transform.SimplifyInference()(module)
+
     ex = relay.create_executor("graph", mod=module)
 
     correct = ex.evaluate()(*input, **params)

This fixes the batch norm...somewhat. We still get numerical errors between the outputs of the batch norms:

AssertionError: 
Not equal to tolerance rtol=0.0001, atol=0.0001

Mismatch: 99.2%
Max absolute difference: 0.01599717
Max relative difference: 0.01600081
 x: array([[[[0.692229, 0.071629, 0.014329, ..., 0.984048, 0.871523,
          0.445013],
         [0.728206, 0.627117, 0.563761, ..., 0.378826, 0.326279,...
 y: array([[[[0.681328, 0.070501, 0.014103, ..., 0.968551, 0.857798,
          0.438005],
         [0.716737, 0.61724 , 0.554882, ..., 0.37286 , 0.321141,...

There shouldn't be this much numerical error between posit32es2 and float32 -- posit32es2 should be far more accurate than float32 for values around [0, 1].

Here's a debugging list:

  • I think we do still need to run SimplifyInference explicitly. This is confusing, because it's run in Optimize()...maybe it's not getting run?
    • Add SimplifyInference somewhere (maybe in compare() like I did? I think that might be the best place)
    • Figure out whether SimplifyInference is seemingly not getting run later on
  • I think we should add a test for batch norm itself, because it's been such a thorn in the side of this project. Have a test that runs just a batch norm through your compare(). Once we fix batch norm, hopefully everything else will be fixed.
  • To debug the numerical issues with batch norm, here are a few dead-simple ideas off the top of my head:
    • Make sure that the workload is using the posit32 C functions. Maybe your create_lower_func() call is lowering them to the wrong bitwidth?
    • Make sure the params going into the batch norm are the same between float32 and posit32. (Batch norms take four parameters: mean, var, beta, gamma)

Let me know where you get with this! Good luck!

Comment on lines 392 to 394
# run_model(get_inception, (3, 32, 32),
# 'float32',
# 'custom[posites2]32',
# num_classes=10)
Contributor:

error: relay.concatenate requires all tensors have the same shape on non-concatenating axes

will need to take a closer look

Contributor:

Turns out using the default input size of (3, 299, 299) works, and doesn't run too slowly. However (x is posit, y is float):

Mismatched elements: 4 / 10 (40%)
Max absolute difference: 0.00023866
Max relative difference: 0.00239235
 x: array([[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]], dtype=float32)
 y: array([[0.100027, 0.100217, 0.099981, 0.099959, 0.100235, 0.099952,
        0.100033, 0.099761, 0.09995 , 0.099885]], dtype=float32)

Contributor Author:

Again, this isn't good. If you look at how softmax works (it's the last operator in these networks), part of what it does is normalize a tensor so that it sums to 1. So the fact that the values are all coming out as 0.1 means that the values going into the softmax are all equal, and are probably very different from the "true" values that go into the softmax in the float32 case.

One way to debug this (which I'm going to do now, but which may be useful to you in the future) is to change the definition of the network and have the network constructor return early. I.e., here, we can return the body of the network before the softmax is added, to check whether it's the softmax that is broken.
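A hedged illustration of that trick, with variable names taken from the resnet constructor diffed above -- returning fc1 instead of net skips the softmax entirely:

flat = relay.nn.batch_flatten(data=pool1)
fc1 = layers.dense_add_bias(data=flat, units=num_classes, name='fc1')
# Return before softmax is applied, to check whether softmax is the
# broken operator:
return relay.Function(relay.analysis.free_vars(fc1), fc1)
# net = relay.nn.softmax(data=fc1)
# return relay.Function(relay.analysis.free_vars(net), net)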

Contributor Author:

Strange, the default size is running REALLY slowly on my machine. On the order of minutes!

Contributor Author:

That's normal and expected -- that's how it always was. I'm just confused as to why it's running faster on your machine?


bias = relay.var('fc_bias')
fc = relay.nn.dense(data=flatten, weight=weight, units=num_classes)
fc = relay.nn.bias_add(fc, bias)
# TODO(gus) i think softmax is broken
Contributor Author:

Yes, remove this

op.call_type == _Call.PureIntrinsic):
return _Call(dtype, extern_func_map[t.bits], convert(op.args),
_Call.Extern)
return _Call(dtype, extern_func_map[t.bits], convert([op.a, op.b]),
Contributor Author:

Let's figure out how this works in the unary case:
Are all of the unary ops intrinsics?
Does convert([op.a, op.b]) handle the case where op.b is None?

Contributor:

All unary ops are intrinsics.
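Paraphrasing the dispatch in the snippet above to make that explicit -- since all unary ops arrive as pure intrinsic Calls, their operands live in op.args, and the op.a/op.b branch only ever sees binary expressions, so op.b is never None:

# Unary ops are pure intrinsic Calls: operands come from op.args.
if isinstance(op, _Call) and op.call_type == _Call.PureIntrinsic:
    return _Call(dtype, extern_func_map[t.bits], convert(op.args),
                 _Call.Extern)
# Anything else reaching this point is a binary expression, so both
# op.a and op.b are present.
return _Call(dtype, extern_func_map[t.bits], convert([op.a, op.b]),
             _Call.Extern)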

intrinsic_name=None):
"""Register an external function which computes the given op.
Currently, this will only work with Casts and binary expressions
Contributor Author:

Does it make sense to mention a and b? Is it just confusing to the reader?

# run_model(get_resnet, (3, 32, 32),
# 'float32',
# 'custom[posit32]32',
# num_classes=10)
Contributor Author:

Experiment with these -- are they slow?

If they're slow -- we need a contingency plan.
If we can't run whole models, then we should be testing layer by layer
--> partly achieved by testing unary/binary ops
--> but then we should also test the larger operators, e.g. convolution (in which case we should beef up/update the conv2d tests)

(8, 32): 'Posit8es2ToFloat',
}),
"Cast", "llvm", "posites2", "float")
register_op(create_lower_func(
Contributor:

Do we need this cast from int to posites2, since we don't actually use it at all?

Contributor Author:

Sure, if it's not used we can remove it!

Contributor:

done!

}

extern "C" {
TVM_DLL uint32_t RawPosit32es2(uint32_t in) { return in; }
Contributor Author:

What are these for?

Contributor:

It converts an unsigned int holding raw bits to a posit number. This is useful for register_min_func, which has overflow errors otherwise.

Contributor Author:

It's confusing me as is, because it's taking in a uint32_t and returning a uint32_t -- is there something I'm missing? It doesn't seem to be doing anything at the moment!

@hypercubestart (Contributor) commented Aug 26, 2020:

The input is an IntImm; this is basically doing a reinterpret_cast to posit.
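A hedged sketch of where this raw-bits helper fits: create_min_lower_func and register_min_func appear in this PR's commit history, but the exact signatures and the MinPosit32es2 extern name here are assumptions. The point is that the extern hands back the minimum posit's bit pattern directly, so no float round-trip can overflow:

from tvm.target.datatype import create_min_lower_func, register_min_func

# Register the minimum-value function for 32-bit posites2; the extern
# returns raw bits, avoiding the overflow seen when the minimum was
# computed via a double.
register_min_func(
    create_min_lower_func({32: "MinPosit32es2"}, "posites2"),
    "posites2")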

Comment on lines 35 to 40
def change_dtype(src, dst, expr, params):
cdtype = ChangeDatatype(src, dst)
expr = cdtype.visit(expr)
expr = relay.ir_pass.infer_type(expr)
params = dict((p, tvm.nd.array(params[p].asnumpy().astype(dst))) for p in params)
return expr, params
Contributor Author:

@gussmith23 update this code

Comment on lines 97 to 104
type_name : str
src_type_name : str
The name of the custom datatype, e.g. posit (but not custom[posit]8).
Contributor Author:

Update to posites2

Contributor:

done!

+ src_type_name
tvm._ffi.register_func(lower_func_name, lower_func)

# TODO(gus) could probably make this a decorator if i want

t = DataType(type_str)
return t.bits

def lower_ite(ite_intrin):

* same code. Generally, this should be straightforward, as the user will be manually registering
* all of their custom types.
* \param type_name The name of the type, e.g. "bfloat"
* \param type_name The name of the type, e.g. "posit"
Contributor Author:

Change to posites2

Contributor:

done!

@gussmith23 (Contributor Author) commented:

@slyubomirsky I heard from Zach that you have been looking into how documentation is done in modern TVM. I need to document the datatypes framework, which is distributed across multiple files, and I'm trying to decide on a central place to keep the documentation. I'm wondering if anything you've learned recently would be relevant here?

@hypercubestart force-pushed the custom-datatypes branch 2 times, most recently from 85ac76c to b0fc673 on August 23, 2020 at 18:23
# run_model(get_mobilenet, (3, 224, 224), 'float32', 'custom[posit8]8')
# run_model(get_mobilenet, (3, 224, 224), 'float32', 'custom[posit32]32')
# run_model(get_inception, (3, 299, 299), 'float32', 'custom[posit32]32')
# run_model(get_resnet, (3, 224, 224), 'float32', 'custom[posit32]32')
Contributor:

Can we remove these?

Contributor Author:

Let's talk about it -- I think we should at least leave a few around, to document which models are too slow to run. We can move them down, though.

@hypercubestart (Contributor) left a comment:

remove

Comment on lines 169 to 272
if isinstance(op, _Cast):
src_bits = bit_length(op.value.dtype)
return call_pure_extern(dtype, extern_func_map[(src_bits, t.bits)], op.value)
if isinstance(op, _FloatImm):
return call_pure_extern(dtype, extern_func_map[t.bits], op.value)
if isinstance(op, _Call):
return call_pure_extern(dtype, extern_func_map[t.bits], *op.args)
if isinstance(op, _BinaryOpExpr):
return call_pure_extern(dtype, extern_func_map[t.bits], op.a, op.b)
Contributor:

@gussmith23 should we improve the debugging message here?

If the map does not contain the bit width, this throws a KeyError on extern_func_map[t.bits], which may be somewhat cryptic to the user.

Contributor Author:

Yeah, please do! Better error messages are never a bad idea 😄
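A hedged sketch of the friendlier lookup being discussed (the helper name is illustrative):

# Look the bitwidth up explicitly instead of letting a bare KeyError
# escape from extern_func_map[t.bits].
def lookup_extern_func(extern_func_map, bits):
    if bits not in extern_func_map:
        raise RuntimeError(
            "missing extern function for %d-bit operands; registered "
            "bitwidths: %s" % (bits, sorted(extern_func_map)))
    return extern_func_map[bits]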

Comment on lines 463 to 464
// TODO(@gussmith23) is this ok?
// CHECK(op->dtype.is_float());
Contributor:

Why do we need to do this, @gussmith23?

Contributor:

I think a better idea may be to do:
CHECK(op->dtype.is_float() || datatype::Registry::Global()->GetTypeRegistered(op->dtype.code()));

Contributor:

Also, it seems FloatImm(custom_datatype, value) represents a custom_datatype with that float value -- see https://github.com/gussmith23/tvm/blob/a45f7bb1975933118db7647261a6ddefb214595a/include/tvm/tir/op.h#L829; we freely mutate the FloatImm as a double.

@jroesch (Member) commented Aug 26, 2020:

> @slyubomirsky I heard from Zach that you have been looking into how documentation is done in modern TVM. I need to document the datatypes framework, which is distributed across multiple files, and I'm trying to decide on a central place to keep the documentation. I'm wondering if anything you've learned recently would be relevant here?

@hogepodge maybe you can chime in

@slyubomirsky (Contributor) commented:

> @slyubomirsky I heard from Zach that you have been looking into how documentation is done in modern TVM. I need to document the datatypes framework, which is distributed across multiple files, and I'm trying to decide on a central place to keep the documentation. I'm wondering if anything you've learned recently would be relevant here?

As Jared said, @hogepodge would be best positioned to comment. What you are describing sounds like an "explainer" (describing the overall design and rationale as opposed to a tutorial), which should go in the docs directory with similar other "explainers."

@hogepodge (Contributor) commented:

At a high level, the docs organization is described in this discussion thread.

https://discuss.tvm.ai/t/rfc-tvm-documentation-refactor/7456

It sounds like the document you're producing is either an explainer or a reference guide. If it's a design guide, or a higher level overview of the design, explainer is the best classification for it. If it's actually documenting APIs and is meant to be used as a reference for building on top of it, treat it like a reference file. @gussmith23 I think the best thing to do is send a PR with the documentation and add me to the review. We can sort out where it belongs or how to organize it once we see what it looks like.

@gussmith23 (Contributor Author) commented:

Thanks all, will do!

@hypercubestart (Contributor) commented:

Bumping this for review:
cc: @tqchen @jroesch @tmoreau89 @junrushao1994 @zhiics @ZihengJiang @hogepodge @comaniac

@tqchen (Member) left a comment:

Last few minor comments; other parts LGTM.

# Contrib libraries
#---------------------------------------------
# Whether to build with posit custom datatype
set(USE_POSIT OFF)
Member:

Consider changing this to USE_BYOC_POSIT, given that it is a software-emulated version.

Apache Software Foundation License 2.0
--------------------------------------

3rdparty/bfloat16/bfloat16.cc

@ZihengJiang ZihengJiang merged commit 5aafff9 into apache:master Sep 26, 2020
@gussmith23 (Contributor Author) commented Sep 28, 2020 via email

@hypercubestart (Contributor) commented:

@tqchen are tutorial docs generated using the Docker gpu image?
If so, can we install the Universal repo in both the Docker cpu/gpu images so that the tutorial can use the posit codepaths?

TusharKanekiDey pushed a commit to TusharKanekiDey/tvm that referenced this pull request Oct 13, 2020
* Add ChangeDatatype pass and unittest

* [WIP] Jared's work on Fri

This was work that Jared did on my computer, trying to get Inception v3 running.

* Fix simplify inference to work over different data types.

* Formatting

* Copy setup code from other test file

* Logging in Relay

* Remove duplicate TVM_DLL

* Add Sub, Mul, Div, Max to bfloat lib

* Fix previous broken rebased commit

* Remove line

* Add LowerCustomDatatypes to build passes

* Upcast ints to custom datatypes too, as well as to floats

* Add and use convert_ndarray

* Lower Call

* Relay: create constant scalars of custom dtypes

We use the same method we use in TVM: store the value in a double.

* Custom datatype formatting in Relay

* Update unittests

* Add simpler example that's not working yet

* Add Python unittests to Makefile

* Fix bug

* Fix function name in GetPackedFunc call

* convert_ndarray makes its own executor

* Add simple test case

* Move setup() calls

* Use convert_ndarray

* Change import to make it more specific

* Fix another Registry::Get call

* Allow users to register minimum functions for custom datatypes

This commit allows users to register global functions named
`tvm.datatype.min.<type name>` which take the number of bits in the custom type
and return the corresponding minimum value (as a double).

A similar commit will need to be created for max, whenever that ends up being
needed!

* Remove check for float

* Add test

* Fix inception test

* Add MobileNet

* Lower custom datatypes before intrinsics

* Add exp and sqrt bfloat functions

* [buggy commit] Lower intrinsics like sqrt, exp

This commit has bugs in it, I'm fairly certain.

* Formatting

* Fix bug

* Add lowering for new ops in test

* Add int to bfloat

* Remove print

* Add all tests

* Correct image size

* Add TODO

* Add "notbfloat" type

This type is for testing purposes. It just stores a float in a uint32. It was
used to confirm the fact that my bfloat "implementation" is very numerically
unstable and was causing issues when running the model.

* Convert arguments

Not sure how necessary this actually is.

* Rewrite custom datatype constants in Relay

* Add test_ops

* Print constants in Relay

* Use topi.testing

* Test conv2d

* Add test_model

* Comment out model tests

* Register notbfloat

This could be unregistered at some point later

* Add commented code

Remove later

* Add posit tests

* test_ops_same_function

* [temporary] move incomplete commit to macbook

* Add more to tests

* Formatting

* Uncomment add

* Remove bad tests

* Change comments

* Change function name and docstring

* Change main function

* Restructure tests

* Fix visibility of posit functions

* YAPF

* Switching keywords around to resolve build errors on some systems

* Improve test by running smaller mobilenet

* Add test_cast

* Change datatype name; add simple test

* Rename to posit32

* Merge 3 posit types into one file

* Add a nop type

* Remove bfloat

* Refactor test comments

* Refactor conv2d test

* Add optional tolerance arguments

* Add posit8 and posit16

* Add comment about posit8

* Whoops -- actually add noptype to CMakeLists

* Add rtol, atol to run_workload

* Add noptype to tests

* Run noptype over other models, too

* Pass correct arguments to calls

* Fix line length errors

* Raise tolerances (again) to avoid flaky test

* fix style

* add test for tanh, log, sigmoid

* Remove references to bfloat, notbfloat

* Change comments

* Remove old test file

* fix min func

* refactoring unit test file

* use posits es2

* cleanup

* comment

* coment if_then_else

* support different bit widths

* use random seed to create stable tests

* update documentation

* removed nop-type and code consistency

* add batchnorm test

* rebase and update

* fix tests and format

* pylint

* change order of include

* include order

* fix style

* remove posit c linkage

* update universal

* fix style

* fix test

* fix overflow error with minfunc and posits

* style

* use change_dtype to convert params

* update universal

* fix fatal error

* fix constant repr

* minor update to posites2

* update universal

* fix rst

* fix invalid import and sqrt

* update universal

* comments

* comments and expand testing

* increase atol/rtol for custom[posites2]32

* Re-add newline

* Remove comment

* Remove opt level and comment

* Change docstring

* Add TODO

* Add file header and newline

* Update docstring

* Update file docstring

* Update docstrings

* Delete todos

* create_min_lower_func

* add better debugging message

* docs

* add BYODT tutorial

* add todo

* Reformat some of tutorial to RST, plus code fixes

* tutorial notebook runs now

* fix hyperlink

* rebase

* add to tutorial

* fix mobilenet model

* add skip tag

* black lint

* add compiler flag and add dummy float

* myfloat and posites2 test

* remove universal

* lint

* lint

* add setup

* build with USE_POSIT for CI/CD

* fix posit cmake

* add cd /

* undo docker changes

* change tutorial to use myfloat

* move files

* lint

* fix

* remove filter

* fix lint

* fix suggestions

Co-authored-by: Jared Roesch <roeschinc@gmail.com>
Co-authored-by: Andrew Liu <andrewlliu@gmail.com>
TusharKanekiDey pushed a commit to TusharKanekiDey/tvm that referenced this pull request Oct 14, 2020
TusharKanekiDey pushed a commit to TusharKanekiDey/tvm that referenced this pull request Oct 15, 2020
TusharKanekiDey pushed a commit to TusharKanekiDey/tvm that referenced this pull request Oct 15, 2020
TusharKanekiDey pushed a commit to TusharKanekiDey/tvm that referenced this pull request Oct 16, 2020
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Oct 19, 2020
Labels: status: need review, status: need update (need update based on feedback)

9 participants