-        config (Union[AOBaseConfig, Callable[[torch.nn.Module], torch.nn.Module]]): either (1) a workflow configuration object or (2) a function that applies tensor subclass conversion to the weight of a module and return the module (e.g. convert the weight tensor of linear to affine quantized tensor). Note: (2) will be deleted in a future release.
+        config (AOBaseConfig): a workflow configuration object.
         filter_fn (Optional[Callable[[torch.nn.Module, str], bool]]): function that takes a nn.Module instance and fully qualified name of the module, returns True if we want to run `config` on the weight of the module
         set_inductor_config (bool, optional): Whether to automatically use recommended inductor config settings (defaults to None)
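The `filter_fn` contract described above (module instance plus fully qualified name in, bool out) can be sketched without torch. The `Module`, `Linear`, and `apply_config_filtered` names below are hypothetical stand-ins, not torchao APIs; the sketch only illustrates how a filter decides which submodules a config would be applied to:

```python
# Minimal stand-in module tree mimicking nn.Module.named_modules();
# torch is deliberately not imported here.
class Module:
    def __init__(self, **children):
        self._children = children

    def named_modules(self, prefix=""):
        # Yield (fully qualified name, module) pairs, depth-first,
        # the same shape of traversal filter_fn is evaluated over.
        yield prefix, self
        for name, child in self._children.items():
            fqn = f"{prefix}.{name}" if prefix else name
            yield from child.named_modules(fqn)


class Linear(Module):
    pass


def apply_config_filtered(model, filter_fn):
    """Return the FQNs of modules that filter_fn approves.

    A real quantize_-style pass would mutate those modules' weights
    according to `config`; here we only collect the selection.
    """
    return [fqn for fqn, mod in model.named_modules() if filter_fn(mod, fqn)]


model = Module(encoder=Module(proj=Linear()), head=Linear())
selected = apply_config_filtered(model, lambda m, fqn: isinstance(m, Linear))
# selected == ["encoder.proj", "head"]
```

A filter like `lambda m, fqn: isinstance(m, Linear) and "head" not in fqn` shows why both arguments are passed: the type check needs the module, while skipping specific layers needs the name.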
@@ -546,21 +546,10 @@ def quantize_(
         )
     else:
-        # old behavior, keep to avoid breaking BC
-        warnings.warn(
+        raise AssertionError(
             """Passing a generic Callable to `quantize_` is no longer recommended and will be deprecated at a later release. Please see https://github.com/pytorch/ao/issues/1690 for instructions on how to pass in workflow configuration instead."""
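The hunk above replaces a deprecation warning with a hard failure: a `config` that is not a workflow configuration object now raises instead of warning. A hypothetical sketch of that type-gate (the `AOBaseConfig` stand-in and `quantize_sketch` below are illustrative, not torchao's actual implementation):

```python
class AOBaseConfig:
    """Stand-in for torchao's config base class."""


def quantize_sketch(model, config):
    # New behavior sketched from the diff: only AOBaseConfig instances
    # are accepted; the old generic-Callable path now raises.
    if isinstance(config, AOBaseConfig):
        return f"applied {type(config).__name__}"
    raise AssertionError(
        "Passing a generic Callable to `quantize_` is no longer supported; "
        "see https://github.com/pytorch/ao/issues/1690 for how to pass a "
        "workflow configuration instead."
    )
```

The practical upshot for callers is that code still passing a module-transform function fails loudly at the `quantize_` call site rather than continuing with a warning.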