Description
There's quite a common pattern in Python code, which is:
```python
def function(foo, *args, **kwargs):
    # do something with foo
    other_function(*args, **kwargs)
    # possibly do something else

def other_function(color: str = ..., temperature: float = ..., style: Stylesheet = ...,
                   timeout: Optional[int] = ..., database_adaptor: Adaptor = ...,
                   strict: bool = ..., output: IO[str] = ..., allow_frogs: bool = ...,
                   mode: SomeEnum = ...):
    # do something with a lot of options
```
(A usual subcase of this is when `other_function` is actually `super().function`.) This presents two problems for a static analyzer:
- the call from `function` to `other_function` cannot be type-checked properly because of the `*args, **kwargs` in the call arguments;
- there is no sensible way to annotate `function`, so calls to it are unchecked (see the sketch below).
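To make this concrete, here is a small self-contained sketch (the parameter names are trimmed-down stand-ins for the example above) of what currently slips past a static analyzer:

```python
from typing import Any


def other_function(color: str = "black", temperature: float = 20.0, strict: bool = False) -> None:
    # stand-in for the function with many options
    ...


def function(foo: int, *args: Any, **kwargs: Any) -> None:
    # do something with foo, then delegate the rest
    other_function(*args, **kwargs)


# Both of these pass the type checker, because *args/**kwargs erase
# other_function's real signature at the call site:
function(1, temperature="cold")      # wrong type: accepted silently
try:
    function(1, colour="red")        # typo in the option name: no static error
except TypeError as exc:
    print(f"only caught at runtime: {exc}")
```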
This problem also affects the readability of the code (which, for me, is one of the main problems that annotations try to address). James Powell from NumFOCUS even gave a PyData talk about the difficulties it brings: https://www.youtube.com/watch?v=MQMbnhSthZQ
Even though the args/kwargs packing feature of Python can, in theory, be used with more or less arbitrary data, IMO this use case is common enough to warrant some special treatment. I was thinking of a way to flag this usage, for example:
```python
@delegate_args(other_function)
def function(foo, *args, **kwargs):
    other_function(*args, **kwargs)
```
This could hint to an analyzer so that:
- On calls to `function`, the "extra" arguments are checked to match the signature of `other_function`.
- The call to `other_function` is considered valid, given that it uses the same arguments (I know that the code above could have modified the contents of `kwargs`, but it's still more checking than what we have now); see the illustrative sketch below.
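Purely for illustration, here is a minimal runtime sketch of roughly what such a decorator could verify, using `inspect.signature` (the actual proposal is about static analysis, and `delegate_args` is just the hypothetical name suggested above):

```python
import functools
import inspect
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])


def delegate_args(target: Callable[..., Any]) -> Callable[[F], F]:
    """Illustrative only: check at call time that the forwarded keyword
    arguments at least bind to `target`'s signature."""
    target_sig = inspect.signature(target)
    target_name = getattr(target, "__name__", repr(target))

    def decorator(func: F) -> F:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            try:
                # A static analyzer could also check types; here we only
                # verify that the keyword names exist on `target`.
                target_sig.bind_partial(**kwargs)
            except TypeError as exc:
                raise TypeError(f"arguments do not match {target_name}: {exc}") from exc
            return func(*args, **kwargs)

        return wrapper  # type: ignore[return-value]

    return decorator
```

A static checker could of course go much further (types, positional arguments, defaults); the sketch is only meant to show what information the decorator makes available.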
For me, even without a static analyzer, the readability benefit of seeing

```python
@delegate_args(matplotlib.pyplot.plot)
def plot_valuation(ticker_symbol: str, start: date, end: date, *args, **kwargs): ...
```

and knowing that `plot_valuation` accepts any valid arguments from matplotlib's `plot` function, is worth it.
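For instance (the ticker and option values below are purely illustrative), a reader of a call site immediately knows where the extra keywords go:

```python
from datetime import date

# color, linewidth and linestyle are ordinary matplotlib.pyplot.plot options,
# forwarded through plot_valuation's *args/**kwargs.
plot_valuation("AAPL", start=date(2020, 1, 1), end=date(2020, 12, 31),
               color="green", linewidth=2, linestyle="--")
```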