Closed
In `PyGLM vs NumPy.py`, have you considered compensating for how much time is spent in the NumPy/PyGLM functions themselves versus the ancillary measurement work: the per-iteration `time.time()` call, formatting, printing, etc.? For example, here is my dot product measurement:
| dot product | 6.623M | 1.512M | 4.38x |
Commenting out `func(*args, **kw)` and executing again:
| dot product | 8.303M | 8.282M | 1.00x |
This seems to suggest the measurement harness itself is unintentionally a bottleneck, masking the libraries' actual (or potential) throughput.
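One way to quantify this would be to time a no-op call with the same harness and subtract that baseline cost from each measurement. A minimal sketch of the idea (the `measure_calls_per_sec` helper below is hypothetical, not the script's actual timing code):

```python
import time

def measure_calls_per_sec(func, *args, duration=0.5, **kw):
    """Count how many times `func` completes within `duration` seconds.

    time.time() is called once per iteration, so the timer call itself
    is included in every measurement -- the effect described above.
    """
    count = 0
    start = time.time()
    while time.time() - start < duration:
        func(*args, **kw)
        count += 1
    return count / duration

def noop():
    pass

# Baseline: running the harness with a no-op body shows the
# throughput ceiling imposed by the measurement loop itself.
baseline = measure_calls_per_sec(noop)
overhead = 1.0 / baseline  # harness seconds spent per iteration

def compensated_rate(raw_rate):
    """Subtract the per-iteration harness overhead from a raw rate."""
    per_call = 1.0 / raw_rate
    if per_call <= overhead:
        return float("inf")  # call is too cheap to resolve with this harness
    return 1.0 / (per_call - overhead)
```

Batching many calls per `time.time()` read (e.g. via `timeit`, which does exactly this) would reduce the overhead further than subtracting it after the fact.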
Thanks.