It's again a question of explicit vs. implicit. In Flax, the length of the kernel size determines the dimensionality of the convolution. PyTorch's explicit naming, on the other hand, is arguably easier to read, since it is immediately clear from the code what type of convolution you are doing.
PyTorch has Conv1d, Conv2d, Conv3d.
Flax has just Conv.