Assumption right now: it's only needed when there isn't enough GPU memory, but perhaps sometimes it's just faster this way. Right now we only do tokenization on the CPU, and inference can run on either the CPU or the GPU.
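To make the split concrete, here is a minimal sketch of that arrangement, assuming a PyTorch / Hugging Face transformers stack; the model name and the exact calls are illustrative, not taken from this project:

```python
# Minimal sketch (assumes PyTorch + Hugging Face transformers are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical placeholder model

# Tokenization always runs on the CPU; tokenizers are plain CPU code.
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("Hello, world!", return_tensors="pt")  # CPU tensors

# Inference runs on the GPU when one is available, otherwise on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Move the tokenized inputs to the same device as the model before the forward pass.
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=20)

# Decoding back to text is again a CPU-side operation.
print(tokenizer.decode(outputs[0].cpu(), skip_special_tokens=True))
```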