SingleTaskGP On Multiple Outputs - Train Inputs Has Extra Unexpected Dimension? #2654
When training a `SingleTaskGP` on multiple outputs, one thing that confuses me is the shape of `gp.train_inputs`. For a GP with a single output, `gp.train_inputs` has shape `N x D`, where `N` is the number of data points and `D` is the dimension of the input vectors. However, for a GP with `K` outputs, `gp.train_inputs` has shape `K x N x D` instead of `N x D`, and I am not sure why. Furthermore, `gp.train_inputs[0] == gp.train_inputs[1] == ... == gp.train_inputs[K-1]`. This has led me to wonder if I am using `SingleTaskGP` incorrectly. Just to make sure I am understanding: is this expected behavior?
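
A minimal sketch of what I am seeing (the sizes `N = 10`, `D = 2`, `K = 3` are arbitrary, and I am assuming the plain `SingleTaskGP` constructor with no input transform):

```python
import torch
from botorch.models import SingleTaskGP

N, D, K = 10, 2, 3  # arbitrary sizes: N points, D input dims, K outputs

train_X = torch.rand(N, D, dtype=torch.float64)
train_Y = torch.rand(N, K, dtype=torch.float64)

gp = SingleTaskGP(train_X, train_Y)

# train_inputs is a tuple of tensors; its first (and only) entry holds the
# training inputs. With K outputs it is K x N x D rather than N x D.
print(gp.train_inputs[0].shape)  # torch.Size([3, 10, 2])

# All K slices along the batch dimension are identical copies of train_X.
print(torch.equal(gp.train_inputs[0][0], gp.train_inputs[0][1]))  # True
```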
Hi @bchen0.

`train_inputs` is a property of the underlying GPyTorch model, which follows different shape conventions than BoTorch. Under the hood, a `SingleTaskGP` is a batched `ExactGP` model, where the output dimension is represented as the right-most batch dimension (`dim=-3`). So, what you're observing is just as expected. If you manually constructed a batched single-output model, then each `train_inputs[i]` could be different. Something like the sketch below will have `train_inputs` of shape `batch x (m) x q x d` (the `m` dimension being the output dimension).
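
For illustration, a minimal sketch of such a manually batched single-output model (the sizes `batch = 4`, `q = 10`, `d = 2` are arbitrary, not from the original reply):

```python
import torch
from botorch.models import SingleTaskGP

batch, q, d = 4, 10, 2  # arbitrary sizes

# A batch of `batch` distinct training sets, each with q points in d dims
# and a single output (m = 1) per batch element.
train_X = torch.rand(batch, q, d, dtype=torch.float64)
train_Y = torch.rand(batch, q, 1, dtype=torch.float64)

gp = SingleTaskGP(train_X, train_Y)

# With m = 1 the (m) dimension is omitted, so train_inputs is batch x q x d.
print(gp.train_inputs[0].shape)  # torch.Size([4, 10, 2])

# Unlike the multi-output case above, the batch slices here are different.
print(torch.equal(gp.train_inputs[0][0], gp.train_inputs[0][1]))  # False
```

In the multi-output `SingleTaskGP` case from the question, the `(m)` dimension is exactly the extra `K` batch dimension observed there, with `train_X` repeated along it.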