fix bug where input transforms are not applied in fully Bayesian models in train mode #2859
Conversation
This pull request was exported from Phabricator. Differential Revision: D74827275
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff            @@
##              main     #2859   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files          211       211
  Lines        19349     19353    +4
=========================================
+ Hits         19349     19353    +4

View full report in Codecov by Sentry.
Force-pushed from 6014565 to 15406e3
Fix bug where input transforms are not applied in fully Bayesian models in train mode (meta-pytorch#2859)

Summary: This fixes a bug where input transforms were not applied to fully Bayesian GPs in training mode. This only affects computing MLL, AIC, and BIC (which were previously computed without applying normalization/warping) for fully Bayesian GPs. We don't evaluate fully Bayesian models in `train` mode.

Reviewed By: saitcakmak
Differential Revision: D74827275
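For context, the pattern at issue: BoTorch models apply their input transforms inside `forward` while the model is in train mode (in eval mode, `posterior()` transforms the inputs before `forward` is reached). Below is a minimal sketch of that pattern, not the verbatim diff from this PR; it assumes a GPyTorch-style exact GP with `mean_module`, `covar_module`, and an input transform wired to `transform_inputs`:

```python
from gpytorch.distributions import MultivariateNormal


def forward(self, X):
    # Sketch of the train-mode transform pattern; `self` is assumed to be a
    # BoTorch model with `mean_module`, `covar_module`, and an input transform.
    if self.training:
        # The fix: apply the input transform (e.g. Normalize or Warp) to the
        # raw training inputs. Skipping this step is the bug described above.
        X = self.transform_inputs(X)
    mean_x = self.mean_module(X)
    covar_x = self.covar_module(X)
    return MultivariateNormal(mean_x, covar_x)
```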
Force-pushed from 57cdf81 to 61a4668
Force-pushed from 61a4668 to 7b9f40e
Force-pushed from 7b9f40e to 3751998
  
Summary: see title. Differential Revision: D74824675
Summary: see title. Differential Revision: D74826655
Force-pushed from 3751998 to cafb2e6
Force-pushed from cafb2e6 to 7bf7a7f
Force-pushed from 7bf7a7f to a8d98f3
This pull request has been merged in d247a33.
Summary: This fixes a bug where input transforms were not applied to fully Bayesian GPs in training mode. This only affects computing MLL, AIC, and BIC (which were previously computed without applying normalization/warping) for fully Bayesian GPs. We don't evaluate fully Bayesian models in `train` mode.

Reviewed By: saitcakmak
Differential Revision: D74827275
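To illustrate the affected code path, here is a hedged sketch (the data, NUTS settings, and shapes are illustrative, not taken from this PR) of computing the train-mode marginal log likelihood of a SAAS fully Bayesian GP that carries a `Normalize` input transform. Before this fix, the `model(...)` call below would have seen the raw, un-normalized inputs:

```python
import torch
from botorch.fit import fit_fully_bayesian_model_nuts
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
from botorch.models.transforms.input import Normalize
from gpytorch.mlls import ExactMarginalLogLikelihood

# Deliberately un-normalized inputs so the Normalize transform matters.
train_X = 10.0 * torch.rand(20, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True).sin()

model = SaasFullyBayesianSingleTaskGP(
    train_X, train_Y, input_transform=Normalize(d=2)
)
fit_fully_bayesian_model_nuts(
    model, warmup_steps=32, num_samples=16, thinning=4, disable_progbar=True
)

# Train-mode evaluation: the code path where transforms were previously skipped.
model.train()
mll = ExactMarginalLogLikelihood(model.likelihood, model)
output = model(*model.train_inputs)
# One log-likelihood value per retained MCMC sample; AIC/BIC-style model
# selection criteria are derived from these values downstream.
log_likelihood = mll(output, model.train_targets)
```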