WP1.2 Coordination Meeting December 9, 2020
Meeting Report WP1.2 ‘Modelica library for MPC’
Subject: WP1.2
Date: 09-12-2020
Location: Skype for Business
Minutes taken by: Lieve Helsen (KU Leuven)
From now on the Monthly Progress Meetings have a duration of 1.5 hours (CET: 17h-18h30) and focus on pre-defined discussion points:
DECEMBER 9
- Weather uncertainties
- Outline BOPTEST journal paper
JANUARY 20
- Reporting results from testing & sign-up spreadsheet (all & Dave)
- Weather uncertainty: analysis for Denver and Norway locations (Laura)
- SDU emulator: ready for testing advanced controllers? (Konstantin)
- BOPTEST methodology journal paper: update status (Dave)
MARCH 1
- Suggestions for next on-line IBPSA Project 1 Expert Meeting
Goal: check whether advanced controllers tested in BOPTEST are robust to prediction uncertainties.
- Focus on autoregressive models
- Define scenarios with different levels of uncertainty
- Krzysztof's approach was analyzed (autocorrelation = 0.9; standard deviation calculated by an iterative process to reach the desired MAE; target MAE defined for each scenario).
- Three alternatives proposed:
- Autocorrelation = 0.9: define constant standard deviation for each scenario (how to define?) and build the autoregressive model - easy.
- Autocorrelation = 0.9: define variable standard deviation for each scenario over the prediction horizon and build the autoregressive model - standard deviation increases over the prediction horizon, closer to real data.
- Autocorrelation calculated by iterative process to reach desired MAE, define constant standard deviation (how to define?) and MAE (target defined) for each scenario.
- Assumptions:
- Data has normal distribution
- Mean expected value of the initial error is considered zero - to be introduced as a parameter different from zero?
- Where to find the reference values if a constant value is assumed?
- Discussion:
- Approach 2 seems preferable: it is closer to real data and requires no iteration.
- The evolution of the standard deviation shows the same behavior for different locations (Norway and Berkeley). More locations should be examined to define the range of uncertainty: Javier will provide data for Leuven, and more inland locations with larger variations should be added; Denver (Nicholas Long) and Norway (Harald) can provide these. Does the methodology still work there? This is the first step to be taken; to be discussed again.
- Wind is harder to predict than temperature.
- Prediction uncertainties of plus or minus 10°C are too high to be useful in MPC.
- Normal distribution? The literature always assumes it. It was checked here for measured data and for the deviation between measurements and predictions; both differ slightly from a normal distribution.
- Norway: 2 peaks
- Berkeley: uniform distribution
- The mean expected value of the initial error is not zero; it starts negative for both locations. A location-dependent parameter should be introduced.
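The scenario-generation ideas above can be sketched as a minimal AR(1) forecast-error model. This is an illustrative sketch only: the function names, the proportional-correction scheme, and all parameter values are assumptions, not the project's actual code. Passing a constant list of standard deviations corresponds to alternative 1; a list that grows over the prediction horizon corresponds to alternative 2; the iterative rescaling of sigma to hit a target MAE mirrors the analyzed approach.

```python
import random
import statistics

def ar1_error_series(sigmas, rho=0.9, e0=0.0, rng=None):
    """One AR(1) forecast-error trajectory: e[t] = rho * e[t-1] + N(0, sigmas[t]).
    A constant sigmas list matches alternative 1; a list increasing over the
    prediction horizon matches alternative 2. e0 is the initial error
    (assumed zero here; the minutes note it may need to be a location-
    dependent parameter)."""
    rng = rng or random.Random()
    errors = [e0]
    for s in sigmas:
        errors.append(rho * errors[-1] + rng.gauss(0.0, s))
    return errors

def calibrate_sigma(target_mae, horizon, rho=0.9, n_traj=500, iters=10, seed=0):
    """Iteratively rescale a constant sigma until the empirical MAE of the
    generated errors matches the target MAE for the scenario (a sketch of
    the iterative step in the analyzed approach)."""
    sigma = target_mae  # initial guess
    for _ in range(iters):
        rng = random.Random(seed)  # same draws each iteration
        samples = []
        for _ in range(n_traj):
            samples.extend(ar1_error_series([sigma] * horizon, rho, rng=rng))
        mae = statistics.fmean(abs(e) for e in samples)
        sigma *= target_mae / mae  # MAE scales linearly with sigma
    return sigma
```

Because all errors are linear in sigma when the initial error is zero, the proportional correction converges in one pass for a fixed random seed; the loop only guards against a poor initial guess.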
Dave prepared an outline for a potential journal paper focusing on BOPTEST. Discussion:
- Split in 2 papers
- Architecture, methodology (PAPER 1) – Journal of Building Performance Simulation
- Application oriented: showcase the benchmarking aspect for selected test cases (PAPER 2) – Applied Energy
- Do not describe all emulators planned for the future; restrict to the ones that are ready:
- BESTEST Air
- BESTEST Hydronic
- BESTEST Hydronic Heat Pump
- Multizone Residential Hydronic
- Test scenario definition
- Extend with a more complex system; updates on the SDU emulator expected in the 2nd week of January
- Specific weeks (around peak heating day, around peak cooling day, mid-season period) instead of annual simulations
- MPCs tested need to be described to some level.
- Mobilize people towards testing: sign-up spreadsheet to track/anticipate who will test which cases. Action: Dave.
- Colorado-Boulder:
- Testing framework that relies on BOPTEST to be published (PAPER 3). Which paper should it refer to: BS19 or the new one? We prefer the latter, and the timeline might allow it, since PAPER 1 is planned for submission by March. Let's coordinate these two papers (PAPER 1 & PAPER 3).
- Extensions:
- Spawn integration
- OpenAI Gym to be included
- Interested to test different controllers in BOPTEST
- Development of an approach based on representative days that can be extrapolated to a full year (PNNL) can be described in PAPER 4. Should PAPER 1 mention that the framework allows this? No; the question of how to determine the weights would pop up.
a. Call to work on emulators or testing of advanced controllers on emulators that have been finalized
b. No updates on Emulators, so the status remains: