
WP1.2 Coordination Meeting December 9, 2020


Meeting Report WP1.2 ‘Modelica library for MPC’

1. MEETING SUBJECT, DATE

Subject: WP1.2

Date: 09-12-2020

Location: Skype for Business

Minutes taken by: Lieve Helsen (KU Leuven)

2. PARTICIPANTS

3. AGENDA and REPORT

From now on, the Monthly Progress Meetings last 1.5 hours (17:00-18:30 CET) and focus on pre-defined discussion points:

DECEMBER 9

  1. Weather uncertainties
  2. Outline BOPTEST journal paper

JANUARY 20

  1. Reporting results from testing & sign-up spreadsheet (all & Dave)
  2. Weather uncertainty: analysis for Denver and Norway locations (Laura)
  3. SDU emulator: ready for testing advanced controllers? (Konstantin)
  4. BOPTEST methodology journal paper: update status (Dave)

MARCH 1

  1. Suggestions for the next online IBPSA Project 1 Expert Meeting

Weather uncertainties

Goal: check whether advanced controllers tested in BOPTEST are robust to prediction uncertainties.

  • Focus on autoregressive models

  • Define scenarios with different levels of uncertainty

  • Krzysztof’s approach was analyzed (autocorrelation = 0.9; standard deviation calculated by an iterative process to reach the desired MAE; a target MAE defined for each scenario).

  • Three alternatives proposed:

  1. Autocorrelation = 0.9: define a constant standard deviation for each scenario (how to define it?) and build the autoregressive model - easy.
  2. Autocorrelation = 0.9: define a standard deviation that varies over the prediction horizon for each scenario and build the autoregressive model - the standard deviation increases over the prediction horizon, closer to real data (a code sketch of this approach follows the discussion notes below).
  3. Autocorrelation calculated by an iterative process to reach the desired MAE; define a constant standard deviation (how to define it?) and a target MAE for each scenario.
  • Assumptions:
  1. The data follow a normal distribution.
  2. The mean expected value of the initial error is assumed to be zero - should it be introduced as a parameter that can differ from zero?
  • Where to find reference values if a constant standard deviation is assumed?
  • Discussion:
  1. Approach 2 seems the preferred one: closer to real data and no iteration needed.
  2. The evolution of the standard deviation shows the same behavior for different locations (Norway and Berkeley). More locations should be examined to define the range of uncertainty: Javier will provide data for Leuven, and more inland locations with larger variations should be added - Denver (Nicholas Long) and Norway (Harald) can provide data. Does the methodology still work there? This is the first step to take; to be discussed again.
  3. Wind is harder to predict than temperature.
  4. Prediction uncertainties of ±10 °C are too large to be useful in MPC.
  5. Normal distribution? The literature has always assumed it. Checked here for measured data and for the deviation between measurements and predictions; both differ slightly from a normal distribution:
  • Norway: two peaks
  • Berkeley: close to a uniform distribution
  6. The mean expected value of the initial error is not zero; it starts negative for both locations. A location-dependent parameter should be introduced.
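
For concreteness, here is a minimal sketch of approach 2: an AR(1) error model with the autocorrelation fixed at 0.9 and a standard deviation that grows along the prediction horizon. The function name, the linear sigma profile, and the variance-matching rule for the innovations are illustrative assumptions, not an agreed specification.

```python
import numpy as np

def ar1_error_trace(sigma_profile, phi=0.9, mu0=0.0, seed=None):
    """Sample one forecast-error trace over a prediction horizon.

    AR(1) model with fixed autocorrelation (phi = 0.9 in the meeting) and
    a standard deviation that grows along the horizon (approach 2). The
    innovation variance at each step is chosen so the marginal standard
    deviation of the error matches sigma_profile[k]. mu0 is the mean of
    the initial error (zero under assumption 2; the notes suggest it may
    need to become a location-dependent parameter).
    """
    rng = np.random.default_rng(seed)
    e = np.empty(len(sigma_profile))
    e[0] = mu0 + rng.normal(0.0, sigma_profile[0])
    for k in range(1, len(sigma_profile)):
        # Var(e_k) = phi^2 * Var(e_{k-1}) + var_innov  ->  solve for var_innov
        var_innov = max(sigma_profile[k] ** 2 - (phi * sigma_profile[k - 1]) ** 2, 0.0)
        e[k] = phi * e[k - 1] + rng.normal(0.0, np.sqrt(var_innov))
    return e

# Illustrative scenario: 48 hourly steps, sigma growing from 0.5 K to 2.0 K.
sigma = np.linspace(0.5, 2.0, 48)
error_trace = ar1_error_trace(sigma, phi=0.9, seed=42)

# For alternative 3: a zero-mean normal error with standard deviation s has
# MAE = s * sqrt(2 / pi), which gives a starting point without iteration.
```

The sampled trace would be added to the deterministic weather forecast each time the controller requests new predictions.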

Outline BOPTEST journal paper

Dave prepared an outline for a potential journal paper focusing on BOPTEST. Discussion:

  • Split into two papers:
  1. Architecture and methodology (PAPER 1) – Journal of Building Performance Simulation
  2. Application-oriented: showcase the benchmarking aspect for selected test cases (PAPER 2) – Applied Energy
  • Do not describe all emulators planned for later addition; restrict to the ones that are ready:
  1. BESTEST Air
  2. BESTEST Hydronic
  3. BESTEST Hydronic Heat Pump
  4. Multizone Residential Hydronic
  • Test scenario definition:
  1. Extend with a more complex system - updates on the SDU emulator expected in the second week of January.
  2. Specific weeks (around the peak heating day, around the peak cooling day, and a mid-season period) instead of annual simulations (see the client sketch after this list).
  • The MPCs tested need to be described at some level of detail.

  • Mobilize people towards testing: a sign-up spreadsheet to track/anticipate who will test which cases. Action: Dave

  • Colorado-Boulder:

  1. A testing framework that relies on BOPTEST is to be published (PAPER 3). Which BOPTEST paper should it refer to: BS19 or the new one? We prefer the latter, and the timeline might allow it, since PAPER 1 is planned for submission by March. Let’s coordinate these two papers (PAPER 1 & PAPER 3).
  2. Extensions:
  • Spawn integration
  • OpenAI Gym interface to be included
  3. Interested in testing different controllers in BOPTEST.
  • Development of an approach using representative days that can be extrapolated to a full year (PNNL) can be described in PAPER 4. Should PAPER 1 mention that the framework allows this? No - the question of how to determine the weights would pop up.
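
To make the week-long test scenarios above concrete, here is a minimal client sketch against a locally running BOPTEST test case. The base URL, the /scenario endpoint, the 'peak_heat_day' identifier, and the payload shapes are modeled on the BOPTEST REST interface but should be treated as assumptions, since the scenario definition was still under discussion at this meeting.

```python
import requests

BASE_URL = 'http://127.0.0.1:5000'  # assumed local BOPTEST deployment

# Select a predefined test period around the peak heating day instead of
# running an annual simulation ('peak_heat_day' is an assumed identifier).
requests.put(f'{BASE_URL}/scenario', json={'time_period': 'peak_heat_day'})

# Set the control step to 15 minutes.
requests.put(f'{BASE_URL}/step', json={'step': 900})

# Advance one day with default (baseline) control inputs; a tested MPC
# would instead write its computed setpoints into the request body.
for _ in range(96):
    measurements = requests.post(f'{BASE_URL}/advance', json={}).json()

# Retrieve the core KPIs used for benchmarking across controllers.
kpis = requests.get(f'{BASE_URL}/kpi').json()
print(kpis)
```

Pinning every controller to the same scenario selection is what allows the benchmarking results in PAPER 2 to be compared across groups.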

Further issues

a. Call to work on emulators, or to test advanced controllers on the emulators that have been finalized

b. No updates on emulators, so the status remains unchanged.
