
Commit e9302b7

Blueprint: Merge the four maps keyed by sled ID into a single map (#7652)
Sorry for the size of this PR. Most of the bulk of the diff is made up of mechanical changes to tests; I don't think there's a way to split this up any smaller. If you ignore whitespace, that will trim a few hundred lines off the diff. This PR should make zero behavioral changes.

The main thrust of the PR is to replace the four maps in `Blueprint` (`sled_state`, `blueprint_zones`, `blueprint_disks`, `blueprint_datasets`) with a single map containing the config for each sled (`sleds: BTreeMap<SledUuid, BlueprintSledConfig>`). Only one expectorate file changed (and that's because I changed the test; it was previously doing a thing we can't do anymore with the combined maps).

Nontrivial fallout from this change:

* `BlueprintBuilder` no longer has to do work to ingest a parent blueprint where sleds present in `blueprint_zones` were missing from one or more of the other three maps.
* ... but `nexus-db-model` has to do that work now instead, to allow us to continue to load old blueprints from the db that didn't require all four maps to have consistent keys. We should be able to drop this and instead `bail!` after R14, since upgrading to that release will involve creating a blueprint that uses this combined map (and thus guarantees the blueprint in the db has entries for all its fields).
* `blippy` got to delete several checks that are now statically impossible.
* The blueprint diff implementation similarly got to delete a bunch of summary fields that were complicated to construct; the `daft` output from the `sleds` map is now directly usable in place of all those summary fields.

I think this is pretty clearly a win, but there are a couple of places where users of blueprints or diffs are a little uglier, IMO:

* Tests that were constructing a blueprint that only cared about one of the four maps now have to fill in the full `sleds` map. This is straightforward and mechanical, but is a little noisy.
* The executor library had substeps that took the individual maps as arguments. Those maps no longer exist, so I changed them to take iterators instead. This was also straightforward and mechanical, but even more noisy.
* Using the blueprint `summary.diff.sleds` directly in tests instead of the old summary sets is more verbose. (Personally I prefer this change, despite the verbosity, because I found it easier to look at `daft`'s docs than to remember what all the helper methods in the diff summary did. But I could see others feeling differently.)

Fixes #7078. After this change, it'll be possible to start work on #7309 (merging the disks/datasets/zone sled-agent endpoints into one), since this restructuring will allow us to tag the combined config with a single generation.
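The core idea above — one struct per sled instead of four parallel maps that might disagree on keys — can be sketched in miniature. The types below are hypothetical, pared-down stand-ins (the real `BlueprintSledConfig` carries full disk/dataset/zone config structs); the point is that every per-table view becomes a projection of the single `sleds` map, so the views can never cover different sled ids:

```rust
use std::collections::BTreeMap;

// Hypothetical simplified stand-ins for the real nexus_types structs.
#[derive(Clone, Copy, Debug, PartialEq)]
enum SledState {
    Active,
    Decommissioned,
}

#[derive(Clone, Debug)]
struct SledConfig {
    state: SledState,
    disks: Vec<String>,
    zones: Vec<String>,
}

// With a single `sleds` map, each per-table view is a projection of it,
// rather than a separate map that might have inconsistent keys.
fn sled_states(sleds: &BTreeMap<u64, SledConfig>) -> Vec<(u64, SledState)> {
    sleds.iter().map(|(&id, s)| (id, s.state)).collect()
}

fn all_disks(sleds: &BTreeMap<u64, SledConfig>) -> Vec<(u64, String)> {
    sleds
        .iter()
        .flat_map(|(&id, s)| s.disks.iter().map(move |d| (id, d.clone())))
        .collect()
}

fn main() {
    let mut sleds = BTreeMap::new();
    sleds.insert(
        7,
        SledConfig {
            state: SledState::Active,
            disks: vec!["u2-0".to_string(), "u2-1".to_string()],
            zones: vec!["nexus".to_string()],
        },
    );
    // Every projection is guaranteed to cover exactly the same sled ids.
    assert_eq!(sled_states(&sleds), vec![(7, SledState::Active)]);
    assert_eq!(all_disks(&sleds).len(), 2);
    println!("projections cover {} sleds", sleds.len());
}
```

This mirrors the change in the datastore write path, where rows for the four blueprint tables are now all derived from the one map.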
1 parent d22a3d0 commit e9302b7

File tree

28 files changed: +1556 −1996 lines


nexus/db-model/src/deployment.rs

Lines changed: 1 addition & 1 deletion

@@ -138,7 +138,7 @@ impl From<BpTarget> for nexus_types::deployment::BlueprintTarget {
     }
 }

-/// See [`nexus_types::deployment::Blueprint::sled_state`].
+/// See [`nexus_types::deployment::BlueprintSledConfig::state`].
 #[derive(Queryable, Clone, Debug, Selectable, Insertable)]
 #[diesel(table_name = bp_sled_state)]
 pub struct BpSledState {

nexus/db-queries/src/db/datastore/deployment.rs

Lines changed: 80 additions & 53 deletions
@@ -57,6 +57,7 @@ use nexus_types::deployment::Blueprint;
 use nexus_types::deployment::BlueprintDatasetsConfig;
 use nexus_types::deployment::BlueprintMetadata;
 use nexus_types::deployment::BlueprintPhysicalDisksConfig;
+use nexus_types::deployment::BlueprintSledConfig;
 use nexus_types::deployment::BlueprintTarget;
 use nexus_types::deployment::BlueprintZonesConfig;
 use nexus_types::deployment::ClickhouseClusterConfig;
@@ -199,79 +200,83 @@ impl DataStore {
         let blueprint_id = BlueprintUuid::from(row_blueprint.id);

         let sled_states = blueprint
-            .sled_state
+            .sleds
             .iter()
-            .map(|(&sled_id, &state)| BpSledState {
+            .map(|(&sled_id, sled)| BpSledState {
                 blueprint_id: blueprint_id.into(),
                 sled_id: sled_id.into(),
-                sled_state: state.into(),
+                sled_state: sled.state.into(),
             })
             .collect::<Vec<_>>();

         let sled_omicron_physical_disks = blueprint
-            .blueprint_disks
+            .sleds
             .iter()
-            .map(|(sled_id, disks_config)| {
+            .map(|(sled_id, sled)| {
                 BpSledOmicronPhysicalDisks::new(
                     blueprint_id,
                     *sled_id,
-                    disks_config,
+                    &sled.disks_config,
                 )
             })
             .collect::<Vec<_>>();
         let omicron_physical_disks = blueprint
-            .blueprint_disks
+            .sleds
             .iter()
-            .flat_map(|(sled_id, disks_config)| {
-                disks_config.disks.iter().map(move |disk| {
+            .flat_map(|(sled_id, sled)| {
+                sled.disks_config.disks.iter().map(move |disk| {
                     BpOmicronPhysicalDisk::new(blueprint_id, *sled_id, disk)
                 })
             })
             .collect::<Vec<_>>();

         let sled_omicron_datasets = blueprint
-            .blueprint_datasets
+            .sleds
             .iter()
-            .map(|(sled_id, datasets_config)| {
+            .map(|(sled_id, sled)| {
                 BpSledOmicronDatasets::new(
                     blueprint_id,
                     *sled_id,
-                    datasets_config,
+                    &sled.datasets_config,
                 )
             })
             .collect::<Vec<_>>();
         let omicron_datasets = blueprint
-            .blueprint_datasets
+            .sleds
             .iter()
-            .flat_map(|(sled_id, datasets_config)| {
-                datasets_config.datasets.iter().map(move |dataset| {
+            .flat_map(|(sled_id, sled)| {
+                sled.datasets_config.datasets.iter().map(move |dataset| {
                     BpOmicronDataset::new(blueprint_id, *sled_id, dataset)
                 })
             })
             .collect::<Vec<_>>();

         let sled_omicron_zones = blueprint
-            .blueprint_zones
+            .sleds
             .iter()
-            .map(|(sled_id, zones_config)| {
-                BpSledOmicronZones::new(blueprint_id, *sled_id, zones_config)
+            .map(|(sled_id, sled)| {
+                BpSledOmicronZones::new(
+                    blueprint_id,
+                    *sled_id,
+                    &sled.zones_config,
+                )
             })
             .collect::<Vec<_>>();
         let omicron_zones = blueprint
-            .blueprint_zones
+            .sleds
             .iter()
-            .flat_map(|(sled_id, zones_config)| {
-                zones_config.zones.iter().map(move |zone| {
+            .flat_map(|(sled_id, sled)| {
+                sled.zones_config.zones.iter().map(move |zone| {
                     BpOmicronZone::new(blueprint_id, *sled_id, zone)
                         .map_err(|e| Error::internal_error(&format!("{:#}", e)))
                 })
             })
             .collect::<Result<Vec<_>, Error>>()?;
         let omicron_zone_nics = blueprint
-            .blueprint_zones
+            .sleds
             .values()
-            .flat_map(|zones_config| {
-                zones_config.zones.iter().filter_map(|zone| {
+            .flat_map(|sled| {
+                sled.zones_config.zones.iter().filter_map(|zone| {
                     BpOmicronZoneNic::new(blueprint_id, zone)
                         .with_context(|| format!("zone {}", zone.id))
                         .map_err(|e| Error::internal_error(&format!("{:#}", e)))
@@ -1056,12 +1061,45 @@ impl DataStore {
             }
         };

+        // Combine the four separately-stored maps into the one blueprint map.
+        let mut sleds = BTreeMap::new();
+        for (sled_id, zones_config) in blueprint_zones {
+            // Backwards compatibility: Before `BlueprintSledConfig` existed,
+            // the blueprint stored four independent maps, and had cases where
+            // the state, datasets, or disks could be omitted entirely. For now,
+            // we backfill these based on those cases (an omitted state could
+            // only happen if a sled was decommissioned, and omitted
+            // datasets/disks happened for expunged sleds - we can't correct for
+            // that, so we fill in an empty set and rely on expunged sleds not
+            // needing to know those details).
+            //
+            // Once we've archived all blueprints where this could be true, we
+            // should change these `unwrap_or`s to `bail!` instead.
+            let state = sled_state
+                .get(&sled_id)
+                .copied()
+                .unwrap_or(SledState::Decommissioned);
+            let disks_config = blueprint_disks
+                .remove(&sled_id)
+                .unwrap_or_else(BlueprintPhysicalDisksConfig::default);
+            let datasets_config = blueprint_datasets
+                .remove(&sled_id)
+                .unwrap_or_else(BlueprintDatasetsConfig::default);
+
+            sleds.insert(
+                sled_id,
+                BlueprintSledConfig {
+                    state,
+                    disks_config,
+                    datasets_config,
+                    zones_config,
+                },
+            );
+        }
+
         Ok(Blueprint {
             id: blueprint_id,
-            blueprint_zones,
-            blueprint_disks,
-            blueprint_datasets,
-            sled_state,
+            sleds,
             parent_blueprint_id,
             internal_dns_version,
             external_dns_version,
@@ -2014,6 +2052,7 @@ mod tests {
     use nexus_reconfigurator_planning::example::example;
     use nexus_reconfigurator_planning::example::ExampleSystemBuilder;
     use nexus_types::deployment::blueprint_zone_type;
+    use nexus_types::deployment::BlueprintPhysicalDiskDisposition;
     use nexus_types::deployment::BlueprintZoneConfig;
     use nexus_types::deployment::BlueprintZoneDisposition;
     use nexus_types::deployment::BlueprintZoneImageSource;
@@ -2222,8 +2261,8 @@ mod tests {
             [blueprint1.id]
         );

-        // There ought to be no sleds or zones, and no parent blueprint.
-        assert_eq!(blueprint1.blueprint_zones.len(), 0);
+        // There ought to be no sleds and no parent blueprint.
+        assert_eq!(blueprint1.sleds.len(), 0);
         assert_eq!(blueprint1.parent_blueprint_id, None);

         // Trying to insert the same blueprint again should fail.
@@ -2273,13 +2312,10 @@ mod tests {

         // Check the number of blueprint elements against our collection.
         assert_eq!(
-            blueprint1.blueprint_zones.len(),
+            blueprint1.sleds.len(),
             planning_input.all_sled_ids(SledFilter::Commissioned).count(),
         );
-        assert_eq!(
-            blueprint1.blueprint_zones.len(),
-            collection.sled_agents.len()
-        );
+        assert_eq!(blueprint1.sleds.len(), collection.sled_agents.len());
         assert_eq!(
             blueprint1.all_omicron_zones(BlueprintZoneDisposition::any).count(),
             collection.all_omicron_zones().count()
@@ -2392,26 +2428,19 @@ mod tests {

         let diff = blueprint2.diff_since_blueprint(&blueprint1);
         println!("b1 -> b2: {}", diff.display());
-        println!("b1 disks: {:?}", blueprint1.blueprint_disks);
-        println!("b2 disks: {:?}", blueprint2.blueprint_disks);
+        println!("b1 sleds: {:?}", blueprint1.sleds);
+        println!("b2 sleds: {:?}", blueprint2.sleds);
         // Check that we added the new sled, as well as its disks and zones.
         assert_eq!(
             blueprint1
-                .blueprint_disks
-                .values()
-                .map(|c| c.disks.len())
-                .sum::<usize>()
+                .all_omicron_disks(BlueprintPhysicalDiskDisposition::any)
+                .count()
                 + new_sled_zpools.len(),
             blueprint2
-                .blueprint_disks
-                .values()
-                .map(|c| c.disks.len())
-                .sum::<usize>()
-        );
-        assert_eq!(
-            blueprint1.blueprint_zones.len() + 1,
-            blueprint2.blueprint_zones.len()
+                .all_omicron_disks(BlueprintPhysicalDiskDisposition::any)
+                .count()
         );
+        assert_eq!(blueprint1.sleds.len() + 1, blueprint2.sleds.len());
         assert_eq!(
             blueprint1.all_omicron_zones(BlueprintZoneDisposition::any).count()
                 + num_new_sled_zones,
@@ -2817,8 +2846,7 @@ mod tests {
             .await
             .expect("add range to service ip pool");
         let zone_id = OmicronZoneUuid::new_v4();
-        blueprint.blueprint_zones.insert(
-            sled_id,
+        blueprint.sleds.get_mut(&sled_id).unwrap().zones_config =
             BlueprintZonesConfig {
                 generation: omicron_common::api::external::Generation::new(),
                 zones: [BlueprintZoneConfig {
@@ -2861,8 +2889,7 @@ mod tests {
                 }]
                 .into_iter()
                 .collect(),
-            },
-        );
+            };

         blueprint
     }
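The backwards-compatibility backfill in the hunk above boils down to one pattern: iterate the zones map (the one map that was always fully populated) and fill each gap with an explicit default. A minimal sketch, using hypothetical simplified types in place of the real nexus_types structs:

```rust
use std::collections::BTreeMap;

// Hypothetical simplified stand-ins for the real blueprint types.
#[derive(Clone, Copy, Debug, PartialEq)]
enum SledState {
    Active,
    Decommissioned,
}

#[derive(Debug, PartialEq)]
struct SledConfig {
    state: SledState,
    disks: Vec<String>,
    datasets: Vec<String>,
    zones: Vec<String>,
}

// Combine the four separately-stored maps, backfilling entries that old
// blueprints were allowed to omit: a missing state means the sled was
// decommissioned, and missing disks/datasets become empty sets.
fn combine(
    sled_state: &BTreeMap<u64, SledState>,
    mut disks: BTreeMap<u64, Vec<String>>,
    mut datasets: BTreeMap<u64, Vec<String>>,
    zones: BTreeMap<u64, Vec<String>>,
) -> BTreeMap<u64, SledConfig> {
    zones
        .into_iter()
        .map(|(sled_id, zones)| {
            let config = SledConfig {
                state: sled_state
                    .get(&sled_id)
                    .copied()
                    .unwrap_or(SledState::Decommissioned),
                disks: disks.remove(&sled_id).unwrap_or_default(),
                datasets: datasets.remove(&sled_id).unwrap_or_default(),
                zones,
            };
            (sled_id, config)
        })
        .collect()
}

fn main() {
    let mut zones = BTreeMap::new();
    zones.insert(1, vec!["nexus".to_string()]);
    zones.insert(2, Vec::new());

    let mut state = BTreeMap::new();
    state.insert(1, SledState::Active);
    // Sled 2 has no stored state: it is backfilled as decommissioned.

    let sleds = combine(&state, BTreeMap::new(), BTreeMap::new(), zones);
    assert_eq!(sleds[&1].state, SledState::Active);
    assert_eq!(sleds[&2].state, SledState::Decommissioned);
    println!("combined {} sleds", sleds.len());
}
```

Per the commit message, these `unwrap_or` fallbacks are temporary; once all pre-merge blueprints are archived, they can become hard errors.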

nexus/db-queries/src/db/datastore/rack.rs

Lines changed: 26 additions & 25 deletions
@@ -1012,8 +1012,11 @@ mod test {
         SledBuilder, SystemDescription,
     };
     use nexus_sled_agent_shared::inventory::OmicronZoneDataset;
-    use nexus_types::deployment::BlueprintZonesConfig;
     use nexus_types::deployment::CockroachDbPreserveDowngrade;
+    use nexus_types::deployment::{
+        BlueprintDatasetsConfig, BlueprintPhysicalDisksConfig,
+        BlueprintSledConfig, BlueprintZonesConfig,
+    };
     use nexus_types::deployment::{
         BlueprintZoneConfig, OmicronZoneExternalFloatingAddr,
         OmicronZoneExternalFloatingIp,
@@ -1055,10 +1058,7 @@ mod test {
             rack_subnet: nexus_test_utils::RACK_SUBNET.parse().unwrap(),
             blueprint: Blueprint {
                 id: BlueprintUuid::new_v4(),
-                blueprint_zones: BTreeMap::new(),
-                blueprint_disks: BTreeMap::new(),
-                blueprint_datasets: BTreeMap::new(),
-                sled_state: BTreeMap::new(),
+                sleds: BTreeMap::new(),
                 cockroachdb_setting_preserve_downgrade:
                     CockroachDbPreserveDowngrade::DoNotModify,
                 parent_blueprint_id: None,
@@ -1307,10 +1307,23 @@ mod test {
         }
     }

-    fn sled_states_active(
-        sled_ids: impl Iterator<Item = SledUuid>,
-    ) -> BTreeMap<SledUuid, SledState> {
-        sled_ids.map(|sled_id| (sled_id, SledState::Active)).collect()
+    fn make_sled_config_only_zones(
+        blueprint_zones: BTreeMap<SledUuid, BlueprintZonesConfig>,
+    ) -> BTreeMap<SledUuid, BlueprintSledConfig> {
+        blueprint_zones
+            .into_iter()
+            .map(|(sled_id, zones_config)| {
+                (
+                    sled_id,
+                    BlueprintSledConfig {
+                        state: SledState::Active,
+                        disks_config: BlueprintPhysicalDisksConfig::default(),
+                        datasets_config: BlueprintDatasetsConfig::default(),
+                        zones_config,
+                    },
+                )
+            })
+            .collect()
     }

     #[tokio::test]
@@ -1545,10 +1558,7 @@ mod test {
         );
         let blueprint = Blueprint {
             id: BlueprintUuid::new_v4(),
-            sled_state: sled_states_active(blueprint_zones.keys().copied()),
-            blueprint_zones,
-            blueprint_disks: BTreeMap::new(),
-            blueprint_datasets: BTreeMap::new(),
+            sleds: make_sled_config_only_zones(blueprint_zones),
             cockroachdb_setting_preserve_downgrade:
                 CockroachDbPreserveDowngrade::DoNotModify,
             parent_blueprint_id: None,
@@ -1807,10 +1817,7 @@ mod test {

         let blueprint = Blueprint {
             id: BlueprintUuid::new_v4(),
-            sled_state: sled_states_active(blueprint_zones.keys().copied()),
-            blueprint_zones,
-            blueprint_disks: BTreeMap::new(),
-            blueprint_datasets: BTreeMap::new(),
+            sleds: make_sled_config_only_zones(blueprint_zones),
             cockroachdb_setting_preserve_downgrade:
                 CockroachDbPreserveDowngrade::DoNotModify,
             parent_blueprint_id: None,
@@ -2019,10 +2026,7 @@ mod test {
         );
         let blueprint = Blueprint {
             id: BlueprintUuid::new_v4(),
-            sled_state: sled_states_active(blueprint_zones.keys().copied()),
-            blueprint_zones,
-            blueprint_disks: BTreeMap::new(),
-            blueprint_datasets: BTreeMap::new(),
+            sleds: make_sled_config_only_zones(blueprint_zones),
             cockroachdb_setting_preserve_downgrade:
                 CockroachDbPreserveDowngrade::DoNotModify,
             parent_blueprint_id: None,
@@ -2163,10 +2167,7 @@ mod test {

         let blueprint = Blueprint {
             id: BlueprintUuid::new_v4(),
-            sled_state: sled_states_active(blueprint_zones.keys().copied()),
-            blueprint_zones,
-            blueprint_disks: BTreeMap::new(),
-            blueprint_datasets: BTreeMap::new(),
+            sleds: make_sled_config_only_zones(blueprint_zones),
             cockroachdb_setting_preserve_downgrade:
                 CockroachDbPreserveDowngrade::DoNotModify,
             parent_blueprint_id: None,
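The test-helper pattern in this file — turning a zones-only map into a full per-sled config, with every sled active and disks/datasets left empty — is one of the "fill in the full `sleds` map" chores the commit message mentions. A sketch with hypothetical simplified types standing in for the real nexus_types structs:

```rust
use std::collections::BTreeMap;

// Hypothetical simplified stand-ins for the real blueprint types.
#[derive(Clone, Copy, Debug, PartialEq)]
enum SledState {
    Active,
}

#[derive(Debug, Default, PartialEq)]
struct ZonesConfig {
    zones: Vec<String>,
}

#[derive(Debug, PartialEq)]
struct SledConfig {
    state: SledState,
    disks: Vec<String>,
    datasets: Vec<String>,
    zones_config: ZonesConfig,
}

// Build a full `sleds` map from only zone configs, as tests that used to
// populate just `blueprint_zones` now must: every sled is marked active
// and its disks/datasets are left as empty defaults.
fn make_sled_config_only_zones(
    blueprint_zones: BTreeMap<u64, ZonesConfig>,
) -> BTreeMap<u64, SledConfig> {
    blueprint_zones
        .into_iter()
        .map(|(sled_id, zones_config)| {
            (
                sled_id,
                SledConfig {
                    state: SledState::Active,
                    disks: Vec::new(),
                    datasets: Vec::new(),
                    zones_config,
                },
            )
        })
        .collect()
}

fn main() {
    let mut zones = BTreeMap::new();
    zones.insert(5, ZonesConfig { zones: vec!["crucible".to_string()] });
    let sleds = make_sled_config_only_zones(zones);
    assert_eq!(sleds.len(), 1);
    assert_eq!(sleds[&5].state, SledState::Active);
    assert!(sleds[&5].disks.is_empty());
    println!("built {} sled configs", sleds.len());
}
```

The mechanical cost is one helper per test module; in exchange, every constructed `Blueprint` is guaranteed to have consistent keys across state, disks, datasets, and zones.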
