feat: MSP reject storage requests for existing file keys #224

Merged · 46 commits · Oct 12, 2024

Commits
5ddd056
feat: :sparkles: NewStorageHandler handled in user send file tasks
snowmead Oct 4, 2024
a7d2b12
Merge branch 'main' into feat/user-upload-file-task-msp
snowmead Oct 4, 2024
be26ab4
fmt fix
snowmead Oct 4, 2024
98ef7de
use proper FileMetadata type
snowmead Oct 4, 2024
2f40a22
common util fn convert_raw_multiaddresses_to_multiaddr
snowmead Oct 4, 2024
a17f0e9
fix: wait for BSP to update his local forest root integration test
snowmead Oct 4, 2024
d6550a3
fmt fix
snowmead Oct 4, 2024
8c5a91f
handle batch responses, watch_for_success_with_events
snowmead Oct 7, 2024
88f0e0f
Merge branch 'main' into feat/user-upload-file-task-msp
snowmead Oct 7, 2024
4f62a74
Merge branch 'feat/user-upload-file-task-msp' into feat/msp-respond-s…
snowmead Oct 7, 2024
afe3d0e
simple integration test for msp receiving file
snowmead Oct 7, 2024
45b03eb
Merge branch 'main' into feat/user-upload-file-task-msp
snowmead Oct 7, 2024
3c2617b
typegen
snowmead Oct 7, 2024
36bb220
Merge branch 'feat/user-upload-file-task-msp' into feat/msp-respond-s…
snowmead Oct 7, 2024
1a9050b
send process msp respond storage request event for batching
snowmead Oct 7, 2024
20e963b
simplify send_chunks_to_provider return type
snowmead Oct 7, 2024
ce4c443
move fns
snowmead Oct 7, 2024
aa08313
Merge branch 'feat/user-upload-file-task-msp' into feat/msp-respond-s…
snowmead Oct 7, 2024
81d7de5
Merge branch 'main' into feat/msp-respond-storage-requests-tasks
snowmead Oct 7, 2024
d2c8ea9
use Vec<u8> forest keys, use forest key H256 runtime api params, refa…
snowmead Oct 8, 2024
713baae
fix tests fmt
snowmead Oct 8, 2024
8980512
fix tests fmt
snowmead Oct 8, 2024
eeddfca
fix naming, docs and todos
snowmead Oct 8, 2024
19e9f80
Merge branch 'main' into feat/msp-respond-storage-requests-tasks
snowmead Oct 9, 2024
97429b4
reject storage request if file key already exists in forest
snowmead Oct 9, 2024
19db055
use includes fn to check if file key is in issued list
snowmead Oct 9, 2024
40e1fdb
Merge branch 'feat/msp-respond-storage-requests-tasks' into feat/msp-…
snowmead Oct 9, 2024
d030c53
insert new forest storage on new storage request, return if rejected …
snowmead Oct 9, 2024
3462d8e
lint fix
snowmead Oct 9, 2024
f0e793c
add isFileInForest rpc checks
snowmead Oct 10, 2024
4bdcc8d
Merge branch 'main' into feat/msp-respond-storage-requests-tasks
ffarall Oct 11, 2024
f3dbd27
fix: :rotating_light: Restore `waitForMspResponse` after merge conflict
ffarall Oct 11, 2024
1ef75fc
chore: :label: Update api-augment
ffarall Oct 11, 2024
7349bb0
docs: :bulb: Add clarifying comments
ffarall Oct 11, 2024
ed23a5f
amend; update log
snowmead Oct 11, 2024
3dfa82d
Merge branch 'feat/msp-respond-storage-requests-tasks' into feat/msp-…
snowmead Oct 11, 2024
b311147
add missing wait functions
snowmead Oct 11, 2024
d39e236
fix tests
snowmead Oct 11, 2024
7a10850
Merge branch 'feat/msp-respond-storage-requests-tasks' into feat/msp-…
snowmead Oct 11, 2024
dd763d4
simplify mspResponse result
snowmead Oct 11, 2024
f249d98
user h256 file key isFileInForest
snowmead Oct 11, 2024
1d5cf3b
Merge branch 'main' into feat/msp-reject-storage-requests-for-existin…
snowmead Oct 11, 2024
99d248e
fix tests
snowmead Oct 11, 2024
d29d54e
fix rpc param types
snowmead Oct 11, 2024
b12b761
fix typecheck
snowmead Oct 11, 2024
02c61f2
amend
snowmead Oct 11, 2024
7 changes: 6 additions & 1 deletion api-augment/dist/interfaces/lookup.js

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion api-augment/dist/interfaces/lookup.js.map

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions api-augment/dist/types/interfaces/augment-api-rpc.d.ts
@@ -1073,7 +1073,7 @@ declare module "@polkadot/rpc-core/types/jsonrpc" {
**/
generateForestProof: AugmentedRpc<
(
- forest_key: Option<Text> | null | Uint8Array | Text | string,
+ forest_key: Option<H256> | null | Uint8Array | H256 | string,
challenged_file_keys: Vec<H256> | (H256 | string | Uint8Array)[]
) => Observable<Bytes>
>;
@@ -1082,7 +1082,7 @@ declare module "@polkadot/rpc-core/types/jsonrpc" {
**/
getFileMetadata: AugmentedRpc<
(
- forest_key: Option<Text> | null | Uint8Array | Text | string,
+ forest_key: Option<H256> | null | Uint8Array | H256 | string,
file_key: H256 | string | Uint8Array
) => Observable<Option<FileMetadata>>
>;
@@ -1109,7 +1109,7 @@ declare module "@polkadot/rpc-core/types/jsonrpc" {
**/
isFileInForest: AugmentedRpc<
(
- forest_key: Option<Text> | null | Uint8Array | Text | string,
+ forest_key: Option<H256> | null | Uint8Array | H256 | string,
file_key: H256 | string | Uint8Array
) => Observable<bool>
>;
7 changes: 6 additions & 1 deletion api-augment/dist/types/interfaces/types-lookup.d.ts
@@ -1996,8 +1996,13 @@ declare module "@polkadot/types/lookup" {
interface PalletFileSystemRejectedStorageRequestReason extends Enum {
readonly isReachedMaximumCapacity: boolean;
readonly isReceivedInvalidProof: boolean;
readonly isFileKeyAlreadyStored: boolean;
readonly isInternalError: boolean;
- readonly type: "ReachedMaximumCapacity" | "ReceivedInvalidProof" | "InternalError";
+ readonly type:
+   | "ReachedMaximumCapacity"
+   | "ReceivedInvalidProof"
+   | "FileKeyAlreadyStored"
+   | "InternalError";
}
/** @name PalletFileSystemMspFailedBatchStorageRequests (145) */
interface PalletFileSystemMspFailedBatchStorageRequests extends Struct {
6 changes: 3 additions & 3 deletions api-augment/src/interfaces/augment-api-rpc.ts
@@ -1031,7 +1031,7 @@ declare module "@polkadot/rpc-core/types/jsonrpc" {
**/
generateForestProof: AugmentedRpc<
(
- forest_key: Option<Text> | null | Uint8Array | Text | string,
+ forest_key: Option<H256> | null | Uint8Array | H256 | string,
challenged_file_keys: Vec<H256> | (H256 | string | Uint8Array)[]
) => Observable<Bytes>
>;
@@ -1040,7 +1040,7 @@ declare module "@polkadot/rpc-core/types/jsonrpc" {
**/
getFileMetadata: AugmentedRpc<
(
- forest_key: Option<Text> | null | Uint8Array | Text | string,
+ forest_key: Option<H256> | null | Uint8Array | H256 | string,
file_key: H256 | string | Uint8Array
) => Observable<Option<FileMetadata>>
>;
@@ -1067,7 +1067,7 @@ declare module "@polkadot/rpc-core/types/jsonrpc" {
**/
isFileInForest: AugmentedRpc<
(
- forest_key: Option<Text> | null | Uint8Array | Text | string,
+ forest_key: Option<H256> | null | Uint8Array | H256 | string,
file_key: H256 | string | Uint8Array
) => Observable<bool>
>;
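Note on the signature change above: `forest_key` is now an `Option<H256>` (a 32-byte hash) rather than free-form `Text`, matching how the client keys its forests — the MSP task further down looks up a bucket's forest by the bucket id. A minimal Rust sketch of building such a key (assumes the `sp-core` crate; the helper is illustrative, not the node's actual RPC API):

```rust
use sp_core::H256;

/// Illustrative: derive a 32-byte forest key from a bucket id.
/// The MSP keeps one forest per bucket, so the bucket id (itself a
/// 32-byte hash) can serve directly as the forest key.
fn forest_key_from_bucket_id(bucket_id: [u8; 32]) -> H256 {
    H256(bucket_id)
}

fn main() {
    let bucket_id = [0xab_u8; 32];
    let forest_key: Option<H256> = Some(forest_key_from_bucket_id(bucket_id));
    // `None` presumably selects the provider's default forest
    // (e.g. a BSP's single forest), which is why the parameter stays optional.
    let default_forest: Option<H256> = None;

    println!("bucket forest key: {:?}", forest_key);
    println!("default forest:    {:?}", default_forest);
}
```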
7 changes: 6 additions & 1 deletion api-augment/src/interfaces/lookup.ts
@@ -1577,7 +1577,12 @@ export default {
* Lookup143: pallet_file_system::types::RejectedStorageRequestReason
**/
PalletFileSystemRejectedStorageRequestReason: {
- _enum: ["ReachedMaximumCapacity", "ReceivedInvalidProof", "InternalError"]
+ _enum: [
+   "ReachedMaximumCapacity",
+   "ReceivedInvalidProof",
+   "FileKeyAlreadyStored",
+   "InternalError"
+ ]
},
/**
* Lookup145: pallet_file_system::types::MspFailedBatchStorageRequests<T>
7 changes: 6 additions & 1 deletion api-augment/src/interfaces/types-lookup.ts
@@ -2087,8 +2087,13 @@ declare module "@polkadot/types/lookup" {
interface PalletFileSystemRejectedStorageRequestReason extends Enum {
readonly isReachedMaximumCapacity: boolean;
readonly isReceivedInvalidProof: boolean;
readonly isFileKeyAlreadyStored: boolean;
readonly isInternalError: boolean;
- readonly type: "ReachedMaximumCapacity" | "ReceivedInvalidProof" | "InternalError";
+ readonly type:
+   | "ReachedMaximumCapacity"
+   | "ReceivedInvalidProof"
+   | "FileKeyAlreadyStored"
+   | "InternalError";
}

/** @name PalletFileSystemMspFailedBatchStorageRequests (145) */
2 changes: 1 addition & 1 deletion api-augment/storagehub.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion client/file-transfer-service/src/handler.rs
@@ -434,7 +434,7 @@ impl FileTransferService {
// Send the response back.
pending_response.send(response).unwrap();
} else {
- error!(
+ debug!(
target: LOG_TARGET,
"Received unexpected upload request from {} for file key {:?}",
peer,
32 changes: 21 additions & 11 deletions node/src/services/forest_storage.rs
@@ -46,7 +46,7 @@ impl
StorageProofsMerkleTrieLayout,
kvdb_rocksdb::Database,
>::rocksdb_storage(storage_path.clone())
- .expect("Failed to create RocksDB for BspProvider");
+ .expect("Failed to create RocksDB");

let fs = RocksDBForestStorage::new(fs).expect("Failed to create Forest Storage");

Expand Down Expand Up @@ -107,7 +107,7 @@ impl ForestStorageHandler
StorageProofsMerkleTrieLayout,
kvdb_rocksdb::Database,
>::rocksdb_storage(self.storage_path.clone().expect("Storage path should be set for RocksDB implementation"))
- .expect("Failed to create RocksDB for BspProvider");
+ .expect("Failed to create RocksDB");

let fs = RocksDBForestStorage::new(fs).expect("Failed to create Forest Storage");

@@ -184,14 +184,19 @@ where
}

async fn insert(&mut self, key: &Self::Key) -> Arc<RwLock<Self::FS>> {
let mut fs_instances = self.fs_instances.write().await;

// Return potentially existing instance since we waited for the lock
if let Some(fs) = fs_instances.get(key) {
return fs.clone();
}

let fs = InMemoryForestStorage::new();

let fs = Arc::new(RwLock::new(fs));

- self.fs_instances
-   .write()
-   .await
-   .insert(key.clone(), fs.clone());
+ fs_instances.insert(key.clone(), fs.clone());

fs
}

@@ -217,20 +217,25 @@
}

async fn insert(&mut self, key: &Self::Key) -> Arc<RwLock<Self::FS>> {
let mut fs_instances = self.fs_instances.write().await;

// Return potentially existing instance since we waited for the lock
if let Some(fs) = fs_instances.get(key) {
return fs.clone();
}

let fs = RocksDBForestStorage::<
StorageProofsMerkleTrieLayout,
kvdb_rocksdb::Database,
>::rocksdb_storage(self.storage_path.clone().expect("Storage path should be set for RocksDB implementation"))
- .expect("Failed to create RocksDB for BspProvider");
+ .expect("Failed to create RocksDB");

let fs = RocksDBForestStorage::new(fs).expect("Failed to create Forest Storage");

let fs = Arc::new(RwLock::new(fs));

- self.fs_instances
-   .write()
-   .await
-   .insert(key.clone(), fs.clone());
+ fs_instances.insert(key.clone(), fs.clone());

fs
}

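The `insert` changes above take the write lock on the instance map first and re-check for an existing entry before creating a new forest, so two tasks inserting the same key concurrently end up sharing one instance instead of overwriting each other. A standalone sketch of that pattern (simplified stand-in types, not the actual StorageHub handler; assumes the `tokio` crate with the "macros" and "rt" features):

```rust
use std::{collections::HashMap, sync::Arc};
use tokio::sync::RwLock;

// Simplified stand-in for a forest storage instance.
#[derive(Default)]
struct ForestStorage;

#[derive(Default)]
struct ForestStorageHandler {
    // Map from forest key (e.g. a bucket id) to its storage instance.
    fs_instances: Arc<RwLock<HashMap<Vec<u8>, Arc<RwLock<ForestStorage>>>>>,
}

impl ForestStorageHandler {
    async fn insert(&self, key: &[u8]) -> Arc<RwLock<ForestStorage>> {
        // Take the write lock up front; any other task inserting the same key
        // either waits for us or we wait for it.
        let mut fs_instances = self.fs_instances.write().await;

        // Return a potentially existing instance since we waited for the lock.
        if let Some(fs) = fs_instances.get(key) {
            return fs.clone();
        }

        let fs = Arc::new(RwLock::new(ForestStorage::default()));
        fs_instances.insert(key.to_vec(), fs.clone());
        fs
    }
}

#[tokio::main]
async fn main() {
    let handler = ForestStorageHandler::default();
    let a = handler.insert(b"bucket-1").await;
    let b = handler.insert(b"bucket-1").await;
    // Both calls resolve to the same instance.
    assert!(Arc::ptr_eq(&a, &b));
    println!("single instance per key: {}", Arc::ptr_eq(&a, &b));
}
```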
77 changes: 56 additions & 21 deletions node/src/tasks/msp_upload_file.rs
@@ -190,7 +190,6 @@ where
},
);

// Send extrinsic and wait for it to be included in the block.
self.storage_hub_handler
.blockchain
.send_extrinsic(call, Tip::from(0))
@@ -244,7 +243,6 @@
},
);

// Send extrinsic and wait for it to be included in the block.
self.storage_hub_handler
.blockchain
.send_extrinsic(call, Tip::from(0))
@@ -284,7 +282,6 @@
},
);

// Send extrinsic and wait for it to be included in the block.
self.storage_hub_handler
.blockchain
.send_extrinsic(call, Tip::from(0))
@@ -319,7 +316,6 @@
},
);

// Send extrinsic and wait for it to be included in the block.
self.storage_hub_handler
.blockchain
.send_extrinsic(call, Tip::from(0))
@@ -354,7 +350,6 @@
},
);

// Send extrinsic and wait for it to be included in the block.
self.storage_hub_handler
.blockchain
.send_extrinsic(call, Tip::from(0))
@@ -712,6 +707,61 @@ where
location: event.location.to_vec(),
};

// Get the file key.
let file_key: FileKey = metadata
.file_key::<HashT<StorageProofsMerkleTrieLayout>>()
.as_ref()
.try_into()?;

let fs = match self
.storage_hub_handler
.forest_storage_handler
.get(&event.bucket_id.as_ref().to_vec())
.await
{
Some(fs) => fs,
None => {
self.storage_hub_handler
.forest_storage_handler
.insert(&event.bucket_id.as_ref().to_vec())
.await
}
};

let read_fs = fs.read().await;

// Reject the storage request if file key already exists in the forest storage.
if read_fs.contains_file_key(&file_key.into())? {
let err_msg = format!("File key {:?} already exists in forest storage.", file_key);
debug!(target: LOG_TARGET, "{}", err_msg);

// Reject the storage request.
let call = storage_hub_runtime::RuntimeCall::FileSystem(
pallet_file_system::Call::msp_respond_storage_requests_multiple_buckets {
file_key_responses_input: bounded_vec![(
event.bucket_id,
MspStorageRequestResponse {
accept: None,
reject: Some(bounded_vec![(
H256(file_key.into()),
RejectedStorageRequestReason::FileKeyAlreadyStored,
)])
}
)],
},
);

self.storage_hub_handler
.blockchain
.send_extrinsic(call, Tip::from(0))
.await?
.with_timeout(Duration::from_secs(60))
.watch_for_success(&self.storage_hub_handler.blockchain)
.await?;

return Ok(());
}

let available_capacity = self
.storage_hub_handler
.blockchain
@@ -824,15 +874,7 @@
let call = storage_hub_runtime::RuntimeCall::FileSystem(
pallet_file_system::Call::msp_respond_storage_requests_multiple_buckets {
file_key_responses_input: bounded_vec![(
- H256(metadata.bucket_id.try_into().map_err(|e| {
-   let err_msg =
-     format!("Failed to convert bucket ID to [u8; 32]: {:?}", e);
-   error!(
-     target: LOG_TARGET,
-     err_msg
-   );
-   anyhow::anyhow!(err_msg)
- })?),
+ event.bucket_id,
MspStorageRequestResponse {
accept: None,
reject: Some(bounded_vec![(
@@ -844,7 +886,6 @@
},
);

// Send extrinsic and wait for it to be included in the block.
self.storage_hub_handler
.blockchain
.send_extrinsic(call, Tip::from(0))
@@ -857,12 +898,6 @@
}
}

// Get the file key.
let file_key: FileKey = metadata
.file_key::<HashT<StorageProofsMerkleTrieLayout>>()
.as_ref()
.try_into()?;

self.file_key_cleanup = Some(file_key.into());

// Register the file for upload in the file transfer service.
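The new block in the MSP upload task above is the core of this PR: when a `NewStorageRequest` arrives, the task loads (or creates) the bucket's forest, and if the file key is already in it, it submits `msp_respond_storage_requests_multiple_buckets` rejecting the request with the new `FileKeyAlreadyStored` reason instead of storing the file again. A self-contained sketch of just that decision (placeholder types, not the actual pallet or client API):

```rust
use std::collections::HashSet;

/// Mirrors pallet_file_system::types::RejectedStorageRequestReason.
#[derive(Debug, PartialEq)]
enum RejectedStorageRequestReason {
    ReachedMaximumCapacity,
    ReceivedInvalidProof,
    FileKeyAlreadyStored,
    InternalError,
}

/// Illustrative response the MSP would submit for a single file key.
#[derive(Debug, PartialEq)]
enum MspResponse {
    Accept,
    Reject(RejectedStorageRequestReason),
}

/// Stand-in for one bucket's forest: just the set of file keys it holds.
struct Forest {
    file_keys: HashSet<[u8; 32]>,
}

impl Forest {
    fn contains_file_key(&self, key: &[u8; 32]) -> bool {
        self.file_keys.contains(key)
    }
}

/// Decide how to respond to a new storage request for `file_key`.
fn respond_to_storage_request(forest: &Forest, file_key: &[u8; 32]) -> MspResponse {
    if forest.contains_file_key(file_key) {
        // Already in the forest: reject instead of storing a duplicate.
        MspResponse::Reject(RejectedStorageRequestReason::FileKeyAlreadyStored)
    } else {
        MspResponse::Accept
    }
}

fn main() {
    let stored = [1u8; 32];
    let new_key = [2u8; 32];
    let forest = Forest { file_keys: HashSet::from([stored]) };

    assert_eq!(
        respond_to_storage_request(&forest, &stored),
        MspResponse::Reject(RejectedStorageRequestReason::FileKeyAlreadyStored)
    );
    assert_eq!(respond_to_storage_request(&forest, &new_key), MspResponse::Accept);
    println!("duplicate file keys are rejected with FileKeyAlreadyStored");
}
```

In the real task the accept path additionally depends on available capacity and later proof checks; this sketch isolates only the duplicate-key branch added here.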
1 change: 1 addition & 0 deletions pallets/file-system/src/types.rs
@@ -152,6 +152,7 @@ pub struct AcceptedStorageRequestParameters<T: Config> {
pub enum RejectedStorageRequestReason {
ReachedMaximumCapacity,
ReceivedInvalidProof,
FileKeyAlreadyStored,
InternalError,
}

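Since the runtime enum gains a variant, any exhaustive `match` on `RejectedStorageRequestReason` downstream now has to account for `FileKeyAlreadyStored`. A hedged sketch of how a caller might interpret it (the enum mirrors the diff above; the retry policy is purely illustrative):

```rust
/// Copied from pallets/file-system/src/types.rs (see diff above).
#[derive(Debug, Clone, Copy)]
enum RejectedStorageRequestReason {
    ReachedMaximumCapacity,
    ReceivedInvalidProof,
    FileKeyAlreadyStored,
    InternalError,
}

/// Illustrative: should the user retry the storage request with this MSP?
fn should_retry(reason: RejectedStorageRequestReason) -> bool {
    match reason {
        // The file is already stored by this MSP; retrying would only be
        // rejected again, so treat it as a terminal (and benign) outcome.
        RejectedStorageRequestReason::FileKeyAlreadyStored => false,
        // The MSP is full; retrying with the same MSP will not help either.
        RejectedStorageRequestReason::ReachedMaximumCapacity => false,
        // Possibly transient failures may be worth another attempt.
        RejectedStorageRequestReason::ReceivedInvalidProof
        | RejectedStorageRequestReason::InternalError => true,
    }
}

fn main() {
    let reason = RejectedStorageRequestReason::FileKeyAlreadyStored;
    println!("retry after {:?}? {}", reason, should_retry(reason));
}
```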
4 changes: 2 additions & 2 deletions test/scripts/fullNetBootstrap.ts
@@ -6,7 +6,7 @@ import {
type ToxicInfo
} from "../util";
import * as ShConsts from "../util/bspNet/consts";
- import { runFullNet } from "../util/fullNet/helpers";
+ import { runSimpleFullNet } from "../util/fullNet/helpers";

let api: EnrichedBspApi | undefined;
const fullNetConfig: BspNetConfig = {
@@ -21,7 +21,7 @@ const CONFIG = {
};

async function bootStrapNetwork() {
- await runFullNet(fullNetConfig);
+ await runSimpleFullNet(fullNetConfig);

if (fullNetConfig.noisy) {
// For more info on the kind of toxics you can register,