Add retry mechanics to pallet-scheduler #3060

Merged
44 commits merged into master from retry-schedule on Feb 16, 2024

Changes from 5 commits

Commits (44)
328d0cd
Add option to retry task in scheduler
georgepisaltu Jan 23, 2024
6f56d9f
Add unit tests for retry scheduler
georgepisaltu Jan 23, 2024
22800c5
Add benchmarks for retry scheduler
georgepisaltu Jan 24, 2024
9315c2e
Add some docs to retry functions
georgepisaltu Jan 25, 2024
5bbd31e
Remove redundant clone
georgepisaltu Jan 25, 2024
3eb8016
Add real weights to scheduler pallet
georgepisaltu Jan 25, 2024
9ecae46
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Jan 25, 2024
fcff15c
".git/.scripts/commands/bench-all/bench-all.sh" --pallet=pallet_sched…
Jan 25, 2024
a8fd732
".git/.scripts/commands/bench-all/bench-all.sh" --pallet=pallet_sched…
Jan 25, 2024
3e6e77e
Merge branch 'master' of https://github.com/paritytech/polkadot-sdk i…
Jan 25, 2024
9120d60
".git/.scripts/commands/bench-all/bench-all.sh" --pallet=pallet_sched…
Jan 25, 2024
4167d6f
".git/.scripts/commands/bench-all/bench-all.sh" --pallet=pallet_sched…
Jan 25, 2024
816906a
Use `TaskAddress` in `set_retry`
georgepisaltu Jan 26, 2024
8a28ea5
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Jan 26, 2024
900d2d5
Add prdoc
georgepisaltu Jan 26, 2024
05cbdfc
Refactor agenda query in `set_retry`
georgepisaltu Jan 26, 2024
5fc62f4
Minor renames and fixes
georgepisaltu Jan 26, 2024
5e305e1
Refactor `schedule_retry` return type
georgepisaltu Jan 26, 2024
5943dcc
Implement `ensure_privilege`
georgepisaltu Jan 26, 2024
690f2b8
Add event for setting retry config
georgepisaltu Jan 26, 2024
4a81417
Make retry fail if insufficient weight
georgepisaltu Jan 29, 2024
438effb
Remove redundant weight parameter in `set_retry`
georgepisaltu Jan 29, 2024
d048760
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Jan 29, 2024
3c2b540
Add test for dropping insufficient weight retry
georgepisaltu Jan 29, 2024
7a39a69
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Jan 29, 2024
2e8f954
Clean up retry config on cancel
georgepisaltu Feb 1, 2024
7dfb517
Small refactor
georgepisaltu Feb 1, 2024
2e26707
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Feb 1, 2024
40b567d
Add docs to retry config map
georgepisaltu Feb 1, 2024
2b35465
Add retry count to `RetrySet` event
georgepisaltu Feb 2, 2024
a43df52
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Feb 2, 2024
9da58c6
Make retries independent of periodic runs
georgepisaltu Feb 7, 2024
2976dac
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Feb 7, 2024
8ce66f7
Small refactoring
georgepisaltu Feb 13, 2024
dc9ef2e
Add `cancel_retry` extrinsics
georgepisaltu Feb 13, 2024
945a095
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Feb 13, 2024
39eb209
Add e2e unit test for retry schedule
georgepisaltu Feb 14, 2024
2290240
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Feb 14, 2024
50a2010
Simplify `schedule_retry`
georgepisaltu Feb 15, 2024
e9cc27e
Add docs for `as_retry`
georgepisaltu Feb 15, 2024
497100c
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Feb 15, 2024
863bec7
Update doc comments for `set_retry`
georgepisaltu Feb 16, 2024
7a16648
Merge remote-tracking branch 'upstream/master' into retry-schedule
georgepisaltu Feb 16, 2024
b72da0c
Move common logic under `do_cancel_retry`
georgepisaltu Feb 16, 2024
@@ -203,4 +203,28 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
.saturating_add(T::DbWeight::get().reads(2))
.saturating_add(T::DbWeight::get().writes(2))
}
/// Dummy impl
fn check_retry(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Dummy impl
fn set_retry(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Dummy impl
fn set_retry_named(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
}
24 changes: 24 additions & 0 deletions polkadot/runtime/rococo/src/weights/pallet_scheduler.rs
@@ -201,4 +201,28 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
.saturating_add(T::DbWeight::get().reads(2))
.saturating_add(T::DbWeight::get().writes(2))
}
/// Dummy impl
fn check_retry(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Dummy impl
fn set_retry(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Dummy impl
fn set_retry_named(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
}
24 changes: 24 additions & 0 deletions polkadot/runtime/westend/src/weights/pallet_scheduler.rs
@@ -204,4 +204,28 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
.saturating_add(T::DbWeight::get().reads(2))
.saturating_add(T::DbWeight::get().writes(2))
}
/// Dummy impl
fn check_retry(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Dummy impl
fn set_retry(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Dummy impl
fn set_retry_named(s: u32, ) -> Weight {
Weight::from_parts(2_963_000, 0)
.saturating_add(Weight::from_parts(0, 159279))
.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
}
54 changes: 54 additions & 0 deletions substrate/frame/scheduler/src/benchmarking.rs
@@ -306,5 +306,59 @@ benchmarks! {
);
}

check_retry {
let s in 1 .. T::MaxScheduledPerBlock::get();
let when = BLOCK_NUMBER.into();

fill_schedule::<T>(when, s)?;
let name = u32_to_name(s - 1);
let address = Lookup::<T>::get(name).unwrap();
let period: BlockNumberFor<T> = 1u32.into();
let root: <T as Config>::PalletsOrigin = frame_system::RawOrigin::Root.into();
Retries::<T>::insert(address, RetryConfig { total_retries: 10, remaining: 10, period });
let (mut when, index) = address;
let task = Agenda::<T>::get(when)[index as usize].clone().unwrap();
}: {
Scheduler::<T>::check_retry(when, when, index, task);
} verify {
when = when + BlockNumberFor::<T>::one();
assert_eq!(
Retries::<T>::get((when, 0)),
Some(RetryConfig { total_retries: 10, remaining: 9, period })
);
}

set_retry {
let s in 1 .. T::MaxScheduledPerBlock::get();
let when = BLOCK_NUMBER.into();

fill_schedule::<T>(when, s)?;
let name = u32_to_name(s - 1);
let address = Lookup::<T>::get(name).unwrap();
let (when, index) = address;
}: _(RawOrigin::Root, when, index, 10, BlockNumberFor::<T>::one())
verify {
assert_eq!(
Retries::<T>::get((when, index)),
Some(RetryConfig { total_retries: 10, remaining: 10, period: BlockNumberFor::<T>::one() })
);
}

set_retry_named {
let s in 1 .. T::MaxScheduledPerBlock::get();
let when = BLOCK_NUMBER.into();

fill_schedule::<T>(when, s)?;
let name = u32_to_name(s - 1);
let address = Lookup::<T>::get(name).unwrap();
let (when, index) = address;
}: _(RawOrigin::Root, name, 10, BlockNumberFor::<T>::one())
verify {
assert_eq!(
Retries::<T>::get((when, index)),
Some(RetryConfig { total_retries: 10, remaining: 10, period: BlockNumberFor::<T>::one() })
);
}

impl_benchmark_test_suite!(Scheduler, crate::mock::new_test_ext(), crate::mock::Test);
}
179 changes: 177 additions & 2 deletions substrate/frame/scheduler/src/lib.rs
@@ -122,6 +122,18 @@ pub type CallOrHashOf<T> =
pub type BoundedCallOf<T> =
Bounded<<T as Config>::RuntimeCall, <T as frame_system::Config>::Hashing>;

/// The configuration of the retry mechanism for a given task along with its current state.
#[cfg_attr(any(feature = "std", test), derive(PartialEq, Eq))]
#[derive(Clone, RuntimeDebug, Encode, Decode, MaxEncodedLen, TypeInfo)]
pub struct RetryConfig<Period> {
/// Initial amount of retries allowed.
total_retries: u8,
/// Amount of retries left.
remaining: u8,
/// Period of time between retry attempts.
period: Period,
}
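For intuition, a sketch with illustrative values: a task allowed to be re-attempted up to three times, ten blocks apart, would carry the following configuration when first set (values are hypothetical, not taken from this PR):

```rust
// Illustrative values only: allow up to 3 retries, spaced 10 blocks apart.
// `remaining` starts equal to `total_retries`, is decremented on every failed
// attempt, and is reset back to `total_retries` after a successful run.
let config: RetryConfig<u32> = RetryConfig { total_retries: 3, remaining: 3, period: 10 };
```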

#[cfg_attr(any(feature = "std", test), derive(PartialEq, Eq))]
#[derive(Clone, RuntimeDebug, Encode, Decode)]
struct ScheduledV1<Call, BlockNumber> {
@@ -273,6 +285,15 @@ pub mod pallet {
ValueQuery,
>;

#[pallet::storage]
pub type Retries<T: Config> = StorageMap<
_,
Blake2_128Concat,
TaskAddress<BlockNumberFor<T>>,
RetryConfig<BlockNumberFor<T>>,
OptionQuery,
>;

/// Lookup from a name to the block number and index of the task.
///
/// For v3 -> v4 the previously unbounded identities are Blake2-256 hashed to form the v4
@@ -299,6 +320,8 @@ pub mod pallet {
CallUnavailable { task: TaskAddress<BlockNumberFor<T>>, id: Option<TaskName> },
/// The given task was unable to be renewed since the agenda is full at that block.
PeriodicFailed { task: TaskAddress<BlockNumberFor<T>>, id: Option<TaskName> },
/// The given task was unable to be retried since the agenda is full at that block.
RetryFailed { task: TaskAddress<BlockNumberFor<T>>, id: Option<TaskName> },
/// The given task can never be executed since it is overweight.
PermanentlyOverweight { task: TaskAddress<BlockNumberFor<T>>, id: Option<TaskName> },
}
@@ -440,6 +463,86 @@
)?;
Ok(())
}

/// Set a retry configuration for a task so that, in case its scheduled run fails, it will
/// be retried after `period` blocks, for a total amount of `retries` retries or until it
/// succeeds.
///
/// If a retried task runs successfully before running out of retries, its remaining retry
/// counter will be reset to the initial value. If a retried task runs out of retries, it
/// will be removed from the schedule.
///
/// Tasks which need to be scheduled for a retry are still subject to weight metering and
/// agenda space, same as a regular task. Periodic tasks will have their periodic schedule
/// put on hold while the task is retrying.
#[pallet::call_index(6)]
#[pallet::weight(<T as Config>::WeightInfo::set_retry(T::MaxScheduledPerBlock::get()))]
pub fn set_retry(
origin: OriginFor<T>,
when: BlockNumberFor<T>,
index: u32,
retries: u8,
period: BlockNumberFor<T>,
) -> DispatchResult {
T::ScheduleOrigin::ensure_origin(origin.clone())?;
let origin = <T as Config>::RuntimeOrigin::from(origin);
let agenda = Agenda::<T>::get(when);
match agenda.get(index as usize) {
Some(Some(task)) =>
if matches!(
T::OriginPrivilegeCmp::cmp_privilege(origin.caller(), &task.origin),
Some(Ordering::Less) | None
) {
return Err(BadOrigin.into())
},
_ => return Err(Error::<T>::NotFound.into()),
}
Retries::<T>::insert(
(when, index),
RetryConfig { total_retries: retries, remaining: retries, period },
);
Ok(())
}

/// Set a retry configuration for a named task so that, in case its scheduled run fails, it
/// will be retried after `period` blocks, for a total amount of `retries` retries or until
/// it succeeds.
///
/// If a retried task runs successfully before running out of retries, its remaining retry
/// counter will be reset to the initial value. If a retried task runs out of retries, it
/// will be removed from the schedule.
///
/// Tasks which need to be scheduled for a retry are still subject to weight metering and
/// agenda space, same as a regular task. Periodic tasks will have their periodic schedule
/// put on hold while the task is retrying.
#[pallet::call_index(7)]
#[pallet::weight(<T as Config>::WeightInfo::set_retry_named(T::MaxScheduledPerBlock::get()))]
pub fn set_retry_named(
Review comment (Member): The way to remove a retry config is just to set it to 0, or?

Reply (Contributor, Author): Yes. We could add 2 more extrinsics to remove a retry config, or we could make period: Option<BlockNumberFor<T>> and unset it in the case of None. I like it how it is now, but I don't feel strongly about this, WDYT?

Reply (Member): I'm also fine with both 🤷‍♂️

Reply (Contributor): Maybe it's just worth mentioning in the doc how it can be removed, and making sure we have a unit test for this removal.

Reply (Contributor, Author): Will do.

origin: OriginFor<T>,
id: TaskName,
retries: u8,
period: BlockNumberFor<T>,
) -> DispatchResult {
T::ScheduleOrigin::ensure_origin(origin.clone())?;
let origin = <T as Config>::RuntimeOrigin::from(origin);
let (when, agenda_index) = Lookup::<T>::get(&id).ok_or(Error::<T>::NotFound)?;
let agenda = Agenda::<T>::get(when);
match agenda.get(agenda_index as usize) {
Some(Some(task)) =>
if matches!(
T::OriginPrivilegeCmp::cmp_privilege(origin.caller(), &task.origin),
Some(Ordering::Less) | None
) {
return Err(BadOrigin.into())
},
_ => return Err(Error::<T>::NotFound.into()),
}
Retries::<T>::insert(
(when, agenda_index),
RetryConfig { total_retries: retries, remaining: retries, period },
);
Ok(())
}
}
}
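To make the intended flow concrete, here is a minimal usage sketch loosely modelled on the pallet's existing unit tests; `new_test_ext`, `root`, the `Logger` test pallet and all concrete values are assumptions borrowed from the test mock, not part of this diff. Note that the `set_retry` signature used below matches this revision of the PR; a later commit (816906a) changes it to take a `TaskAddress` instead of separate `when`/`index` arguments.

```rust
new_test_ext().execute_with(|| {
	// Schedule a task for block 4 (illustrative values throughout).
	let call =
		RuntimeCall::Logger(LoggerCall::log { i: 42, weight: Weight::from_parts(10, 0) });
	assert_ok!(Scheduler::do_schedule(
		DispatchTime::At(4),
		None,
		127,
		root(),
		Preimage::bound(call).unwrap(),
	));
	// Allow the task at agenda address (4, 0) to be retried up to 3 times,
	// waiting 2 blocks between attempts.
	assert_ok!(Scheduler::set_retry(RuntimeOrigin::root(), 4, 0, 3, 2));
	assert_eq!(
		Retries::<Test>::get((4, 0)),
		Some(RetryConfig { total_retries: 3, remaining: 3, period: 2 })
	);
});
```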

@@ -1089,7 +1192,7 @@ impl<T: Config> Pallet<T> {
when: BlockNumberFor<T>,
agenda_index: u32,
is_first: bool,
mut task: ScheduledOf<T>,
task: ScheduledOf<T>,
) -> Result<(), (ServiceTaskError, Option<ScheduledOf<T>>)> {
if let Some(ref id) = task.maybe_id {
Lookup::<T>::remove(id);
@@ -1124,11 +1227,24 @@ impl<T: Config> Pallet<T> {
},
Err(()) => Err((Overweight, Some(task))),
Review comment (Contributor): In this and the Err arm above you abandon the retry entry if there is any. I would move the retry handling one level above; for that you just need to add, for the Ok variant, an indication of whether it failed or not. Or at least take it before execute_dispatch. Please write tests for this.

Reply (Contributor, Author):

> in this and the Err arm above you abandon the retry entry if there is any

The Err arms above deal with completely different problems:

  • Err(Unavailable) is a task which may never be able to run and it may never even be attempted again
  • Err(Overweight) is a task which couldn't run because the scheduler ran out of weight, but it will be run the next time the scheduler services agendas; the scheduler runs agendas from previous blocks if they didn't execute completely, in order of block number from earliest up to the present

This means that an overweight task is not abandoned and will be run again soon, but also that there is absolutely nothing we can do about it. There is no fallback for the current block if we run out of weight. This is the scheduler's behavior before this PR and I believe the retry mechanism has nothing to do with it.

> I would move the retry handling one level above, for this you just need to add for the Ok variant an info if it failed or not.

Most of the logic is common to failed and successful tasks; there would be a lot of copied code. But if you just want to refactor this, I could move the common logic out of the match.

> or at least take it before execute_dispatch

Tasks are scheduled for retries if they actually run but fail in their execution. By definition, there is no way to handle this before running execute_dispatch.

Reply (Contributor): I thought that the task is removed if Unavailable, but now I see that we keep it in storage. This is why I proposed to remove the retry as well. And I also see now that Overweight does not change its address.
Ok(result) => {
let failed = result.is_err();
Self::deposit_event(Event::Dispatched {
task: (when, agenda_index),
id: task.maybe_id,
result,
});

let mut task = if failed {
let _ = weight
.try_consume(T::WeightInfo::check_retry(T::MaxScheduledPerBlock::get()));
match Self::check_retry(now, when, agenda_index, task) {
Some(task) => task,
None => return Ok(()),
}
} else {
task
};

if let &Some((period, count)) = &task.maybe_periodic {
if count > 1 {
task.maybe_periodic = Some((period, count - 1));
@@ -1137,18 +1253,31 @@ impl<T: Config> Pallet<T> {
}
let wake = now.saturating_add(period);
match Self::place_task(wake, task) {
Ok(_) => {},
Ok(new_address) => {
if let Some(RetryConfig { total_retries, remaining, period }) =
Retries::<T>::take((when, agenda_index))
Review comment (Contributor): If the current iteration of a periodic task failed and a retry was scheduled, isn't this always None?

In the description you mention that the next periodic run is placed on hold. What about just letting the user decide? If a user does not want them to overlap, they can control it with the number of retries and the period. A user might not want their periodic tasks to be placed on hold because of a retry; to me, a retry has lower priority. Also, after all retry attempts it can be too late to schedule the next periodic iteration, and I think in that case we lose the periodic task. Without the hold, the logic looks more straightforward to me. As I mentioned in the issue, I think users should assess the possibility of overlap between a periodic task and retries on their own.

Reply (Contributor, Author):

> if current iteration for periodic failed and retry scheduled, is not this always none?

If that happens, the function exits early here. But there is a corner case where there isn't enough weight to do the retry logic, and in that case retries are ignored, test here.

Reply (Contributor, Author): Regarding the rest of your comment, I can see why you'd want the period and retry to work independently. I'm OK with making it do this:

  • if a task fails, try to schedule a retry if a retry config is set
  • if the task is periodic, try to schedule the next iteration

I think if we make this behavior configurable (i.e. let users decide which behaviour they want), it would get way too complicated and it's not worth it. I think it's better if it's simpler.

> I think users should assess the possibility of overlapping between a periodic task and retries on their own

I was trying to avoid this because I saw it as a footgun, but I think you're right. If users set the configurations correctly, it shouldn't be an issue.

Reply (Contributor, Author): Proposed changes in 9da58c6.

{
// Reset the counter if the task ran successfully.
let remaining = if failed { remaining } else { total_retries };
Retries::<T>::insert(
new_address,
RetryConfig { total_retries, remaining, period },
);
}
},
Err((_, task)) => {
// TODO: Leave task in storage somewhere for it to be rescheduled
// manually.
T::Preimages::drop(&task.call);
let _ = Retries::<T>::take((when, agenda_index));
Self::deposit_event(Event::PeriodicFailed {
task: (when, agenda_index),
id: task.maybe_id,
});
},
}
} else {
let _ = Retries::<T>::take((when, agenda_index));
T::Preimages::drop(&task.call);
}
Ok(())
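A sketch of the counter-reset behaviour in the periodic branch above, under the same mock-runtime assumptions as the earlier example: when a periodic task that has a retry configuration runs successfully, the configuration follows the task to its next scheduled block and the `remaining` counter is restored to `total_retries`.

```rust
// Assumed precondition (illustrative): a periodic task at address (4, 0) with a
// 5-block period and Retries == Some(RetryConfig { total_retries: 3, remaining: 1, period: 2 }).
run_to_block(4); // the task runs at block 4 and succeeds this time
// The retry config was re-keyed under the next periodic occurrence, with the
// remaining attempts reset because the run succeeded.
assert_eq!(
	Retries::<Test>::get((9, 0)),
	Some(RetryConfig { total_retries: 3, remaining: 3, period: 2 })
);
```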
Expand Down Expand Up @@ -1192,6 +1321,52 @@ impl<T: Config> Pallet<T> {
let _ = weight.try_consume(call_weight);
Ok(result)
}

/// Check if a task has a retry configuration in place and, if so, try to reschedule it.
///
/// If the task was successfully rescheduled for later, this function will return `None`.
/// Otherwise, if the task was not rescheduled, either because there was no retry configuration
/// in place, there were no more retry attempts left, or the agenda was full, this function
/// will return the task so it can be handled somewhere else.
fn check_retry(
now: BlockNumberFor<T>,
when: BlockNumberFor<T>,
agenda_index: u32,
task: ScheduledOf<T>,
) -> Option<ScheduledOf<T>> {
// Check if we have a retry configuration set.
match Retries::<T>::take((when, agenda_index)) {
Some(RetryConfig { total_retries, mut remaining, period }) => {
remaining = match remaining.checked_sub(1) {
Some(n) => n,
None => return Some(task),
};
let wake = now.saturating_add(period);
match Self::place_task(wake, task) {
Ok(address) => {
// Reinsert the retry config to the new address of the task after it was
// placed.
Retries::<T>::insert(
address,
RetryConfig { total_retries, remaining, period },
);
None
},
Err((_, task)) => {
// TODO: Leave task in storage somewhere for it to be
// rescheduled manually.
T::Preimages::drop(&task.call);
Self::deposit_event(Event::RetryFailed {
task: (when, agenda_index),
id: task.maybe_id,
});
Some(task)
},
}
},
None => Some(task),
}
}
}
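Conversely, a sketch of the failure path handled by `check_retry`, again assuming the mock runtime and a non-periodic task already placed at address (4, 0) whose call fails when dispatched, with a retry configuration of 5 attempts spaced 3 blocks apart:

```rust
// Assumed precondition (illustrative): a task at (4, 0) with
// Retries == Some(RetryConfig { total_retries: 5, remaining: 5, period: 3 }).
run_to_block(4); // the task runs at block 4 and its call fails
// `check_retry` consumed one attempt and rescheduled the task 3 blocks later,
// moving the retry config to the task's new address.
assert_eq!(
	Retries::<Test>::get((7, 0)),
	Some(RetryConfig { total_retries: 5, remaining: 4, period: 3 })
);
assert!(Retries::<Test>::get((4, 0)).is_none());
```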

impl<T: Config> schedule::v2::Anon<BlockNumberFor<T>, <T as Config>::RuntimeCall, T::PalletsOrigin>