
Documented BinaryHeap performance. #59698

Closed

Conversation

DevQps
Contributor

@DevQps DevQps commented Apr 4, 2019

Description

As described in #47976, the performance of BinaryHeap was not yet documented. This PR adds a line specifying the performance of the push, pop, and peek operations of BinaryHeap.

I deliberately did not add it to the std::collections page, because BinaryHeap does not have operations such as insert and remove. I was unsure whether I should create a new "queues" section: the only two queues present in std::collections are VecDeque and BinaryHeap, and VecDeque can append at both ends of the queue, which makes it harder to create a consistent table. That's why I took this approach in the end.
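For reference, the three operations whose costs the PR documents behave like this (a minimal usage sketch, not part of the PR diff):

```rust
use std::collections::BinaryHeap;

fn main() {
    // BinaryHeap is a max-heap: `peek`/`pop` return the largest element.
    let mut heap = BinaryHeap::new();
    heap.push(3); // amortized O(log(n))
    heap.push(1);
    heap.push(5);

    assert_eq!(heap.peek(), Some(&5)); // O(1): just reads the root
    assert_eq!(heap.pop(), Some(5));   // O(log(n)): sifts down to restore the heap
    assert_eq!(heap.pop(), Some(3));
    assert_eq!(heap.pop(), Some(1));
    assert_eq!(heap.pop(), None);
    println!("ok");
}
```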

@steveklabnik What do you think about this? Or do you think it would be better to ping someone else for this?

closes #47976

@rust-highfive
Collaborator

r? @dtolnay

(rust_highfive has picked a reviewer for you, use r? to override)

@rust-highfive rust-highfive added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Apr 4, 2019
@@ -165,6 +165,9 @@ use super::SpecExtend;
/// trait, changes while it is in the heap. This is normally only possible
/// through `Cell`, `RefCell`, global state, I/O, or unsafe code.
///
/// Both `push` and `pop` operations can be performed in `O(log(n))` time, whereas `peek` can be
Contributor

@tesuji tesuji Apr 4, 2019


This line should break at the `,` to conform with the surrounding documentation's style.

Contributor Author


Thanks man! Fixed it!

@@ -165,6 +165,9 @@ use super::SpecExtend;
/// trait, changes while it is in the heap. This is normally only possible
/// through `Cell`, `RefCell`, global state, I/O, or unsafe code.
///
/// Both `push` and `pop` operations can be performed in `O(log(n))` time,
Member

@dtolnay dtolnay Apr 5, 2019


I think this would need to call out that the O given for push is an amortized cost, similar to how this is emphasized in the std::collections doc. It is not true in general that push is O(log(n)).

Contributor Author


Thanks for your comment! I've read through std::collections once more, and it basically means that push can re-allocate and would therefore be O(n), right? What would you think about a change like this?:

The amortized cost of a push operation is O(log(n)). When the buffer cannot hold more elements a resize costs O(n), since all elements have to be copied to a new memory region. pop operations never reallocate and can be performed in O(log(n)) time, whereas peek can be performed in O(1) time.

Member


Sure, that would work. I would maybe de-emphasize the O(n) statement slightly because that figure would rarely be relevant to selecting a data structure. Notice how std::collections treats it almost like a sidenote explaining what amortized cost refers to, as separate from the more meaningful comparable quantities in the tables.

Member


To clarify, in case my two comments look like they contradict each other in terms of what is worth emphasizing:

The fact that O(log(n)) is an amortized cost rather than worst case is important (which is why std::collections places * markers directly alongside the important data) but the O(n) worst case time may be treated as a less important detail (which is why std::collections has that only in a sidenote).

Contributor Author


@dtolnay I changed the PR! Hopefully, this is a bit better :)

/// The costs of `push` and `pop` operations are `O(log(n))` whereas `peek`
/// can be performed in `O(1)` time. Note that the cost of a `push`
/// operation is an amortized cost which does not take into account potential
/// re-allocations when the current buffer cannot hold more elements.
Member


I think I understand what this is saying but I find it somewhat misleading as written. The amortized cost absolutely does take into account the reallocations. Including all the time spent reallocating and copying, the amortized cost is O(log(n)).

There are various ways to analyze this. See https://en.wikipedia.org/wiki/Potential_method#Dynamic_array for one approach. I believe our BinaryHeap push does the O(1) amortized amount of work described in the link plus a O(log(n)) worst case amount of work to maintain the binary heap property.
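The potential-method argument can also be seen numerically. The following toy model of a doubling dynamic array (the `total_copies` helper is hypothetical, not std code) counts every element copy that n pushes cause; the total stays below 2n, so the copying work is O(1) amortized per push:

```rust
// Toy model of a doubling growth strategy: count how many element
// copies n pushes cause in total across all reallocations.
fn total_copies(n: usize) -> usize {
    let mut cap = 1;
    let mut len = 0;
    let mut copies = 0;
    for _ in 0..n {
        if len == cap {
            copies += len; // a reallocation copies all `len` live elements
            cap *= 2;
        }
        len += 1;
    }
    copies
}

fn main() {
    // Total copy work is 1 + 2 + 4 + ... < 2n, i.e. O(1) amortized per push.
    for &n in &[10usize, 1_000, 1_000_000] {
        assert!(total_copies(n) < 2 * n);
    }
    println!("ok");
}
```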

Contributor Author

@DevQps DevQps Apr 10, 2019


@dtolnay Just to be sure I get it right this time:

Amortized costs are like "the average costs of a function" if uncle Google didn't lie to me :)
So that makes me deduce these things:

  • pop and peek are always constant since they do not perform any reallocation.
  • push does perform reallocation, but only once every while. So could you say that the average cost is estimated as O(1) since it's so little?
  • I would suspect that the worst case scenario is O(N) because the entire buffer with N elements needs to be copied to a new memory region. Could you explain why you believe it is O(log(n))? The link that you shared says this:

Combining this with the inequality relating amortized time and actual time over sequences of operations, this shows that any sequence of n dynamic array operations takes O(n) actual time in the worst case

Allocating a new internal array A and copying all of the values from the old internal array to the new one takes O(n) actual time

The big O notation and complexity is not really my turf so I am glad you're here :)

Btw, if you feel it might just be easier to write a few sentences yourself, feel free to do so and I will add them to this merge request (Y).

Member


Amortized costs are like "the average costs of a function"

In some sense, although saying it this way is ambiguous between whether it is an average over all possible inputs to the same call (which std::collections calls "expected cost" when averaging over hash functions and hashed values) or an average over a sequence of calls (which is "amortized cost").

I would recommend thinking of amortized cost as a worst case cost per call of a large number of calls.

  • pop and peek are always constant since they do not perform any reallocation.

Reallocation is not the only cost. Pop does O(log(n)) work to preserve the binary heap "shape property" and "heap property": https://en.wikipedia.org/wiki/Binary_heap

  • push does perform reallocation, but only once every while. So could you say that the average cost is estimated as O(1) since it's so little?

It isn't an estimate, and "since it's so little" isn't really the reason. The previous link explains how to show formally that the worst case cost of many calls is O(1) per call.

  • I would suspect that the worst case scenario is O(N) because the entire buffer with N elements need to be copied to a new memory region. Could you explain why you believe it is O(log(n))?

The part you quoted from the link says that n array insertions take O(n) time so the amortized time is O(1) each. Binary heap does some more work beyond that to maintain "shape property" and "heap property" which takes O(log(n)) time in the worst case. Adding these up, the amortized cost is O(log(n)).
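The O(log(n)) sift component can be illustrated with a toy sift-up on a Vec-backed max-heap (a sketch for intuition, not the std implementation; `push_counting_swaps` is a hypothetical helper). No single push ever swaps more than the tree height, which is floor(log2(n)):

```rust
// Sift-up on a Vec-backed max-heap, counting swaps: a push does at most
// `height of the tree` swaps, i.e. O(log(n)) work beyond the array append.
fn push_counting_swaps(heap: &mut Vec<i32>, x: i32) -> usize {
    heap.push(x);
    let mut i = heap.len() - 1;
    let mut swaps = 0;
    while i > 0 {
        let parent = (i - 1) / 2;
        if heap[parent] >= heap[i] {
            break; // heap property already holds
        }
        heap.swap(parent, i);
        i = parent;
        swaps += 1;
    }
    swaps
}

fn main() {
    let mut heap = Vec::new();
    let mut max_swaps = 0;
    // Ascending pushes are the worst case: every new element sifts to the root.
    for x in 0..1023 {
        max_swaps = max_swaps.max(push_counting_swaps(&mut heap, x));
    }
    // With 1023 elements the tree height is 9, so no push needs more swaps.
    assert_eq!(max_swaps, 9);
    println!("ok");
}
```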

Contributor Author


@dtolnay Thanks for your response, and sorry for my late reply. I read through the article and I think I am slowly starting to understand what it means. Because it's O(n) in total for n insertions, it's O(n)/n, aka O(1) per insertion.

So technically speaking I can say it like this:

  • peek: always O(1)
  • push: amortized O(log(n)), because it calls sift_up to maintain the 'sorted' Binary Heap property.
  • pop: O(log(n)), because it calls sift_down_to_bottom to maintain the 'sorted' Binary Heap property.

I rephrased the description! Hopefully, it's good this time. If you don't agree, could you maybe make a suggestion for a description?

Contributor Author


@dtolnay I hope you still have time to respond to my previous comment!

Co-Authored-By: DevQps <46896178+DevQps@users.noreply.github.com>
@Mark-Simulacrum
Member

Visiting for triage -- @dtolnay, looks like this is waiting on a review from you.

Centril added a commit to Centril/rust that referenced this pull request May 20, 2019
Document BinaryHeap time complexity

I went into some detail on the time complexity of `push` because it is relevant for using BinaryHeap efficiently -- specifically that you should avoid pushing many elements in ascending order when possible.

r? @Amanieu
Closes rust-lang#47976. Closes rust-lang#59698.
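A sketch of the advice in that commit message: pushing an ascending sequence hits the sift-up worst case on every push, while building the heap in one shot uses heapify, which is O(n) overall. Both produce the same heap contents:

```rust
use std::collections::BinaryHeap;

fn main() {
    // Worst case for repeated `push`: each new element is the maximum,
    // so every push sifts it all the way to the root, Θ(log(n)) each time.
    let mut pushed = BinaryHeap::new();
    for x in 0..1_000 {
        pushed.push(x);
    }

    // Building from an iterator goes through heapify, which is O(n) total.
    let built: BinaryHeap<i32> = (0..1_000).collect();

    // The resulting heaps contain the same elements either way.
    assert_eq!(pushed.into_sorted_vec(), built.into_sorted_vec());
    println!("ok");
}
```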
@bors bors closed this in #60952 May 21, 2019
Successfully merging this pull request may close these issues.

Collections documentation could mention performance of BinaryHeap impl