Refactor getMin & getMax #9
base: master
Conversation
-AvlTree.prototype.getMin = function (node) {
+AvlTree.prototype.getMin = function () {
+	return this._getMin(this._root);
Hi, thanks for the PR! Sorry for the late reply.
I think these public functions should return the element, not the node. The idea is to avoid exposing the nodes in the public API. See how AvlTree.prototype.deleteMax returns the element.
You are right! I will do that tomorrow 👍
I've done that already :)
Putting the .element in the private function _getMin means that _getMin has two possible return types, a node or a number. It is better for a function to have only one return type, so it would be better to put it in getMin.
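For illustration, here is a minimal sketch of that split, assuming node objects with left and element properties as they appear elsewhere in this thread (the actual code in the PR may differ): the private _getMin always returns a node, and only the public getMin unwraps .element.

// Private helper: walks the left spine and always returns a node
// (assumes a non-empty subtree).
AvlTree.prototype._getMin = function (node) {
	if (!node.left) {
		return node;
	}
	return this._getMin(node.left);
};

// Public API: starts at the root and exposes only the element, never the node.
AvlTree.prototype.getMin = function () {
	return this._getMin(this._root).element;
};

The same shape works for getMax/_getMax, following node.right instead of node.left.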
 	return node.element;
 }
-	return this.getMin(node.left);
+	return this._getMin(node.left);
Not scared of stack overflows? :) To be honest, I don't know how serious a threat that would be, but keep in mind that JavaScript today does not support tail call optimization.
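For reference, a plain loop sidesteps the missing tail call optimization entirely; this is only a sketch of an iterative _getMin, not something this PR does:

// Iterative variant: stack depth stays constant regardless of tree size
// (assumes a non-empty subtree, like the recursive version).
AvlTree.prototype._getMin = function (node) {
	while (node.left) {
		node = node.left;
	}
	return node;
};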
I often use recursive functions, and in a case like this I don't think it can cause problems.
As the tree grows, the maximum number of recursive calls is roughly log2(n), where n is the total number of elements in the tree. Assume 48 bytes for a stack frame plus 4 bytes for the reference to the node, but call it 64 bytes for simplicity. With 1 MiB of stack space, that gives 2^20 / 2^6 = 2^14 = 16384 calls until overflow. Since the number of calls getMax makes is limited to log2(n), set log2(n) = 16384 and solve for n, which gives 2^16384 items.
This number is so large your computer will run out of memory long before getting anywhere near that many items in the tree. For reference, a billion items would take only about 30 calls.
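A quick way to sanity-check those figures (the depths are approximate, since an AVL tree's height can exceed log2(n) by a small constant factor):

// Approximate recursion depth for n elements: about log2(n) calls.
console.log(Math.ceil(Math.log2(1e9)));       // 30 -> a billion items, ~30 calls
console.log(Math.ceil(Math.log2(29000000)));  // 25 -> the 29M test below, ~25 calls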
Yep, I did some tests:

var Avltree = require('avltree-js');
var assert = require('assert');
var tree = new Avltree();

const maxIterations = 29000000;
for (let i = 0; i < maxIterations; i++) {
	tree.insert(i);
}
var node = tree.getMax();
assert.strictEqual(node, maxIterations - 1);

For 29M iterations it takes 35 seconds, and it's a stack overflow at 30M.
Using the code below, I was able to run with 30M entries. At 40M, the process runs out of memory, which is interesting because it was only around 2 GB when it ran out; if it were a 64-bit process I should be able to hold a great deal more than that. Not sure why that is? @ronkorving
Anyway, as far as I can tell your test demonstrates an out-of-memory error in insert(), not a stack overflow in getMax().
var Avltree = require('avltree-js');
var tree = new Avltree();

const maxIterations = 40000000;
for (let i = 0; i < maxIterations; i++) {
	tree.insert(i);
}
console.log('finished adding');

var num = tree.getMax();
console.log('done', num);
C:\dev\avltest>node .
finished adding
done 29999999
C:\dev\avltest>node .
Fatal error in , line 0
API fatal error handler returned after process out of memory
C:\dev\avltest>
@Tatsujinichi How many bytes are you assuming it holds per entry+overhead? Also, V8 has/had a hard limit, depending on the version you're on.
Ah, yeah, I suppose that would explain it.
I am assuming 64 bytes per stack frame.
Hmm, that's interesting. I'll have to take a look. Busy with something else atm though.
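(A side note not raised in the thread: both limits can be adjusted when launching Node via V8 options, --max-old-space-size in MiB for the heap and --stack-size in KiB for the stack; the values below are only examples.)

C:\dev\avltest>node --max-old-space-size=4096 --stack-size=2000 .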
This pull request fixes #5.
It's a small contribution, but it works :)