
feat: Lowest Common Ancestor (LCA) algorithm. #512

Merged
merged 7 commits into from
Jul 7, 2022

Conversation

datbeohbbh
Contributor

Hi guys! I'd like to add an implementation of the LCA algorithm.

Member

@siriak siriak left a comment


Looks good, thanks!

@yanglbme yanglbme merged commit d05e013 into TheAlgorithms:master Jul 7, 2022
@tjgurwara99
Member

I feel like the graph implementation that exists in the graph package could have been adapted instead of creating a new graph data structure.

@datbeohbbh
Contributor Author

@tjgurwara99 Thank you for your response. You are right, the graph implementation in the graph package could have been adapted. However, the Graph struct stores its edges in a map, and for a large graph with up to 100,000 vertices I don't think that's a good choice. Instead, I use slices to store the edges in an adjacency list. Again, thank you for taking the time!
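For illustration, here is a minimal sketch of the slice-based adjacency list the comment describes (the names `Tree`, `NewTree`, and `AddEdge` are hypothetical, not the PR's actual identifiers). It assumes vertex IDs are dense integers `0..n-1`, which is what lets a slice index replace a map key:

```go
package main

import "fmt"

// Tree stores an undirected tree as a slice-backed adjacency list:
// adj[v] holds the neighbors of vertex v. Vertices are 0..n-1.
type Tree struct {
	n   int
	adj [][]int
}

// NewTree allocates the adjacency list for n vertices.
func NewTree(n int) *Tree {
	return &Tree{n: n, adj: make([][]int, n)}
}

// AddEdge records an undirected edge between u and v.
func (t *Tree) AddEdge(u, v int) {
	t.adj[u] = append(t.adj[u], v)
	t.adj[v] = append(t.adj[v], u)
}

func main() {
	t := NewTree(5)
	t.AddEdge(0, 1)
	t.AddEdge(0, 2)
	t.AddEdge(1, 3)
	t.AddEdge(1, 4)
	fmt.Println(t.adj[1]) // → [0 3 4]
}
```

The trade-off hinted at here: this layout only works when vertex IDs are known to be dense integers, whereas a `map[int][]int` also handles sparse or arbitrary keys.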

@tjgurwara99
Member

Can you explain your reasoning for why a map is not an appropriate choice in this context compared to a slice?

@datbeohbbh
Contributor Author

@tjgurwara99 Hi! As I understand it, a map is a hash table, so accessing an element by key requires checking for the presence of that element, which costs roughly O(loadFactor). With a slice, access by index is more efficient: O(1). In the context of this algorithm, for example, a dfs() on a large tree needs to access each vertex's edge list frequently, so using a slice should be more efficient. Please correct me if I'm wrong.
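To make the access pattern concrete, here is a hedged sketch of the kind of DFS the comment describes, plus a naive parent-walking LCA built on top of it. This is illustrative only, not the PR's actual implementation; the real PR presumably uses a faster LCA technique such as binary lifting or Euler tour + sparse table:

```go
package main

import "fmt"

// dfs walks the tree rooted at root over a slice adjacency list,
// recording each vertex's depth and parent. Every neighbor access
// is plain slice indexing, the pattern the comment argues for.
func dfs(adj [][]int, root int) (depth, parent []int) {
	n := len(adj)
	depth = make([]int, n)
	parent = make([]int, n)
	visited := make([]bool, n)
	parent[root] = -1
	visited[root] = true
	stack := []int{root}
	for len(stack) > 0 {
		v := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		for _, to := range adj[v] {
			if !visited[to] {
				visited[to] = true
				depth[to] = depth[v] + 1
				parent[to] = v
				stack = append(stack, to)
			}
		}
	}
	return depth, parent
}

// naiveLCA lifts the deeper vertex to the shallower one's depth,
// then lifts both until they meet. O(depth) per query.
func naiveLCA(depth, parent []int, u, v int) int {
	for depth[u] > depth[v] {
		u = parent[u]
	}
	for depth[v] > depth[u] {
		v = parent[v]
	}
	for u != v {
		u, v = parent[u], parent[v]
	}
	return u
}

func main() {
	// Tree with edges 0-1, 0-2, 1-3, 1-4, rooted at 0.
	adj := [][]int{{1, 2}, {0, 3, 4}, {0}, {1}, {1}}
	depth, parent := dfs(adj, 0)
	fmt.Println(naiveLCA(depth, parent, 3, 4)) // → 1
}
```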

@tjgurwara99
Member

tjgurwara99 commented Jul 12, 2022

I don't know what you're talking about here...

Lookup in hash maps is O(1) in the average case and O(n) in the worst case (if you want to go more in depth, read this article), so I don't think it has anything to do with the load factor. In fact, maps with keys from a finite domain (int, i.e. -2^31 to 2^31 - 1, for example) are O(1), which is the case for Graph in the graph package. Lookup on slices is the same (if you're lucky, you find the element on the first try, O(1); if not, too bad, O(n)). However, your argument about checking the presence of an element has nothing to do with this. Getting the depth of the graph should be easy and straightforward with maps too, without affecting performance at all. Did you even consider the map approach? Because if you did, and you have a working implementation, try creating a benchmark and see for yourself. Maybe I'm wrong and a slice is indeed faster, but your argument doesn't make sense to me in this context.

PS: The load factor (to my knowledge) is only relevant for deciding when to resize the buckets, in order to keep lookup and insertion at O(1).
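The benchmark suggested above could be sketched roughly as follows. This is an illustrative comparison under assumed conditions (a complete binary tree of 100,000 vertices, traversal touching every edge list, as a DFS would), not a definitive measurement; in the repo itself the idiomatic tool would be a `testing.B` benchmark run with `go test -bench=.`:

```go
package main

import (
	"fmt"
	"time"
)

const n = 100000

// buildSlice stores a complete binary tree's child edges in a
// slice-indexed adjacency list.
func buildSlice() [][]int {
	adj := make([][]int, n)
	for v := 1; v < n; v++ {
		p := (v - 1) / 2 // parent in a complete binary tree
		adj[p] = append(adj[p], v)
	}
	return adj
}

// buildMap stores the same edges keyed by a map, mirroring the
// map-backed Graph struct under discussion.
func buildMap() map[int][]int {
	adj := make(map[int][]int, n)
	for v := 1; v < n; v++ {
		p := (v - 1) / 2
		adj[p] = append(adj[p], v)
	}
	return adj
}

// sumSlice and sumMap visit every vertex's edge list once,
// simulating the access pattern of a DFS over the whole tree.
func sumSlice(adj [][]int) int {
	sum := 0
	for v := 0; v < n; v++ {
		for _, to := range adj[v] {
			sum += to
		}
	}
	return sum
}

func sumMap(adj map[int][]int) int {
	sum := 0
	for v := 0; v < n; v++ {
		for _, to := range adj[v] {
			sum += to
		}
	}
	return sum
}

func main() {
	s, m := buildSlice(), buildMap()

	start := time.Now()
	a := sumSlice(s)
	fmt.Println("slice:", a, time.Since(start))

	start = time.Now()
	b := sumMap(m)
	fmt.Println("map:  ", b, time.Since(start))
}
```

Both traversals compute the same sum, so any timing difference isolates the container's lookup cost. Actual results depend on hardware, Go version, and key distribution, which is exactly why running the benchmark beats arguing from asymptotics.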
