groups and chords #21
Conversation
Summary of Changes

Hello @LuisJG8, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the application's ability to fetch GitHub data by implementing a distributed task processing strategy using Celery groups. The change moves from a single, potentially long-running task to a set of parallel subtasks, which is designed to improve performance, resilience, and scalability for data retrieval operations.
Code Review
This pull request refactors the Celery task execution to use a group of tasks for fetching GitHub data, allowing for parallel execution and collection of results. This is a good improvement for scalability. However, there are several critical issues that need to be addressed:

- A critical bug in worker.py passes an incorrect argument to the get_github_data task, which will cause runtime failures.
- The result processing in client.py needs to be updated to handle the nested list structure returned by the task group.
- A hardcoded value in worker.py severely limits the number of repositories processed, which appears to be a leftover from debugging.
- Some commented-out code in client.py should be removed.
- Magic numbers in worker.py should be defined as constants for better maintainability.
```diff
-    jobs = [
+    jobs = group([
+        get_github_data.s(start, 500)
```
There is a critical error in how the get_github_data task is being called. The task's signature is def get_github_data(self, start_in_repo_num: int = 0, github_instance: Github = gh), but it's being called as get_github_data.s(start, 500). This passes the integer 500 as the github_instance argument, which expects a Github object. This will cause a TypeError at runtime.
To use the default gh instance, you should call it with only the start argument.
```diff
-        get_github_data.s(start, 500)
+        get_github_data.s(start)
```
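If the chunk size does need to be passed explicitly later, a keyword argument keeps the call unambiguous. A minimal sketch, assuming the task signature quoted above and the range used in distribute_tasks:

```python
from celery import group

# Sketch only: get_github_data is the task quoted above. Binding the
# offset by keyword means nothing can accidentally land in the
# github_instance parameter, so the default gh client is always used.
jobs = group([
    get_github_data.s(start_in_repo_num=start)
    for start in range(0, 5000, 500)
])
result = jobs.apply_async()
```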
| print("Getting the result") | ||
| response = get_github_data.apply_async() | ||
| response = distribute_tasks.apply_async() | ||
| the_data = response.get(timeout=3600) # 1 hour timeout |
The distribute_tasks task now returns a result from a Celery group, which means the_data will be a list of lists (e.g., [[...], [...]]). Passing this nested list directly to pl.DataFrame(the_data) on line 34 will likely not produce the intended flat DataFrame of repositories. You'll need to flatten this list of lists before creating the DataFrame.
For example:

```python
import itertools

flat_data = list(itertools.chain.from_iterable(the_data))
df = pl.DataFrame(flat_data)
```

```diff
     remaining = remaining_api_calls[0]
-    if int(counter) == 4900:
+    if int(counter) == 10:
```
This condition appears to use a hardcoded value (10) for debugging purposes. This will cause the task to stop processing after only 10 repositories, which is likely not the intended behavior. This should be reverted to a more appropriate limit (like the previous 4900) or be made configurable to avoid crippling the functionality in a production environment.
```diff
-    if int(counter) == 10:
+    if int(counter) >= 4900:
```
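If you'd rather make it configurable, one option is to read the cutoff from the environment with the old value as the default. This is a sketch; MAX_REPOS_PER_TASK and its environment variable are hypothetical names, not part of this PR:

```python
import os

# Hypothetical setting, not from the PR: lets debugging runs lower the
# limit without touching code, while production defaults to 4900.
MAX_REPOS_PER_TASK = int(os.environ.get("MAX_REPOS_PER_TASK", "4900"))

def reached_limit(counter: int) -> bool:
    """True once this task has processed its repository budget."""
    return counter >= MAX_REPOS_PER_TASK
```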
```python
# if distribute_tasks.apply_async() == True:
#     print("A subtask failed")
```
This commented-out code block should be removed. It appears to be an incorrect attempt to check for task failure. Calling distribute_tasks.apply_async() again would start a new set of tasks, and comparing its AsyncResult object to True would always evaluate to false. The existing try...except block around response.get() is the correct way to handle failures from tasks within the group.
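For reference, a sketch of that pattern, assuming the distribute_tasks call and one-hour timeout shown above (the exact exception handling is illustrative, not taken from the PR):

```python
from celery.exceptions import TimeoutError as CeleryTimeoutError

response = distribute_tasks.apply_async()
try:
    # get() re-raises the first exception from any failed subtask,
    # so a single except clause covers failures across the group.
    the_data = response.get(timeout=3600)
except CeleryTimeoutError:
    print("Timed out waiting for the task group")
    the_data = []
except Exception as exc:
    print(f"A subtask failed: {exc}")
    the_data = []
```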
```diff
-    jobs = [
+    jobs = group([
+        get_github_data.s(start, 500)
+        for start in range(0, 5000, 500)
```
The values 5000 and 500 used in this range function are magic numbers. To improve readability and maintainability, they should be defined as named constants at the top of the module.
For example:
```python
# At top of file
TOTAL_REPOS_TO_SCAN = 5000
REPOS_CHUNK_SIZE = 500

# In distribute_tasks
for start in range(0, TOTAL_REPOS_TO_SCAN, REPOS_CHUNK_SIZE)
```
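Putting the constants together with the earlier signature fix, distribute_tasks could look roughly like this. This is a sketch only: app, the return strategy, and waiting on subtasks inside a task are assumptions, not the PR's actual code:

```python
from celery import group

TOTAL_REPOS_TO_SCAN = 5000
REPOS_CHUNK_SIZE = 500

@app.task  # assumes the Celery app defined elsewhere in worker.py
def distribute_tasks():
    jobs = group([
        get_github_data.s(start)  # only the offset; the default gh is used
        for start in range(0, TOTAL_REPOS_TO_SCAN, REPOS_CHUNK_SIZE)
    ])
    # Waiting on subtasks inside a task is generally discouraged;
    # disable_sync_subtasks=False opts in explicitly here. Returning
    # the GroupResult and resolving it in the caller is an alternative.
    return jobs.apply_async().get(timeout=3600, disable_sync_subtasks=False)
```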