
Conversation

@LuisJG8 (Owner) commented Jan 26, 2026

No description provided.

@LuisJG8 merged commit da7f176 into MyProjects on Jan 26, 2026 (1 check failed)
@gemini-code-assist (Contributor)

Summary of Changes

Hello @LuisJG8, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the application's ability to fetch GitHub data by implementing a distributed task processing strategy using Celery groups. The change moves from a single, potentially long-running task to a set of parallel subtasks, which is designed to improve performance, resilience, and scalability for data retrieval operations.

Highlights

  • Celery Task Grouping: Introduced a new distribute_tasks function in worker.py that utilizes Celery's group primitive to parallelize calls to get_github_data, breaking the data fetch into smaller, concurrent subtasks (a sketch of the pattern follows this list).
  • Client-Side Integration: The client.py file has been updated to invoke the new distribute_tasks function, ensuring that the application now uses the distributed task execution model.
  • Simulated Rate Limit Adjustment: A temporary change was made in worker.py to reduce the simulated GitHub API rate limit counter from 4900 to 10, likely for testing the new task distribution logic more quickly.
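
Roughly, the fan-out described above might look like the minimal sketch below. The broker/backend URLs and the stand-in task body are assumptions, and distribute_tasks is shown here as a plain helper returning the GroupResult, whereas the PR makes it a Celery task invoked via apply_async():

from celery import Celery, group

app = Celery("worker",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

TOTAL_REPOS_TO_SCAN = 5000  # matches range(0, 5000, 500) in this PR
REPOS_CHUNK_SIZE = 500

@app.task(bind=True)
def get_github_data(self, start_in_repo_num: int = 0):
    # Stand-in body; the real task pages through GitHub repositories.
    return [f"repo-{n}" for n in range(start_in_repo_num,
                                       start_in_repo_num + REPOS_CHUNK_SIZE)]

def distribute_tasks():
    # Fan out one subtask per 500-repo chunk; workers run them in parallel.
    jobs = group(
        get_github_data.s(start)
        for start in range(0, TOTAL_REPOS_TO_SCAN, REPOS_CHUNK_SIZE)
    )
    return jobs.apply_async()  # GroupResult; .get() yields one list per subtask

The caller then collects results with result.get(timeout=...), receiving one list per subtask, i.e. a list of lists.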


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command              Description
Code Review           /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary      Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist  Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help         Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  ¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist (Contributor) left a comment


Code Review

This pull request refactors the Celery task execution to use a group of tasks for fetching GitHub data, allowing for parallel execution and collection of results. This is a good improvement for scalability. However, there are several critical issues that need to be addressed. A critical bug in worker.py passes an incorrect argument to the get_github_data task, which will cause runtime failures. The result processing in client.py needs to be updated to handle the nested list structure returned by the task group. A hardcoded value in worker.py severely limits the number of repositories processed, which seems to be a leftover from debugging. There is also some commented-out code in client.py that should be removed, and magic numbers in worker.py that should be defined as constants for better maintainability.


-jobs = [
+jobs = group([
+    get_github_data.s(start, 500)


Severity: critical

There is a critical error in how the get_github_data task is being called. The task's signature is def get_github_data(self, start_in_repo_num: int = 0, github_instance: Github = gh), but it's being called as get_github_data.s(start, 500). This passes the integer 500 as the github_instance argument, which expects a Github object. This will cause a TypeError at runtime.

To use the default gh instance, you should call it with only the start argument.

Suggested change
-get_github_data.s(start, 500)
+get_github_data.s(start)
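
For context, a minimal sketch of why the second positional argument misbinds; the Celery app setup and the unauthenticated Github() client below are placeholders, not the project's real ones:

from celery import Celery
from github import Github  # PyGithub

app = Celery("worker")  # placeholder app; the real one lives in worker.py
gh = Github()           # placeholder for the module-level client in worker.py

@app.task(bind=True)
def get_github_data(self, start_in_repo_num: int = 0,
                    github_instance: Github = gh):
    ...

# Celery supplies `self`, so positional arguments fill the remaining
# parameters in order:
get_github_data.s(0, 500)  # 500 binds to github_instance; breaks when used
get_github_data.s(0)       # github_instance keeps its default gh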

print("Getting the result")
response = get_github_data.apply_async()
response = distribute_tasks.apply_async()
the_data = response.get(timeout=3600) # 1 hour timeout

Severity: high

The distribute_tasks task now returns a result from a Celery group, which means the_data will be a list of lists (e.g., [[...], [...]]). Passing this nested list directly to pl.DataFrame(the_data) on line 34 will likely not produce the intended flat DataFrame of repositories. You'll need to flatten this list of lists before creating the DataFrame.

For example:

import itertools
import polars as pl

flat_data = list(itertools.chain.from_iterable(the_data))
df = pl.DataFrame(flat_data)

 remaining = remaining_api_calls[0]

-if int(counter) == 4900:
+if int(counter) == 10:

Severity: high

This condition appears to use a hardcoded value (10) for debugging purposes. This will cause the task to stop processing after only 10 repositories, which is likely not the intended behavior. This should be reverted to a more appropriate limit (like the previous 4900) or be made configurable to avoid crippling the functionality in a production environment.

Suggested change
-if int(counter) == 10:
+if int(counter) >= 4900:
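
One way to make the limit configurable rather than hardcoded; the GH_RATE_LIMIT_STOP variable name is an assumption, not something defined in this repository:

import os

# Assumed env var name; defaults to the pre-PR limit of 4900.
RATE_LIMIT_STOP_THRESHOLD = int(os.environ.get("GH_RATE_LIMIT_STOP", "4900"))

def should_stop(counter: int) -> bool:
    # Stop fetching once the simulated API-call counter reaches the threshold.
    return counter >= RATE_LIMIT_STOP_THRESHOLD

This keeps the debug-friendly low value available for local testing without crippling production runs.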

Comment on lines +20 to +21
# if distribute_tasks.apply_async() == True:
# print("A subtask failed")

Severity: medium

This commented-out code block should be removed. It appears to be an incorrect attempt to check for task failure. Calling distribute_tasks.apply_async() again would start a new set of tasks, and comparing its AsyncResult object to True would always evaluate to false. The existing try...except block around response.get() is the correct way to handle failures from tasks within the group.
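
For reference, a sketch of that pattern, assuming the distribute_tasks task from this PR and Celery's default behavior of re-raising subtask exceptions from .get():

response = distribute_tasks.apply_async()
try:
    the_data = response.get(timeout=3600)  # 1 hour timeout
except Exception as exc:
    # Exceptions raised inside any subtask propagate out of .get().
    print(f"A subtask failed: {exc}")
    the_data = []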

-jobs = [
+jobs = group([
+    get_github_data.s(start, 500)
+    for start in range(0, 5000, 500)

Severity: medium

The values 5000 and 500 used in this range function are magic numbers. To improve readability and maintainability, they should be defined as named constants at the top of the module.

For example:

# At top of file
TOTAL_REPOS_TO_SCAN = 5000
REPOS_CHUNK_SIZE = 500

# In distribute_tasks
for start in range(0, TOTAL_REPOS_TO_SCAN, REPOS_CHUNK_SIZE)
