Improve bom generation #51

Closed · wants to merge 7 commits
54 changes: 31 additions & 23 deletions allspice/utils/bom_generation.py
@@ -248,31 +248,39 @@ def _extract_all_schdoc_components(
"""

files_in_repo = repository.get_git_content(ref=ref)
-    all_components = []
-    for repo_file in files_in_repo:
-        if repo_file.name in schdoc_files_in_proj:
-            # If the file is not yet generated, we'll retry a few times.
-            retry_count = 0
-            while True:
-                retry_count += 1
-
-                try:
-                    schdoc_file_json = repository.get_generated_json(repo_file, ref=ref)
-                    all_components.extend(
-                        _extract_components_from_schdoc(
-                            schdoc_file_json,
-                            attributes_mapping,
-                        )
-                    )
-
-                    break
-                except NotYetGeneratedException:
-                    if retry_count > 20:
-                        break
+    schdoc_files_in_repo = [file for file in files_in_repo if file.name in schdoc_files_in_proj]
+
+    # Create a retry value for each file so that we can traverse them
+    # in parallel for improved performance
Comment on lines +254 to +255
jtran (Contributor) commented on Oct 27, 2023:

This comment is a little misleading. This code doesn't do anything in parallel. (Maybe round-robin is more accurate?) I would also use a pretty different approach. We basically want to reimplement the JS client code or something even smarter. But I don't know if we care. So I guess this is fine.

Author (Contributor) replied:

Hmmm, maybe I should look at our rate-limiting a little closer. My goal here was to limit any requests by this function to 4 per second. I didn’t implement something with promises/threads because I figured that could still potentially fire off dozens of requests simultaneously, and rate-limit us, even if we waited between requests for each endpoint.

Contributor:

Our rate limiting session should already limit requests to X per second. If it isn't limiting enough, then we should probably adjust the max_calls and period arguments. The point of having a separate rate limiter is to pull out that concern so that rate limiting logic isn't sprinkled all over the codebase.

    class RateLimitedSession(requests.Session):
        """
        A requests.Session that is rate limited.

        :param max_calls: Maximum number of calls per period
        :param period: Time period in seconds

        Example:

            session = RateLimitedSession(max_calls=10, period=1)
            session.get("https://example.com")  # Will be rate limited
        """

If you're interested, I can also post my pseudocode first draft of something that's more similar to the JS client code. It doesn't use promises or threads; that would be one of the improvements we could do.

Author:

Yeah, if you could share that, it would be great.

Contributor:

See #58.
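
The draft in #58 isn't reproduced here. Purely as an illustration of the threaded direction mentioned above, a variant that polls each file on its own worker might look roughly like this; it reuses `get_generated_json`, `NotYetGeneratedException`, and `_extract_components_from_schdoc` from the diff, `_poll_components` is a hypothetical helper, and the rate-limited session is assumed to cap the overall request rate:

    import time
    from concurrent.futures import ThreadPoolExecutor


    def _poll_components(repository, repo_file, attributes_mapping, ref, max_retries=60):
        # Poll one file until its generated JSON is available, then extract components.
        for _ in range(max_retries):
            try:
                schdoc_file_json = repository.get_generated_json(repo_file, ref=ref)
                return _extract_components_from_schdoc(schdoc_file_json, attributes_mapping)
            except NotYetGeneratedException:
                time.sleep(0.25)
        raise Exception(f"Fetching file {repo_file.path} exceeded max retries.")


    # Fan out one worker per file (capped), instead of round-robining in one thread.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = pool.map(
            lambda repo_file: _poll_components(repository, repo_file, attributes_mapping, ref),
            schdoc_files_in_repo,
        )
        all_components = [component for file_components in results for component in file_components]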

+    retry_counter = [60] * len(schdoc_files_in_repo)
Contributor: Naming nit: `retry_counters` (it holds one counter per file).


-                    # Wait a bit before retrying.
-                    time.sleep(0.5)
-                    continue
+    all_components = []
+    # If the file is not yet generated, we'll retry a few times.
+    while any(i > 0 for i in retry_counter):
+        for idx in range(len(retry_counter)):
+            if retry_counter[idx] < 0:
+                continue
+            elif retry_counter[idx] == 0:
+                raise Exception(
+                    f"Fetching file {schdoc_files_in_repo[idx].path} in {repository} "
+                    f"ref={ref} exceeded max retries."
+                )
+
+            retry_counter[idx] -= 1
+
+            try:
+                schdoc_file_json = repository.get_generated_json(schdoc_files_in_repo[idx], ref=ref)
+                all_components.extend(
+                    _extract_components_from_schdoc(
+                        schdoc_file_json,
+                        attributes_mapping,
+                    )
+                )
+                retry_counter[idx] = -1
+            except NotYetGeneratedException:
+                pass
+        if any(i > 0 for i in retry_counter):
+            # Wait a bit before retrying.
+            time.sleep(0.25)

     return all_components

11 changes: 4 additions & 7 deletions allspice/utils/core.py
@@ -17,16 +17,13 @@ def get_all_pcb_components(
"""

retry_count = 0
while True:
while retry_count < 60:
Contributor: We can change this to a `for _ in range(60)`.
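
Applied to this function, that suggestion might look like the following sketch — same behavior, with the retry bound expressed by the loop itself:

    for _ in range(60):
        try:
            pcb_json = repository.get_generated_json(pcb_file, ref=ref)
            return pcb_json["component_instances"]
        except NotYetGeneratedException:
            # Wait a bit before retrying.
            time.sleep(0.25)

    # Reached only if every attempt raised NotYetGeneratedException.
    raise Exception(f"Fetching file {pcb_file} in {repository} ref={ref} exceeded max retries.")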

         retry_count += 1
         try:
             pcb_json = repository.get_generated_json(pcb_file, ref=ref)
-            break
+            return pcb_json["component_instances"]
         except NotYetGeneratedException:
-            if retry_count > 20:
-                break
             # Wait a bit before retrying.
-            time.sleep(0.5)
+            time.sleep(0.25)
             continue

-    return pcb_json["component_instances"]
+    raise Exception(f"Fetching file {pcb_file} in {repository} ref={ref} exceeded max retries.")
4 changes: 2 additions & 2 deletions examples/generate_bom/compute_cogs.py
@@ -187,9 +187,9 @@ def fetch_price_for_part(part_number: str) -> dict[int, float]:
     with ExitStack() as stack:
         if args.output_file:
             file = stack.enter_context(open(args.output_file, "w"))
-            writer = csv.writer(file)
+            writer = csv.writer(file, lineterminator="\n")
         else:
-            writer = csv.writer(sys.stdout)
+            writer = csv.writer(sys.stdout, lineterminator="\n")

         writer.writerow(headers)
         writer.writerows(rows)
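
The likely motivation for this change: `csv.writer` defaults to `\r\n` row endings, and on Windows a text-mode stream translates each `\n` again, producing `\r\r\n` and apparent blank rows. Passing `lineterminator="\n"` keeps output consistent across platforms. A quick illustration:

    import csv
    import io

    default_buf = io.StringIO()
    csv.writer(default_buf).writerow(["a", "b"])
    print(repr(default_buf.getvalue()))  # 'a,b\r\n' — the csv module's default terminator

    unix_buf = io.StringIO()
    csv.writer(unix_buf, lineterminator="\n").writerow(["a", "b"])
    print(repr(unix_buf.getvalue()))  # 'a,b\n'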
18 changes: 13 additions & 5 deletions examples/generate_bom/generate_bom.py
@@ -11,13 +11,20 @@
 from contextlib import ExitStack

 from allspice import AllSpice
-from allspice.utils.bom_generation import AttributesMapping, generate_bom_for_altium
+from allspice.utils.bom_generation import AttributesMapping, BomEntry, generate_bom_for_altium


 with open("attributes_mapping.json", "r") as f:
     attributes_mapper = AttributesMapping.from_dict(json.loads(f.read()))


+def bom_entry_sort_key(x: BomEntry) -> str:
+    """Sort key for a BOM entry: its first designator, or the part number if it has none."""
+    if len(x.designators):
+        return sorted(x.designators)[0]
+    return "_" + x.part_number
+
+
 if __name__ == "__main__":
     # Parse command line arguments. If you're writing a special purpose script,
     # you can hardcode these values instead of using command line arguments.
@@ -84,20 +91,21 @@
     bom_rows = [
         [
             bom_row.description,
-            ", ".join(bom_row.designators),
+            ", ".join(sorted(bom_row.designators)),
             bom_row.quantity,
             bom_row.manufacturer,
             bom_row.part_number,
         ]
-        for bom_row in bom_rows
+        # Sort by designator
+        for bom_row in sorted(bom_rows, key=bom_entry_sort_key)
     ]

     with ExitStack() as stack:
         if args.output_file is not None:
             f = stack.enter_context(open(args.output_file, "w"))
-            writer = csv.writer(f)
+            writer = csv.writer(f, lineterminator="\n")
         else:
-            writer = csv.writer(sys.stdout)
+            writer = csv.writer(sys.stdout, lineterminator="\n")

         header = [
             "Description",