This issue is similar to other stock performance issues: #6027, #6922, #6659.
On my machine, I did some tests to figure out how to improve the performance of the stock registries. I'd like to share my findings in this issue. All these tests were run on a machine with the following specification:
| Processor | RAM | Disk |
| --- | --- | --- |
| Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz | 7.63GiB | 476.9GB nvme0n1 |
First, I accessed the stock lots registry using the latest Vanga database. This database has 533905 records in the stock_movement table. By increasing the limit to 10000, I was able to render this page in 1.78 minutes (106598.164 ms).
This is nearly 2 minutes waiting for the page! Inspecting the server logs showed that the majority of the time was spent in executing this HTTP query:
```
http ::ffff:100.118.76.15 - GET /stock/lots/depots?client_timestamp=2023-10-09T21:01:14.290Z&displayValues=&includeEmptyLot=0&limit=10000 HTTP/1.1 200 764174 - 106598.164 ms +2m
```
Which, in turn, calls this SQL query:
```sql
SELECT BUID(l.uuid) AS uuid, l.label, l.description AS lot_description,
  SUM(m.quantity * IF(m.is_exit = 1, -1, 1)) AS quantity,
  SUM(m.quantity) AS mvt_quantity,
  d.text AS depot_text, l.unit_cost, l.expiration_date,
  l.serial_number, l.reference_number, l.package_size,
  d.min_months_security_stock, d.default_purchase_interval,
  DATEDIFF(l.expiration_date, CURRENT_DATE()) AS lifetime,
  BUID(l.inventory_uuid) AS inventory_uuid,
  i.code, i.text, BUID(m.depot_uuid) AS depot_uuid,
  i.is_asset, i.manufacturer_brand, i.manufacturer_model,
  m.date AS entry_date, i.purchase_interval, i.delay, i.is_count_per_container,
  IF(ISNULL(iu.token), iu.text, CONCAT("INVENTORY.UNITS.", iu.token, ".TEXT")) AS unit_type,
  ig.name AS group_name, ig.tracking_expiration, ig.tracking_consumption,
  dm.text AS documentReference, t.name AS tag_name, t.color, sv.wac,
  CONCAT('LT', LEFT(HEX(l.uuid), 8)) AS barcode
FROM stock_movement m
  JOIN lot l ON l.uuid = m.lot_uuid
  JOIN inventory i ON i.uuid = l.inventory_uuid
  JOIN inventory_unit iu ON iu.id = i.unit_id
  JOIN inventory_group ig ON ig.uuid = i.group_uuid
  JOIN depot d ON d.uuid = m.depot_uuid
  JOIN stock_value sv ON sv.inventory_uuid = i.uuid
  LEFT JOIN document_map dm ON dm.uuid = m.document_uuid
  LEFT JOIN lot_tag lt ON lt.lot_uuid = l.uuid
  LEFT JOIN tags t ON t.uuid = lt.tag_uuid
WHERE 1
GROUP BY l.uuid, m.depot_uuid
HAVING quantity > 0
ORDER BY i.code, l.label
LIMIT 10000;
```
Next, I entered the MySQL command line and executed this query on its own. It took 10.759 seconds. This is part of the issue, but the route also fires off a cascading series of `GetAMC()` calls and lot tag lookups. We could probably get an easy win by adding a "skipTags" option to the query if tags aren't in the search parameters.
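To illustrate the "skipTags" idea (the BHIMA server is Node.js, so this Python sketch is purely illustrative; `build_lots_query` and the `skipTags` parameter name are hypothetical, and most columns are elided):

```python
def build_lots_query(params):
    """Assemble a cut-down version of the stock lots SQL, omitting the
    lot_tag/tags joins when the caller does not need tags (hypothetical
    helper, not BHIMA's actual API)."""
    columns = ["BUID(l.uuid) AS uuid", "l.label", "l.unit_cost"]
    joins = [
        "JOIN inventory i ON i.uuid = l.inventory_uuid",
        "JOIN depot d ON d.uuid = m.depot_uuid",
    ]

    # Only pay for the tag joins when tags appear in the search parameters.
    if not params.get("skipTags"):
        columns += ["t.name AS tag_name", "t.color"]
        joins += [
            "LEFT JOIN lot_tag lt ON lt.lot_uuid = l.uuid",
            "LEFT JOIN tags t ON t.uuid = lt.tag_uuid",
        ]

    return (
        "SELECT " + ", ".join(columns)
        + " FROM stock_movement m JOIN lot l ON l.uuid = m.lot_uuid "
        + " ".join(joins)
    )

# When tags are skipped, the tag joins drop out of the generated SQL:
assert "lot_tag" not in build_lots_query({"skipTags": True})
assert "tag_name" in build_lots_query({})
```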
I was curious to see how quickly I could get the "bare minimum" information from the stock_movement table. So, I composed the following SQL query:
```sql
SELECT sm.lot_uuid,
  sm.depot_uuid,
  SUM(CASE WHEN sm.is_exit = 0 THEN sm.quantity ELSE -sm.quantity END) AS quantity_in_stock
FROM stock_movement sm
GROUP BY sm.depot_uuid, sm.lot_uuid
HAVING quantity_in_stock > 0;
```
This query returned the correct information (422 rows, I spot-checked the results) in 0.405 seconds, which demonstrates that the problem isn't really the volume of stock transactions, but inefficient JOINs.
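The signed-sum logic can be sanity-checked outside MySQL too. Here is a minimal SQLite reproduction with made-up lot/depot ids and quantities (the table and data are toy stand-ins for the real `stock_movement` table):

```python
import sqlite3

# Toy table following the is_exit convention: entries add stock, exits remove it.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE stock_movement
               (lot_uuid TEXT, depot_uuid TEXT, is_exit INTEGER, quantity INTEGER)""")
con.executemany(
    "INSERT INTO stock_movement VALUES (?, ?, ?, ?)",
    [("lot-A", "depot-1", 0, 100),   # entry of 100
     ("lot-A", "depot-1", 1, 40),    # exit of 40  -> balance 60
     ("lot-B", "depot-1", 0, 10),
     ("lot-B", "depot-1", 1, 10)])   # fully exited -> dropped by HAVING

rows = con.execute("""
    SELECT sm.lot_uuid, sm.depot_uuid,
           SUM(CASE WHEN sm.is_exit = 0 THEN sm.quantity ELSE -sm.quantity END)
             AS quantity_in_stock
    FROM stock_movement sm
    GROUP BY sm.depot_uuid, sm.lot_uuid
    HAVING quantity_in_stock > 0
""").fetchall()

print(rows)  # [('lot-A', 'depot-1', 60)]
```

Only lots with a positive balance survive the `HAVING` clause, matching the registry's "in stock" semantics.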
So... I rewrote the query, attempting to replicate the exact same behavior while making it as fast as possible. To do this, I used MySQL 8's WITH statement (also known as a Common Table Expression). It allowed me to move the heavy SUM/GROUP BY calculation into its own WITH statement. Here is the final query:
```sql
WITH LotBalances AS (
  SELECT sm.lot_uuid,
    sm.depot_uuid,
    SUM(CASE WHEN sm.is_exit = 0 THEN sm.quantity ELSE -sm.quantity END) AS quantity_in_stock,
    MIN(sm.date) AS entry_date
  FROM stock_movement sm
  GROUP BY sm.lot_uuid, sm.depot_uuid
  HAVING quantity_in_stock > 0
)
SELECT BUID(l.uuid) AS uuid,
  l.label, l.description AS lot_description,
  LB.quantity_in_stock AS quantity,
  d.text AS depot_text, l.unit_cost, l.expiration_date,
  l.serial_number, l.reference_number, l.package_size,
  d.min_months_security_stock, d.default_purchase_interval,
  DATEDIFF(l.expiration_date, CURRENT_DATE()) AS lifetime,
  BUID(l.inventory_uuid) AS inventory_uuid,
  i.code, i.text, BUID(LB.depot_uuid) AS depot_uuid,
  LB.entry_date, i.is_asset, i.manufacturer_brand, i.manufacturer_model,
  i.purchase_interval, i.delay, i.is_count_per_container,
  IF(ISNULL(iu.token), iu.text, CONCAT("INVENTORY.UNITS.", iu.token, ".TEXT")) AS unit_type,
  ig.name AS group_name, ig.tracking_expiration, ig.tracking_consumption,
  t.name AS tag_name, t.color, sv.wac,
  CONCAT('LT', LEFT(HEX(l.uuid), 8)) AS barcode
FROM LotBalances LB
  JOIN lot l ON l.uuid = LB.lot_uuid
  JOIN inventory i ON i.uuid = l.inventory_uuid
  JOIN inventory_unit iu ON iu.id = i.unit_id
  JOIN inventory_group ig ON ig.uuid = i.group_uuid
  JOIN depot d ON d.uuid = LB.depot_uuid
  JOIN stock_value sv ON sv.inventory_uuid = i.uuid
  LEFT JOIN lot_tag lt ON lt.lot_uuid = l.uuid
  LEFT JOIN tags t ON t.uuid = lt.tag_uuid
ORDER BY i.code, l.label
LIMIT 10000;
```
This is almost identical, but not quite. I made two changes:

1. I removed the `document_map` join because this query doesn't deal with document mappings at all! I'm not sure why it's included here, except that it is probably used by a totally different route for different functionality.
2. I removed `mvt_quantity` since it would never show up anyway (we are grouping over movements, and I'm not sure what this number is supposed to represent). I also changed the logic for `entry_date` to ensure that it is the earliest date a lot was used.
With these changes, I ran the query again, which produced 422 rows in 0.438 seconds. Going from 10.759 seconds to 0.438 seconds is a 95.9% reduction in query time.
My analysis ignores the `WHERE` filtering functionality, but I imagine it would not be too difficult to implement filtering on this style of query.
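One option is to push the search filters inside the CTE so the aggregation only scans matching movements. A small SQLite sketch of the idea, with a hypothetical depot filter and toy data (SQLite syntax differs slightly from MySQL, so this only demonstrates the shape of the query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE stock_movement
               (lot_uuid TEXT, depot_uuid TEXT, is_exit INTEGER,
                quantity INTEGER, date TEXT)""")
con.executemany(
    "INSERT INTO stock_movement VALUES (?, ?, ?, ?, ?)",
    [("lot-A", "depot-1", 0, 50, "2023-01-01"),
     ("lot-A", "depot-1", 1, 20, "2023-02-01"),
     ("lot-A", "depot-2", 0,  5, "2023-03-01")])  # other depot, filtered out

sql = """
WITH LotBalances AS (
  SELECT sm.lot_uuid, sm.depot_uuid,
         SUM(CASE WHEN sm.is_exit = 0 THEN sm.quantity ELSE -sm.quantity END)
           AS quantity_in_stock,
         MIN(sm.date) AS entry_date
  FROM stock_movement sm
  WHERE sm.depot_uuid = ?       -- filter applied before the heavy GROUP BY
  GROUP BY sm.lot_uuid, sm.depot_uuid
  HAVING quantity_in_stock > 0
)
SELECT lot_uuid, quantity_in_stock, entry_date FROM LotBalances
"""
rows = con.execute(sql, ("depot-1",)).fetchall()
print(rows)  # [('lot-A', 30, '2023-01-01')]
```

Filtering inside the CTE keeps the outer join block unchanged, so the existing search-parameter machinery could in principle be reused.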
At this point, our stock routes have gotten too difficult for me to reason about. I think we need to get back to keeping things simple and seek to improve performance. To do so, we would need to document the consumers of each API route and make sure we don't break them. I'm not sure what routes the BHIMA mobile app is using, nor what routes BAO is using for DHIS2. If we could document these, along with the ones BHIMA uses internally, we could start moving to a better designed stock routes system.
I received a call this morning from Vanga about slow invoicing routes, specifically when loading inventory items to populate the bill. This might be one of the factors contributing to the slowness.