When a new Sales Order is submitted, the update_reserved_qty() function updates reserved_qty for each item in the sales order.
The reserved quantity is stored in the tabBin table. This table turns out to be on the critical path for all transactions touching the same Item and the same Warehouse. For example, if multiple Sales Orders, Sales Invoices, Purchase Orders, Purchase Invoices, etc. are created for the same Item in the same Warehouse, updates to tabBin get queued behind each other. Sometimes this even results in a deadlock, and the transaction only goes through after waiting and retrying.
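The wait-and-retry behaviour described above can be sketched generically. This is a hedged illustration, not Frappe's actual retry code: `LockTimeoutError` and `with_retries` are hypothetical names standing in for the DB driver's lock-wait error and whatever retry wrapper sits around the transaction.

```python
import random
import time


class LockTimeoutError(Exception):
    """Stand-in for the DB driver's lock-wait / deadlock error."""


def with_retries(fn, attempts=3, base_delay=0.01):
    """Run a transactional callable, retrying on lock timeouts
    with simple exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except LockTimeoutError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))


# Usage: a callable that hits the lock twice before succeeding.
calls = {"n": 0}

def flaky_txn():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LockTimeoutError
    return "committed"

print(with_retries(flaky_txn))  # → committed
```

The point is that every retry multiplies the time the Sales Order transaction spends waiting on the contended tabBin row.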
While a synchronous update of tabBin ensures that a correct inventory status is available at any point in time, it slows down transactions that hit the same Item and Warehouse.
In our case, orders are punched in by the sales team through an app (we use the ERPNext REST APIs). There is a high chance that orders for the same item in the same warehouse are being punched in for different customers at the same time. We were thinking of segregating the Sales Order transaction from the tabBin update to avoid locks on the table.
In the case of a Sales Order, only reserved_qty is updated. This update could be done by a worker on a periodic basis (scheduled every 4 minutes, every hour, etc.) instead of happening in the same transaction as the Sales Order. For this field it is not important to have a perfectly current stock status (especially reserved qty) instantly; being stale by 4 minutes or so is acceptable.
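The proposed deferred update can be sketched in plain Python. This is a minimal sketch under assumptions: `pending`, `on_sales_order_submit`, and `recompute_reserved_qty` are illustrative names, and the real version would read open order lines from the database via a scheduled Frappe job rather than from an in-memory list.

```python
from collections import defaultdict

# Keys (item_code, warehouse) touched since the last worker run.
pending = set()


def on_sales_order_submit(item_code, warehouse):
    # Instead of locking the tabBin row inside the order
    # transaction, just record that this Bin is now stale.
    pending.add((item_code, warehouse))


def recompute_reserved_qty(open_order_lines):
    # Worker run (e.g. scheduled every few minutes): rebuild
    # reserved_qty for the stale keys in one pass over open
    # order lines, then mark everything as fresh again.
    totals = defaultdict(float)
    for line in open_order_lines:
        key = (line["item_code"], line["warehouse"])
        if key in pending:
            totals[key] += line["qty"]
    pending.clear()
    return dict(totals)


orders = [
    {"item_code": "cabbage-medium", "warehouse": "Head Office - ES", "qty": 5},
    {"item_code": "cabbage-medium", "warehouse": "Head Office - ES", "qty": 3},
]
on_sales_order_submit("cabbage-medium", "Head Office - ES")
print(recompute_reserved_qty(orders))
# → {('cabbage-medium', 'Head Office - ES'): 8.0}
```

The design choice here is that order submission only appends to a set (no tabBin lock), and the single worker is the only writer of reserved_qty, so writers never contend with each other.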
What does the community think about this segregation? It is our first step towards optimising the ordering flow. More generally, we think that if we can separate transactions from inventory updates and move from an always-consistent to an eventually consistent approach (with short staleness windows), we can speed up and scale the system a lot more.
Thanks for the detailed analysis. In my opinion, operations on tabBin are not taking much time; the use case you explained may only arise during the peak hours of your business. And there is not much data in tabBin (one row per item per warehouse), so the change you suggest may not have a significant impact on performance.
Apart from that suggestion, I always love reading your posts about performance analysis.
The problem is not the amount of data in tabBin. It is the row-level lock that is taken whenever any of the following happens:
A Sales Order is created, cancelled or amended
A Sales Invoice is created, cancelled or amended
A Purchase Order is created, cancelled or amended
A Purchase Receipt is created, cancelled or amended
A Purchase Invoice is created, cancelled or amended
A Stock Entry is created, cancelled or amended
Production order…
Stock reconciliation …
All of the above transactions update the same row in the tabBin table (updating reserved_qty, actual_qty, waiting_qty, etc.). The chance of a Sales Order being placed while any of the above is happening is quite high on a multi-user platform.
You can simulate the problem by running two bench consoles and trying to update the same tabBin record.
Strictly, we should create a Sales Order in one console and a Purchase Order in the second; for simplicity I've depicted direct updates to tabBin. The row-level lock on tabBin is held until the full Sales Order or Purchase Order is saved/committed, which takes a significant amount of time.
E.g. open two bench consoles and run code similar to the below. The transaction in only one console will go through. The other will hang and time out (unless, of course, the first transaction is committed or rolled back in time).
Console 1 output
In [1]: item_code='cabbage-medium'
In [2]: warehouse='Head Office - ES'
In [3]: bin = frappe.db.get_value("Bin", {"item_code": item_code, "warehouse": warehouse})
In [6]: bin_obj = frappe.get_doc('Bin', bin)
In [9]: bin_obj.save()
Out[9]: <erpnext.stock.doctype.bin.bin.Bin at 0x112e03d50>
Console 2 output
In [1]: item_code='cabbage-medium'
In [2]: warehouse='Head Office - ES'
In [3]: bin = frappe.db.get_value("Bin", {"item_code": item_code, "warehouse": warehouse})
In [6]: bin_obj = frappe.get_doc('Bin', bin)
In [9]: bin_obj.save()
InternalError: (1205, u'Lock wait timeout exceeded; try restarting transaction')
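The same symptom can be reproduced in miniature without a bench setup, using SQLite from the standard library. Note the caveat: SQLite takes a coarser, database-level write lock rather than InnoDB's row-level lock, but the effect is the same: the second writer times out while the first transaction is still open.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bin.db")
# Short lock-wait timeout so the demo fails fast.
c1 = sqlite3.connect(path, timeout=0.2)
c2 = sqlite3.connect(path, timeout=0.2)
c1.execute("CREATE TABLE tabBin (item_code TEXT, warehouse TEXT, reserved_qty REAL)")
c1.execute("INSERT INTO tabBin VALUES ('cabbage-medium', 'Head Office - ES', 0)")
c1.commit()

# "Console 1": open a write transaction and update the row,
# but do not commit yet.
c1.execute("BEGIN IMMEDIATE")
c1.execute("UPDATE tabBin SET reserved_qty = 5")

# "Console 2": trying to start a write transaction now blocks
# until the timeout expires, then raises.
try:
    c2.execute("BEGIN IMMEDIATE")
    c2.execute("UPDATE tabBin SET reserved_qty = 3")
except sqlite3.OperationalError as e:
    print(e)  # → database is locked

c1.rollback()
```

This mirrors the bench-console experiment above: the error surfaces on the second connection only because the first one holds its transaction open.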
As you mentioned, this situation would mostly occur during peak hours. But it will always be a risk on a multi-user platform with a high transaction frequency.
Interesting scenario. I am not technical, but I'm guessing my scenario is similar to yours, and I'm curious where you ended up with your solution. We have several restaurant branches using the same item (and bundle) list and face a continuous issue of unsubmitted invoices due to timeouts. I think our issue is further magnified by the fact that we use FIFO with very old, running negative stock levels. Don't ask why.