I am not sure exactly how @max_morais_dmm did it, but I worked through it with a friend and it went something like this. (I assume you are asking about how to set up indexes for overgrown database tables.)
First and foremost, take a complete backup of your database so that if something goes horribly wrong, you can get back to where you started. Now we can begin.
First find the directory that contains the slow query log. So, start mysql:
mysql -u root -p[your-password]
SELECT @@datadir;
This will reveal the directory where you will find the slow query log file. In my case it was:
/var/lib/mysql
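If you are not sure whether slow query logging is even turned on, you can also ask the server directly while you are still at the mysql prompt. These are standard MySQL/MariaDB variables, so the names should match on any ERPNext install, though your values will differ:
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'slow_query_log_file';
SHOW VARIABLES LIKE 'long_query_time';
If slow_query_log comes back OFF, you can turn it on for the current run with SET GLOBAL slow_query_log = 1;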
When I looked in that directory, near the bottom of the files list I found:
mysql-slow.log
So, now that we know where to look and what the file is named, stop mysql, delete the file, and then restart mysql to re-initialize it:
sudo service mysql stop
sudo rm /var/lib/mysql/mysql-slow.log
ls -lha /var/lib/mysql (to make sure the file was deleted and is no longer there)
sudo service mysql restart
ls -lha /var/lib/mysql (to make sure the file was recreated on restart)
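As a side note, you should also be able to rotate the log without stopping the service. I have not tested this path myself, but FLUSH SLOW LOGS is a stock MySQL/MariaDB statement that closes and reopens the slow log, so moving the old file aside and then flushing should give you a fresh file:
sudo mv /var/lib/mysql/mysql-slow.log /var/lib/mysql/mysql-slow.log.old
mysql -u root -p[your-password] -e "FLUSH SLOW LOGS;"
ls -lha /var/lib/mysql (a fresh mysql-slow.log should appear)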
Now we have a running system with a fresh and mostly empty slow query log file. So, go run the reports and do some regular ERP functions that you think run slow. After a few such exercises, dump the log file to a text file so it can be easily read:
sudo cat /var/lib/mysql/mysql-slow.log > ~/slowlog.txt
This will create a file called slowlog.txt in your user folder that you can now easily review in any editor, or even transfer down to your local PC and use a local editor to read it.
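If reading the raw log by hand gets tedious, the mysqldumpslow utility that ships with MySQL/MariaDB can summarize it for you. For example, something like this should print the ten slowest query patterns, sorted by query time:
sudo mysqldumpslow -s t -t 10 /var/lib/mysql/mysql-slow.log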
Now that we have the file, we need to evaluate it a bit, looking for the queries that are taking the most time. Here is an example from one of my own files:
# Time: 210330 9:11:35
# User@Host: 1bd3e0294da19198[1bd3e0294da19198] @ localhost [127.0.0.1]
# Thread_id: 2316 Schema: 1bd3e0294da19198 QC_hit: No
# Query_time: 10.630705 Lock_time: 0.000096 Rows_sent: 0 Rows_examined: 4756902
# Rows_affected: 0
# Full_scan: Yes Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: Yes Filesort_on_disk: No Merge_passes: 0 Priority_queue: Yes
SET timestamp=1617109895;
select name, owner, creation, data from `tabVersion` where `tabVersion`.docname = "STE-54117" and `tabVersion`.ref_doctype = "Stock Entry"
order by creation desc limit 0, 10;
In this example, the “Query_time” is about 10.6 seconds and the operation was submitting a stock entry. After the “SET timestamp” line we see the actual query, and that is where the rest of the clues come from.
We see that it was an operation on the “tabVersion” table that took so long. If you were to look at the hundreds of other entries in my log file, you would see that about 85% of them are slow operations on the “tabVersion” table, and every one of them was trying to find records by ref_doctype and then docname.
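You can confirm what the log is telling you by asking the server how it plans to run that same query. This is just the standard EXPLAIN statement wrapped around the query from my log (your docname and ref_doctype values will obviously differ):
EXPLAIN select name, owner, creation, data from `tabVersion` where `tabVersion`.docname = "STE-54117" and `tabVersion`.ref_doctype = "Stock Entry" order by creation desc limit 0, 10;
Before the index exists, the type column typically shows ALL and the rows estimate is close to the Rows_examined figure above, which matches the Full_scan: Yes flag in the log entry.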
So the logical thing to do was to create an index on that table covering those two columns. To do that, I went into mysql again and created the index:
mysql -u root -p[your_password]
CREATE INDEX test_index1 ON `tabVersion` (`ref_doctype`, `docname`);
exit
This creates an index named test_index1 on the table tabVersion, sorted first by ref_doctype and then by docname within each ref_doctype, to speed up searches.
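You can also verify that the index was actually created and that the optimizer picks it up, using two standard statements (nothing ERPNext-specific here):
SHOW INDEX FROM `tabVersion`;
EXPLAIN select name, owner, creation, data from `tabVersion` where `tabVersion`.docname = "STE-54117" and `tabVersion`.ref_doctype = "Stock Entry" order by creation desc limit 0, 10;
If the index is being used, the key column in the EXPLAIN output should show test_index1 and the rows estimate should drop from millions to a handful.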
Now we restart mysql again to make sure everything is ready for testing.
sudo service mysql restart
Now log back into your ERPNext system and run the same tests for things that were running slow before to see if you have a speed improvement.
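If you want to double-check from the database side rather than just by feel, dump the slow log again after your test run and count how often tabVersion still shows up. This is plain shell; the slowlog_after.txt name is just my choice of filename:
sudo cat /var/lib/mysql/mysql-slow.log > ~/slowlog_after.txt
grep -c "tabVersion" ~/slowlog_after.txt
If the index is doing its job, that count should be far lower than it was before.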
WARNING - messing with your database is dangerous if you are not sure of yourself. Make backups before attempting anything and always check your syntax as you go.
Now, I had very little experience with mysql database work, but the instructions I received from a friend were very helpful and I just jumped in and tried it. It worked very well for me.
And as ALWAYS… Your mileage may vary!
BKM
EDIT - If you want a larger pool of data to work with and possibly get better results, instead of only running a few tests of the slow functions, try letting the fresh slow query log file capture a full day's worth of information and then dump it to a text file for evaluation. This will give you much more data on how many times a particular table may be interfering with your business during the day.
Sometimes more information is better. But if your system is like mine, the slow query log that already existed was massive (36 MB of text), which is why I suggest deleting it and focusing your testing on a few functions or a day's worth of work.
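One more knob worth knowing about for a day-long capture: the long_query_time variable controls how slow a query has to be before it lands in the log at all (the stock default is 10 seconds). Lowering it temporarily will catch more of the borderline queries; pick a threshold that suits your system, for example:
SET GLOBAL long_query_time = 2;
SET GLOBAL only lasts until the next restart, so add the setting to your my.cnf if you want it to stick, and note that connections that are already open keep the old value.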
BKM