ERPNext + LLM (ChatGPT, Ollama) | Part 2

Here is Part 1.

I have crawled these three sites:

  1. docs.erpnext.com
  2. frappeframework.com
  3. frappehr.com
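The crawling step can be sketched with a minimal, stdlib-only page-to-text extractor. This is a hypothetical helper, not my actual crawler; a real crawl of the sites above would also handle fetching, rate limiting, and link following:

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text from an HTML page, skipping non-content tags."""
    SKIP = {"script", "style", "nav", "footer"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a skipped tag

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.parts.append(data.strip())


def page_to_text(html: str) -> str:
    """Return the visible text of an HTML document as one string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)


sample = ("<html><head><style>h1{color:red}</style></head>"
          "<body><h1>Sales Invoice</h1><p>Create via Accounts.</p></body></html>")
print(page_to_text(sample))  # Sales Invoice Create via Accounts.
```

In practice, libraries like BeautifulSoup or trafilatura give cleaner extraction, but the idea is the same: keep body text, drop markup and navigation.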

I converted the crawled pages into datasets and stored them on Hugging Face. Here are the dataset links:

  1. antony-pk/erpnext-docs-ds
  2. antony-pk/erpnext-frappeframework-ds
  3. antony-pk/erpnext-frappe-hr-ds
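Turning crawled pages into a Hugging Face dataset usually means writing instruction/response records as JSONL and uploading that. A minimal sketch, where the field names (`instruction`, `input`, `output`) are an assumed schema, not necessarily the exact columns of the datasets above:

```python
import json


def to_record(question: str, answer: str) -> dict:
    # Hypothetical schema: one instruction-tuning record per crawled section.
    return {"instruction": question, "input": "", "output": answer}


def write_jsonl(records, path):
    """Write one JSON object per line, the format load_dataset('json', ...) expects."""
    with open(path, "w", encoding="utf-8") as f:
        for r in records:
            f.write(json.dumps(r, ensure_ascii=False) + "\n")


records = [
    to_record("How do I create a Sales Invoice?",
              "Go to Accounts > Sales Invoice > New."),
]
write_jsonl(records, "erpnext_docs.jsonl")
```

A file in this shape can then be loaded with `datasets.load_dataset("json", data_files="erpnext_docs.jsonl")` and pushed to the Hub.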

Using these custom datasets, I fine-tuned the microsoft/Phi-3-mini-4k-instruct pre-trained model. Here is the custom model: antony-pk/Phi-3-mini-4k-instruct-erpnext.
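One thing worth checking when a fine-tune of an instruct model gives odd output is the prompt format: Phi-3's chat template wraps turns in `<|user|>`/`<|end|>`/`<|assistant|>` markers, and prompts that skip them often produce strange responses. A minimal stdlib sketch of that formatting (in real code you would let the tokenizer's `apply_chat_template` build this for you):

```python
def phi3_prompt(user_message: str, system: str = "") -> str:
    """Build a Phi-3-style chat prompt.

    Template per the Phi-3 model card: an optional <|system|> turn,
    the <|user|> turn, then an open <|assistant|> turn for generation.
    """
    parts = []
    if system:
        parts.append(f"<|system|>\n{system}<|end|>\n")
    parts.append(f"<|user|>\n{user_message}<|end|>\n<|assistant|>\n")
    return "".join(parts)


print(phi3_prompt("How do I create a Sales Invoice in ERPNext?"))
```

If the training records were not formatted with the same template the base model expects, the mismatch alone can explain unusual generations.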

The fine-tuned model is currently giving very unusual responses; I am working on improving them.

If anyone is interested, you are welcome to join me in this research.

Please join the WhatsApp group.

Contact

GitHub: Antony-M1 · GitHub
LinkedIn: Antony


@Antony_Praveenkumar congratulations on starting this!

I see you are using Phi 3, nice choice!
