Announcing Raven v2: now with AI Agents, document alerts and more!

If there’s one thing we’ve worked on the most on Raven for the past few months, it’s integrations with other Frappe apps.

With Raven, you can now build your own AI Agents to automate routine tasks in ERPNext - all without writing any code. Whether it’s parsing invoices or expense claims, you can choose how to use AI Agents in your setup, since they can integrate with any Frappe app.

You can also send notifications for documents within ERPNext via bots, such as notifying the finance team about an invoice or sending salary slips to employees.

We’re also introducing a new way to manage channels and user permissions with “Workspaces” - making it easier to manage internal teams, customers, or multi-company setups.

Read more about Raven v2 here: Raven v2 by The Commit Company | Frappe Blog

Raven’s website: ravenchat.ai
GitHub


This is so cool! Can’t wait to implement this internally :slight_smile:


Congratulations on your work adding features, fixing bugs, and generally contributing to the ERPNext (and Frappe) ecosystem.

I still have some questions, and please bear with me as I don’t know much about the current practical implementations of AI (theoretical background knowledge exists, though). So:

How does Raven’s AI feature work in practice:

  • Does it need specific hardware? Specifications?
  • Will it scan all the data available in the onsite data silo (aka “database”, files) to accomplish its functions? If so, can such a scan be restricted? Disabled? Will the AI functions then stop working or will it still be trained with the use of ERPNext (or the Framework)?
  • Does the AI feature need a network connection to an external entity (organization)? If so, does Raven now need that network connection to function?
  • If so, will it transmit onsite data to that entity?
  • How can this data leakage be prohibited and verified? (Is there a point where such perimeter checks on the data can be accomplished reliably and securely, and how? Is this duly documented?)
  • Will it work purely onsite?
  • Can any AI feature be disabled if any doubts about the integrity of the onsite data due to its functioning or lack of information about its functioning persist? Can this be done easily (and, if needed, urgently) if new information about it compromising the site’s data appears?

The current implementation for these AI Agents is via OpenAI’s Assistants API.

So to answer your questions:

Does it need specific hardware? Specifications?

No, since the models run on OpenAI’s servers.

Will it scan all the data available in the onsite data silo (aka “database”, files) to accomplish its functions? If so, can such a scan be restricted? Disabled? Will the AI functions then stop working or will it still be trained with the use of ERPNext (or the Framework)?

The AI can only access data for which you have created “functions” - it cannot call any function apart from the ones you specify. On top of that, when it makes a call, all user/role permissions are enforced, and the Agent can only access what the user can access. The Agent is not “trained” on any of your data - it only requests data to provide results.
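To make the restriction concrete, here is a minimal sketch (illustrative only, not Raven’s actual code) of the idea that an agent can only invoke functions explicitly registered for it, and any other request from the model is refused:

```python
# Illustrative sketch: the agent holds an explicit registry of allowed
# functions; anything the model asks for outside that registry is refused.

class Agent:
    def __init__(self):
        # Only functions added here are exposed to the model.
        self.functions = {}

    def register(self, name, func):
        self.functions[name] = func

    def call(self, name, **kwargs):
        # A function the model requests that was never registered is rejected.
        if name not in self.functions:
            raise PermissionError(f"Function '{name}' is not registered for this agent")
        return self.functions[name](**kwargs)


agent = Agent()
agent.register("get_invoice_total", lambda invoice_id: {"invoice": invoice_id, "total": 100})

# Registered call works; an unregistered one raises PermissionError.
result = agent.call("get_invoice_total", invoice_id="INV-0001")
```

The class and function names here are hypothetical; the point is only that the set of callable functions is a closed list you configure, not the whole codebase.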

Does the AI feature need a network connection to an external entity (organization)? If so, does Raven now need that network connection to function? If so, will it transmit onsite data to that entity?

Yes, since it makes a call to OpenAI.

How can this data leakage be prohibited and verified? (Is there a point where such perimeter checks on the data can be accomplished reliably and securely, and how? Is this duly documented?)

The AI Agent can access standard functions like getting a document or creating a document - based on what you specify when configuring it. All these functions call standard functions in Frappe that have user permission checks - refer to raven/raven/ai/functions.py (develop branch) in the Raven repo on GitHub.
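As a rough sketch of the permission-check idea (names and data are made up, not Raven’s or Frappe’s actual API): the agent acts on behalf of a user, and every document fetch is validated against that user’s permissions before anything is returned to the model.

```python
# Hypothetical sketch of a permission-checked "get document" helper.
# In Frappe this would go through the framework's real permission system;
# here the docs and permissions are stubbed with plain dicts.

DOCS = {("Sales Invoice", "INV-0001"): {"customer": "Acme", "total": 250}}
PERMS = {"finance@example.com": {"Sales Invoice"}}  # doctypes each user may read


def get_document(doctype, name, user):
    # The agent runs as a specific user; access is checked against that user,
    # so the model can never see more than the requesting user could.
    if doctype not in PERMS.get(user, set()):
        raise PermissionError(f"{user} has no read permission on {doctype}")
    return DOCS[(doctype, name)]
```

The takeaway is that the perimeter check happens inside the standard functions themselves, so it applies uniformly no matter which agent calls them.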

For custom function calls like APIs, Raven does not let you call arbitrary internal functions: you can only create functions that point to whitelisted APIs in apps, so user permissions should still be handled by those APIs.

Can any AI feature be disabled if any doubts about the integrity of the onsite data due to its functioning or lack of information about its functioning persist? Can this be done easily (and, if needed, urgently) if new information about it compromising the site’s data appears?

Yes, the integration can be disabled with just a checkbox in settings. Moreover, OpenAI is not called without explicit user request via a direct message to an agent on Raven. Nothing runs in the background automatically.
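The “single checkbox” kill switch can be sketched like this (the setting name is hypothetical): because every agent invocation checks the flag first, flipping it off stops all AI calls immediately, and since nothing runs in the background, there is no queued work to worry about either.

```python
# Sketch of a settings-gated agent invocation. The setting name is
# illustrative; the point is a single flag guards every AI call.

settings = {"enable_ai": True}


def invoke_agent(message):
    # Checked on every invocation, so disabling takes effect at once.
    if not settings["enable_ai"]:
        raise RuntimeError("AI integration is disabled in settings")
    # Only an explicit user message reaches this point - no background jobs.
    return f"agent reply to: {message}"
```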

Hope that clears things up.


Yes, it does, and very well at that - thank you very much for this clear, transparent, and educational answer about the current implementation.

Note for later readers: We’re at v2 (as per the title), and on GitHub, release 2.0.4 of Raven was published just yesterday.

Great! Your helpful answer will allow implementors to evaluate much more quickly what these new functionalities might offer for specific use cases.
