Better parallel processing for AI calls in Frappe background jobs?

Hi everyone,

My Frappe app receives frequent triggers from another system, and each trigger is handled by a background job that calls an AI API.

I’m using frappe.enqueue(), but jobs in a queue run sequentially per worker. The only way I’ve found to increase throughput is to add more workers, which feels like scaling processes rather than handling these network-bound calls truly asynchronously.
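To illustrate what I mean: since the AI call is network-bound, a single process could in principle run many of them concurrently. A minimal sketch of the difference (`fake_ai_call` is just a stand-in for the real API request, and the timings are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_ai_call(prompt):
    # Stand-in for the real AI API request (network-bound, ~0.2 s here).
    time.sleep(0.2)
    return f"response for {prompt}"

prompts = [f"job {i}" for i in range(10)]

# Sequential, like one RQ worker draining its queue: ~10 x 0.2 s total.
start = time.time()
sequential = [fake_ai_call(p) for p in prompts]
sequential_secs = time.time() - start

# Concurrent inside one process, which is what I'd like: ~0.2 s total.
start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    concurrent = list(pool.map(fake_ai_call, prompts))
concurrent_secs = time.time() - start

print(f"sequential: {sequential_secs:.1f}s, concurrent: {concurrent_secs:.1f}s")
```

So the question is whether there is a Frappe-native way to get this kind of concurrency, rather than one slow call per worker.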

Is there a better pattern in Frappe for this use case?
Or is the recommended approach to offload this to a separate async service (for example, a small FastAPI app running alongside Frappe)?

Thanks!