OpenTelemetry logs (Grafana) for frappe.enqueue

Following up on this topic

I was able to set up OpenTelemetry with Grafana for the Frappe Framework. I can see logs and also metrics. The one thing I cannot see is the logs from jobs run through frappe.enqueue.

How do you implement this?

I implemented this based on @revant_one's comment, with minor changes.

I added a gunicorn.conf.py:

import logging
from uuid import uuid4

from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import SERVICE_INSTANCE_ID, Resource


def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

    resource = Resource.create(
        attributes={
            # each worker needs a unique service.instance.id to distinguish the created metrics in prometheus
            SERVICE_INSTANCE_ID: str(uuid4()),
            "worker": worker.pid,
        }
    )

    logger_provider = LoggerProvider(resource=resource)
    logger_provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter()))

    logging_handler = LoggingHandler(
        level=logging.INFO, logger_provider=logger_provider
    )
    logging.getLogger().setLevel(logging.INFO)  # Set root logger to INFO
    logging.getLogger().addHandler(logging_handler)
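
As a side note, OTLPLogExporter() with no arguments picks up the standard OTLP environment variables (OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_LOGS_ENDPOINT). If you prefer to be explicit, you can pass the endpoint directly. A minimal sketch, where "otel-collector" is just an example hostname for your collector:

# point the HTTP log exporter at an explicit collector URL instead of
# relying on OTEL_EXPORTER_OTLP_ENDPOINT; the hostname is an example
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

exporter = OTLPLogExporter(endpoint="http://otel-collector:4318/v1/logs")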


and configured the Dockerfile at the end:

# Run open telemetry
RUN /home/frappe/frappe-bench/env/bin/opentelemetry-bootstrap -a install

CMD [ \
  "/home/frappe/frappe-bench/env/bin/opentelemetry-instrument", \
  "/home/frappe/frappe-bench/env/bin/gunicorn", \
  "--config=/home/frappe/frappe-bench/apps/MYAPP/MYAPP/gunicorn.conf.py", \
  "--chdir=/home/frappe/frappe-bench/sites", \
  "--bind=0.0.0.0:8000", \
  "--threads=4", \
  "--workers=2", \
  "--worker-class=gthread", \
  "--worker-tmp-dir=/dev/shm", \
  "--timeout=120", \
  "--preload", \
  "MYAPP.app:application" \
]

and the MYAPP.app:application code:

from frappe.app import application
from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware

application = OpenTelemetryMiddleware(application)

Follow this issue.

I could not find anything useful in that GitHub issue. Have you been able to set it up?

I could use the before_job hook, but it does not look like the best approach.

No, nothing ready-made. See support for any APM · Issue #1230 · rq/rq · GitHub; a few developers added it for their own libraries, so refer to their code.

I was able to get the before_job approach I mentioned above working.

hooks.py

before_job = [
    "app.hooks.init_open_telemetry_logs",
]

and the Python method:

def init_open_telemetry_logs():
    """
    The gunicorn.conf.py setup above only runs inside gunicorn web workers.
    Background jobs from frappe.enqueue run in separate RQ worker processes,
    which never load that config, so we initialize the OpenTelemetry logging
    handler from the before_job hook instead.
    """
    import logging
    import os
    from uuid import uuid4

    from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

    # support for logs is currently experimental
    from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
    from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
    from opentelemetry.sdk.resources import SERVICE_INSTANCE_ID, Resource

    resource = Resource.create(
        attributes={
            # each worker needs a unique service.instance.id to distinguish the created metrics in prometheus
            SERVICE_INSTANCE_ID: str(uuid4()),
            "worker": os.getpid(),
        }
    )

    logger_provider = LoggerProvider(resource=resource)
    logger_provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter()))

    logging_handler = LoggingHandler(
        level=logging.INFO, logger_provider=logger_provider
    )
    logging.getLogger().setLevel(logging.INFO)  # Set root logger to INFO
    logging.getLogger().addHandler(logging_handler)

I still have to make sure that I do not re-add the same handler on every job.
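
One way to do that is to check the root logger for an existing OpenTelemetry handler before setting anything up. A minimal sketch of such a guard (this is just one possible approach, not the final implementation):

import logging

from opentelemetry.sdk._logs import LoggingHandler


def init_open_telemetry_logs():
    root_logger = logging.getLogger()

    # before_job runs for every job in the same worker process, so skip
    # re-initialization if an OpenTelemetry handler is already attached
    if any(isinstance(h, LoggingHandler) for h in root_logger.handlers):
        return

    ...  # the setup shown above (Resource, LoggerProvider, exporter, handler)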
