Allow logging configuration from a yaml configuration file
This allows proper logging configuration for the services currently running in the dynamic infrastructure; their logs are currently written to the wrong Elasticsearch indices.
The existing behavior is kept as is so we can focus on changing the current logging configuration deployment incrementally.
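As a rough sketch of the intended mechanism (not necessarily the actual swh loading code), a YAML file like the examples below can be parsed and handed to the standard library's `logging.config.dictConfig`; the file path used here is hypothetical and the placeholder substitution step is assumed to happen earlier in the deployment:

```python
# Minimal sketch, assuming the deployment tooling has already substituted
# template placeholders such as <<loglevel>> in the YAML file.
import logging.config

import yaml  # PyYAML


def configure_logging(path: str) -> None:
    """Load a dictConfig-style logging configuration from a YAML file."""
    with open(path) as f:
        config = yaml.safe_load(f)
    logging.config.dictConfig(config)


# hypothetical path, for illustration only
configure_logging("/etc/softwareheritage/logging.yml")
```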
A JSON logging configuration file could then look like:
```yaml
---
version: 1
handlers:
  console:
    class: logging.StreamHandler
    formatter: json
    stream: ext://sys.stdout
formatters:
  json:
    class: pythonjsonlogger.jsonlogger.JsonFormatter
    # python-json-logger parses the format argument to get the variables it
    # actually expands into the json
    format: "%(asctime)s:%(threadName)s:%(pathname)s:%(lineno)s:%(funcName)s:%(task_name)s:%(task_id)s:%(name)s:%(levelname)s:%(message)s"
loggers:
  celery:
    level: INFO
  amqp:
    level: WARNING
  urllib3:
    level: WARNING
  azure.core.pipeline.policies.http_logging_policy:
    level: WARNING
  swh: {}
  celery.task: {}
root:
  level: <<loglevel>>
  handlers:
    - console
```
A systemd logging configuration matching the current celery setup would look like:
```yaml
---
version: 1
handlers:
  console:
    class: logging.StreamHandler
    formatter: task
    stream: ext://sys.stdout
  systemd:
    class: swh.core.logger.JournalHandler
    formatter: task
formatters:
  task:
    (): celery.app.log.TaskFormatter
    fmt: "[%(asctime)s: %(levelname)s/%(processName)s] %(task_name)s[%(task_id)s]: %(message)s"
    use_color: false
loggers:
  celery:
    level: INFO
  amqp:
    level: WARNING
  urllib3:
    level: WARNING
  azure.core.pipeline.policies.http_logging_policy:
    level: WARNING
  swh: {}
  celery.task: {}
root:
  level: DEBUG
  handlers:
    - console
    - systemd
```
Note: the `task_name` and `task_id` qualifiers are only available within celery, when using the `TaskFormatter`.
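For illustration, a hedged sketch of where those fields come from: celery's `TaskFormatter` fills `task_name`/`task_id` from the task currently executing, so records logged outside a task context are left without them. The module name, broker URL and task below are made up for the example:

```python
from celery import Celery
from celery.utils.log import get_task_logger

# hypothetical app and broker, for the example only
app = Celery("example", broker="memory://")
logger = get_task_logger(__name__)


@app.task
def ping() -> None:
    # With a TaskFormatter configured as above, this record renders roughly as:
    # [<timestamp>: INFO/<process>] <task name>[<task id>]: pong
    logger.info("pong")
```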
Ref. swh/infra/sysadm-environment#4524 (closed)
Depends on swh-core!335 (merged)