Details
Type: New Feature
Status: Closed
Priority: Major
Resolution: Fixed
Description
We currently use Zap for logging in YuniKorn. While Zap is good for performance, our current implementation is not very flexible: logging is all-or-nothing at or above a single configured log level (DEBUG, INFO, WARN, ERROR, etc.).
The current ConfigMap supports setting the log level via a single configuration entry, which controls the minimum (i.e. finest) enabled log level:

// enable debug logging
log.level: "-1"
Not only does this limit us to a single global logging configuration, but the levels must be specified as Zap-specific numeric values. It would be far more useful to be able to do things like this:
log.level: INFO                         // set default log level to INFO ("0" works too)
log.kubernetes.level: DEBUG             // enable Kubernetes debugging
log.k8shim.cache.level: DEBUG           // enable debugging of K8Shim cache code
log.core.application.usage.level: ERROR // disable app usage tracking
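For illustration, these settings would live in the scheduler's ConfigMap; a minimal sketch, assuming the standard yunikorn-configs ConfigMap (the keys and values simply mirror the proposal above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: yunikorn-configs
  namespace: yunikorn
data:
  log.level: "INFO"                         # default for all loggers
  log.kubernetes.level: "DEBUG"             # enable Kubernetes debugging
  log.k8shim.cache.level: "DEBUG"           # enable debugging of K8Shim cache code
  log.core.application.usage.level: "ERROR" # disable app usage tracking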
We propose to keep the existing syntax for backwards compatibility, but also allow fine-grained logging configuration in a hierarchical manner (similar to Log4J, etc.).
Existing configurations will be preserved, and new configurations may use either numeric (-1 through 5) or textual (DEBUG, INFO, WARN, ERROR, DPANIC, PANIC, FATAL) values. For example, as the configuration is hierarchical, setting an entry for log.core.level will configure all loggers that start with core. (including core.scheduler, etc.) unless a more specific configuration is present.
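To make the hierarchical lookup concrete, here is a minimal sketch of how the effective level for a logger name could be resolved; the function and parameter names are illustrative, not the final implementation:

package log

import (
    "strings"

    "go.uber.org/zap/zapcore"
)

// levelFor returns the most specific configured level for a dotted logger
// name. For "core.scheduler" it checks the entry for "core.scheduler",
// then "core", and finally falls back to the root level.
func levelFor(name string, conf map[string]zapcore.Level, root zapcore.Level) zapcore.Level {
    for name != "" {
        if level, ok := conf[name]; ok {
            return level
        }
        if i := strings.LastIndex(name, "."); i >= 0 {
            name = name[:i] // strip the last component and retry
        } else {
            name = ""
        }
    }
    return root // no entry matched: use the root log.level
}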
On the API side, current loggers use code such as:
log.Logger().Info("message", zap.String("key", "value"))
We will add new functions to the log package (in both shim and core) as follows:
// RootLogger retrieves the root logger
func RootLogger() *zap.Logger

// Logger retrieves the global logger (will eventually be removed)
func Logger() *zap.Logger

// Log retrieves a standard logger for the given handle
func Log(handle LoggerHandle) *zap.Logger
Additionally, several logger handles will be defined. These handles are structs rather than strings to ensure type safety:
// shim implementation
var K8Shim = LoggerHandle{name: "k8shim"}          // default shim logger
var Admission = LoggerHandle{name: "admission"}    // default admission controller logger
var Kubernetes = LoggerHandle{name: "kubernetes"}  // default kubernetes logger
var Context = LoggerHandle{name: "k8shim.context"} // k8shim context logger
var Cache = LoggerHandle{name: "k8shim.cache"}     // k8shim cache logger

// core implementation
var Core = LoggerHandle{name: "core"}                               // default core logger
var ApplicationUsage = LoggerHandle{name: "core.application.usage"} // usage logger
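The handle type itself could be as simple as the following sketch, consistent with the definitions above (the String method is an added convenience, not part of the proposal):

package log

// LoggerHandle identifies a named logger. Because it is a struct with an
// unexported name field, callers can only use handles defined by this
// package, which is what provides type safety.
type LoggerHandle struct {
    name string // hierarchical logger name, e.g. "k8shim.cache"
}

// String returns the hierarchical logger name.
func (h LoggerHandle) String() string {
    return h.name
}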
Additional logger handles will be added for individual subsystems as conversion to the new API occurs (this will happen in follow-up JIRAs).
log.RootLogger() will be reserved for internal framework code (such as passing a logger reference from the shim to the core). Normal application code is expected to call log.Log(handle) instead:
log.Log(log.Cache).Info("msg", zap.String("key", "value"))
The existing log.Logger() method will be preserved for backwards compatibility, but will become an alias for log.Log(log.K8Shim), log.Log(log.Admission), or log.Log(log.Core) depending on the calling context. This will be accomplished by the various entrypoints (shim, admission controller, and core) each calling log.SetDefaultLogger() on startup.
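A sketch of how a shim entrypoint might wire this up (the import path is illustrative):

package main

import (
    "github.com/apache/yunikorn-k8shim/pkg/log"
    "go.uber.org/zap"
)

func main() {
    // Each entrypoint selects its own default logger; the admission
    // controller would pass log.Admission and the core log.Core.
    log.SetDefaultLogger(log.K8Shim)

    // Legacy call sites are unchanged and now route to the shim logger.
    log.Logger().Info("starting", zap.String("component", "shim"))
}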
The implementation of this feature will cache loggers (with their associated levels) by name, constructing them on demand using atomic data structures when a logger is not found. This keeps the overhead of the logging implementation low, and since the set of loggers is statically configured, the number of logger instances is bounded.
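One possible shape for that cache is a copy-on-write map published through an atomic.Value; a sketch under that assumption (newScopedLogger stands in for whatever constructor applies the resolved level):

package log

import (
    "sync"
    "sync/atomic"

    "go.uber.org/zap"
)

var (
    loggers atomic.Value // always holds a map[string]*zap.Logger
    mu      sync.Mutex   // serializes writers; readers are lock-free
)

// Log returns the cached logger for the handle, creating it on first use.
func Log(handle LoggerHandle) *zap.Logger {
    if m, ok := loggers.Load().(map[string]*zap.Logger); ok {
        if logger, ok := m[handle.name]; ok {
            return logger // fast path: lock-free read
        }
    }
    return createLogger(handle)
}

func createLogger(handle LoggerHandle) *zap.Logger {
    mu.Lock()
    defer mu.Unlock()
    old, _ := loggers.Load().(map[string]*zap.Logger)
    if logger, ok := old[handle.name]; ok {
        return logger // another goroutine created it first
    }
    logger := newScopedLogger(handle.name) // assumed constructor
    next := make(map[string]*zap.Logger, len(old)+1)
    for k, v := range old {
        next[k] = v
    }
    next[handle.name] = logger
    loggers.Store(next) // publish a complete new map atomically
    return logger
}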
Preliminary measurements of the overhead of this framework are as follows:
Implementation | Disabled Log Entry (ns/op, min/avg/max) | Enabled Log Entry (ns/op, min/avg/max)
---|---|---
Existing logging | 73 / 79 / 85 | 6270 / 6463 / 6590
Scoped logging (root=INFO) | 88 / 93 / 99 | 6521 / 6658 / 6793
Scoped logging (root=DEBUG) | 73 / 75 / 78 | 6546 / 6632 / 6781
Tests were run 5 times, with min/avg/max times given. Each test generates 100,000 log entries at each configured log level and writes them to a log file. Test hardware was a 2019 MacBook Pro with a 2.6 GHz 6-core i7 CPU and 32 GB RAM. Because pre-filtering is done on each individual logger, the root logger has to be set to the minimum log level to allow dynamic filtering, which could influence logging cost.
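These numbers were presumably collected with Go benchmarks along the following lines (a sketch of the methodology, not the actual harness used):

package log

import (
    "testing"

    "go.uber.org/zap"
)

// BenchmarkDisabledEntry measures a call that is filtered out, i.e. the
// "Disabled Log Entry" column; the cache logger is assumed to be at INFO.
func BenchmarkDisabledEntry(b *testing.B) {
    logger := Log(Cache)
    for i := 0; i < b.N; i++ {
        logger.Debug("suppressed", zap.Int("i", i))
    }
}

// BenchmarkEnabledEntry measures an emitted entry, i.e. the
// "Enabled Log Entry" column (output directed to a log file).
func BenchmarkEnabledEntry(b *testing.B) {
    logger := Log(Cache)
    for i := 0; i < b.N; i++ {
        logger.Info("emitted", zap.Int("i", i))
    }
}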
As the table above shows, the maximum average additional cost for a disabled log entry was 14 ns/op, while the maximum average additional cost for an enabled log entry was 195 ns/op. Even in the worst case, these figures allow for approximately 10 million messages/sec when disabled and nearly 150,000 messages/sec when enabled. The performance cost is therefore negligible, especially as this framework allows more targeted filtering, enabling only those messages that are actually desired.