Compare commits


2 Commits

Author SHA1 Message Date
Olivier Poitrey fde564e937 Optimize JSON encoding even further
Last optimization was for JSON string with no character to encode. This
version focuses on strings with some chars to encode, trying to apply
the same trick for substrings that do not need encoding.

benchmark                old ns/op     new ns/op    delta
.../NoEncoding-8         60.2          51.3         -14.78%
.../EncodingFirst-8      140           116          -17.14%
.../EncodingMiddle-8     112           86.4         -22.86%
.../EncodingLast-8       62.8          61.1         -2.71%
.../MultiBytesFirst-8    164           129          -21.34%
.../MultiBytesMiddle-8   133           96.9         -27.14%
.../MultiBytesLast-8     81.9          73.5         -10.26%
2017-06-25 01:30:02 -07:00
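The trick described above can be sketched in a few lines of Go (an illustration of the idea only, not the library's actual encoder): remember where the current run of characters that needs no escaping begins, and copy that whole run with a single append once an escape is finally required.

```go
package main

import "fmt"

// appendJSONString appends s to dst as a JSON string. Instead of writing
// byte by byte, it tracks the start of the current unescaped run and
// copies the whole run in one append when an escape is needed.
func appendJSONString(dst []byte, s string) []byte {
	dst = append(dst, '"')
	start := 0
	for i := 0; i < len(s); i++ {
		b := s[i]
		// Printable ASCII (except quote and backslash) and raw UTF-8
		// bytes are legal inside a JSON string and need no escaping.
		if b != '"' && b != '\\' && b >= 0x20 {
			continue
		}
		dst = append(dst, s[start:i]...) // flush the clean run in one copy
		switch b {
		case '"', '\\':
			dst = append(dst, '\\', b)
		default: // control characters
			dst = append(dst, fmt.Sprintf(`\u%04x`, b)...)
		}
		start = i + 1
	}
	dst = append(dst, s[start:]...) // trailing clean run
	return append(dst, '"')
}

func main() {
	fmt.Printf("%s\n", appendJSONString(nil, `a "quoted" value`))
}
```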
Olivier Poitrey 274f2e4c61 Add some json encoder benchmarks 2017-06-24 18:30:15 -07:00
89 changed files with 826 additions and 11920 deletions


@ -1,10 +0,0 @@
version: 2
updates:
  - package-ecosystem: github-actions
    directory: /
    schedule:
      interval: weekly
  - package-ecosystem: gomod
    directory: /
    schedule:
      interval: weekly


@ -1,27 +0,0 @@
on: [push, pull_request]
name: Test
jobs:
  test:
    strategy:
      matrix:
        go-version: [1.18.x, 1.19.x]
        os: [ubuntu-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - name: Install Go
        uses: actions/setup-go@v4
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout code
        uses: actions/checkout@v3
      - uses: actions/cache@v3
        with:
          path: ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-
      - name: Test
        run: go test -race -bench . -benchmem ./...
      - name: Test CBOR
        run: go test -tags binary_log ./...

.gitignore

@ -6,7 +6,6 @@
# Folders
_obj
_test
tmp
# Architecture specific extensions/prefixes
*.[568vq]

.travis.yml

@ -0,0 +1,10 @@
language: go
go:
  - 1.7
  - 1.8
  - tip
matrix:
  allow_failures:
    - go: tip
script:
  go test -v -race -cpu=1,2,4 ./...

CNAME

@ -1 +0,0 @@
zlog.io

README.md

@ -1,324 +1,67 @@
### zlog
opinionated defaults on zerolog
# Zero Allocation JSON Logger
[![godoc](http://img.shields.io/badge/godoc-reference-blue.svg?style=flat)](https://godoc.org/tuxpa.in/a/zlog) [![license](http://img.shields.io/badge/license-MIT-red.svg?style=flat)](https://raw.githubusercontent.com/rs/zlog/master/LICENSE) [![Build Status](https://travis-ci.org/rs/zlog.svg?branch=master)](https://travis-ci.org/rs/zlog) [![Coverage](http://gocover.io/_badge/tuxpa.in/a/zlog)](http://gocover.io/tuxpa.in/a/zlog) [![godoc](http://img.shields.io/badge/godoc-reference-blue.svg?style=flat)](https://godoc.org/github.com/rs/zerolog) [![license](http://img.shields.io/badge/license-MIT-red.svg?style=flat)](https://raw.githubusercontent.com/rs/zerolog/master/LICENSE) [![Build Status](https://travis-ci.org/rs/zerolog.svg?branch=master)](https://travis-ci.org/rs/zerolog) [![Coverage](http://gocover.io/_badge/github.com/rs/zerolog)](http://gocover.io/github.com/rs/zerolog)
The zlog package provides a fast and simple logger dedicated to JSON output. The zerolog package provides a fast and simple logger dedicated to JSON output.
Zerolog's API is designed to provide both a great developer experience and stunning [performance](#benchmarks). Its unique chaining API allows zlog to write JSON (or CBOR) log events by avoiding allocations and reflection. Zerolog's API is designed to provide both a great developer experience and stunning [performance](#performance). Its unique chaining API allows zerolog to write JSON log events by avoiding allocations and reflection.
Uber's [zap](https://godoc.org/go.uber.org/zap) library pioneered this approach. Zerolog is taking this concept to the next level with a simpler to use API and even better performance. The uber's [zap](https://godoc.org/go.uber.org/zap) library pioneered this approach. Zerolog is taking this concept to the next level with simpler to use API and even better performance.
To keep the code base and the API simple, zlog focuses on efficient structured logging only. Pretty logging on the console is made possible using the provided (but inefficient) [`zlog.ConsoleWriter`](#pretty-logging). To keep the code base and the API simple, zerolog focuses on JSON logging only. As [suggested on reddit](https://www.reddit.com/r/golang/comments/6c9k7n/zerolog_is_now_faster_than_zap/), you may use tools like [humanlog](https://github.com/aybabtme/humanlog) to pretty print JSON on the console during development.
![Pretty Logging Image](pretty.png)
## Who uses zlog
Find out [who uses zlog](https://tuxpa.in/a/zlog/wiki/Who-uses-zlog) and add your company / project to the list.
## Features
* [Blazing fast](#benchmarks) * Level logging
* [Low to zero allocation](#benchmarks) * Sampling
* [Leveled logging](#leveled-logging) * Contextual fields
* [Sampling](#log-sampling) * `context.Context` integration
* [Hooks](#hooks) * `net/http` helpers
* [Contextual fields](#contextual-logging)
* [`context.Context` integration](#contextcontext-integration)
* [Integration with `net/http`](#integration-with-nethttp)
* [JSON and CBOR encoding formats](#binary-encoding)
* [Pretty logging for development](#pretty-logging)
* [Error Logging (with optional Stacktrace)](#error-logging)
## Installation ## Usage
```bash
go get -u tuxpa.in/a/zlog/log
```
## Getting Started
### Simple Logging Example
For simple logging, import the global logger package **tuxpa.in/a/zlog/log**
```go
package main import "github.com/rs/zerolog/log"
import (
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
func main() {
// UNIX Time is faster and smaller than most timestamps
zlog.TimeFieldFormat = zlog.TimeFormatUnix
log.Print("hello world")
}
// Output: {"time":1516134303,"level":"debug","message":"hello world"}
```
> Note: By default log writes to `os.Stderr`
> Note: The default log level for `log.Print` is *debug*
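To write somewhere other than `os.Stderr`, replace the global logger; a minimal sketch using only calls shown elsewhere in this README:

```go
package main

import (
	"os"

	"tuxpa.in/a/zlog"
	"tuxpa.in/a/zlog/log"
)

func main() {
	// Point the package-level helpers at os.Stdout instead of os.Stderr.
	log.Logger = zlog.New(os.Stdout).With().Timestamp().Logger()
	log.Print("hello world") // still logged at the debug level by default
}
```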
### Contextual Logging
**zlog** allows data to be added to log messages in the form of key:value pairs. The data added to the message adds "context" about the log event that can be critical for debugging as well as myriad other purposes. An example of this is below:
```go
package main
import (
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
func main() {
zlog.TimeFieldFormat = zlog.TimeFormatUnix
log.Debug().
Str("Scale", "833 cents").
Float64("Interval", 833.09).
Msg("Fibonacci is everywhere")
log.Debug().
Str("Name", "Tom").
Send()
}
// Output: {"level":"debug","Scale":"833 cents","Interval":833.09,"time":1562212768,"message":"Fibonacci is everywhere"}
// Output: {"level":"debug","Name":"Tom","time":1562212768}
```
> You'll note in the above example that when adding contextual fields, the fields are strongly typed. You can find the full list of supported fields [here](#standard-types) ### A global logger can be use for simple logging
### Leveled Logging
#### Simple Leveled Logging Example
```go
package main
import (
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
func main() {
zlog.TimeFieldFormat = zlog.TimeFormatUnix
log.Info().Msg("hello world") log.Info().Msg("hello world")
}
// Output: {"time":1516134303,"level":"info","message":"hello world"} // Output: {"level":"info","time":1494567715,"message":"hello world"}
```
> It is very important to note that when using the **zlog** chaining API, as shown above (`log.Info().Msg("hello world"`), the chain must have either the `Msg` or `Msgf` method call. If you forget to add either of these, the log will not occur and there is no compile time error to alert you of this. NOTE: To import the global logger, import the `log` subpackage `github.com/rs/zerolog/log`.
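For instance, a small sketch of the pitfall (assuming the import paths used throughout this README):

```go
package main

import "tuxpa.in/a/zlog/log"

func main() {
	// The finisher is missing: this compiles, but the event is never written.
	log.Info().Str("foo", "bar")

	// Calling Msg (or Msgf) actually emits the event.
	log.Info().Str("foo", "bar").Msg("now it is logged")
}
```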
**zlog** allows for logging at the following levels (from highest to lowest):
* panic (`zlog.PanicLevel`, 5)
* fatal (`zlog.FatalLevel`, 4)
* error (`zlog.ErrorLevel`, 3)
* warn (`zlog.WarnLevel`, 2)
* info (`zlog.InfoLevel`, 1)
* debug (`zlog.DebugLevel`, 0)
* trace (`zlog.TraceLevel`, -1)
You can set the Global logging level to any of these options using the `SetGlobalLevel` function in the zlog package, passing in one of the given constants above, e.g. `zlog.InfoLevel` would be the "info" level. Whichever level is chosen, all logs with a level greater than or equal to that level will be written. To turn off logging entirely, pass the `zlog.Disabled` constant.
#### Setting Global Log Level
This example uses command-line flags to demonstrate various outputs depending on the chosen log level.
```go
package main
import (
"flag"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
func main() {
zlog.TimeFieldFormat = zlog.TimeFormatUnix
debug := flag.Bool("debug", false, "sets log level to debug")
flag.Parse()
// Default level for this example is info, unless debug flag is present
zlog.SetGlobalLevel(zlog.InfoLevel)
if *debug {
zlog.SetGlobalLevel(zlog.DebugLevel)
}
log.Debug().Msg("This message appears only when log level set to Debug")
log.Info().Msg("This message appears when log level set to Debug or Info")
if e := log.Debug(); e.Enabled() {
// Compute log output only if enabled.
value := "bar"
e.Str("foo", value).Msg("some debug message")
}
}
```
Info Output (no flag)
```bash
$ ./logLevelExample
{"time":1516387492,"level":"info","message":"This message appears when log level set to Debug or Info"}
```
Debug Output (debug flag set)
```bash
$ ./logLevelExample -debug
{"time":1516387573,"level":"debug","message":"This message appears only when log level set to Debug"}
{"time":1516387573,"level":"info","message":"This message appears when log level set to Debug or Info"}
{"time":1516387573,"level":"debug","foo":"bar","message":"some debug message"}
```
#### Logging without Level or Message
You may choose to log without a specific level by using the `Log` method. You may also write without a message by setting an empty string in the `msg string` parameter of the `Msg` method. Both are demonstrated in the example below.
```go
package main
import (
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
func main() {
zlog.TimeFieldFormat = zlog.TimeFormatUnix
log.Log().
Str("foo", "bar").
Msg("")
}
// Output: {"time":1494567715,"foo":"bar"}
```
### Error Logging
You can log errors using the `Err` method
```go
package main
import (
"errors"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
func main() {
zlog.TimeFieldFormat = zlog.TimeFormatUnix
err := errors.New("seems we have an error here")
log.Error().Err(err).Msg("")
}
// Output: {"level":"error","error":"seems we have an error here","time":1609085256}
```
> The default field name for errors is `error`, you can change this by setting `zlog.ErrorFieldName` to meet your needs.
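For example, a short sketch of renaming the field:

```go
package main

import (
	"errors"

	"tuxpa.in/a/zlog"
	"tuxpa.in/a/zlog/log"
)

func main() {
	zlog.ErrorFieldName = "err"
	log.Error().Err(errors.New("seems we have an error here")).Msg("")
	// The error is now written under the "err" key instead of "error".
}
```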
#### Error Logging with Stacktrace
Using `github.com/pkg/errors`, you can add a formatted stacktrace to your errors.
```go
package main
import (
"github.com/pkg/errors"
"tuxpa.in/a/zlog/pkgerrors"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
func main() {
zlog.TimeFieldFormat = zlog.TimeFormatUnix
zlog.ErrorStackMarshaler = pkgerrors.MarshalStack
err := outer()
log.Error().Stack().Err(err).Msg("")
}
func inner() error {
return errors.New("seems we have an error here")
}
func middle() error {
err := inner()
if err != nil {
return err
}
return nil
}
func outer() error {
err := middle()
if err != nil {
return err
}
return nil
}
// Output: {"level":"error","stack":[{"func":"inner","line":"20","source":"errors.go"},{"func":"middle","line":"24","source":"errors.go"},{"func":"outer","line":"32","source":"errors.go"},{"func":"main","line":"15","source":"errors.go"},{"func":"main","line":"204","source":"proc.go"},{"func":"goexit","line":"1374","source":"asm_amd64.s"}],"error":"seems we have an error here","time":1609086683}
```
> zlog.ErrorStackMarshaler must be set in order for the stack to output anything.
#### Logging Fatal Messages
```go
package main
import (
"errors"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
func main() {
err := errors.New("A repo man spends his life getting into tense situations")
service := "myservice"
zlog.TimeFieldFormat = zlog.TimeFormatUnix
log.Fatal().
Err(err).
Str("service", service).
Msgf("Cannot start %s", service)
}
// Output: {"time":1516133263,"level":"fatal","error":"A repo man spends his life getting into tense situations","service":"myservice","message":"Cannot start myservice"} // Output: {"level":"fatal","time":1494567715,"message":"Cannot start myservice","error":"some error","service":"myservice"}
// exit status 1 // Exit 1
```
> NOTE: Using `Msgf` generates one allocation even when the logger is disabled.
### Fields can be added to log messages
```go
log.Info().
Str("foo", "bar").
Int("n", 123).
Msg("hello world")
// Output: {"level":"info","time":1494567715,"foo":"bar","n":123,"message":"hello world"}
```
### Create logger instance to manage different outputs
```go
logger := zlog.New(os.Stderr).With().Timestamp().Logger() logger := zerolog.New(os.Stderr).With().Timestamp().Logger()
logger.Info().Str("foo", "bar").Msg("hello world")
@ -329,47 +72,28 @@ logger.Info().Str("foo", "bar").Msg("hello world")
```go
sublogger := log.With().
Str("component", "foo"). Str("component": "foo").
Logger()
sublogger.Info().Msg("hello world")
// Output: {"level":"info","time":1494567715,"message":"hello world","component":"foo"}
```
### Pretty logging ### Level logging
To log a human-friendly, colorized output, use `zlog.ConsoleWriter`:
```go
log.Logger = log.Output(zlog.ConsoleWriter{Out: os.Stderr}) zerolog.SetGlobalLevel(zerolog.InfoLevel)
log.Info().Str("foo", "bar").Msg("Hello world") log.Debug().Msg("filtered out message")
log.Info().Msg("routed message")
// Output: 3:04PM INF Hello World foo=bar if e := log.Debug(); e.Enabled() {
``` // Compute log output only if enabled.
value := compute()
To customize the configuration and formatting: e.Str("foo": value).Msg("some debug message")
```go
output := zlog.ConsoleWriter{Out: os.Stdout, TimeFormat: time.RFC3339}
output.FormatLevel = func(i interface{}) string {
return strings.ToUpper(fmt.Sprintf("| %-6s|", i))
}
output.FormatMessage = func(i interface{}) string {
return fmt.Sprintf("***%s****", i)
}
output.FormatFieldName = func(i interface{}) string {
return fmt.Sprintf("%s:", i)
}
output.FormatFieldValue = func(i interface{}) string {
return strings.ToUpper(fmt.Sprintf("%s", i))
} }
log := zlog.New(output).With().Timestamp().Logger() // Output: {"level":"info","time":1494567715,"message":"routed message"}
log.Info().Str("foo", "bar").Msg("Hello World")
// Output: 2006-01-02T15:04:05Z07:00 | INFO | ***Hello World**** foo:BAR
```
### Sub dictionary
@ -377,9 +101,9 @@ log.Info().Str("foo", "bar").Msg("Hello World")
```go
log.Info().
Str("foo", "bar").
Dict("dict", zlog.Dict(). Dict("dict", zerolog.Dict().
Str("bar", "baz").
Int("n", 1), Int("n", 1)
).Msg("hello world")
// Output: {"level":"info","time":1494567715,"foo":"bar","dict":{"bar":"baz","n":1},"message":"hello world"}
@ -388,114 +112,42 @@ log.Info().
### Customize automatic field names
```go
zlog.TimestampFieldName = "t" zerolog.TimestampFieldName = "t"
zlog.LevelFieldName = "l" zerolog.LevelFieldName = "l"
zlog.MessageFieldName = "m" zerolog.MessageFieldName = "m"
log.Info().Msg("hello world")
// Output: {"l":"info","t":1494567715,"m":"hello world"}
```
### Log with no level nor message
```go
log.Log().Str("foo","bar").Msg("")
// Output: {"time":1494567715,"foo":"bar"}
```
### Add contextual fields to the global logger
```go
log.Logger = log.With().Str("foo", "bar").Logger()
```
### Add file and line number to log
Equivalent of `Llongfile`:
```go
log.Logger = log.With().Caller().Logger()
log.Info().Msg("hello world")
// Output: {"level": "info", "message": "hello world", "caller": "/go/src/your_project/some_file:21"}
```
Equivalent of `Lshortfile`:
```go
zlog.CallerMarshalFunc = func(pc uintptr, file string, line int) string {
short := file
for i := len(file) - 1; i > 0; i-- {
if file[i] == '/' {
short = file[i+1:]
break
}
}
file = short
return file + ":" + strconv.Itoa(line)
}
log.Logger = log.With().Caller().Logger()
log.Info().Msg("hello world")
// Output: {"level": "info", "message": "hello world", "caller": "some_file:21"}
```
### Thread-safe, lock-free, non-blocking writer
If your writer might be slow or not thread-safe and you need your log producers to never get slowed down by a slow writer, you can use a `diode.Writer` as follows:
```go
wr := diode.NewWriter(os.Stdout, 1000, 10*time.Millisecond, func(missed int) {
fmt.Printf("Logger Dropped %d messages", missed)
})
log := zlog.New(wr)
log.Print("test")
```
You will need to install `code.cloudfoundry.org/go-diodes` to use this feature.
### Log Sampling
```go
sampled := log.Sample(&zlog.BasicSampler{N: 10}) sampled := log.Sample(10)
sampled.Info().Msg("will be logged every 10 messages")
// Output: {"time":1494567715,"level":"info","message":"will be logged every 10 messages"} // Output: {"time":1494567715,"sample":10,"message":"will be logged every 10 messages"}
```
More advanced sampling:
```go
// Will let 5 debug messages per period of 1 second.
// Over 5 debug message, 1 every 100 debug messages are logged.
// Other levels are not sampled.
sampled := log.Sample(zlog.LevelSampler{
DebugSampler: &zlog.BurstSampler{
Burst: 5,
Period: 1*time.Second,
NextSampler: &zlog.BasicSampler{N: 100},
},
})
sampled.Debug().Msg("hello world")
// Output: {"time":1494567715,"level":"debug","message":"hello world"}
```
### Hooks
```go
type SeverityHook struct{}
func (h SeverityHook) Run(e *zlog.Event, level zlog.Level, msg string) {
if level != zlog.NoLevel {
e.Str("severity", level.String())
}
}
hooked := log.Hook(SeverityHook{})
hooked.Warn().Msg("")
// Output: {"level":"warn","severity":"warn"}
```
### Pass a sub-logger by context
```go
ctx := log.With().Str("component", "module").Logger().WithContext(ctx) ctx := log.With("component", "module").Logger().WithContext(ctx)
log.Ctx(ctx).Info().Msg("hello world")
@ -505,7 +157,7 @@ log.Ctx(ctx).Info().Msg("hello world")
### Set as standard logger output
```go
stdlog := zlog.New(os.Stdout).With(). log := zerolog.New(os.Stdout).With().
Str("foo", "bar").
Logger()
@ -517,38 +169,14 @@ stdlog.Print("hello world")
// Output: {"foo":"bar","message":"hello world"}
```
### context.Context integration
The `Logger` instance could be attached to `context.Context` values with `logger.WithContext(ctx)`
and extracted from it using `zerolog.Ctx(ctx)`.
Example to add logger to context:
```go
// this code attach logger instance to context fields
ctx := context.Background()
logger := zerolog.New(os.Stdout)
ctx = logger.WithContext(ctx)
someFunc(ctx)
```
Extracting logger from context:
```go
func someFunc(ctx context.Context) {
// get logger from context. if it's nill, then `zerolog.DefaultContextLogger` is returned,
// if `DefaultContextLogger` is nil, then disabled logger returned.
logger := zerolog.Ctx(ctx)
logger.Info().Msg("Hello")
}
```
### Integration with `net/http`
The `tuxpa.in/a/zlog/hlog` package provides some helpers to integrate zlog with `http.Handler`. The `github.com/rs/zerolog/hlog` package provides some helpers to integrate zerolog with `http.Handler`.
In this example we use [alice](https://github.com/justinas/alice) to install logger for better readability.
```go
log := zlog.New(os.Stdout).With(). log := zerolog.New(os.Stdout).With().
Timestamp().
Str("role", "my-service").
Str("host", host).
@ -560,16 +188,7 @@ c := alice.New()
c = c.Append(hlog.NewHandler(log))
// Install some provided extra handler to set some request's context fields.
// Thanks to that handler, all our logs will come with some prepopulated fields. // Thanks to those handler, all our logs will come with some pre-populated fields.
c = c.Append(hlog.AccessHandler(func(r *http.Request, status, size int, duration time.Duration) {
hlog.FromRequest(r).Info().
Str("method", r.Method).
Stringer("url", r.URL).
Int("status", status).
Int("size", size).
Dur("duration", duration).
Msg("")
}))
c = c.Append(hlog.RemoteAddrHandler("ip"))
c = c.Append(hlog.UserAgentHandler("user_agent"))
c = c.Append(hlog.RefererHandler("referer"))
@ -594,40 +213,23 @@ if err := http.ListenAndServe(":8080", nil); err != nil {
}
```
## Multiple Log Output
`zlog.MultiLevelWriter` may be used to send the log message to multiple outputs.
In this example, we send the log message to both `os.Stdout` and the in-built ConsoleWriter.
```go
func main() {
consoleWriter := zlog.ConsoleWriter{Out: os.Stdout}
multi := zlog.MultiLevelWriter(consoleWriter, os.Stdout)
logger := zlog.New(multi).With().Timestamp().Logger()
logger.Info().Msg("Hello World!")
}
// Output (Line 1: Console; Line 2: Stdout)
// 12:36PM INF Hello World!
// {"level":"info","time":"2019-11-07T12:36:38+03:00","message":"Hello World!"}
```
## Global Settings
Some settings can be changed and will be applied to all loggers (see the sketch after this list):
* `log.Logger`: You can set this value to customize the global logger (the one used by package level methods).
* `zlog.SetGlobalLevel`: Can raise the minimum level of all loggers. Call this with `zlog.Disabled` to disable logging altogether (quiet mode). * `zerolog.SetGlobalLevel`: Can raise the minimum level of all loggers. Set this to `zerolog.Disable` to disable logging altogether (quiet mode).
* `zlog.DisableSampling`: If argument is `true`, all sampled loggers will stop sampling and issue 100% of their log events. * `zerolog.DisableSampling`: If argument is `true`, all sampled loggers will stop sampling and issue 100% of their log events.
* `zlog.TimestampFieldName`: Can be set to customize `Timestamp` field name. * `zerolog.TimestampFieldName`: Can be set to customize `Timestamp` field name.
* `zlog.LevelFieldName`: Can be set to customize level field name. * `zerolog.LevelFieldName`: Can be set to customize level field name.
* `zlog.MessageFieldName`: Can be set to customize message field name. * `zerolog.MessageFieldName`: Can be set to customize message field name.
* `zlog.ErrorFieldName`: Can be set to customize `Err` field name. * `zerolog.ErrorFieldName`: Can be set to customize `Err` field name.
* `zlog.TimeFieldFormat`: Can be set to customize `Time` field value formatting. If set with `zlog.TimeFormatUnix`, `zlog.TimeFormatUnixMs` or `zlog.TimeFormatUnixMicro`, times are formated as UNIX timestamp. * `zerolog.SampleFieldName`: Can be set to customize the field name added when sampling is enabled.
* `zlog.DurationFieldUnit`: Can be set to customize the unit for time.Duration type fields added by `Dur` (default: `time.Millisecond`). * `zerolog.TimeFieldFormat`: Can be set to customize `Time` field value formatting. If set with an empty string, times are formated as UNIX timestamp.
* `zlog.DurationFieldInteger`: If set to `true`, `Dur` fields are formatted as integers instead of floats (default: `false`). // DurationFieldUnit defines the unit for time.Duration type fields added
* `zlog.ErrorHandler`: Called whenever zlog fails to write an event on its output. If not set, an error is printed on the stderr. This handler must be thread safe and non-blocking. // using the Dur method.
* `DurationFieldUnit`: Sets the unit of the fields added by `Dur` (default: `time.Millisecond`).
* `DurationFieldInteger`: If set to true, `Dur` fields are formatted as integers instead of floats.
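Taken together, a typical startup configuration might look like the following sketch. It combines settings from the list above; the error-handler signature is an assumption based on the description.

```go
package main

import (
	"fmt"
	"os"

	"tuxpa.in/a/zlog"
	"tuxpa.in/a/zlog/log"
)

func main() {
	zlog.SetGlobalLevel(zlog.InfoLevel)        // raise the minimum level everywhere
	zlog.TimeFieldFormat = zlog.TimeFormatUnix // "time" as a UNIX timestamp
	zlog.DurationFieldInteger = true           // Dur() renders integers, not floats
	zlog.ErrorHandler = func(err error) {      // called when zlog fails to write
		fmt.Fprintf(os.Stderr, "zlog: write error: %v\n", err)
	}

	log.Info().Msg("configured")
}
```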
## Field Types
@ -641,46 +243,18 @@ Some settings can be changed and will be applied to all loggers:
### Advanced Fields
* `Err`: Takes an `error` and renders it as a string using the `zlog.ErrorFieldName` field name. * `Err`: Takes an `error` and render it as a string using the `zerolog.ErrorFieldName` field name.
* `Func`: Run a `func` only if the level is enabled. * `Timestamp`: Insert a timestamp field with `zerolog.TimestampFieldName` field name and formatted using `zerolog.TimeFieldFormat`.
* `Timestamp`: Inserts a timestamp field with `zlog.TimestampFieldName` field name, formatted using `zlog.TimeFieldFormat`. * `Time`: Adds a field with the time formated with the `zerolog.TimeFieldFormat`.
* `Time`: Adds a field with time formatted with `zlog.TimeFieldFormat`. * `Dur`: Adds a field with a `time.Duration`.
* `Dur`: Adds a field with `time.Duration`.
* `Dict`: Adds a sub-key/value as a field of the event.
* `RawJSON`: Adds a field with an already encoded JSON (`[]byte`)
* `Hex`: Adds a field with value formatted as a hexadecimal string (`[]byte`)
* `Interface`: Uses reflection to marshal the type.
Most fields are also available in the slice format (`Strs` for `[]string`, `Errs` for `[]error` etc.) ## Performance
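For instance, a short sketch using the slice variants named above:

```go
package main

import (
	"errors"

	"tuxpa.in/a/zlog/log"
)

func main() {
	log.Info().
		Strs("hosts", []string{"alpha", "beta"}).
		Errs("errs", []error{errors.New("a"), errors.New("b")}).
		Msg("slice fields")
}
```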
## Binary Encoding
In addition to the default JSON encoding, `zlog` can produce binary logs using [CBOR](https://cbor.io) encoding. The choice of encoding can be decided at compile time using the build tag `binary_log` as follows:
```bash
go build -tags binary_log .
```
To decode binary encoded log files you can use any CBOR decoder. One that has been tested to work with the zlog library is [CSD](https://github.com/toravir/csd/).
## Related Projects
* [grpc-zlog](https://github.com/cheapRoc/grpc-zlog): Implementation of `grpclog.LoggerV2` interface using `zlog`
* [overlog](https://github.com/Trendyol/overlog): Implementation of `Mapped Diagnostic Context` interface using `zlog`
* [zlogr](https://github.com/go-logr/zlogr): Implementation of `logr.LogSink` interface using `zlog`
## Benchmarks
See [logbench](http://hackemist.com/logbench/) for more comprehensive and up-to-date benchmarks.
All operations are allocation free (those numbers *include* JSON encoding):
```text
BenchmarkLogEmpty-8 100000000 19.1 ns/op 0 B/op 0 allocs/op
BenchmarkDisabled-8 500000000 4.07 ns/op 0 B/op 0 allocs/op
BenchmarkInfo-8 30000000 42.5 ns/op 0 B/op 0 allocs/op
@ -688,18 +262,13 @@ BenchmarkContextFields-8 30000000 44.9 ns/op 0 B/op 0 allocs/op
BenchmarkLogFields-8 10000000 184 ns/op 0 B/op 0 allocs/op
```
There are a few Go logging benchmarks and comparisons that include zlog. Using Uber's zap [comparison benchmark](https://github.com/uber-go/zap#performance):
* [imkira/go-loggers-bench](https://github.com/imkira/go-loggers-bench)
* [uber-common/zap](https://github.com/uber-go/zap#performance)
Using Uber's zap comparison benchmark:
Log a message and 10 fields:
| Library | Time | Bytes Allocated | Objects Allocated |
| :--- | :---: | :---: | :---: |
| zlog | 767 ns/op | 552 B/op | 6 allocs/op | | zerolog | 767 ns/op | 552 B/op | 6 allocs/op |
| :zap: zap | 848 ns/op | 704 B/op | 2 allocs/op |
| :zap: zap (sugared) | 1363 ns/op | 1610 B/op | 20 allocs/op |
| go-kit | 3614 ns/op | 2895 B/op | 66 allocs/op |
@ -712,7 +281,7 @@ Log a message with a logger that already has 10 fields of context:
| Library | Time | Bytes Allocated | Objects Allocated |
| :--- | :---: | :---: | :---: |
| zlog | 52 ns/op | 0 B/op | 0 allocs/op | | zerolog | 52 ns/op | 0 B/op | 0 allocs/op |
| :zap: zap | 283 ns/op | 0 B/op | 0 allocs/op |
| :zap: zap (sugared) | 337 ns/op | 80 B/op | 2 allocs/op |
| lion | 2702 ns/op | 4074 B/op | 38 allocs/op |
@ -725,7 +294,7 @@ Log a static string, without any context or `printf`-style templating:
| Library | Time | Bytes Allocated | Objects Allocated |
| :--- | :---: | :---: | :---: |
| zlog | 50 ns/op | 0 B/op | 0 allocs/op | | zerolog | 50 ns/op | 0 B/op | 0 allocs/op |
| :zap: zap | 236 ns/op | 0 B/op | 0 allocs/op |
| standard library | 453 ns/op | 80 B/op | 2 allocs/op |
| :zap: zap (sugared) | 337 ns/op | 80 B/op | 2 allocs/op |
@ -735,16 +304,3 @@ Log a static string, without any context or `printf`-style templating:
| apex/log | 2751 ns/op | 584 B/op | 11 allocs/op |
| log15 | 5181 ns/op | 1592 B/op | 26 allocs/op |
## Caveats
Note that zlog does no de-duplication of fields. Using the same key multiple times creates multiple keys in final JSON:
```go
logger := zlog.New(os.Stderr).With().Timestamp().Logger()
logger.Info().
Timestamp().
Msg("dup")
// Output: {"level":"info","time":1494567715,"time":1494567715,"message":"dup"}
```
In this case, many consumers will take the last value, but this is not guaranteed; check yours if in doubt.


@ -1 +0,0 @@
remote_theme: rs/gh-readme

array.go

@ -1,240 +0,0 @@
package zlog
import (
"net"
"sync"
"time"
)
var arrayPool = &sync.Pool{
New: func() interface{} {
return &Array{
buf: make([]byte, 0, 500),
}
},
}
// Array is used to prepopulate an array of items
// which can be re-used to add to log messages.
type Array struct {
buf []byte
}
func putArray(a *Array) {
// Proper usage of a sync.Pool requires each entry to have approximately
// the same memory cost. To obtain this property when the stored type
// contains a variably-sized buffer, we add a hard limit on the maximum buffer
// to place back in the pool.
//
// See https://golang.org/issue/23199
const maxSize = 1 << 16 // 64KiB
if cap(a.buf) > maxSize {
return
}
arrayPool.Put(a)
}
// Arr creates an array to be added to an Event or Context.
func Arr() *Array {
a := arrayPool.Get().(*Array)
a.buf = a.buf[:0]
return a
}
// MarshalZerologArray method here is no-op - since data is
// already in the needed format.
func (*Array) MarshalZerologArray(*Array) {
}
func (a *Array) write(dst []byte) []byte {
dst = enc.AppendArrayStart(dst)
if len(a.buf) > 0 {
dst = append(dst, a.buf...)
}
dst = enc.AppendArrayEnd(dst)
putArray(a)
return dst
}
// Object marshals an object that implement the LogObjectMarshaler
// interface and appends it to the array.
func (a *Array) Object(obj LogObjectMarshaler) *Array {
e := Dict()
obj.MarshalZerologObject(e)
e.buf = enc.AppendEndMarker(e.buf)
a.buf = append(enc.AppendArrayDelim(a.buf), e.buf...)
putEvent(e)
return a
}
// Str appends the val as a string to the array.
func (a *Array) Str(val string) *Array {
a.buf = enc.AppendString(enc.AppendArrayDelim(a.buf), val)
return a
}
// Bytes appends the val as a string to the array.
func (a *Array) Bytes(val []byte) *Array {
a.buf = enc.AppendBytes(enc.AppendArrayDelim(a.buf), val)
return a
}
// Hex appends the val as a hex string to the array.
func (a *Array) Hex(val []byte) *Array {
a.buf = enc.AppendHex(enc.AppendArrayDelim(a.buf), val)
return a
}
// RawJSON adds already encoded JSON to the array.
func (a *Array) RawJSON(val []byte) *Array {
a.buf = appendJSON(enc.AppendArrayDelim(a.buf), val)
return a
}
// Err serializes and appends the err to the array.
func (a *Array) Err(err error) *Array {
switch m := ErrorMarshalFunc(err).(type) {
case LogObjectMarshaler:
e := newEvent(nil, 0)
e.buf = e.buf[:0]
e.appendObject(m)
a.buf = append(enc.AppendArrayDelim(a.buf), e.buf...)
putEvent(e)
case error:
if m == nil || isNilValue(m) {
a.buf = enc.AppendNil(enc.AppendArrayDelim(a.buf))
} else {
a.buf = enc.AppendString(enc.AppendArrayDelim(a.buf), m.Error())
}
case string:
a.buf = enc.AppendString(enc.AppendArrayDelim(a.buf), m)
default:
a.buf = enc.AppendInterface(enc.AppendArrayDelim(a.buf), m)
}
return a
}
// Bool appends the val as a bool to the array.
func (a *Array) Bool(b bool) *Array {
a.buf = enc.AppendBool(enc.AppendArrayDelim(a.buf), b)
return a
}
// Int appends i as a int to the array.
func (a *Array) Int(i int) *Array {
a.buf = enc.AppendInt(enc.AppendArrayDelim(a.buf), i)
return a
}
// Int8 appends i as a int8 to the array.
func (a *Array) Int8(i int8) *Array {
a.buf = enc.AppendInt8(enc.AppendArrayDelim(a.buf), i)
return a
}
// Int16 appends i as a int16 to the array.
func (a *Array) Int16(i int16) *Array {
a.buf = enc.AppendInt16(enc.AppendArrayDelim(a.buf), i)
return a
}
// Int32 appends i as a int32 to the array.
func (a *Array) Int32(i int32) *Array {
a.buf = enc.AppendInt32(enc.AppendArrayDelim(a.buf), i)
return a
}
// Int64 appends i as a int64 to the array.
func (a *Array) Int64(i int64) *Array {
a.buf = enc.AppendInt64(enc.AppendArrayDelim(a.buf), i)
return a
}
// Uint appends i as a uint to the array.
func (a *Array) Uint(i uint) *Array {
a.buf = enc.AppendUint(enc.AppendArrayDelim(a.buf), i)
return a
}
// Uint8 appends i as a uint8 to the array.
func (a *Array) Uint8(i uint8) *Array {
a.buf = enc.AppendUint8(enc.AppendArrayDelim(a.buf), i)
return a
}
// Uint16 appends i as a uint16 to the array.
func (a *Array) Uint16(i uint16) *Array {
a.buf = enc.AppendUint16(enc.AppendArrayDelim(a.buf), i)
return a
}
// Uint32 appends i as a uint32 to the array.
func (a *Array) Uint32(i uint32) *Array {
a.buf = enc.AppendUint32(enc.AppendArrayDelim(a.buf), i)
return a
}
// Uint64 appends i as a uint64 to the array.
func (a *Array) Uint64(i uint64) *Array {
a.buf = enc.AppendUint64(enc.AppendArrayDelim(a.buf), i)
return a
}
// Float32 appends f as a float32 to the array.
func (a *Array) Float32(f float32) *Array {
a.buf = enc.AppendFloat32(enc.AppendArrayDelim(a.buf), f)
return a
}
// Float64 appends f as a float64 to the array.
func (a *Array) Float64(f float64) *Array {
a.buf = enc.AppendFloat64(enc.AppendArrayDelim(a.buf), f)
return a
}
// Time appends t formatted as string using zlog.TimeFieldFormat.
func (a *Array) Time(t time.Time) *Array {
a.buf = enc.AppendTime(enc.AppendArrayDelim(a.buf), t, TimeFieldFormat)
return a
}
// Dur appends d to the array.
func (a *Array) Dur(d time.Duration) *Array {
a.buf = enc.AppendDuration(enc.AppendArrayDelim(a.buf), d, DurationFieldUnit, DurationFieldInteger)
return a
}
// Interface appends i marshaled using reflection.
func (a *Array) Interface(i interface{}) *Array {
if obj, ok := i.(LogObjectMarshaler); ok {
return a.Object(obj)
}
a.buf = enc.AppendInterface(enc.AppendArrayDelim(a.buf), i)
return a
}
// IPAddr adds IPv4 or IPv6 address to the array
func (a *Array) IPAddr(ip net.IP) *Array {
a.buf = enc.AppendIPAddr(enc.AppendArrayDelim(a.buf), ip)
return a
}
// IPPrefix adds IPv4 or IPv6 Prefix (IP + mask) to the array
func (a *Array) IPPrefix(pfx net.IPNet) *Array {
a.buf = enc.AppendIPPrefix(enc.AppendArrayDelim(a.buf), pfx)
return a
}
// MACAddr adds a MAC (Ethernet) address to the array
func (a *Array) MACAddr(ha net.HardwareAddr) *Array {
a.buf = enc.AppendMACAddr(enc.AppendArrayDelim(a.buf), ha)
return a
}
// Dict adds the dict Event to the array
func (a *Array) Dict(dict *Event) *Array {
dict.buf = enc.AppendEndMarker(dict.buf)
a.buf = append(enc.AppendArrayDelim(a.buf), dict.buf...)
return a
}


@ -1,39 +0,0 @@
package zlog
import (
"net"
"testing"
"time"
)
func TestArray(t *testing.T) {
a := Arr().
Bool(true).
Int(1).
Int8(2).
Int16(3).
Int32(4).
Int64(5).
Uint(6).
Uint8(7).
Uint16(8).
Uint32(9).
Uint64(10).
Float32(11.98122).
Float64(12.987654321).
Str("a").
Bytes([]byte("b")).
Hex([]byte{0x1f}).
RawJSON([]byte(`{"some":"json"}`)).
Time(time.Time{}).
IPAddr(net.IP{192, 168, 0, 10}).
Dur(0).
Dict(Dict().
Str("bar", "baz").
Int("n", 1),
)
want := `[true,1,2,3,4,5,6,7,8,9,10,11.98122,12.987654321,"a","b","1f",{"some":"json"},"0001-01-01T00:00:00Z","192.168.0.10",0,{"bar":"baz","n":1}]`
if got := decodeObjectToStr(a.write([]byte{})); got != want {
t.Errorf("Array.write()\ngot: %s\nwant: %s", got, want)
}
}


@ -1,9 +1,8 @@
package zlog package zerolog
import (
"errors"
"io/ioutil"
"net"
"testing"
"time"
)
@ -58,18 +57,6 @@ func BenchmarkContextFields(b *testing.B) {
})
}
func BenchmarkContextAppend(b *testing.B) {
logger := New(ioutil.Discard).With().
Str("foo", "bar").
Logger()
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
logger.With().Str("bar", "baz")
}
})
}
func BenchmarkLogFields(b *testing.B) {
logger := New(ioutil.Discard)
b.ResetTimer()
@ -84,282 +71,3 @@ func BenchmarkLogFields(b *testing.B) {
}
})
}
type obj struct {
Pub string
Tag string `json:"tag"`
priv int
}
func (o obj) MarshalZerologObject(e *Event) {
e.Str("Pub", o.Pub).
Str("Tag", o.Tag).
Int("priv", o.priv)
}
func BenchmarkLogArrayObject(b *testing.B) {
obj1 := obj{"a", "b", 2}
obj2 := obj{"c", "d", 3}
obj3 := obj{"e", "f", 4}
logger := New(ioutil.Discard)
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
arr := Arr()
arr.Object(&obj1)
arr.Object(&obj2)
arr.Object(&obj3)
logger.Info().Array("objects", arr).Msg("test")
}
}
func BenchmarkLogFieldType(b *testing.B) {
bools := []bool{true, false, true, false, true, false, true, false, true, false}
ints := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
floats := []float64{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
strings := []string{"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"}
durations := []time.Duration{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
times := []time.Time{
time.Unix(0, 0),
time.Unix(1, 0),
time.Unix(2, 0),
time.Unix(3, 0),
time.Unix(4, 0),
time.Unix(5, 0),
time.Unix(6, 0),
time.Unix(7, 0),
time.Unix(8, 0),
time.Unix(9, 0),
}
interfaces := []struct {
Pub string
Tag string `json:"tag"`
priv int
}{
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
}
objects := []obj{
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
}
errs := []error{errors.New("a"), errors.New("b"), errors.New("c"), errors.New("d"), errors.New("e")}
types := map[string]func(e *Event) *Event{
"Bool": func(e *Event) *Event {
return e.Bool("k", bools[0])
},
"Bools": func(e *Event) *Event {
return e.Bools("k", bools)
},
"Int": func(e *Event) *Event {
return e.Int("k", ints[0])
},
"Ints": func(e *Event) *Event {
return e.Ints("k", ints)
},
"Float": func(e *Event) *Event {
return e.Float64("k", floats[0])
},
"Floats": func(e *Event) *Event {
return e.Floats64("k", floats)
},
"Str": func(e *Event) *Event {
return e.Str("k", strings[0])
},
"Strs": func(e *Event) *Event {
return e.Strs("k", strings)
},
"Err": func(e *Event) *Event {
return e.Err(errs[0])
},
"Errs": func(e *Event) *Event {
return e.Errs("k", errs)
},
"Time": func(e *Event) *Event {
return e.Time("k", times[0])
},
"Times": func(e *Event) *Event {
return e.Times("k", times)
},
"Dur": func(e *Event) *Event {
return e.Dur("k", durations[0])
},
"Durs": func(e *Event) *Event {
return e.Durs("k", durations)
},
"Interface": func(e *Event) *Event {
return e.Interface("k", interfaces[0])
},
"Interfaces": func(e *Event) *Event {
return e.Interface("k", interfaces)
},
"Interface(Object)": func(e *Event) *Event {
return e.Interface("k", objects[0])
},
"Interface(Objects)": func(e *Event) *Event {
return e.Interface("k", objects)
},
"Object": func(e *Event) *Event {
return e.Object("k", objects[0])
},
}
logger := New(ioutil.Discard)
b.ResetTimer()
for name := range types {
f := types[name]
b.Run(name, func(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
f(logger.Info()).Msg("")
}
})
})
}
}
func BenchmarkContextFieldType(b *testing.B) {
oldFormat := TimeFieldFormat
TimeFieldFormat = TimeFormatUnix
defer func() { TimeFieldFormat = oldFormat }()
bools := []bool{true, false, true, false, true, false, true, false, true, false}
ints := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
floats := []float64{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
strings := []string{"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"}
stringer := net.IP{127, 0, 0, 1}
durations := []time.Duration{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
times := []time.Time{
time.Unix(0, 0),
time.Unix(1, 0),
time.Unix(2, 0),
time.Unix(3, 0),
time.Unix(4, 0),
time.Unix(5, 0),
time.Unix(6, 0),
time.Unix(7, 0),
time.Unix(8, 0),
time.Unix(9, 0),
}
interfaces := []struct {
Pub string
Tag string `json:"tag"`
priv int
}{
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
}
objects := []obj{
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
{"a", "a", 0},
}
errs := []error{errors.New("a"), errors.New("b"), errors.New("c"), errors.New("d"), errors.New("e")}
types := map[string]func(c Context) Context{
"Bool": func(c Context) Context {
return c.Bool("k", bools[0])
},
"Bools": func(c Context) Context {
return c.Bools("k", bools)
},
"Int": func(c Context) Context {
return c.Int("k", ints[0])
},
"Ints": func(c Context) Context {
return c.Ints("k", ints)
},
"Float": func(c Context) Context {
return c.Float64("k", floats[0])
},
"Floats": func(c Context) Context {
return c.Floats64("k", floats)
},
"Str": func(c Context) Context {
return c.Str("k", strings[0])
},
"Strs": func(c Context) Context {
return c.Strs("k", strings)
},
"Stringer": func(c Context) Context {
return c.Stringer("k", stringer)
},
"Err": func(c Context) Context {
return c.Err(errs[0])
},
"Errs": func(c Context) Context {
return c.Errs("k", errs)
},
"Time": func(c Context) Context {
return c.Time("k", times[0])
},
"Times": func(c Context) Context {
return c.Times("k", times)
},
"Dur": func(c Context) Context {
return c.Dur("k", durations[0])
},
"Durs": func(c Context) Context {
return c.Durs("k", durations)
},
"Interface": func(c Context) Context {
return c.Interface("k", interfaces[0])
},
"Interfaces": func(c Context) Context {
return c.Interface("k", interfaces)
},
"Interface(Object)": func(c Context) Context {
return c.Interface("k", objects[0])
},
"Interface(Objects)": func(c Context) Context {
return c.Interface("k", objects)
},
"Object": func(c Context) Context {
return c.Object("k", objects[0])
},
"Timestamp": func(c Context) Context {
return c.Timestamp()
},
}
logger := New(ioutil.Discard)
b.ResetTimer()
for name := range types {
f := types[name]
b.Run(name, func(b *testing.B) {
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
l := f(logger.With()).Logger()
l.Info().Msg("")
}
})
})
}
}


@ -1,584 +0,0 @@
// +build binary_log
package zlog
import (
"bytes"
"errors"
"fmt"
// "io/ioutil"
stdlog "log"
"time"
)
func ExampleBinaryNew() {
dst := bytes.Buffer{}
log := New(&dst)
log.Info().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"info","message":"hello world"}
}
func ExampleLogger_With() {
dst := bytes.Buffer{}
log := New(&dst).
With().
Str("foo", "bar").
Logger()
log.Info().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"info","foo":"bar","message":"hello world"}
}
func ExampleLogger_Level() {
dst := bytes.Buffer{}
log := New(&dst).Level(WarnLevel)
log.Info().Msg("filtered out message")
log.Error().Msg("kept message")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"error","message":"kept message"}
}
func ExampleLogger_Sample() {
dst := bytes.Buffer{}
log := New(&dst).Sample(&BasicSampler{N: 2})
log.Info().Msg("message 1")
log.Info().Msg("message 2")
log.Info().Msg("message 3")
log.Info().Msg("message 4")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"info","message":"message 1"}
// {"level":"info","message":"message 3"}
}
type LevelNameHook1 struct{}
func (h LevelNameHook1) Run(e *Event, l Level, msg string) {
if l != NoLevel {
e.Str("level_name", l.String())
} else {
e.Str("level_name", "NoLevel")
}
}
type MessageHook string
func (h MessageHook) Run(e *Event, l Level, msg string) {
e.Str("the_message", msg)
}
func ExampleLogger_Hook() {
var levelNameHook LevelNameHook1
var messageHook MessageHook = "The message"
dst := bytes.Buffer{}
log := New(&dst).Hook(levelNameHook).Hook(messageHook)
log.Info().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"info","level_name":"info","the_message":"hello world","message":"hello world"}
}
func ExampleLogger_Print() {
dst := bytes.Buffer{}
log := New(&dst)
log.Print("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"debug","message":"hello world"}
}
func ExampleLogger_Printf() {
dst := bytes.Buffer{}
log := New(&dst)
log.Printf("hello %s", "world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"debug","message":"hello world"}
}
func ExampleLogger_Trace() {
dst := bytes.Buffer{}
log := New(&dst)
log.Trace().
Str("foo", "bar").
Int("n", 123).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"trace","foo":"bar","n":123,"message":"hello world"}
}
func ExampleLogger_Debug() {
dst := bytes.Buffer{}
log := New(&dst)
log.Debug().
Str("foo", "bar").
Int("n", 123).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"debug","foo":"bar","n":123,"message":"hello world"}
}
func ExampleLogger_Info() {
dst := bytes.Buffer{}
log := New(&dst)
log.Info().
Str("foo", "bar").
Int("n", 123).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"info","foo":"bar","n":123,"message":"hello world"}
}
func ExampleLogger_Warn() {
dst := bytes.Buffer{}
log := New(&dst)
log.Warn().
Str("foo", "bar").
Msg("a warning message")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"warn","foo":"bar","message":"a warning message"}
}
func ExampleLogger_Error() {
dst := bytes.Buffer{}
log := New(&dst)
log.Error().
Err(errors.New("some error")).
Msg("error doing something")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"error","error":"some error","message":"error doing something"}
}
func ExampleLogger_WithLevel() {
dst := bytes.Buffer{}
log := New(&dst)
log.WithLevel(InfoLevel).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"level":"info","message":"hello world"}
}
func ExampleLogger_Write() {
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Logger()
stdlog.SetFlags(0)
stdlog.SetOutput(log)
stdlog.Print("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","message":"hello world"}
}
func ExampleLogger_Log() {
dst := bytes.Buffer{}
log := New(&dst)
log.Log().
Str("foo", "bar").
Str("bar", "baz").
Msg("")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","bar":"baz"}
}
func ExampleEvent_Dict() {
dst := bytes.Buffer{}
log := New(&dst)
log.Log().
Str("foo", "bar").
Dict("dict", Dict().
Str("bar", "baz").
Int("n", 1),
).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","dict":{"bar":"baz","n":1},"message":"hello world"}
}
type User struct {
Name string
Age int
Created time.Time
}
func (u User) MarshalZerologObject(e *Event) {
e.Str("name", u.Name).
Int("age", u.Age).
Time("created", u.Created)
}
type Users []User
func (uu Users) MarshalZerologArray(a *Array) {
for _, u := range uu {
a.Object(u)
}
}
func ExampleEvent_Array() {
dst := bytes.Buffer{}
log := New(&dst)
log.Log().
Str("foo", "bar").
Array("array", Arr().
Str("baz").
Int(1),
).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","array":["baz",1],"message":"hello world"}
}
func ExampleEvent_Array_object() {
dst := bytes.Buffer{}
log := New(&dst)
// Users implements LogArrayMarshaler
u := Users{
User{"John", 35, time.Time{}},
User{"Bob", 55, time.Time{}},
}
log.Log().
Str("foo", "bar").
Array("users", u).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","users":[{"name":"John","age":35,"created":"0001-01-01T00:00:00Z"},{"name":"Bob","age":55,"created":"0001-01-01T00:00:00Z"}],"message":"hello world"}
}
func ExampleEvent_Object() {
dst := bytes.Buffer{}
log := New(&dst)
// User implements LogObjectMarshaler
u := User{"John", 35, time.Time{}}
log.Log().
Str("foo", "bar").
Object("user", u).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","user":{"name":"John","age":35,"created":"0001-01-01T00:00:00Z"},"message":"hello world"}
}
func ExampleEvent_EmbedObject() {
price := Price{val: 6449, prec: 2, unit: "$"}
dst := bytes.Buffer{}
log := New(&dst)
log.Log().
Str("foo", "bar").
EmbedObject(price).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","price":"$64.49","message":"hello world"}
}
func ExampleEvent_Interface() {
dst := bytes.Buffer{}
log := New(&dst)
obj := struct {
Name string `json:"name"`
}{
Name: "john",
}
log.Log().
Str("foo", "bar").
Interface("obj", obj).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","obj":{"name":"john"},"message":"hello world"}
}
func ExampleEvent_Dur() {
d := time.Duration(10 * time.Second)
dst := bytes.Buffer{}
log := New(&dst)
log.Log().
Str("foo", "bar").
Dur("dur", d).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","dur":10000,"message":"hello world"}
}
func ExampleEvent_Durs() {
d := []time.Duration{
time.Duration(10 * time.Second),
time.Duration(20 * time.Second),
}
dst := bytes.Buffer{}
log := New(&dst)
log.Log().
Str("foo", "bar").
Durs("durs", d).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","durs":[10000,20000],"message":"hello world"}
}
func ExampleEvent_Fields_map() {
fields := map[string]interface{}{
"bar": "baz",
"n": 1,
}
dst := bytes.Buffer{}
log := New(&dst)
log.Log().
Str("foo", "bar").
Fields(fields).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","bar":"baz","n":1,"message":"hello world"}
}
func ExampleEvent_Fields_slice() {
fields := []interface{}{
"bar", "baz",
"n", 1,
}
dst := bytes.Buffer{}
log := New(&dst)
log.Log().
Str("foo", "bar").
Fields(fields).
Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","bar":"baz","n":1,"message":"hello world"}
}
func ExampleContext_Dict() {
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Dict("dict", Dict().
Str("bar", "baz").
Int("n", 1),
).Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","dict":{"bar":"baz","n":1},"message":"hello world"}
}
func ExampleContext_Array() {
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Array("array", Arr().
Str("baz").
Int(1),
).Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","array":["baz",1],"message":"hello world"}
}
func ExampleContext_Array_object() {
// Users implements LogArrayMarshaler
u := Users{
User{"John", 35, time.Time{}},
User{"Bob", 55, time.Time{}},
}
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Array("users", u).
Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","users":[{"name":"John","age":35,"created":"0001-01-01T00:00:00Z"},{"name":"Bob","age":55,"created":"0001-01-01T00:00:00Z"}],"message":"hello world"}
}
type Price struct {
val uint64
prec int
unit string
}
func (p Price) MarshalZerologObject(e *Event) {
denom := uint64(1)
for i := 0; i < p.prec; i++ {
denom *= 10
}
result := []byte(p.unit)
result = append(result, fmt.Sprintf("%d.%d", p.val/denom, p.val%denom)...)
e.Str("price", string(result))
}
func ExampleContext_EmbedObject() {
price := Price{val: 6449, prec: 2, unit: "$"}
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
EmbedObject(price).
Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","price":"$64.49","message":"hello world"}
}
func ExampleContext_Object() {
// User implements LogObjectMarshaler
u := User{"John", 35, time.Time{}}
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Object("user", u).
Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","user":{"name":"John","age":35,"created":"0001-01-01T00:00:00Z"},"message":"hello world"}
}
func ExampleContext_Interface() {
obj := struct {
Name string `json:"name"`
}{
Name: "john",
}
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Interface("obj", obj).
Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","obj":{"name":"john"},"message":"hello world"}
}
func ExampleContext_Dur() {
d := time.Duration(10 * time.Second)
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Dur("dur", d).
Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","dur":10000,"message":"hello world"}
}
func ExampleContext_Durs() {
d := []time.Duration{
time.Duration(10 * time.Second),
time.Duration(20 * time.Second),
}
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Durs("durs", d).
Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","durs":[10000,20000],"message":"hello world"}
}
func ExampleContext_Fields_map() {
fields := map[string]interface{}{
"bar": "baz",
"n": 1,
}
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Fields(fields).
Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","bar":"baz","n":1,"message":"hello world"}
}
func ExampleContext_Fields_slice() {
fields := []interface{}{
"bar", "baz",
"n", 1,
}
dst := bytes.Buffer{}
log := New(&dst).With().
Str("foo", "bar").
Fields(fields).
Logger()
log.Log().Msg("hello world")
fmt.Println(decodeIfBinaryToString(dst.Bytes()))
// Output: {"foo":"bar","bar":"baz","n":1,"message":"hello world"}
}


@ -1,37 +0,0 @@
# Zerolog Lint
This is a basic linter that checks for missing log event finishers. It finds errors like `log.Error().Int64("userID", 5)` - missing the `Msg`/`Msgf` finisher.
## Problem
When using zlog it's easy to forget to finish the log event chain by calling a finisher - the `Msg` or `Msgf` function that will schedule the event for writing. The problem is that this doesn't warn/panic during compilation and it's not easily found by grep or other general tools. It's even prominently mentioned in the project's readme:
> It is very important to note that when using the **zlog** chaining API, as shown above (`log.Info().Msg("hello world")`), the chain must have either the `Msg` or `Msgf` method call. If you forget to add either of these, the log will not occur and there is no compile time error to alert you of this.
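For illustration, here is a minimal sketch of the pattern the linter targets (the `lookupUser` helper and the field names are hypothetical; `logger` is assumed to be a `zlog.Logger`):

```go
func lookupUser(logger zlog.Logger, id int64) {
	// Missing finisher: the event is allocated and populated but never
	// written, and the compiler does not complain.
	logger.Error().Int64("userID", id)

	// Correct: the chain ends with Msg, so the event is scheduled for writing.
	logger.Error().Int64("userID", id).Msg("user lookup failed")
}
```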
## Solution
A basic linter like this one, which looks for method invocations on `zlog.Event`, can examine the last call in each method call chain and check whether it is a finisher, thus pointing out these errors.
## Usage
Compile it and run the resulting binary, or run it directly with the `go run` command, e.g. `go run cmd/lint/lint.go`.
The command accepts only one argument - the package to be inspected - and 4 optional flags, all of which can occur multiple times. The standard synopsis of the command is:
`lint [-finisher value] [-ignoreFile value] [-ignorePkg value] [-ignorePkgRecursively value] package`
#### Flags
- finisher
- specify which finishers to accept, defaults to `Msg` and `Msgf`
- ignoreFile
- which files to ignore, either by full path or by go path (package/file.go)
- ignorePkg
- do not inspect the specified package if found in the dependency tree
- ignorePkgRecursively
- do not inspect the specified package or its subpackages if found in the dependency tree
## Drawbacks
As it is, the linter can generate false positives in one specific case: if you have a method that returns a `zlog.Event`, the linter will flag the call chain because the event is not finished within it. This will be addressed in a later release.
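For example, a hypothetical helper like the one below is flagged, even though its callers are expected to finish the event themselves:

```go
// startEvent returns an unfinished event for the caller to complete.
// The linter sees the chain ending in Str rather than Msg/Msgf and reports it.
func startEvent(logger zlog.Logger) *zlog.Event {
	return logger.Error().Str("component", "db")
}
```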

View File

@ -1,5 +0,0 @@
module tuxpa.in/a/zlog/cmd/lint
go 1.15
require golang.org/x/tools v0.1.8

View File

@ -1,28 +0,0 @@
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/mod v0.5.1 h1:OJxoQ/rynoF0dcCdI7cLPktw/hR2cueqYfjm43oqK38=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654 h1:id054HUawV2/6IGm2IV8KZQjqtwAOo2CYlOToYqa0d0=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.8 h1:P1HhGGuLW4aAclzjtmJdf0mJOjVUZUzOTqkAkWL+l6w=
golang.org/x/tools v0.1.8/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

View File

@ -1,175 +0,0 @@
package main
import (
"flag"
"fmt"
"go/ast"
"go/token"
"go/types"
"os"
"path/filepath"
"strings"
"golang.org/x/tools/go/loader"
)
var (
recursivelyIgnoredPkgs arrayFlag
ignoredPkgs arrayFlag
ignoredFiles arrayFlag
allowedFinishers arrayFlag = []string{"Msg", "Msgf"}
rootPkg string
)
// parse input flags and args
func init() {
flag.Var(&recursivelyIgnoredPkgs, "ignorePkgRecursively", "ignore the specified package and all subpackages recursively")
flag.Var(&ignoredPkgs, "ignorePkg", "ignore the specified package")
flag.Var(&ignoredFiles, "ignoreFile", "ignore the specified file by its path and/or go path (package/file.go)")
flag.Var(&allowedFinishers, "finisher", "allowed finisher for the event chain")
flag.Parse()
// add zlog to recursively ignored packages
recursivelyIgnoredPkgs = append(recursivelyIgnoredPkgs, "tuxpa.in/a/zlog")
args := flag.Args()
if len(args) != 1 {
fmt.Fprintln(os.Stderr, "you must provide exactly one package path")
os.Exit(1)
}
rootPkg = args[0]
}
func main() {
// load the package and all its dependencies
conf := loader.Config{}
conf.Import(rootPkg)
p, err := conf.Load()
if err != nil {
fmt.Fprintf(os.Stderr, "Error: unable to load the root package. %s\n", err.Error())
os.Exit(1)
}
// get the tuxpa.in/a/zlog.Event type
event := getEvent(p)
if event == nil {
fmt.Fprintln(os.Stderr, "Error: tuxpa.in/a/zlog.Event declaration not found, maybe zlog is not imported in the scanned package?")
os.Exit(1)
}
// get all selections (function calls) with the tuxpa.in/a/zlog.Event (or pointer) receiver
selections := getSelectionsWithReceiverType(p, event)
// print the violations (if any)
hasViolations := false
for _, s := range selections {
if hasBadFinisher(p, s) {
hasViolations = true
fmt.Printf("Error: missing or bad finisher for log chain, last call: %q at: %s:%v\n", s.fn.Name(), p.Fset.File(s.Pos()).Name(), p.Fset.Position(s.Pos()).Line)
}
}
// if no violations detected, return normally
if !hasViolations {
fmt.Println("No violations found")
return
}
// if violations were detected, return error code
os.Exit(1)
}
func getEvent(p *loader.Program) types.Type {
for _, pkg := range p.AllPackages {
if strings.HasSuffix(pkg.Pkg.Path(), "tuxpa.in/a/zlog") {
for _, d := range pkg.Defs {
if d != nil && d.Name() == "Event" {
return d.Type()
}
}
}
}
return nil
}
func getSelectionsWithReceiverType(p *loader.Program, targetType types.Type) map[token.Pos]selection {
selections := map[token.Pos]selection{}
for _, z := range p.AllPackages {
for i, t := range z.Selections {
switch o := t.Obj().(type) {
case *types.Func:
// this is not a bug, o.Type() is always *types.Signature, see docs
if vt := o.Type().(*types.Signature).Recv(); vt != nil {
typ := vt.Type()
if pointer, ok := typ.(*types.Pointer); ok {
typ = pointer.Elem()
}
if typ == targetType {
if s, ok := selections[i.Pos()]; !ok || i.End() > s.End() {
selections[i.Pos()] = selection{i, o, z.Pkg}
}
}
}
default:
// skip
}
}
}
return selections
}
func hasBadFinisher(p *loader.Program, s selection) bool {
pkgPath := strings.TrimPrefix(s.pkg.Path(), rootPkg+"/vendor/")
absoluteFilePath := strings.TrimPrefix(p.Fset.File(s.Pos()).Name(), rootPkg+"/vendor/")
goFilePath := pkgPath + "/" + filepath.Base(p.Fset.Position(s.Pos()).Filename)
for _, f := range allowedFinishers {
if f == s.fn.Name() {
return false
}
}
for _, ignoredPkg := range recursivelyIgnoredPkgs {
if strings.HasPrefix(pkgPath, ignoredPkg) {
return false
}
}
for _, ignoredPkg := range ignoredPkgs {
if pkgPath == ignoredPkg {
return false
}
}
for _, ignoredFile := range ignoredFiles {
if absoluteFilePath == ignoredFile {
return false
}
if goFilePath == ignoredFile {
return false
}
}
return true
}
type arrayFlag []string
func (i *arrayFlag) String() string {
return fmt.Sprintf("%v", []string(*i))
}
func (i *arrayFlag) Set(value string) error {
*i = append(*i, value)
return nil
}
type selection struct {
*ast.SelectorExpr
fn *types.Func
pkg *types.Package
}

View File

@ -1,40 +0,0 @@
# Zerolog PrettyLog
This is a basic CLI utility that will colorize and pretty print your structured JSON logs.
## Usage
You can compile it or run it directly. The only catch is that by default Zerolog does not write to `stdout`
but rather to `stderr`, so we must pipe the `stderr` stream to this CLI tool (an alternative for programs you control is sketched at the end of this section).
### Linux
These commands will redirect `stderr` to our `prettylog` tool and `stdout` will remain unaffected.
1. Compiled version
```shell
some_program_with_zerolog 2> >(prettylog)
```
2. Run it directly with `go run`
```shell
some_program_with_zerolog 2> >(go run cmd/prettylog/prettylog.go)
```
### Windows
These commands will redirect `stderr` to `stdout` and then pipe it to our `prettylog` tool.
1. Compiled version
```shell
some_program_with_zerolog 2>&1 | prettylog
```
2. Run it directly with `go run`
```shell
some_program_with_zerolog 2>&1 | go run cmd/prettylog/prettylog.go
```
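If you control the program, an alternative to piping (sketched below; the field and message are illustrative) is to attach zlog's `ConsoleWriter` directly in development builds, skipping JSON output entirely:

```go
package main

import "tuxpa.in/a/zlog"

func main() {
	// Write colorized, human-friendly output straight to stdout.
	log := zlog.New(zlog.NewConsoleWriter())
	log.Info().Str("foo", "bar").Msg("pretty printed without piping")
}
```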

View File

@ -1,26 +0,0 @@
package main
import (
"fmt"
"io"
"os"
"tuxpa.in/a/zlog"
)
func isInputFromPipe() bool {
fileInfo, _ := os.Stdin.Stat()
return fileInfo.Mode()&os.ModeCharDevice == 0
}
func main() {
if !isInputFromPipe() {
fmt.Println("The command is intended to work with pipes.")
fmt.Println("Usage: app_with_zerolog | 2> >(prettylog)")
os.Exit(1)
return
}
writer := zlog.NewConsoleWriter()
_, _ = io.Copy(writer, os.Stdin)
}

View File

@ -1,450 +0,0 @@
package zlog
import (
"bytes"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/mattn/go-colorable"
)
const (
colorBlack = iota + 30
colorRed
colorGreen
colorYellow
colorBlue
colorMagenta
colorCyan
colorWhite
colorBold = 1
colorDarkGray = 90
)
var (
consoleBufPool = sync.Pool{
New: func() interface{} {
return bytes.NewBuffer(make([]byte, 0, 100))
},
}
)
const (
consoleDefaultTimeFormat = time.Kitchen
)
// Formatter transforms the input into a formatted string.
type Formatter func(interface{}) string
// ConsoleWriter parses the JSON input and writes it in an
// (optionally) colorized, human-friendly format to Out.
type ConsoleWriter struct {
// Out is the output destination.
Out io.Writer
// NoColor disables the colorized output.
NoColor bool
// TimeFormat specifies the format for timestamp in output.
TimeFormat string
// PartsOrder defines the order of parts in output.
PartsOrder []string
// PartsExclude defines parts to not display in output.
PartsExclude []string
// FieldsExclude defines contextual fields to not display in output.
FieldsExclude []string
FormatTimestamp Formatter
FormatLevel Formatter
FormatCaller Formatter
FormatMessage Formatter
FormatFieldName Formatter
FormatFieldValue Formatter
FormatErrFieldName Formatter
FormatErrFieldValue Formatter
FormatExtra func(map[string]interface{}, *bytes.Buffer) error
}
// NewConsoleWriter creates and initializes a new ConsoleWriter.
func NewConsoleWriter(options ...func(w *ConsoleWriter)) ConsoleWriter {
w := ConsoleWriter{
Out: os.Stdout,
TimeFormat: consoleDefaultTimeFormat,
PartsOrder: consoleDefaultPartsOrder(),
}
for _, opt := range options {
opt(&w)
}
// Fix color on Windows
if w.Out == os.Stdout || w.Out == os.Stderr {
w.Out = colorable.NewColorable(w.Out.(*os.File))
}
return w
}
// Write transforms the JSON input with formatters and appends to w.Out.
func (w ConsoleWriter) Write(p []byte) (n int, err error) {
// Fix color on Windows
if w.Out == os.Stdout || w.Out == os.Stderr {
w.Out = colorable.NewColorable(w.Out.(*os.File))
}
if w.PartsOrder == nil {
w.PartsOrder = consoleDefaultPartsOrder()
}
var buf = consoleBufPool.Get().(*bytes.Buffer)
defer func() {
buf.Reset()
consoleBufPool.Put(buf)
}()
var evt map[string]interface{}
p = decodeIfBinaryToBytes(p)
d := json.NewDecoder(bytes.NewReader(p))
d.UseNumber()
err = d.Decode(&evt)
if err != nil {
return n, fmt.Errorf("cannot decode event: %s", err)
}
for _, p := range w.PartsOrder {
w.writePart(buf, evt, p)
}
w.writeFields(evt, buf)
if w.FormatExtra != nil {
err = w.FormatExtra(evt, buf)
if err != nil {
return n, err
}
}
err = buf.WriteByte('\n')
if err != nil {
return n, err
}
_, err = buf.WriteTo(w.Out)
return len(p), err
}
// writeFields appends formatted key-value pairs to buf.
func (w ConsoleWriter) writeFields(evt map[string]interface{}, buf *bytes.Buffer) {
var fields = make([]string, 0, len(evt))
for field := range evt {
var isExcluded bool
for _, excluded := range w.FieldsExclude {
if field == excluded {
isExcluded = true
break
}
}
if isExcluded {
continue
}
switch field {
case LevelFieldName, TimestampFieldName, MessageFieldName, CallerFieldName:
continue
}
fields = append(fields, field)
}
sort.Strings(fields)
// Write space only if something has already been written to the buffer, and if there are fields.
if buf.Len() > 0 && len(fields) > 0 {
buf.WriteByte(' ')
}
// Move the "error" field to the front
ei := sort.Search(len(fields), func(i int) bool { return fields[i] >= ErrorFieldName })
if ei < len(fields) && fields[ei] == ErrorFieldName {
fields[ei] = ""
fields = append([]string{ErrorFieldName}, fields...)
var xfields = make([]string, 0, len(fields))
for _, field := range fields {
if field == "" { // Skip empty fields
continue
}
xfields = append(xfields, field)
}
fields = xfields
}
for i, field := range fields {
var fn Formatter
var fv Formatter
if field == ErrorFieldName {
if w.FormatErrFieldName == nil {
fn = consoleDefaultFormatErrFieldName(w.NoColor)
} else {
fn = w.FormatErrFieldName
}
if w.FormatErrFieldValue == nil {
fv = consoleDefaultFormatErrFieldValue(w.NoColor)
} else {
fv = w.FormatErrFieldValue
}
} else {
if w.FormatFieldName == nil {
fn = consoleDefaultFormatFieldName(w.NoColor)
} else {
fn = w.FormatFieldName
}
if w.FormatFieldValue == nil {
fv = consoleDefaultFormatFieldValue
} else {
fv = w.FormatFieldValue
}
}
buf.WriteString(fn(field))
switch fValue := evt[field].(type) {
case string:
if needsQuote(fValue) {
buf.WriteString(fv(strconv.Quote(fValue)))
} else {
buf.WriteString(fv(fValue))
}
case json.Number:
buf.WriteString(fv(fValue))
default:
b, err := InterfaceMarshalFunc(fValue)
if err != nil {
fmt.Fprintf(buf, colorize("[error: %v]", colorRed, w.NoColor), err)
} else {
fmt.Fprint(buf, fv(b))
}
}
if i < len(fields)-1 { // Skip space for last field
buf.WriteByte(' ')
}
}
}
// writePart appends a formatted part to buf.
func (w ConsoleWriter) writePart(buf *bytes.Buffer, evt map[string]interface{}, p string) {
var f Formatter
if len(w.PartsExclude) > 0 {
for _, exclude := range w.PartsExclude {
if exclude == p {
return
}
}
}
switch p {
case LevelFieldName:
if w.FormatLevel == nil {
f = consoleDefaultFormatLevel(w.NoColor)
} else {
f = w.FormatLevel
}
case TimestampFieldName:
if w.FormatTimestamp == nil {
f = consoleDefaultFormatTimestamp(w.TimeFormat, w.NoColor)
} else {
f = w.FormatTimestamp
}
case MessageFieldName:
if w.FormatMessage == nil {
f = consoleDefaultFormatMessage
} else {
f = w.FormatMessage
}
case CallerFieldName:
if w.FormatCaller == nil {
f = consoleDefaultFormatCaller(w.NoColor)
} else {
f = w.FormatCaller
}
default:
if w.FormatFieldValue == nil {
f = consoleDefaultFormatFieldValue
} else {
f = w.FormatFieldValue
}
}
var s = f(evt[p])
if len(s) > 0 {
if buf.Len() > 0 {
buf.WriteByte(' ') // Write space only if not the first part
}
buf.WriteString(s)
}
}
// needsQuote returns true when the string s should be quoted in output.
func needsQuote(s string) bool {
for i := range s {
if s[i] < 0x20 || s[i] > 0x7e || s[i] == ' ' || s[i] == '\\' || s[i] == '"' {
return true
}
}
return false
}
// colorize returns the string s wrapped in ANSI code c, unless disabled is true.
func colorize(s interface{}, c int, disabled bool) string {
if disabled {
return fmt.Sprintf("%s", s)
}
return fmt.Sprintf("\x1b[%dm%v\x1b[0m", c, s)
}
// ----- DEFAULT FORMATTERS ---------------------------------------------------
func consoleDefaultPartsOrder() []string {
return []string{
TimestampFieldName,
LevelFieldName,
CallerFieldName,
MessageFieldName,
}
}
func consoleDefaultFormatTimestamp(timeFormat string, noColor bool) Formatter {
if timeFormat == "" {
timeFormat = consoleDefaultTimeFormat
}
return func(i interface{}) string {
t := "<nil>"
switch tt := i.(type) {
case string:
ts, err := time.ParseInLocation(TimeFieldFormat, tt, time.Local)
if err != nil {
t = tt
} else {
t = ts.Local().Format(timeFormat)
}
case json.Number:
i, err := tt.Int64()
if err != nil {
t = tt.String()
} else {
var sec, nsec int64
switch TimeFieldFormat {
case TimeFormatUnixNano:
sec, nsec = 0, i
case TimeFormatUnixMicro:
sec, nsec = 0, int64(time.Duration(i)*time.Microsecond)
case TimeFormatUnixMs:
sec, nsec = 0, int64(time.Duration(i)*time.Millisecond)
default:
sec, nsec = i, 0
}
ts := time.Unix(sec, nsec)
t = ts.Format(timeFormat)
}
}
return colorize(t, colorDarkGray, noColor)
}
}
func consoleDefaultFormatLevel(noColor bool) Formatter {
return func(i interface{}) string {
var l string
if ll, ok := i.(string); ok {
switch ll {
case LevelTraceValue:
l = colorize("TRC", colorMagenta, noColor)
case LevelDebugValue:
l = colorize("DBG", colorYellow, noColor)
case LevelInfoValue:
l = colorize("INF", colorGreen, noColor)
case LevelWarnValue:
l = colorize("WRN", colorRed, noColor)
case LevelErrorValue:
l = colorize(colorize("ERR", colorRed, noColor), colorBold, noColor)
case LevelFatalValue:
l = colorize(colorize("FTL", colorRed, noColor), colorBold, noColor)
case LevelPanicValue:
l = colorize(colorize("PNC", colorRed, noColor), colorBold, noColor)
default:
l = colorize(ll, colorBold, noColor)
}
} else {
if i == nil {
l = colorize("???", colorBold, noColor)
} else {
l = strings.ToUpper(fmt.Sprintf("%s", i))[0:3]
}
}
return l
}
}
func consoleDefaultFormatCaller(noColor bool) Formatter {
return func(i interface{}) string {
var c string
if cc, ok := i.(string); ok {
c = cc
}
if len(c) > 0 {
if cwd, err := os.Getwd(); err == nil {
if rel, err := filepath.Rel(cwd, c); err == nil {
c = rel
}
}
c = colorize(c, colorBold, noColor) + colorize(" >", colorCyan, noColor)
}
return c
}
}
func consoleDefaultFormatMessage(i interface{}) string {
if i == nil {
return ""
}
return fmt.Sprintf("%s", i)
}
func consoleDefaultFormatFieldName(noColor bool) Formatter {
return func(i interface{}) string {
return colorize(fmt.Sprintf("%s=", i), colorCyan, noColor)
}
}
func consoleDefaultFormatFieldValue(i interface{}) string {
return fmt.Sprintf("%s", i)
}
func consoleDefaultFormatErrFieldName(noColor bool) Formatter {
return func(i interface{}) string {
return colorize(fmt.Sprintf("%s=", i), colorCyan, noColor)
}
}
func consoleDefaultFormatErrFieldValue(noColor bool) Formatter {
return func(i interface{}) string {
return colorize(fmt.Sprintf("%s", i), colorRed, noColor)
}
}

View File

@ -1,440 +0,0 @@
package zlog_test
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"strings"
"testing"
"time"
"tuxpa.in/a/zlog"
)
func ExampleConsoleWriter() {
log := zlog.New(zlog.ConsoleWriter{Out: os.Stdout, NoColor: true})
log.Info().Str("foo", "bar").Msg("Hello World")
// Output: <nil> INF Hello World foo=bar
}
func ExampleConsoleWriter_customFormatters() {
out := zlog.ConsoleWriter{Out: os.Stdout, NoColor: true}
out.FormatLevel = func(i interface{}) string { return strings.ToUpper(fmt.Sprintf("%-6s|", i)) }
out.FormatFieldName = func(i interface{}) string { return fmt.Sprintf("%s:", i) }
out.FormatFieldValue = func(i interface{}) string { return strings.ToUpper(fmt.Sprintf("%s", i)) }
log := zlog.New(out)
log.Info().Str("foo", "bar").Msg("Hello World")
// Output: <nil> INFO | Hello World foo:BAR
}
func ExampleNewConsoleWriter() {
out := zlog.NewConsoleWriter()
out.NoColor = true // For testing purposes only
log := zlog.New(out)
log.Debug().Str("foo", "bar").Msg("Hello World")
// Output: <nil> DBG Hello World foo=bar
}
func ExampleNewConsoleWriter_customFormatters() {
out := zlog.NewConsoleWriter(
func(w *zlog.ConsoleWriter) {
// Customize time format
w.TimeFormat = time.RFC822
// Customize level formatting
w.FormatLevel = func(i interface{}) string { return strings.ToUpper(fmt.Sprintf("[%-5s]", i)) }
},
)
out.NoColor = true // For testing purposes only
log := zlog.New(out)
log.Info().Str("foo", "bar").Msg("Hello World")
// Output: <nil> [INFO ] Hello World foo=bar
}
func TestConsoleLogger(t *testing.T) {
t.Run("Numbers", func(t *testing.T) {
buf := &bytes.Buffer{}
log := zlog.New(zlog.ConsoleWriter{Out: buf, NoColor: true})
log.Info().
Float64("float", 1.23).
Uint64("small", 123).
Uint64("big", 1152921504606846976).
Msg("msg")
if got, want := strings.TrimSpace(buf.String()), "<nil> INF msg big=1152921504606846976 float=1.23 small=123"; got != want {
t.Errorf("\ngot:\n%s\nwant:\n%s", got, want)
}
})
}
func TestConsoleWriter(t *testing.T) {
t.Run("Default field formatter", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true, PartsOrder: []string{"foo"}}
_, err := w.Write([]byte(`{"foo": "DEFAULT"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "DEFAULT foo=DEFAULT\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Write colorized", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: false}
_, err := w.Write([]byte(`{"level": "warn", "message": "Foobar"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "\x1b[90m<nil>\x1b[0m \x1b[31mWRN\x1b[0m Foobar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Write fields", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true}
ts := time.Unix(0, 0)
d := ts.UTC().Format(time.RFC3339)
_, err := w.Write([]byte(`{"time": "` + d + `", "level": "debug", "message": "Foobar", "foo": "bar"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := ts.Format(time.Kitchen) + " DBG Foobar foo=bar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Unix timestamp input format", func(t *testing.T) {
of := zlog.TimeFieldFormat
defer func() {
zlog.TimeFieldFormat = of
}()
zlog.TimeFieldFormat = zlog.TimeFormatUnix
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, TimeFormat: time.StampMilli, NoColor: true}
_, err := w.Write([]byte(`{"time": 1234, "level": "debug", "message": "Foobar", "foo": "bar"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := time.Unix(1234, 0).Format(time.StampMilli) + " DBG Foobar foo=bar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Unix timestamp ms input format", func(t *testing.T) {
of := zlog.TimeFieldFormat
defer func() {
zlog.TimeFieldFormat = of
}()
zlog.TimeFieldFormat = zlog.TimeFormatUnixMs
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, TimeFormat: time.StampMilli, NoColor: true}
_, err := w.Write([]byte(`{"time": 1234567, "level": "debug", "message": "Foobar", "foo": "bar"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := time.Unix(1234, 567000000).Format(time.StampMilli) + " DBG Foobar foo=bar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Unix timestamp us input format", func(t *testing.T) {
of := zlog.TimeFieldFormat
defer func() {
zlog.TimeFieldFormat = of
}()
zlog.TimeFieldFormat = zlog.TimeFormatUnixMicro
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, TimeFormat: time.StampMicro, NoColor: true}
_, err := w.Write([]byte(`{"time": 1234567891, "level": "debug", "message": "Foobar", "foo": "bar"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := time.Unix(1234, 567891000).Format(time.StampMicro) + " DBG Foobar foo=bar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("No message field", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true}
_, err := w.Write([]byte(`{"level": "debug", "foo": "bar"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "<nil> DBG foo=bar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("No level field", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true}
_, err := w.Write([]byte(`{"message": "Foobar", "foo": "bar"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "<nil> ??? Foobar foo=bar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Write colorized fields", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: false}
_, err := w.Write([]byte(`{"level": "warn", "message": "Foobar", "foo": "bar"}`))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "\x1b[90m<nil>\x1b[0m \x1b[31mWRN\x1b[0m Foobar \x1b[36mfoo=\x1b[0mbar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Write error field", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true}
ts := time.Unix(0, 0)
d := ts.UTC().Format(time.RFC3339)
evt := `{"time": "` + d + `", "level": "error", "message": "Foobar", "aaa": "bbb", "error": "Error"}`
// t.Log(evt)
_, err := w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := ts.Format(time.Kitchen) + " ERR Foobar error=Error aaa=bbb\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Write caller field", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true}
cwd, err := os.Getwd()
if err != nil {
t.Fatalf("Cannot get working directory: %s", err)
}
ts := time.Unix(0, 0)
d := ts.UTC().Format(time.RFC3339)
evt := `{"time": "` + d + `", "level": "debug", "message": "Foobar", "foo": "bar", "caller": "` + cwd + `/foo/bar.go"}`
// t.Log(evt)
_, err = w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := ts.Format(time.Kitchen) + " DBG foo/bar.go > Foobar foo=bar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Write JSON field", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true}
evt := `{"level": "debug", "message": "Foobar", "foo": [1, 2, 3], "bar": true}`
// t.Log(evt)
_, err := w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "<nil> DBG Foobar bar=true foo=[1,2,3]\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
}
func TestConsoleWriterConfiguration(t *testing.T) {
t.Run("Sets TimeFormat", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true, TimeFormat: time.RFC3339}
ts := time.Unix(0, 0)
d := ts.UTC().Format(time.RFC3339)
evt := `{"time": "` + d + `", "level": "info", "message": "Foobar"}`
_, err := w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := ts.Format(time.RFC3339) + " INF Foobar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Sets PartsOrder", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true, PartsOrder: []string{"message", "level"}}
evt := `{"level": "info", "message": "Foobar"}`
_, err := w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "Foobar INF\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Sets PartsExclude", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true, PartsExclude: []string{"time"}}
d := time.Unix(0, 0).UTC().Format(time.RFC3339)
evt := `{"time": "` + d + `", "level": "info", "message": "Foobar"}`
_, err := w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "INF Foobar\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Sets FieldsExclude", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true, FieldsExclude: []string{"foo"}}
evt := `{"level": "info", "message": "Foobar", "foo":"bar", "baz":"quux"}`
_, err := w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "<nil> INF Foobar baz=quux\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Sets FormatExtra", func(t *testing.T) {
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{
Out: buf, NoColor: true, PartsOrder: []string{"level", "message"},
FormatExtra: func(evt map[string]interface{}, buf *bytes.Buffer) error {
buf.WriteString("\nAdditional stacktrace")
return nil
},
}
evt := `{"level": "info", "message": "Foobar"}`
_, err := w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
expectedOutput := "INF Foobar\nAdditional stacktrace\n"
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
t.Run("Uses local time for console writer without time zone", func(t *testing.T) {
// Regression test for issue #483 (check there for more details)
timeFormat := "2006-01-02 15:04:05"
expectedOutput := "2022-10-20 20:24:50 INF Foobar\n"
evt := `{"time": "2022-10-20 20:24:50", "level": "info", "message": "Foobar"}`
of := zlog.TimeFieldFormat
defer func() {
zlog.TimeFieldFormat = of
}()
zlog.TimeFieldFormat = timeFormat
buf := &bytes.Buffer{}
w := zlog.ConsoleWriter{Out: buf, NoColor: true, TimeFormat: timeFormat}
_, err := w.Write([]byte(evt))
if err != nil {
t.Errorf("Unexpected error when writing output: %s", err)
}
actualOutput := buf.String()
if actualOutput != expectedOutput {
t.Errorf("Unexpected output %q, want: %q", actualOutput, expectedOutput)
}
})
}
func BenchmarkConsoleWriter(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
var msg = []byte(`{"level": "info", "foo": "bar", "message": "HELLO", "time": "1990-01-01"}`)
w := zlog.ConsoleWriter{Out: ioutil.Discard, NoColor: false}
for i := 0; i < b.N; i++ {
w.Write(msg)
}
}

View File

@ -1,12 +1,6 @@
package zlog package zerolog
import ( import "time"
"fmt"
"io/ioutil"
"math"
"net"
"time"
)
// Context configures a new sub-logger with contextual fields. // Context configures a new sub-logger with contextual fields.
type Context struct { type Context struct {
@ -18,417 +12,136 @@ func (c Context) Logger() Logger {
return c.l return c.l
} }
// Fields is a helper function to use a map or slice to set fields using type assertion.
// Only map[string]interface{} and []interface{} are accepted. []interface{} must
// alternate string keys and arbitrary values, and extraneous ones are ignored.
func (c Context) Fields(fields interface{}) Context {
c.l.context = appendFields(c.l.context, fields)
return c
}
// Dict adds the field key with the dict to the logger context. // Dict adds the field key with the dict to the logger context.
func (c Context) Dict(key string, dict *Event) Context { func (c Context) Dict(key string, dict *Event) Context {
dict.buf = enc.AppendEndMarker(dict.buf) dict.buf = append(dict.buf, '}')
c.l.context = append(enc.AppendKey(c.l.context, key), dict.buf...) c.l.context = append(appendKey(c.l.context, key), dict.buf...)
putEvent(dict) eventPool.Put(dict)
return c
}
// Array adds the field key with an array to the event context.
// Use zlog.Arr() to create the array or pass a type that
// implements the LogArrayMarshaler interface.
func (c Context) Array(key string, arr LogArrayMarshaler) Context {
c.l.context = enc.AppendKey(c.l.context, key)
if arr, ok := arr.(*Array); ok {
c.l.context = arr.write(c.l.context)
return c
}
var a *Array
if aa, ok := arr.(*Array); ok {
a = aa
} else {
a = Arr()
arr.MarshalZerologArray(a)
}
c.l.context = a.write(c.l.context)
return c
}
// Object marshals an object that implements the LogObjectMarshaler interface.
func (c Context) Object(key string, obj LogObjectMarshaler) Context {
e := newEvent(levelWriterAdapter{ioutil.Discard}, 0)
e.Object(key, obj)
c.l.context = enc.AppendObjectData(c.l.context, e.buf)
putEvent(e)
return c
}
// EmbedObject marshals and embeds an object that implements the LogObjectMarshaler interface.
func (c Context) EmbedObject(obj LogObjectMarshaler) Context {
e := newEvent(levelWriterAdapter{ioutil.Discard}, 0)
e.EmbedObject(obj)
c.l.context = enc.AppendObjectData(c.l.context, e.buf)
putEvent(e)
return c return c
} }
// Str adds the field key with val as a string to the logger context. // Str adds the field key with val as a string to the logger context.
func (c Context) Str(key, val string) Context { func (c Context) Str(key, val string) Context {
c.l.context = enc.AppendString(enc.AppendKey(c.l.context, key), val) c.l.context = appendString(c.l.context, key, val)
return c return c
} }
// Strs adds the field key with vals as a []string to the logger context. // AnErr adds the field key with err as a string to the logger context.
func (c Context) Strs(key string, vals []string) Context {
c.l.context = enc.AppendStrings(enc.AppendKey(c.l.context, key), vals)
return c
}
// Stringer adds the field key with val.String() (or null if val is nil) to the logger context.
func (c Context) Stringer(key string, val fmt.Stringer) Context {
if val != nil {
c.l.context = enc.AppendString(enc.AppendKey(c.l.context, key), val.String())
return c
}
c.l.context = enc.AppendInterface(enc.AppendKey(c.l.context, key), nil)
return c
}
// Bytes adds the field key with val as a []byte to the logger context.
func (c Context) Bytes(key string, val []byte) Context {
c.l.context = enc.AppendBytes(enc.AppendKey(c.l.context, key), val)
return c
}
// Hex adds the field key with val as a hex string to the logger context.
func (c Context) Hex(key string, val []byte) Context {
c.l.context = enc.AppendHex(enc.AppendKey(c.l.context, key), val)
return c
}
// RawJSON adds already encoded JSON to context.
//
// No sanity check is performed on b; it must not contain carriage returns and
// be valid JSON.
func (c Context) RawJSON(key string, b []byte) Context {
c.l.context = appendJSON(enc.AppendKey(c.l.context, key), b)
return c
}
// AnErr adds the field key with serialized err to the logger context.
func (c Context) AnErr(key string, err error) Context { func (c Context) AnErr(key string, err error) Context {
switch m := ErrorMarshalFunc(err).(type) { c.l.context = appendErrorKey(c.l.context, key, err)
case nil:
return c return c
case LogObjectMarshaler:
return c.Object(key, m)
case error:
if m == nil || isNilValue(m) {
return c
} else {
return c.Str(key, m.Error())
}
case string:
return c.Str(key, m)
default:
return c.Interface(key, m)
}
} }
// Errs adds the field key with errs as an array of serialized errors to the // Err adds the field "error" with err as a string to the logger context.
// logger context. // To customize the key name, change zerolog.ErrorFieldName.
func (c Context) Errs(key string, errs []error) Context {
arr := Arr()
for _, err := range errs {
switch m := ErrorMarshalFunc(err).(type) {
case LogObjectMarshaler:
arr = arr.Object(m)
case error:
if m == nil || isNilValue(m) {
arr = arr.Interface(nil)
} else {
arr = arr.Str(m.Error())
}
case string:
arr = arr.Str(m)
default:
arr = arr.Interface(m)
}
}
return c.Array(key, arr)
}
// Err adds the field "error" with serialized err to the logger context.
func (c Context) Err(err error) Context { func (c Context) Err(err error) Context {
return c.AnErr(ErrorFieldName, err) c.l.context = appendError(c.l.context, err)
}
// Bool adds the field key with val as a bool to the logger context.
func (c Context) Bool(key string, b bool) Context {
c.l.context = enc.AppendBool(enc.AppendKey(c.l.context, key), b)
return c return c
} }
// Bools adds the field key with val as a []bool to the logger context. // Bool adds the field key with val as a Boolean to the logger context.
func (c Context) Bools(key string, b []bool) Context { func (c Context) Bool(key string, b bool) Context {
c.l.context = enc.AppendBools(enc.AppendKey(c.l.context, key), b) c.l.context = appendBool(c.l.context, key, b)
return c return c
} }
// Int adds the field key with i as an int to the logger context. // Int adds the field key with i as an int to the logger context.
func (c Context) Int(key string, i int) Context { func (c Context) Int(key string, i int) Context {
c.l.context = enc.AppendInt(enc.AppendKey(c.l.context, key), i) c.l.context = appendInt(c.l.context, key, i)
return c
}
// Ints adds the field key with i as a []int to the logger context.
func (c Context) Ints(key string, i []int) Context {
c.l.context = enc.AppendInts(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Int8 adds the field key with i as an int8 to the logger context. // Int8 adds the field key with i as an int8 to the logger context.
func (c Context) Int8(key string, i int8) Context { func (c Context) Int8(key string, i int8) Context {
c.l.context = enc.AppendInt8(enc.AppendKey(c.l.context, key), i) c.l.context = appendInt8(c.l.context, key, i)
return c
}
// Ints8 adds the field key with i as a []int8 to the logger context.
func (c Context) Ints8(key string, i []int8) Context {
c.l.context = enc.AppendInts8(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Int16 adds the field key with i as an int16 to the logger context. // Int16 adds the field key with i as an int16 to the logger context.
func (c Context) Int16(key string, i int16) Context { func (c Context) Int16(key string, i int16) Context {
c.l.context = enc.AppendInt16(enc.AppendKey(c.l.context, key), i) c.l.context = appendInt16(c.l.context, key, i)
return c
}
// Ints16 adds the field key with i as a []int16 to the logger context.
func (c Context) Ints16(key string, i []int16) Context {
c.l.context = enc.AppendInts16(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Int32 adds the field key with i as an int32 to the logger context. // Int32 adds the field key with i as an int32 to the logger context.
func (c Context) Int32(key string, i int32) Context { func (c Context) Int32(key string, i int32) Context {
c.l.context = enc.AppendInt32(enc.AppendKey(c.l.context, key), i) c.l.context = appendInt32(c.l.context, key, i)
return c
}
// Ints32 adds the field key with i as a []int32 to the logger context.
func (c Context) Ints32(key string, i []int32) Context {
c.l.context = enc.AppendInts32(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Int64 adds the field key with i as an int64 to the logger context. // Int64 adds the field key with i as an int64 to the logger context.
func (c Context) Int64(key string, i int64) Context { func (c Context) Int64(key string, i int64) Context {
c.l.context = enc.AppendInt64(enc.AppendKey(c.l.context, key), i) c.l.context = appendInt64(c.l.context, key, i)
return c
}
// Ints64 adds the field key with i as a []int64 to the logger context.
func (c Context) Ints64(key string, i []int64) Context {
c.l.context = enc.AppendInts64(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Uint adds the field key with i as a uint to the logger context. // Uint adds the field key with i as a uint to the logger context.
func (c Context) Uint(key string, i uint) Context { func (c Context) Uint(key string, i uint) Context {
c.l.context = enc.AppendUint(enc.AppendKey(c.l.context, key), i) c.l.context = appendUint(c.l.context, key, i)
return c
}
// Uints adds the field key with i as a []uint to the logger context.
func (c Context) Uints(key string, i []uint) Context {
c.l.context = enc.AppendUints(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Uint8 adds the field key with i as a uint8 to the logger context. // Uint8 adds the field key with i as a uint8 to the logger context.
func (c Context) Uint8(key string, i uint8) Context { func (c Context) Uint8(key string, i uint8) Context {
c.l.context = enc.AppendUint8(enc.AppendKey(c.l.context, key), i) c.l.context = appendUint8(c.l.context, key, i)
return c
}
// Uints8 adds the field key with i as a []uint8 to the logger context.
func (c Context) Uints8(key string, i []uint8) Context {
c.l.context = enc.AppendUints8(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Uint16 adds the field key with i as a uint16 to the logger context. // Uint16 adds the field key with i as a uint16 to the logger context.
func (c Context) Uint16(key string, i uint16) Context { func (c Context) Uint16(key string, i uint16) Context {
c.l.context = enc.AppendUint16(enc.AppendKey(c.l.context, key), i) c.l.context = appendUint16(c.l.context, key, i)
return c
}
// Uints16 adds the field key with i as a []uint16 to the logger context.
func (c Context) Uints16(key string, i []uint16) Context {
c.l.context = enc.AppendUints16(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Uint32 adds the field key with i as a uint32 to the logger context. // Uint32 adds the field key with i as a uint32 to the logger context.
func (c Context) Uint32(key string, i uint32) Context { func (c Context) Uint32(key string, i uint32) Context {
c.l.context = enc.AppendUint32(enc.AppendKey(c.l.context, key), i) c.l.context = appendUint32(c.l.context, key, i)
return c
}
// Uints32 adds the field key with i as a []uint32 to the logger context.
func (c Context) Uints32(key string, i []uint32) Context {
c.l.context = enc.AppendUints32(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Uint64 adds the field key with i as a uint64 to the logger context. // Uint64 adds the field key with i as a uint64 to the logger context.
func (c Context) Uint64(key string, i uint64) Context { func (c Context) Uint64(key string, i uint64) Context {
c.l.context = enc.AppendUint64(enc.AppendKey(c.l.context, key), i) c.l.context = appendUint64(c.l.context, key, i)
return c
}
// Uints64 adds the field key with i as a []uint64 to the logger context.
func (c Context) Uints64(key string, i []uint64) Context {
c.l.context = enc.AppendUints64(enc.AppendKey(c.l.context, key), i)
return c return c
} }
// Float32 adds the field key with f as a float32 to the logger context. // Float32 adds the field key with f as a float32 to the logger context.
func (c Context) Float32(key string, f float32) Context { func (c Context) Float32(key string, f float32) Context {
c.l.context = enc.AppendFloat32(enc.AppendKey(c.l.context, key), f) c.l.context = appendFloat32(c.l.context, key, f)
return c
}
// Floats32 adds the field key with f as a []float32 to the logger context.
func (c Context) Floats32(key string, f []float32) Context {
c.l.context = enc.AppendFloats32(enc.AppendKey(c.l.context, key), f)
return c return c
} }
// Float64 adds the field key with f as a float64 to the logger context. // Float64 adds the field key with f as a float64 to the logger context.
func (c Context) Float64(key string, f float64) Context { func (c Context) Float64(key string, f float64) Context {
c.l.context = enc.AppendFloat64(enc.AppendKey(c.l.context, key), f) c.l.context = appendFloat64(c.l.context, key, f)
return c return c
} }
// Floats64 adds the field key with f as a []float64 to the logger context. // Timestamp adds the current local time as UNIX timestamp to the logger context with the "time" key.
func (c Context) Floats64(key string, f []float64) Context { // To customize the key name, change zerolog.TimestampFieldName.
c.l.context = enc.AppendFloats64(enc.AppendKey(c.l.context, key), f)
return c
}
type timestampHook struct{}
func (ts timestampHook) Run(e *Event, level Level, msg string) {
e.Timestamp()
}
var th = timestampHook{}
// Timestamp adds the current local time to the logger context with the "time" key, formatted using zlog.TimeFieldFormat.
// To customize the key name, change zlog.TimestampFieldName.
// To customize the time format, change zlog.TimeFieldFormat.
//
// NOTE: It won't dedupe the "time" key if the *Context has one already.
func (c Context) Timestamp() Context { func (c Context) Timestamp() Context {
c.l = c.l.Hook(th) if len(c.l.context) > 0 {
c.l.context[0] = 1
} else {
c.l.context = append(c.l.context, 1)
}
return c return c
} }
// Time adds the field key with t formatted as a string using zlog.TimeFieldFormat. // Time adds the field key with t formatted as a string using zerolog.TimeFieldFormat.
func (c Context) Time(key string, t time.Time) Context { func (c Context) Time(key string, t time.Time) Context {
c.l.context = enc.AppendTime(enc.AppendKey(c.l.context, key), t, TimeFieldFormat) c.l.context = appendTime(c.l.context, key, t)
return c
}
// Times adds the field key with t formatted as strings using zlog.TimeFieldFormat.
func (c Context) Times(key string, t []time.Time) Context {
c.l.context = enc.AppendTimes(enc.AppendKey(c.l.context, key), t, TimeFieldFormat)
return c return c
} }
// Dur adds the field key with d divided by unit and stored as a float. // Dur adds the field key with d divided by unit and stored as a float.
func (c Context) Dur(key string, d time.Duration) Context { func (c Context) Dur(key string, d time.Duration) Context {
c.l.context = enc.AppendDuration(enc.AppendKey(c.l.context, key), d, DurationFieldUnit, DurationFieldInteger) c.l.context = appendDuration(c.l.context, key, d)
return c
}
// Durs adds the field key with d divided by unit and stored as floats.
func (c Context) Durs(key string, d []time.Duration) Context {
c.l.context = enc.AppendDurations(enc.AppendKey(c.l.context, key), d, DurationFieldUnit, DurationFieldInteger)
return c return c
} }
// Interface adds the field key with obj marshaled using reflection. // Interface adds the field key with obj marshaled using reflection.
func (c Context) Interface(key string, i interface{}) Context { func (c Context) Interface(key string, i interface{}) Context {
c.l.context = enc.AppendInterface(enc.AppendKey(c.l.context, key), i) c.l.context = appendInterface(c.l.context, key, i)
return c
}
type callerHook struct {
callerSkipFrameCount int
}
func newCallerHook(skipFrameCount int) callerHook {
return callerHook{callerSkipFrameCount: skipFrameCount}
}
func (ch callerHook) Run(e *Event, level Level, msg string) {
switch ch.callerSkipFrameCount {
case useGlobalSkipFrameCount:
// Extra frames to skip (added by hook infra).
e.caller(CallerSkipFrameCount + contextCallerSkipFrameCount)
default:
// Extra frames to skip (added by hook infra).
e.caller(ch.callerSkipFrameCount + contextCallerSkipFrameCount)
}
}
// useGlobalSkipFrameCount acts as a flag to inform callerHook.Run
// to use the global CallerSkipFrameCount.
const useGlobalSkipFrameCount = math.MinInt32
// ch is the default caller hook using the global CallerSkipFrameCount.
var ch = newCallerHook(useGlobalSkipFrameCount)
// Caller adds the file:line of the caller with the zlog.CallerFieldName key.
func (c Context) Caller() Context {
c.l = c.l.Hook(ch)
return c
}
// CallerWithSkipFrameCount adds the file:line of the caller with the zlog.CallerFieldName key.
// The specified skipFrameCount int will override the global CallerSkipFrameCount for this context's respective logger.
// If set to -1 the global CallerSkipFrameCount will be used.
func (c Context) CallerWithSkipFrameCount(skipFrameCount int) Context {
c.l = c.l.Hook(newCallerHook(skipFrameCount))
return c
}
// Stack enables stack trace printing for the error passed to Err().
func (c Context) Stack() Context {
c.l.stack = true
return c
}
// IPAddr adds IPv4 or IPv6 Address to the context
func (c Context) IPAddr(key string, ip net.IP) Context {
c.l.context = enc.AppendIPAddr(enc.AppendKey(c.l.context, key), ip)
return c
}
// IPPrefix adds IPv4 or IPv6 Prefix (address and mask) to the context
func (c Context) IPPrefix(key string, pfx net.IPNet) Context {
c.l.context = enc.AppendIPPrefix(enc.AppendKey(c.l.context, key), pfx)
return c
}
// MACAddr adds MAC address to the context
func (c Context) MACAddr(key string, ha net.HardwareAddr) Context {
c.l.context = enc.AppendMACAddr(enc.AppendKey(c.l.context, key), ha)
return c return c
} }
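As a usage note for the Context builder above, here is a minimal sketch of deriving a sub-logger with contextual fields (the field names and values are illustrative):

```go
package main

import (
	"os"
	"time"

	"tuxpa.in/a/zlog"
)

func main() {
	// Every event logged through this sub-logger carries the fields below.
	logger := zlog.New(os.Stdout).With().
		Timestamp().
		Caller().
		Str("service", "billing").
		Dur("timeout", 5*time.Second).
		Logger()
	logger.Info().Msg("sub-logger with contextual fields")
}
```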

43
ctx.go
View File

@ -1,52 +1,29 @@
package zlog package zerolog
import ( import (
"context" "context"
"io/ioutil"
) )
var disabledLogger *Logger var disabledLogger = New(ioutil.Discard).Level(Disabled)
func init() {
SetGlobalLevel(TraceLevel)
l := Nop()
disabledLogger = &l
}
type ctxKey struct{} type ctxKey struct{}
// WithContext returns a copy of ctx with the receiver attached. The Logger // WithContext returns a copy of ctx with l associated.
// attached to the provided Context (if any) will not be affected. If the
// receiver's log level is Disabled it will only be attached to the returned
// Context if the provided Context has a previously attached Logger. If the
// provided Context has no attached Logger, a Disabled Logger will not be
// attached.
//
// Note: to modify the existing Logger attached to a Context (instead of
// replacing it in a new Context), use UpdateContext with the following
// notation:
//
// ctx := r.Context()
// l := zlog.Ctx(ctx)
// l.UpdateContext(func(c Context) Context {
// return c.Str("bar", "baz")
// })
//
func (l Logger) WithContext(ctx context.Context) context.Context { func (l Logger) WithContext(ctx context.Context) context.Context {
if _, ok := ctx.Value(ctxKey{}).(*Logger); !ok && l.level == Disabled { if lp, ok := ctx.Value(ctxKey{}).(*Logger); ok {
// Do not store disabled logger. // Update existing pointer.
*lp = l
return ctx return ctx
} }
return context.WithValue(ctx, ctxKey{}, &l) return context.WithValue(ctx, ctxKey{}, &l)
} }
// Ctx returns the Logger associated with the ctx. If no logger // Ctx returns the Logger associated with the ctx. If no logger
// is associated, DefaultContextLogger is returned, unless DefaultContextLogger // is associated, a disabled logger is returned.
// is nil, in which case a disabled logger is returned. func Ctx(ctx context.Context) Logger {
func Ctx(ctx context.Context) *Logger {
if l, ok := ctx.Value(ctxKey{}).(*Logger); ok { if l, ok := ctx.Value(ctxKey{}).(*Logger); ok {
return l return *l
} else if l = DefaultContextLogger; l != nil {
return l
} }
return disabledLogger return disabledLogger
} }
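As a usage note for WithContext/Ctx above, here is a minimal sketch of attaching a logger to a context.Context and retrieving it further down the call stack (the `step` function and field are illustrative; note that in this fork Ctx returns a `*Logger`):

```go
package main

import (
	"context"
	"os"

	"tuxpa.in/a/zlog"
)

func main() {
	logger := zlog.New(os.Stdout)
	ctx := logger.WithContext(context.Background())
	step(ctx)
}

func step(ctx context.Context) {
	// Ctx returns the *Logger stored in ctx, or a disabled logger if none is attached.
	zlog.Ctx(ctx).Info().Str("step", "validate").Msg("running")
}
```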

View File

@ -1,4 +1,4 @@
package zlog package zerolog
import ( import (
"context" "context"
@ -11,7 +11,7 @@ func TestCtx(t *testing.T) {
log := New(ioutil.Discard) log := New(ioutil.Discard)
ctx := log.WithContext(context.Background()) ctx := log.WithContext(context.Background())
log2 := Ctx(ctx) log2 := Ctx(ctx)
if !reflect.DeepEqual(log, *log2) { if !reflect.DeepEqual(log, log2) {
t.Error("Ctx did not return the expected logger") t.Error("Ctx did not return the expected logger")
} }
@ -19,52 +19,12 @@ func TestCtx(t *testing.T) {
log = log.Level(InfoLevel) log = log.Level(InfoLevel)
ctx = log.WithContext(ctx) ctx = log.WithContext(ctx)
log2 = Ctx(ctx) log2 = Ctx(ctx)
if !reflect.DeepEqual(log, *log2) { if !reflect.DeepEqual(log, log2) {
t.Error("Ctx did not return the expected logger") t.Error("Ctx did not return the expected logger")
} }
log2 = Ctx(context.Background()) log2 = Ctx(context.Background())
if log2 != disabledLogger { if !reflect.DeepEqual(log2, disabledLogger) {
t.Error("Ctx did not return the expected logger")
}
DefaultContextLogger = &log
t.Cleanup(func() { DefaultContextLogger = nil })
log2 = Ctx(context.Background())
if log2 != &log {
t.Error("Ctx did not return the expected logger") t.Error("Ctx did not return the expected logger")
} }
} }
func TestCtxDisabled(t *testing.T) {
dl := New(ioutil.Discard).Level(Disabled)
ctx := dl.WithContext(context.Background())
if ctx != context.Background() {
t.Error("WithContext stored a disabled logger")
}
l := New(ioutil.Discard).With().Str("foo", "bar").Logger()
ctx = l.WithContext(ctx)
if !reflect.DeepEqual(Ctx(ctx), &l) {
t.Error("WithContext did not store logger")
}
l.UpdateContext(func(c Context) Context {
return c.Str("bar", "baz")
})
ctx = l.WithContext(ctx)
if !reflect.DeepEqual(Ctx(ctx), &l) {
t.Error("WithContext did not store updated logger")
}
l = l.Level(DebugLevel)
ctx = l.WithContext(ctx)
if !reflect.DeepEqual(Ctx(ctx), &l) {
t.Error("WithContext did not store copied logger")
}
ctx = dl.WithContext(ctx)
if !reflect.DeepEqual(Ctx(ctx), &dl) {
t.Error("WithContext did not override logger with a disabled logger")
}
}

View File

@ -1,114 +0,0 @@
// Package diode provides a thread-safe, lock-free, non-blocking io.Writer
// wrapper.
package diode
import (
"context"
"io"
"sync"
"time"
"tuxpa.in/a/zlog/diode/internal/diodes"
)
var bufPool = &sync.Pool{
New: func() interface{} {
return make([]byte, 0, 500)
},
}
type Alerter func(missed int)
type diodeFetcher interface {
diodes.Diode
Next() diodes.GenericDataType
}
// Writer is an io.Writer wrapper that uses a diode to make Write lock-free,
// non-blocking and thread safe.
type Writer struct {
w io.Writer
d diodeFetcher
c context.CancelFunc
done chan struct{}
}
// NewWriter creates a writer wrapping w with a many-to-one diode in order to
// never block log producers and drop events if the writer can't keep up with
// the flow of data.
//
// Use a diode.Writer as follows:
//
// wr := diode.NewWriter(w, 1000, 0, func(missed int) {
// log.Printf("Dropped %d messages", missed)
// })
// log := zlog.New(wr)
//
// If pollInterval is greater than 0, a poller is used otherwise a waiter is
// used.
//
// See code.cloudfoundry.org/go-diodes for more info on diode.
func NewWriter(w io.Writer, size int, pollInterval time.Duration, f Alerter) Writer {
ctx, cancel := context.WithCancel(context.Background())
dw := Writer{
w: w,
c: cancel,
done: make(chan struct{}),
}
if f == nil {
f = func(int) {}
}
d := diodes.NewManyToOne(size, diodes.AlertFunc(f))
if pollInterval > 0 {
dw.d = diodes.NewPoller(d,
diodes.WithPollingInterval(pollInterval),
diodes.WithPollingContext(ctx))
} else {
dw.d = diodes.NewWaiter(d,
diodes.WithWaiterContext(ctx))
}
go dw.poll()
return dw
}
func (dw Writer) Write(p []byte) (n int, err error) {
// p is pooled in zlog so we can't hold it past this call, hence the
// copy.
p = append(bufPool.Get().([]byte), p...)
dw.d.Set(diodes.GenericDataType(&p))
return len(p), nil
}
// Close releases the diode poller and calls Close on the wrapped writer if
// io.Closer is implemented.
func (dw Writer) Close() error {
dw.c()
<-dw.done
if w, ok := dw.w.(io.Closer); ok {
return w.Close()
}
return nil
}
func (dw Writer) poll() {
defer close(dw.done)
for {
d := dw.d.Next()
if d == nil {
return
}
p := *(*[]byte)(d)
dw.w.Write(p)
// Proper usage of a sync.Pool requires each entry to have approximately
// the same memory cost. To obtain this property when the stored type
// contains a variably-sized buffer, we add a hard limit on the maximum buffer
// to place back in the pool.
//
// See https://golang.org/issue/23199
const maxSize = 1 << 16 // 64KiB
if cap(p) <= maxSize {
bufPool.Put(p[:0])
}
}
}

View File

@ -1,23 +0,0 @@
// +build !binary_log
package diode_test
import (
"fmt"
"os"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/diode"
)
func ExampleNewWriter() {
w := diode.NewWriter(os.Stdout, 1000, 0, func(missed int) {
fmt.Printf("Dropped %d messages\n", missed)
})
log := zlog.New(w)
log.Print("test")
w.Close()
// Output: {"level":"debug","message":"test"}
}

View File

@ -1,62 +0,0 @@
package diode_test
import (
"bytes"
"fmt"
"io/ioutil"
"log"
"os"
"testing"
"time"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/diode"
"tuxpa.in/a/zlog/internal/cbor"
)
func TestNewWriter(t *testing.T) {
buf := bytes.Buffer{}
w := diode.NewWriter(&buf, 1000, 0, func(missed int) {
fmt.Printf("Dropped %d messages\n", missed)
})
log := zlog.New(w)
log.Print("test")
w.Close()
want := "{\"level\":\"debug\",\"message\":\"test\"}\n"
got := cbor.DecodeIfBinaryToString(buf.Bytes())
if got != want {
t.Errorf("Diode New Writer Test failed. got:%s, want:%s!", got, want)
}
}
func TestClose(t *testing.T) {
buf := bytes.Buffer{}
w := diode.NewWriter(&buf, 1000, 0, func(missed int) {})
log := zlog.New(w)
log.Print("test")
w.Close()
}
func Benchmark(b *testing.B) {
log.SetOutput(ioutil.Discard)
defer log.SetOutput(os.Stderr)
benchs := map[string]time.Duration{
"Waiter": 0,
"Pooler": 10 * time.Millisecond,
}
for name, interval := range benchs {
b.Run(name, func(b *testing.B) {
w := diode.NewWriter(ioutil.Discard, 100000, interval, nil)
log := zlog.New(w)
defer w.Close()
b.SetParallelism(1000)
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
log.Print("test")
}
})
})
}
}


@ -1 +0,0 @@
Copied from https://github.com/cloudfoundry/go-diodes to avoid test dependencies.


@ -1,130 +0,0 @@
package diodes
import (
"log"
"sync/atomic"
"unsafe"
)
// ManyToOne diode is optimal for many writers (go-routines B-n) and a single
// reader (go-routine A). It is not thread safe for multiple readers.
type ManyToOne struct {
writeIndex uint64
readIndex uint64
buffer []unsafe.Pointer
alerter Alerter
}
// NewManyToOne creates a new diode (ring buffer). The ManyToOne diode
// is optimized for many writers (on go-routines B-n) and a single reader
// (on go-routine A). The alerter is invoked on the reader's go-routine. It is
// called when it notices that the writer go-routine has passed it and wrote
// over data. A nil can be used to ignore alerts.
func NewManyToOne(size int, alerter Alerter) *ManyToOne {
if alerter == nil {
alerter = AlertFunc(func(int) {})
}
d := &ManyToOne{
buffer: make([]unsafe.Pointer, size),
alerter: alerter,
}
// Start write index at the value before 0
// to allow the first write to use AddUint64
// and still have a beginning index of 0
d.writeIndex = ^d.writeIndex
return d
}
// Set sets the data in the next slot of the ring buffer.
func (d *ManyToOne) Set(data GenericDataType) {
for {
writeIndex := atomic.AddUint64(&d.writeIndex, 1)
idx := writeIndex % uint64(len(d.buffer))
old := atomic.LoadPointer(&d.buffer[idx])
if old != nil &&
(*bucket)(old) != nil &&
(*bucket)(old).seq > writeIndex-uint64(len(d.buffer)) {
log.Println("Diode set collision: consider using a larger diode")
continue
}
newBucket := &bucket{
data: data,
seq: writeIndex,
}
if !atomic.CompareAndSwapPointer(&d.buffer[idx], old, unsafe.Pointer(newBucket)) {
log.Println("Diode set collision: consider using a larger diode")
continue
}
return
}
}
// TryNext will attempt to read from the next slot of the ring buffer.
// If there is no data available, it will return (nil, false).
func (d *ManyToOne) TryNext() (data GenericDataType, ok bool) {
// Read a value from the ring buffer based on the readIndex.
idx := d.readIndex % uint64(len(d.buffer))
result := (*bucket)(atomic.SwapPointer(&d.buffer[idx], nil))
// When the result is nil that means the writer has not had the
// opportunity to write a value into the diode. This value must be ignored
// and the read head must not increment.
if result == nil {
return nil, false
}
// When the seq value is less than the current read index that means a
// value was read from idx that was previously written but since has
// been dropped. This value must be ignored and the read head must not
// increment.
//
// The simulation for this scenario assumes the fast forward occurred as
// detailed below.
//
// 5. The reader reads again getting seq 5. It then reads again expecting
// seq 6 but gets seq 2. This is a read of a stale value that was
// effectively "dropped" so the read fails and the read head stays put.
// `| 4 | 5 | 2 | 3 |` r: 7, w: 6
//
if result.seq < d.readIndex {
return nil, false
}
// When the seq value is greater than the current read index that means a
// value was read from idx that overwrote the value that was expected to
// be at this idx. This happens when the writer has lapped the reader. The
// reader needs to catch up to the writer so it moves its read head to
// the new seq, effectively dropping the messages that were not read in
// between the two values.
//
// Here is a simulation of this scenario:
//
// 1. Both the read and write heads start at 0.
// `| nil | nil | nil | nil |` r: 0, w: 0
// 2. The writer fills the buffer.
// `| 0 | 1 | 2 | 3 |` r: 0, w: 4
// 3. The writer laps the read head.
// `| 4 | 5 | 2 | 3 |` r: 0, w: 6
// 4. The reader reads the first value, expecting a seq of 0 but reads 4,
// this forces the reader to fast forward to 5.
// `| 4 | 5 | 2 | 3 |` r: 5, w: 6
//
if result.seq > d.readIndex {
dropped := result.seq - d.readIndex
d.readIndex = result.seq
d.alerter.Alert(int(dropped))
}
// Only increment read index if a regular read occurred (where seq was
// equal to readIndex) or a value was read that caused a fast forward
// (where seq was greater than readIndex).
//
d.readIndex++
return result.data, true
}
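
The comments above describe how a lapped reader fast-forwards and reports drops. The sketch below shows the intended many-writers/one-reader shape; it is illustrative only and assumes it sits inside the diodes package itself (the package is internal to zlog):

    package diodes

    import (
        "fmt"
        "time"
    )

    // exampleManyToOne sketches the intended usage: many writers, one reader.
    func exampleManyToOne() {
        d := NewManyToOne(8, AlertFunc(func(missed int) {
            fmt.Printf("dropped %d entries\n", missed)
        }))

        // Writers: Set never blocks; old slots are overwritten when the reader
        // lags, which is what triggers the alerter on the next read.
        go func() {
            for i := 0; i < 100; i++ {
                v := i
                d.Set(GenericDataType(&v))
            }
        }()

        // Single reader: TryNext returns (nil, false) when nothing is ready.
        deadline := time.Now().Add(10 * time.Millisecond)
        for time.Now().Before(deadline) {
            if data, ok := d.TryNext(); ok {
                fmt.Println(*(*int)(data))
            }
        }
    }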


@ -1,129 +0,0 @@
package diodes
import (
"sync/atomic"
"unsafe"
)
// GenericDataType is the data type the diodes operate on.
type GenericDataType unsafe.Pointer
// Alerter is used to report how many values were overwritten since the
// last write.
type Alerter interface {
Alert(missed int)
}
// AlertFunc type is an adapter to allow the use of ordinary functions as
// Alert handlers.
type AlertFunc func(missed int)
// Alert calls f(missed)
func (f AlertFunc) Alert(missed int) {
f(missed)
}
type bucket struct {
data GenericDataType
seq uint64 // seq is the recorded write index at the time of writing
}
// OneToOne diode is meant to be used by a single reader and a single writer.
// It is not thread safe if used otherwise.
type OneToOne struct {
writeIndex uint64
readIndex uint64
buffer []unsafe.Pointer
alerter Alerter
}
// NewOneToOne creates a new diode is meant to be used by a single reader and
// a single writer. The alerter is invoked on the read's go-routine. It is
// called when it notices that the writer go-routine has passed it and wrote
// over data. A nil can be used to ignore alerts.
func NewOneToOne(size int, alerter Alerter) *OneToOne {
if alerter == nil {
alerter = AlertFunc(func(int) {})
}
return &OneToOne{
buffer: make([]unsafe.Pointer, size),
alerter: alerter,
}
}
// Set sets the data in the next slot of the ring buffer.
func (d *OneToOne) Set(data GenericDataType) {
idx := d.writeIndex % uint64(len(d.buffer))
newBucket := &bucket{
data: data,
seq: d.writeIndex,
}
d.writeIndex++
atomic.StorePointer(&d.buffer[idx], unsafe.Pointer(newBucket))
}
// TryNext will attempt to read from the next slot of the ring buffer.
// If there is no data available, it will return (nil, false).
func (d *OneToOne) TryNext() (data GenericDataType, ok bool) {
// Read a value from the ring buffer based on the readIndex.
idx := d.readIndex % uint64(len(d.buffer))
result := (*bucket)(atomic.SwapPointer(&d.buffer[idx], nil))
// When the result is nil that means the writer has not had the
// opportunity to write a value into the diode. This value must be ignored
// and the read head must not increment.
if result == nil {
return nil, false
}
// When the seq value is less than the current read index that means a
// value was read from idx that was previously written but since has
// been dropped. This value must be ignored and the read head must not
// increment.
//
// The simulation for this scenario assumes the fast forward occurred as
// detailed below.
//
// 5. The reader reads again getting seq 5. It then reads again expecting
// seq 6 but gets seq 2. This is a read of a stale value that was
// effectively "dropped" so the read fails and the read head stays put.
// `| 4 | 5 | 2 | 3 |` r: 7, w: 6
//
if result.seq < d.readIndex {
return nil, false
}
// When the seq value is greater than the current read index that means a
// value was read from idx that overwrote the value that was expected to
// be at this idx. This happens when the writer has lapped the reader. The
// reader needs to catch up to the writer so it moves its read head to
// the new seq, effectively dropping the messages that were not read in
// between the two values.
//
// Here is a simulation of this scenario:
//
// 1. Both the read and write heads start at 0.
// `| nil | nil | nil | nil |` r: 0, w: 0
// 2. The writer fills the buffer.
// `| 0 | 1 | 2 | 3 |` r: 0, w: 4
// 3. The writer laps the read head.
// `| 4 | 5 | 2 | 3 |` r: 0, w: 6
// 4. The reader reads the first value, expecting a seq of 0 but reads 4,
// this forces the reader to fast forward to 5.
// `| 4 | 5 | 2 | 3 |` r: 5, w: 6
//
if result.seq > d.readIndex {
dropped := result.seq - d.readIndex
d.readIndex = result.seq
d.alerter.Alert(int(dropped))
}
// Only increment read index if a regular read occurred (where seq was
// equal to readIndex) or a value was read that caused a fast forward
// (where seq was greater than readIndex).
d.readIndex++
return result.data, true
}


@ -1,80 +0,0 @@
package diodes
import (
"context"
"time"
)
// Diode is any implementation of a diode.
type Diode interface {
Set(GenericDataType)
TryNext() (GenericDataType, bool)
}
// Poller will poll a diode until a value is available.
type Poller struct {
Diode
interval time.Duration
ctx context.Context
}
// PollerConfigOption can be used to setup the poller.
type PollerConfigOption func(*Poller)
// WithPollingInterval sets the interval at which the diode is queried
// for new data. The default is 10ms.
func WithPollingInterval(interval time.Duration) PollerConfigOption {
return func(c *Poller) {
c.interval = interval
}
}
// WithPollingContext sets the context to cancel any retrieval (Next()). It
// will not change any results for adding data (Set()). Default is
// context.Background().
func WithPollingContext(ctx context.Context) PollerConfigOption {
return func(c *Poller) {
c.ctx = ctx
}
}
// NewPoller returns a new Poller that wraps the given diode.
func NewPoller(d Diode, opts ...PollerConfigOption) *Poller {
p := &Poller{
Diode: d,
interval: 10 * time.Millisecond,
ctx: context.Background(),
}
for _, o := range opts {
o(p)
}
return p
}
// Next polls the diode until data is available or until the context is done.
// If the context is done, then nil will be returned.
func (p *Poller) Next() GenericDataType {
for {
data, ok := p.Diode.TryNext()
if !ok {
if p.isDone() {
return nil
}
time.Sleep(p.interval)
continue
}
return data
}
}
func (p *Poller) isDone() bool {
select {
case <-p.ctx.Done():
return true
default:
return false
}
}
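
A small sketch of how the poller options above compose; illustrative only, written as if it lived inside the diodes package:

    package diodes

    import (
        "context"
        "time"
    )

    // wrapWithPoller builds a Poller over an existing diode.
    func wrapWithPoller(ctx context.Context, d *ManyToOne) *Poller {
        return NewPoller(d,
            WithPollingInterval(5*time.Millisecond), // check every 5ms instead of the 10ms default
            WithPollingContext(ctx),                 // Next returns nil once ctx is done
        )
    }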


@ -1,88 +0,0 @@
package diodes
import (
"context"
"sync"
)
// Waiter will use a conditional mutex to alert the reader to when data is
// available.
type Waiter struct {
Diode
mu sync.Mutex
c *sync.Cond
ctx context.Context
}
// WaiterConfigOption can be used to setup the waiter.
type WaiterConfigOption func(*Waiter)
// WithWaiterContext sets the context to cancel any retrieval (Next()). It
// will not change any results for adding data (Set()). Default is
// context.Background().
func WithWaiterContext(ctx context.Context) WaiterConfigOption {
return func(c *Waiter) {
c.ctx = ctx
}
}
// NewWaiter returns a new Waiter that wraps the given diode.
func NewWaiter(d Diode, opts ...WaiterConfigOption) *Waiter {
w := new(Waiter)
w.Diode = d
w.c = sync.NewCond(&w.mu)
w.ctx = context.Background()
for _, opt := range opts {
opt(w)
}
go func() {
<-w.ctx.Done()
// Mutex is strictly necessary here to avoid a race in Next() (between
// w.isDone() and w.c.Wait()) and w.c.Broadcast() here.
w.mu.Lock()
w.c.Broadcast()
w.mu.Unlock()
}()
return w
}
// Set invokes the wrapped diode's Set with the given data and uses Broadcast
// to wake up any readers.
func (w *Waiter) Set(data GenericDataType) {
w.Diode.Set(data)
w.c.Broadcast()
}
// Next returns the next data point on the wrapped diode. If there is not any
// new data, it will Wait for set to be called or the context to be done.
// If the context is done, then nil will be returned.
func (w *Waiter) Next() GenericDataType {
w.mu.Lock()
defer w.mu.Unlock()
for {
data, ok := w.Diode.TryNext()
if !ok {
if w.isDone() {
return nil
}
w.c.Wait()
continue
}
return data
}
}
func (w *Waiter) isDone() bool {
select {
case <-w.ctx.Done():
return true
default:
return false
}
}
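
The Waiter trades the poll loop for a condition variable: Next blocks until Set broadcasts or the context is cancelled. A sketch of the reading side, again written as if it lived inside the diodes package:

    package diodes

    // drainUntilCancelled reads from w until w's context is cancelled, at which
    // point Next returns nil. Producers share w and call w.Set, which broadcasts
    // to the blocked reader. Illustrative only; w would be built with, e.g.,
    // NewWaiter(NewOneToOne(1024, nil), WithWaiterContext(ctx)).
    func drainUntilCancelled(w *Waiter, handle func(GenericDataType)) {
        for {
            data := w.Next() // blocks until Set is called or the context is done
            if data == nil {
                return
            }
            handle(data)
        }
    }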


@ -1,56 +0,0 @@
package zlog
import (
"net"
"time"
)
type encoder interface {
AppendArrayDelim(dst []byte) []byte
AppendArrayEnd(dst []byte) []byte
AppendArrayStart(dst []byte) []byte
AppendBeginMarker(dst []byte) []byte
AppendBool(dst []byte, val bool) []byte
AppendBools(dst []byte, vals []bool) []byte
AppendBytes(dst, s []byte) []byte
AppendDuration(dst []byte, d time.Duration, unit time.Duration, useInt bool) []byte
AppendDurations(dst []byte, vals []time.Duration, unit time.Duration, useInt bool) []byte
AppendEndMarker(dst []byte) []byte
AppendFloat32(dst []byte, val float32) []byte
AppendFloat64(dst []byte, val float64) []byte
AppendFloats32(dst []byte, vals []float32) []byte
AppendFloats64(dst []byte, vals []float64) []byte
AppendHex(dst, s []byte) []byte
AppendIPAddr(dst []byte, ip net.IP) []byte
AppendIPPrefix(dst []byte, pfx net.IPNet) []byte
AppendInt(dst []byte, val int) []byte
AppendInt16(dst []byte, val int16) []byte
AppendInt32(dst []byte, val int32) []byte
AppendInt64(dst []byte, val int64) []byte
AppendInt8(dst []byte, val int8) []byte
AppendInterface(dst []byte, i interface{}) []byte
AppendInts(dst []byte, vals []int) []byte
AppendInts16(dst []byte, vals []int16) []byte
AppendInts32(dst []byte, vals []int32) []byte
AppendInts64(dst []byte, vals []int64) []byte
AppendInts8(dst []byte, vals []int8) []byte
AppendKey(dst []byte, key string) []byte
AppendLineBreak(dst []byte) []byte
AppendMACAddr(dst []byte, ha net.HardwareAddr) []byte
AppendNil(dst []byte) []byte
AppendObjectData(dst []byte, o []byte) []byte
AppendString(dst []byte, s string) []byte
AppendStrings(dst []byte, vals []string) []byte
AppendTime(dst []byte, t time.Time, format string) []byte
AppendTimes(dst []byte, vals []time.Time, format string) []byte
AppendUint(dst []byte, val uint) []byte
AppendUint16(dst []byte, val uint16) []byte
AppendUint32(dst []byte, val uint32) []byte
AppendUint64(dst []byte, val uint64) []byte
AppendUint8(dst []byte, val uint8) []byte
AppendUints(dst []byte, vals []uint) []byte
AppendUints16(dst []byte, vals []uint16) []byte
AppendUints32(dst []byte, vals []uint32) []byte
AppendUints64(dst []byte, vals []uint64) []byte
AppendUints8(dst []byte, vals []uint8) []byte
}
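
This interface is what lets zlog swap the JSON and CBOR back-ends behind one code path; every field method follows the same append-key-then-value pattern. A sketch with a hypothetical helper (appendUserFields is not part of the package):

    package zlog

    import "time"

    // appendUserFields shows the pattern the Event methods follow: append the
    // key, then the value, and keep passing the growing buffer along.
    func appendUserFields(dst []byte, login string, attempts int, since time.Time) []byte {
        dst = enc.AppendString(enc.AppendKey(dst, "login"), login)
        dst = enc.AppendInt(enc.AppendKey(dst, "attempts"), attempts)
        dst = enc.AppendTime(enc.AppendKey(dst, "since"), since, TimeFieldFormat)
        return dst
    }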


@ -1,45 +0,0 @@
// +build binary_log
package zlog
// This file contains bindings to do binary encoding.
import (
"tuxpa.in/a/zlog/internal/cbor"
)
var (
_ encoder = (*cbor.Encoder)(nil)
enc = cbor.Encoder{}
)
func init() {
// using closure to reflect the changes at runtime.
cbor.JSONMarshalFunc = func(v interface{}) ([]byte, error) {
return InterfaceMarshalFunc(v)
}
}
func appendJSON(dst []byte, j []byte) []byte {
return cbor.AppendEmbeddedJSON(dst, j)
}
func appendCBOR(dst []byte, c []byte) []byte {
return cbor.AppendEmbeddedCBOR(dst, c)
}
// decodeIfBinaryToString - converts a binary formatted log msg to a
// JSON formatted String Log message.
func decodeIfBinaryToString(in []byte) string {
return cbor.DecodeIfBinaryToString(in)
}
func decodeObjectToStr(in []byte) string {
return cbor.DecodeObjectToStr(in)
}
// decodeIfBinaryToBytes - converts a binary formatted log msg to a
// JSON formatted Bytes Log message.
func decodeIfBinaryToBytes(in []byte) []byte {
return cbor.DecodeIfBinaryToBytes(in)
}


@ -1,52 +0,0 @@
//go:build !binary_log
// +build !binary_log
package zlog
// encoder_json.go file contains bindings to generate
// JSON encoded byte stream.
import (
"encoding/base64"
"tuxpa.in/a/zlog/internal/json"
)
var (
_ encoder = (*json.Encoder)(nil)
enc = json.Encoder{}
)
func init() {
// using closure to reflect the changes at runtime.
json.JSONMarshalFunc = func(v interface{}) ([]byte, error) {
return InterfaceMarshalFunc(v)
}
}
func appendJSON(dst []byte, j []byte) []byte {
return append(dst, j...)
}
func appendCBOR(dst []byte, cbor []byte) []byte {
dst = append(dst, []byte("\"data:application/cbor;base64,")...)
l := len(dst)
enc := base64.StdEncoding
n := enc.EncodedLen(len(cbor))
for i := 0; i < n; i++ {
dst = append(dst, '.')
}
enc.Encode(dst[l:], cbor)
return append(dst, '"')
}
func decodeIfBinaryToString(in []byte) string {
return string(in)
}
func decodeObjectToStr(in []byte) string {
return string(in)
}
func decodeIfBinaryToBytes(in []byte) []byte {
return in
}
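
In the JSON build above, appendCBOR does not translate CBOR into JSON; it wraps the raw bytes in a base64 data URL. A small illustrative sketch (showEmbeddedCBOR is hypothetical):

    package zlog

    import "fmt"

    // showEmbeddedCBOR illustrates the wrapping performed by appendCBOR in the
    // JSON build: raw CBOR bytes are emitted as a quoted base64 data URL.
    func showEmbeddedCBOR() {
        raw := []byte{0xA1, 0x61, 0x61, 0x01} // CBOR encoding of {"a": 1}
        fmt.Println(string(appendCBOR(nil, raw)))
        // prints: "data:application/cbor;base64,oWFhAQ=="
    }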

668
event.go

@ -1,10 +1,9 @@
package zlog package zerolog
import ( import (
"fmt" "fmt"
"net" "io/ioutil"
"os" "os"
"runtime"
"sync" "sync"
"time" "time"
) )
@ -23,78 +22,37 @@ type Event struct {
buf []byte buf []byte
w LevelWriter w LevelWriter
level Level level Level
enabled bool
done func(msg string) done func(msg string)
stack bool // enable error stack trace
ch []Hook // hooks from context
skipFrame int // The number of additional frames to skip when printing the caller.
} }
func putEvent(e *Event) { func newEvent(w LevelWriter, level Level, enabled bool) *Event {
// Proper usage of a sync.Pool requires each entry to have approximately if !enabled {
// the same memory cost. To obtain this property when the stored type return &Event{}
// contains a variably-sized buffer, we add a hard limit on the maximum buffer
// to place back in the pool.
//
// See https://golang.org/issue/23199
const maxSize = 1 << 16 // 64KiB
if cap(e.buf) > maxSize {
return
} }
eventPool.Put(e)
}
// LogObjectMarshaler provides a strongly-typed and encoding-agnostic interface
// to be implemented by types used with Event/Context's Object methods.
type LogObjectMarshaler interface {
MarshalZerologObject(e *Event)
}
// LogArrayMarshaler provides a strongly-typed and encoding-agnostic interface
// to be implemented by types used with Event/Context's Array methods.
type LogArrayMarshaler interface {
MarshalZerologArray(a *Array)
}
func newEvent(w LevelWriter, level Level) *Event {
e := eventPool.Get().(*Event) e := eventPool.Get().(*Event)
e.buf = e.buf[:0] e.buf = e.buf[:1]
e.ch = nil e.buf[0] = '{'
e.buf = enc.AppendBeginMarker(e.buf)
e.w = w e.w = w
e.level = level e.level = level
e.stack = false e.enabled = true
e.skipFrame = 0
return e return e
} }
func (e *Event) write() (err error) { func (e *Event) write() (err error) {
if e == nil { if !e.enabled {
return nil return nil
} }
if e.level != Disabled { e.buf = append(e.buf, '}', '\n')
e.buf = enc.AppendEndMarker(e.buf)
e.buf = enc.AppendLineBreak(e.buf)
if e.w != nil {
_, err = e.w.WriteLevel(e.level, e.buf) _, err = e.w.WriteLevel(e.level, e.buf)
} eventPool.Put(e)
}
putEvent(e)
return return
} }
// Enabled return false if the *Event is going to be filtered out by // Enabled return false if the *Event is going to be filtered out by
// log level or sampling. // log level or sampling.
func (e *Event) Enabled() bool { func (e *Event) Enabled() bool {
return e != nil && e.level != Disabled return e.enabled
}
// Discard disables the event so Msg(f) won't print it.
func (e *Event) Discard() *Event {
if e == nil {
return e
}
e.level = Disabled
return nil
} }
// Msg sends the *Event with msg added as the message field if not empty. // Msg sends the *Event with msg added as the message field if not empty.
@ -102,79 +60,48 @@ func (e *Event) Discard() *Event {
// NOTICE: once this method is called, the *Event should be disposed. // NOTICE: once this method is called, the *Event should be disposed.
// Calling Msg twice can have unexpected result. // Calling Msg twice can have unexpected result.
func (e *Event) Msg(msg string) { func (e *Event) Msg(msg string) {
if e == nil { if !e.enabled {
return return
} }
e.msg(msg)
}
// Send is equivalent to calling Msg("").
//
// NOTICE: once this method is called, the *Event should be disposed.
func (e *Event) Send() {
if e == nil {
return
}
e.msg("")
}
// Msgf sends the event with formatted msg added as the message field if not empty.
//
// NOTICE: once this method is called, the *Event should be disposed.
// Calling Msgf twice can have unexpected result.
func (e *Event) Msgf(format string, v ...interface{}) {
if e == nil {
return
}
e.msg(fmt.Sprintf(format, v...))
}
func (e *Event) MsgFunc(createMsg func() string) {
if e == nil {
return
}
e.msg(createMsg())
}
func (e *Event) msg(msg string) {
for _, hook := range e.ch {
hook.Run(e, e.level, msg)
}
if msg != "" { if msg != "" {
e.buf = enc.AppendString(enc.AppendKey(e.buf, MessageFieldName), msg) e.buf = appendString(e.buf, MessageFieldName, msg)
} }
if e.done != nil { if e.done != nil {
defer e.done(msg) defer e.done(msg)
} }
if err := e.write(); err != nil { if err := e.write(); err != nil {
if ErrorHandler != nil { fmt.Fprintf(os.Stderr, "zerolog: could not write event: %v", err)
ErrorHandler(err)
} else {
fmt.Fprintf(os.Stderr, "zlog: could not write event: %v\n", err)
}
} }
} }
// Fields is a helper function to use a map or slice to set fields using type assertion. // Msgf sends the event with formatted msg added as the message field if not empty.
// Only map[string]interface{} and []interface{} are accepted. []interface{} must //
// alternate string keys and arbitrary values, and extraneous ones are ignored. // NOTICE: once this method is called, the *Event should be disposed.
func (e *Event) Fields(fields interface{}) *Event { // Calling Msgf twice can have unexpected result.
if e == nil { func (e *Event) Msgf(format string, v ...interface{}) {
return e if !e.enabled {
return
}
msg := fmt.Sprintf(format, v...)
if msg != "" {
e.buf = appendString(e.buf, MessageFieldName, msg)
}
if e.done != nil {
defer e.done(msg)
}
if err := e.write(); err != nil {
fmt.Fprintf(os.Stderr, "zerolog: could not write event: %v", err)
} }
e.buf = appendFields(e.buf, fields)
return e
} }
// Dict adds the field key with a dict to the event context. // Dict adds the field key with a dict to the event context.
// Use zlog.Dict() to create the dictionary. // Use zerolog.Dict() to create the dictionary.
func (e *Event) Dict(key string, dict *Event) *Event { func (e *Event) Dict(key string, dict *Event) *Event {
if e == nil { if !e.enabled {
return e return e
} }
dict.buf = enc.AppendEndMarker(dict.buf) e.buf = append(append(appendKey(e.buf, key), dict.buf...), '}')
e.buf = append(enc.AppendKey(e.buf, key), dict.buf...) eventPool.Put(dict)
putEvent(dict)
return e return e
} }
@ -182,525 +109,183 @@ func (e *Event) Dict(key string, dict *Event) *Event {
// Call usual field methods like Str, Int etc to add fields to this // Call usual field methods like Str, Int etc to add fields to this
// event and give it as argument the *Event.Dict method. // event and give it as argument the *Event.Dict method.
func Dict() *Event { func Dict() *Event {
return newEvent(nil, 0) return newEvent(levelWriterAdapter{ioutil.Discard}, 0, true)
}
// Array adds the field key with an array to the event context.
// Use zlog.Arr() to create the array or pass a type that
// implement the LogArrayMarshaler interface.
func (e *Event) Array(key string, arr LogArrayMarshaler) *Event {
if e == nil {
return e
}
e.buf = enc.AppendKey(e.buf, key)
var a *Array
if aa, ok := arr.(*Array); ok {
a = aa
} else {
a = Arr()
arr.MarshalZerologArray(a)
}
e.buf = a.write(e.buf)
return e
}
func (e *Event) appendObject(obj LogObjectMarshaler) {
e.buf = enc.AppendBeginMarker(e.buf)
obj.MarshalZerologObject(e)
e.buf = enc.AppendEndMarker(e.buf)
}
// Object marshals an object that implement the LogObjectMarshaler interface.
func (e *Event) Object(key string, obj LogObjectMarshaler) *Event {
if e == nil {
return e
}
e.buf = enc.AppendKey(e.buf, key)
if obj == nil {
e.buf = enc.AppendNil(e.buf)
return e
}
e.appendObject(obj)
return e
}
// Func allows an anonymous func to run only if the event is enabled.
func (e *Event) Func(f func(e *Event)) *Event {
if e != nil && e.Enabled() {
f(e)
}
return e
}
// EmbedObject marshals an object that implement the LogObjectMarshaler interface.
func (e *Event) EmbedObject(obj LogObjectMarshaler) *Event {
if e == nil {
return e
}
if obj == nil {
return e
}
obj.MarshalZerologObject(e)
return e
} }
// Str adds the field key with val as a string to the *Event context. // Str adds the field key with val as a string to the *Event context.
func (e *Event) Str(key, val string) *Event { func (e *Event) Str(key, val string) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendString(enc.AppendKey(e.buf, key), val) e.buf = appendString(e.buf, key, val)
return e return e
} }
// Strs adds the field key with vals as a []string to the *Event context. // AnErr adds the field key with err as a string to the *Event context.
func (e *Event) Strs(key string, vals []string) *Event {
if e == nil {
return e
}
e.buf = enc.AppendStrings(enc.AppendKey(e.buf, key), vals)
return e
}
// Stringer adds the field key with val.String() (or null if val is nil)
// to the *Event context.
func (e *Event) Stringer(key string, val fmt.Stringer) *Event {
if e == nil {
return e
}
e.buf = enc.AppendStringer(enc.AppendKey(e.buf, key), val)
return e
}
// Stringers adds the field key with vals where each individual val
// is used as val.String() (or null if val is empty) to the *Event
// context.
func (e *Event) Stringers(key string, vals []fmt.Stringer) *Event {
if e == nil {
return e
}
e.buf = enc.AppendStringers(enc.AppendKey(e.buf, key), vals)
return e
}
// Bytes adds the field key with val as a string to the *Event context.
//
// Runes outside of normal ASCII ranges will be hex-encoded in the resulting
// JSON.
func (e *Event) Bytes(key string, val []byte) *Event {
if e == nil {
return e
}
e.buf = enc.AppendBytes(enc.AppendKey(e.buf, key), val)
return e
}
// Hex adds the field key with val as a hex string to the *Event context.
func (e *Event) Hex(key string, val []byte) *Event {
if e == nil {
return e
}
e.buf = enc.AppendHex(enc.AppendKey(e.buf, key), val)
return e
}
// RawJSON adds already encoded JSON to the log line under key.
//
// No sanity check is performed on b; it must not contain carriage returns and
// be valid JSON.
func (e *Event) RawJSON(key string, b []byte) *Event {
if e == nil {
return e
}
e.buf = appendJSON(enc.AppendKey(e.buf, key), b)
return e
}
// RawCBOR adds already encoded CBOR to the log line under key.
//
// No sanity check is performed on b.
// Note: the full feature set of CBOR is supported, as the data is not mapped to JSON but stored as a data URL.
func (e *Event) RawCBOR(key string, b []byte) *Event {
if e == nil {
return e
}
e.buf = appendCBOR(enc.AppendKey(e.buf, key), b)
return e
}
// AnErr adds the field key with serialized err to the *Event context.
// If err is nil, no field is added. // If err is nil, no field is added.
func (e *Event) AnErr(key string, err error) *Event { func (e *Event) AnErr(key string, err error) *Event {
if e == nil { if !e.enabled {
return e return e
} }
switch m := ErrorMarshalFunc(err).(type) { e.buf = appendErrorKey(e.buf, key, err)
case nil:
return e return e
case LogObjectMarshaler:
return e.Object(key, m)
case error:
if m == nil || isNilValue(m) {
return e
} else {
return e.Str(key, m.Error())
}
case string:
return e.Str(key, m)
default:
return e.Interface(key, m)
}
} }
// Errs adds the field key with errs as an array of serialized errors to the // Err adds the field "error" with err as a string to the *Event context.
// *Event context.
func (e *Event) Errs(key string, errs []error) *Event {
if e == nil {
return e
}
arr := Arr()
for _, err := range errs {
switch m := ErrorMarshalFunc(err).(type) {
case LogObjectMarshaler:
arr = arr.Object(m)
case error:
arr = arr.Err(m)
case string:
arr = arr.Str(m)
default:
arr = arr.Interface(m)
}
}
return e.Array(key, arr)
}
// Err adds the field "error" with serialized err to the *Event context.
// If err is nil, no field is added. // If err is nil, no field is added.
// // To customize the key name, change zerolog.ErrorFieldName.
// To customize the key name, change zlog.ErrorFieldName.
//
// If Stack() has been called before and zlog.ErrorStackMarshaler is defined,
// the err is passed to ErrorStackMarshaler and the result is appended to the
// zlog.ErrorStackFieldName.
func (e *Event) Err(err error) *Event { func (e *Event) Err(err error) *Event {
if e == nil { if !e.enabled {
return e return e
} }
if e.stack && ErrorStackMarshaler != nil { e.buf = appendError(e.buf, err)
switch m := ErrorStackMarshaler(err).(type) {
case nil:
case LogObjectMarshaler:
e.Object(ErrorStackFieldName, m)
case error:
if m != nil && !isNilValue(m) {
e.Str(ErrorStackFieldName, m.Error())
}
case string:
e.Str(ErrorStackFieldName, m)
default:
e.Interface(ErrorStackFieldName, m)
}
}
return e.AnErr(ErrorFieldName, err)
}
// Stack enables stack trace printing for the error passed to Err().
//
// ErrorStackMarshaler must be set for this method to do something.
func (e *Event) Stack() *Event {
if e != nil {
e.stack = true
}
return e return e
} }
// Bool adds the field key with val as a bool to the *Event context. // Bool adds the field key with val as a Boolean to the *Event context.
func (e *Event) Bool(key string, b bool) *Event { func (e *Event) Bool(key string, b bool) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendBool(enc.AppendKey(e.buf, key), b) e.buf = appendBool(e.buf, key, b)
return e
}
// Bools adds the field key with val as a []bool to the *Event context.
func (e *Event) Bools(key string, b []bool) *Event {
if e == nil {
return e
}
e.buf = enc.AppendBools(enc.AppendKey(e.buf, key), b)
return e return e
} }
// Int adds the field key with i as a int to the *Event context. // Int adds the field key with i as a int to the *Event context.
func (e *Event) Int(key string, i int) *Event { func (e *Event) Int(key string, i int) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendInt(enc.AppendKey(e.buf, key), i) e.buf = appendInt(e.buf, key, i)
return e
}
// Ints adds the field key with i as a []int to the *Event context.
func (e *Event) Ints(key string, i []int) *Event {
if e == nil {
return e
}
e.buf = enc.AppendInts(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Int8 adds the field key with i as a int8 to the *Event context. // Int8 adds the field key with i as a int8 to the *Event context.
func (e *Event) Int8(key string, i int8) *Event { func (e *Event) Int8(key string, i int8) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendInt8(enc.AppendKey(e.buf, key), i) e.buf = appendInt8(e.buf, key, i)
return e
}
// Ints8 adds the field key with i as a []int8 to the *Event context.
func (e *Event) Ints8(key string, i []int8) *Event {
if e == nil {
return e
}
e.buf = enc.AppendInts8(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Int16 adds the field key with i as a int16 to the *Event context. // Int16 adds the field key with i as a int16 to the *Event context.
func (e *Event) Int16(key string, i int16) *Event { func (e *Event) Int16(key string, i int16) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendInt16(enc.AppendKey(e.buf, key), i) e.buf = appendInt16(e.buf, key, i)
return e
}
// Ints16 adds the field key with i as a []int16 to the *Event context.
func (e *Event) Ints16(key string, i []int16) *Event {
if e == nil {
return e
}
e.buf = enc.AppendInts16(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Int32 adds the field key with i as a int32 to the *Event context. // Int32 adds the field key with i as a int32 to the *Event context.
func (e *Event) Int32(key string, i int32) *Event { func (e *Event) Int32(key string, i int32) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendInt32(enc.AppendKey(e.buf, key), i) e.buf = appendInt32(e.buf, key, i)
return e
}
// Ints32 adds the field key with i as a []int32 to the *Event context.
func (e *Event) Ints32(key string, i []int32) *Event {
if e == nil {
return e
}
e.buf = enc.AppendInts32(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Int64 adds the field key with i as a int64 to the *Event context. // Int64 adds the field key with i as a int64 to the *Event context.
func (e *Event) Int64(key string, i int64) *Event { func (e *Event) Int64(key string, i int64) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendInt64(enc.AppendKey(e.buf, key), i) e.buf = appendInt64(e.buf, key, i)
return e
}
// Ints64 adds the field key with i as a []int64 to the *Event context.
func (e *Event) Ints64(key string, i []int64) *Event {
if e == nil {
return e
}
e.buf = enc.AppendInts64(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Uint adds the field key with i as a uint to the *Event context. // Uint adds the field key with i as a uint to the *Event context.
func (e *Event) Uint(key string, i uint) *Event { func (e *Event) Uint(key string, i uint) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendUint(enc.AppendKey(e.buf, key), i) e.buf = appendUint(e.buf, key, i)
return e
}
// Uints adds the field key with i as a []int to the *Event context.
func (e *Event) Uints(key string, i []uint) *Event {
if e == nil {
return e
}
e.buf = enc.AppendUints(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Uint8 adds the field key with i as a uint8 to the *Event context. // Uint8 adds the field key with i as a uint8 to the *Event context.
func (e *Event) Uint8(key string, i uint8) *Event { func (e *Event) Uint8(key string, i uint8) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendUint8(enc.AppendKey(e.buf, key), i) e.buf = appendUint8(e.buf, key, i)
return e
}
// Uints8 adds the field key with i as a []int8 to the *Event context.
func (e *Event) Uints8(key string, i []uint8) *Event {
if e == nil {
return e
}
e.buf = enc.AppendUints8(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Uint16 adds the field key with i as a uint16 to the *Event context. // Uint16 adds the field key with i as a uint16 to the *Event context.
func (e *Event) Uint16(key string, i uint16) *Event { func (e *Event) Uint16(key string, i uint16) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendUint16(enc.AppendKey(e.buf, key), i) e.buf = appendUint16(e.buf, key, i)
return e
}
// Uints16 adds the field key with i as a []int16 to the *Event context.
func (e *Event) Uints16(key string, i []uint16) *Event {
if e == nil {
return e
}
e.buf = enc.AppendUints16(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Uint32 adds the field key with i as a uint32 to the *Event context. // Uint32 adds the field key with i as a uint32 to the *Event context.
func (e *Event) Uint32(key string, i uint32) *Event { func (e *Event) Uint32(key string, i uint32) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendUint32(enc.AppendKey(e.buf, key), i) e.buf = appendUint32(e.buf, key, i)
return e
}
// Uints32 adds the field key with i as a []int32 to the *Event context.
func (e *Event) Uints32(key string, i []uint32) *Event {
if e == nil {
return e
}
e.buf = enc.AppendUints32(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Uint64 adds the field key with i as a uint64 to the *Event context. // Uint64 adds the field key with i as a uint64 to the *Event context.
func (e *Event) Uint64(key string, i uint64) *Event { func (e *Event) Uint64(key string, i uint64) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendUint64(enc.AppendKey(e.buf, key), i) e.buf = appendUint64(e.buf, key, i)
return e
}
// Uints64 adds the field key with i as a []int64 to the *Event context.
func (e *Event) Uints64(key string, i []uint64) *Event {
if e == nil {
return e
}
e.buf = enc.AppendUints64(enc.AppendKey(e.buf, key), i)
return e return e
} }
// Float32 adds the field key with f as a float32 to the *Event context. // Float32 adds the field key with f as a float32 to the *Event context.
func (e *Event) Float32(key string, f float32) *Event { func (e *Event) Float32(key string, f float32) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendFloat32(enc.AppendKey(e.buf, key), f) e.buf = appendFloat32(e.buf, key, f)
return e
}
// Floats32 adds the field key with f as a []float32 to the *Event context.
func (e *Event) Floats32(key string, f []float32) *Event {
if e == nil {
return e
}
e.buf = enc.AppendFloats32(enc.AppendKey(e.buf, key), f)
return e return e
} }
// Float64 adds the field key with f as a float64 to the *Event context. // Float64 adds the field key with f as a float64 to the *Event context.
func (e *Event) Float64(key string, f float64) *Event { func (e *Event) Float64(key string, f float64) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendFloat64(enc.AppendKey(e.buf, key), f) e.buf = appendFloat64(e.buf, key, f)
return e
}
// Floats64 adds the field key with f as a []float64 to the *Event context.
func (e *Event) Floats64(key string, f []float64) *Event {
if e == nil {
return e
}
e.buf = enc.AppendFloats64(enc.AppendKey(e.buf, key), f)
return e return e
} }
// Timestamp adds the current local time as UNIX timestamp to the *Event context with the "time" key. // Timestamp adds the current local time as UNIX timestamp to the *Event context with the "time" key.
// To customize the key name, change zlog.TimestampFieldName. // To customize the key name, change zerolog.TimestampFieldName.
//
// NOTE: It won't dedupe the "time" key if the *Event (or *Context) has one
// already.
func (e *Event) Timestamp() *Event { func (e *Event) Timestamp() *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendTime(enc.AppendKey(e.buf, TimestampFieldName), TimestampFunc(), TimeFieldFormat) e.buf = appendTimestamp(e.buf)
return e return e
} }
// Time adds the field key with t formatted as string using zlog.TimeFieldFormat. // Time adds the field key with t formatted as string using zerolog.TimeFieldFormat.
func (e *Event) Time(key string, t time.Time) *Event { func (e *Event) Time(key string, t time.Time) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendTime(enc.AppendKey(e.buf, key), t, TimeFieldFormat) e.buf = appendTime(e.buf, key, t)
return e return e
} }
// Times adds the field key with t formatted as string using zlog.TimeFieldFormat. // Dur adds the field key with duration d stored as zerolog.DurationFieldUnit.
func (e *Event) Times(key string, t []time.Time) *Event { // If zerolog.DurationFieldInteger is true, durations are rendered as integer
if e == nil {
return e
}
e.buf = enc.AppendTimes(enc.AppendKey(e.buf, key), t, TimeFieldFormat)
return e
}
// Dur adds the field key with duration d stored as zlog.DurationFieldUnit.
// If zlog.DurationFieldInteger is true, durations are rendered as integer
// instead of float. // instead of float.
func (e *Event) Dur(key string, d time.Duration) *Event { func (e *Event) Dur(key string, d time.Duration) *Event {
if e == nil { if !e.enabled {
return e return e
} }
e.buf = enc.AppendDuration(enc.AppendKey(e.buf, key), d, DurationFieldUnit, DurationFieldInteger) e.buf = appendDuration(e.buf, key, d)
return e
}
// Durs adds the field key with duration d stored as zlog.DurationFieldUnit.
// If zlog.DurationFieldInteger is true, durations are rendered as integer
// instead of float.
func (e *Event) Durs(key string, d []time.Duration) *Event {
if e == nil {
return e
}
e.buf = enc.AppendDurations(enc.AppendKey(e.buf, key), d, DurationFieldUnit, DurationFieldInteger)
return e return e
} }
@ -708,99 +293,22 @@ func (e *Event) Durs(key string, d []time.Duration) *Event {
// If time t is not greater than start, duration will be 0. // If time t is not greater than start, duration will be 0.
// Duration format follows the same principle as Dur(). // Duration format follows the same principle as Dur().
func (e *Event) TimeDiff(key string, t time.Time, start time.Time) *Event { func (e *Event) TimeDiff(key string, t time.Time, start time.Time) *Event {
if e == nil { if !e.enabled {
return e return e
} }
var d time.Duration var d time.Duration
if t.After(start) { if t.After(start) {
d = t.Sub(start) d = t.Sub(start)
} }
e.buf = enc.AppendDuration(enc.AppendKey(e.buf, key), d, DurationFieldUnit, DurationFieldInteger) e.buf = appendDuration(e.buf, key, d)
return e return e
} }
// Any is a wrapper around Event.Interface.
func (e *Event) Any(key string, i interface{}) *Event {
return e.Interface(key, i)
}
// Interface adds the field key with i marshaled using reflection. // Interface adds the field key with i marshaled using reflection.
func (e *Event) Interface(key string, i interface{}) *Event { func (e *Event) Interface(key string, i interface{}) *Event {
if e == nil { if !e.enabled {
return e return e
} }
if obj, ok := i.(LogObjectMarshaler); ok { e.buf = appendInterface(e.buf, key, i)
return e.Object(key, obj)
}
e.buf = enc.AppendInterface(enc.AppendKey(e.buf, key), i)
return e
}
// Type adds the field key with val's type using reflection.
func (e *Event) Type(key string, val interface{}) *Event {
if e == nil {
return e
}
e.buf = enc.AppendType(enc.AppendKey(e.buf, key), val)
return e
}
// CallerSkipFrame instructs any future Caller calls to skip the specified number of frames.
// This includes those added via hooks from the context.
func (e *Event) CallerSkipFrame(skip int) *Event {
if e == nil {
return e
}
e.skipFrame += skip
return e
}
// Caller adds the file:line of the caller with the zlog.CallerFieldName key.
// The argument skip is the number of stack frames to ascend.
// If skip is not passed, the global variable CallerSkipFrameCount is used.
func (e *Event) Caller(skip ...int) *Event {
sk := CallerSkipFrameCount
if len(skip) > 0 {
sk = skip[0] + CallerSkipFrameCount
}
return e.caller(sk)
}
func (e *Event) caller(skip int) *Event {
if e == nil {
return e
}
pc, file, line, ok := runtime.Caller(skip + e.skipFrame)
if !ok {
return e
}
e.buf = enc.AppendString(enc.AppendKey(e.buf, CallerFieldName), CallerMarshalFunc(pc, file, line))
return e
}
// IPAddr adds IPv4 or IPv6 Address to the event
func (e *Event) IPAddr(key string, ip net.IP) *Event {
if e == nil {
return e
}
e.buf = enc.AppendIPAddr(enc.AppendKey(e.buf, key), ip)
return e
}
// IPPrefix adds IPv4 or IPv6 Prefix (address and mask) to the event
func (e *Event) IPPrefix(key string, pfx net.IPNet) *Event {
if e == nil {
return e
}
e.buf = enc.AppendIPPrefix(enc.AppendKey(e.buf, key), pfx)
return e
}
// MACAddr adds MAC address to the event
func (e *Event) MACAddr(key string, ha net.HardwareAddr) *Event {
if e == nil {
return e
}
e.buf = enc.AppendMACAddr(enc.AppendKey(e.buf, key), ha)
return e return e
} }
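
The left-hand column of this diff replaces the old enabled flag with a nil *Event: every field method starts with a nil check (if e == nil { return e }), so a chain hanging off a disabled level is a series of cheap no-ops. A minimal sketch, assuming zerolog's usual level constructors, which this fork keeps:

    package main

    import (
        "os"

        "tuxpa.in/a/zlog"
    )

    func main() {
        log := zlog.New(os.Stdout).Level(zlog.InfoLevel)

        // Debug() returns a nil *Event here, so the chained Str/Int/Msg calls
        // are no-ops: no event is taken from the pool and nothing is encoded.
        log.Debug().Str("user", "alice").Int("attempt", 3).Msg("below the level")

        // Info() is enabled, so this event is encoded and written.
        log.Info().Str("user", "alice").Msg("hello")
    }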


@ -1,65 +0,0 @@
// +build !binary_log
package zlog
import (
"bytes"
"errors"
"strings"
"testing"
)
type nilError struct{}
func (nilError) Error() string {
return ""
}
func TestEvent_AnErr(t *testing.T) {
tests := []struct {
name string
err error
want string
}{
{"nil", nil, `{}`},
{"error", errors.New("test"), `{"err":"test"}`},
{"nil interface", func() *nilError { return nil }(), `{}`},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var buf bytes.Buffer
e := newEvent(levelWriterAdapter{&buf}, DebugLevel)
e.AnErr("err", tt.err)
_ = e.write()
if got, want := strings.TrimSpace(buf.String()), tt.want; got != want {
t.Errorf("Event.AnErr() = %v, want %v", got, want)
}
})
}
}
func TestEvent_ObjectWithNil(t *testing.T) {
var buf bytes.Buffer
e := newEvent(levelWriterAdapter{&buf}, DebugLevel)
_ = e.Object("obj", nil)
_ = e.write()
want := `{"obj":null}`
got := strings.TrimSpace(buf.String())
if got != want {
t.Errorf("Event.Object() = %q, want %q", got, want)
}
}
func TestEvent_EmbedObjectWithNil(t *testing.T) {
var buf bytes.Buffer
e := newEvent(levelWriterAdapter{&buf}, DebugLevel)
_ = e.EmbedObject(nil)
_ = e.write()
want := "{}"
got := strings.TrimSpace(buf.String())
if got != want {
t.Errorf("Event.EmbedObject() = %q, want %q", got, want)
}
}
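
The "nil interface" case above targets the classic typed-nil pitfall: an error interface holding a nil pointer is not == nil, but the isNilValue check lets AnErr skip it anyway. A short sketch of that situation (myErr is hypothetical):

    package main

    import (
        "os"

        "tuxpa.in/a/zlog"
    )

    type myErr struct{}

    func (*myErr) Error() string { return "boom" }

    func failing() error {
        var e *myErr // nil pointer
        return e     // non-nil error interface holding a nil pointer
    }

    func main() {
        err := failing()
        log := zlog.New(os.Stdout)

        // err != nil is true here, yet the isNilValue check behind AnErr means
        // the field is skipped rather than encoding a useless value.
        log.Info().AnErr("err", err).Msg("typed-nil error")
        // prints: {"level":"info","message":"typed-nil error"}
    }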

109
field.go Normal file

@ -0,0 +1,109 @@
package zerolog
import (
"encoding/json"
"fmt"
"strconv"
"time"
)
func appendKey(dst []byte, key string) []byte {
if len(dst) > 1 {
dst = append(dst, ',')
}
dst = appendJSONString(dst, key)
return append(dst, ':')
}
func appendString(dst []byte, key, val string) []byte {
return appendJSONString(appendKey(dst, key), val)
}
func appendErrorKey(dst []byte, key string, err error) []byte {
if err == nil {
return dst
}
return appendJSONString(appendKey(dst, key), err.Error())
}
func appendError(dst []byte, err error) []byte {
return appendErrorKey(dst, ErrorFieldName, err)
}
func appendBool(dst []byte, key string, val bool) []byte {
return strconv.AppendBool(appendKey(dst, key), val)
}
func appendInt(dst []byte, key string, val int) []byte {
return strconv.AppendInt(appendKey(dst, key), int64(val), 10)
}
func appendInt8(dst []byte, key string, val int8) []byte {
return strconv.AppendInt(appendKey(dst, key), int64(val), 10)
}
func appendInt16(dst []byte, key string, val int16) []byte {
return strconv.AppendInt(appendKey(dst, key), int64(val), 10)
}
func appendInt32(dst []byte, key string, val int32) []byte {
return strconv.AppendInt(appendKey(dst, key), int64(val), 10)
}
func appendInt64(dst []byte, key string, val int64) []byte {
return strconv.AppendInt(appendKey(dst, key), int64(val), 10)
}
func appendUint(dst []byte, key string, val uint) []byte {
return strconv.AppendUint(appendKey(dst, key), uint64(val), 10)
}
func appendUint8(dst []byte, key string, val uint8) []byte {
return strconv.AppendUint(appendKey(dst, key), uint64(val), 10)
}
func appendUint16(dst []byte, key string, val uint16) []byte {
return strconv.AppendUint(appendKey(dst, key), uint64(val), 10)
}
func appendUint32(dst []byte, key string, val uint32) []byte {
return strconv.AppendUint(appendKey(dst, key), uint64(val), 10)
}
func appendUint64(dst []byte, key string, val uint64) []byte {
return strconv.AppendUint(appendKey(dst, key), uint64(val), 10)
}
func appendFloat32(dst []byte, key string, val float32) []byte {
return strconv.AppendFloat(appendKey(dst, key), float64(val), 'f', -1, 32)
}
func appendFloat64(dst []byte, key string, val float64) []byte {
return strconv.AppendFloat(appendKey(dst, key), float64(val), 'f', -1, 64)
}
func appendTime(dst []byte, key string, t time.Time) []byte {
if TimeFieldFormat == "" {
return appendInt64(dst, key, t.Unix())
}
return append(t.AppendFormat(append(appendKey(dst, key), '"'), TimeFieldFormat), '"')
}
func appendTimestamp(dst []byte) []byte {
return appendTime(dst, TimestampFieldName, TimestampFunc())
}
func appendDuration(dst []byte, key string, d time.Duration) []byte {
if DurationFieldInteger {
return appendInt64(dst, key, int64(d/DurationFieldUnit))
}
return appendFloat64(dst, key, float64(d)/float64(DurationFieldUnit))
}
func appendInterface(dst []byte, key string, i interface{}) []byte {
marshaled, err := json.Marshal(i)
if err != nil {
return appendString(dst, key, fmt.Sprintf("marshaling error: %v", err))
}
return append(appendKey(dst, key), marshaled...)
}
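
A worked sketch of how the 2017 helpers above assemble a line: appendKey only inserts a comma once something already follows the opening brace. Illustrative only, written as if it lived next to field.go in package zerolog:

    package zerolog

    import "fmt"

    // buildSampleLine assembles a log line by hand with the helpers above:
    // '{' + comma-separated fields (appendKey adds the comma) + '}'.
    func buildSampleLine() {
        buf := []byte{'{'}
        buf = appendString(buf, "level", "debug")  // first field: no leading comma
        buf = appendString(buf, "message", "test") // appendKey now prepends ","
        buf = append(buf, '}')
        fmt.Println(string(buf)) // {"level":"debug","message":"test"}
    }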

277
fields.go

@ -1,277 +0,0 @@
package zlog
import (
"encoding/json"
"net"
"sort"
"time"
"unsafe"
)
func isNilValue(i interface{}) bool {
return (*[2]uintptr)(unsafe.Pointer(&i))[1] == 0
}
func appendFields(dst []byte, fields interface{}) []byte {
switch fields := fields.(type) {
case []interface{}:
if n := len(fields); n&0x1 == 1 { // odd number
fields = fields[:n-1]
}
dst = appendFieldList(dst, fields)
case map[string]interface{}:
keys := make([]string, 0, len(fields))
for key := range fields {
keys = append(keys, key)
}
sort.Strings(keys)
kv := make([]interface{}, 2)
for _, key := range keys {
kv[0], kv[1] = key, fields[key]
dst = appendFieldList(dst, kv)
}
}
return dst
}
func appendFieldList(dst []byte, kvList []interface{}) []byte {
for i, n := 0, len(kvList); i < n; i += 2 {
key, val := kvList[i], kvList[i+1]
if key, ok := key.(string); ok {
dst = enc.AppendKey(dst, key)
} else {
continue
}
if val, ok := val.(LogObjectMarshaler); ok {
e := newEvent(nil, 0)
e.buf = e.buf[:0]
e.appendObject(val)
dst = append(dst, e.buf...)
putEvent(e)
continue
}
switch val := val.(type) {
case string:
dst = enc.AppendString(dst, val)
case []byte:
dst = enc.AppendBytes(dst, val)
case error:
switch m := ErrorMarshalFunc(val).(type) {
case LogObjectMarshaler:
e := newEvent(nil, 0)
e.buf = e.buf[:0]
e.appendObject(m)
dst = append(dst, e.buf...)
putEvent(e)
case error:
if m == nil || isNilValue(m) {
dst = enc.AppendNil(dst)
} else {
dst = enc.AppendString(dst, m.Error())
}
case string:
dst = enc.AppendString(dst, m)
default:
dst = enc.AppendInterface(dst, m)
}
case []error:
dst = enc.AppendArrayStart(dst)
for i, err := range val {
switch m := ErrorMarshalFunc(err).(type) {
case LogObjectMarshaler:
e := newEvent(nil, 0)
e.buf = e.buf[:0]
e.appendObject(m)
dst = append(dst, e.buf...)
putEvent(e)
case error:
if m == nil || isNilValue(m) {
dst = enc.AppendNil(dst)
} else {
dst = enc.AppendString(dst, m.Error())
}
case string:
dst = enc.AppendString(dst, m)
default:
dst = enc.AppendInterface(dst, m)
}
if i < (len(val) - 1) {
enc.AppendArrayDelim(dst)
}
}
dst = enc.AppendArrayEnd(dst)
case bool:
dst = enc.AppendBool(dst, val)
case int:
dst = enc.AppendInt(dst, val)
case int8:
dst = enc.AppendInt8(dst, val)
case int16:
dst = enc.AppendInt16(dst, val)
case int32:
dst = enc.AppendInt32(dst, val)
case int64:
dst = enc.AppendInt64(dst, val)
case uint:
dst = enc.AppendUint(dst, val)
case uint8:
dst = enc.AppendUint8(dst, val)
case uint16:
dst = enc.AppendUint16(dst, val)
case uint32:
dst = enc.AppendUint32(dst, val)
case uint64:
dst = enc.AppendUint64(dst, val)
case float32:
dst = enc.AppendFloat32(dst, val)
case float64:
dst = enc.AppendFloat64(dst, val)
case time.Time:
dst = enc.AppendTime(dst, val, TimeFieldFormat)
case time.Duration:
dst = enc.AppendDuration(dst, val, DurationFieldUnit, DurationFieldInteger)
case *string:
if val != nil {
dst = enc.AppendString(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *bool:
if val != nil {
dst = enc.AppendBool(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *int:
if val != nil {
dst = enc.AppendInt(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *int8:
if val != nil {
dst = enc.AppendInt8(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *int16:
if val != nil {
dst = enc.AppendInt16(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *int32:
if val != nil {
dst = enc.AppendInt32(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *int64:
if val != nil {
dst = enc.AppendInt64(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *uint:
if val != nil {
dst = enc.AppendUint(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *uint8:
if val != nil {
dst = enc.AppendUint8(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *uint16:
if val != nil {
dst = enc.AppendUint16(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *uint32:
if val != nil {
dst = enc.AppendUint32(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *uint64:
if val != nil {
dst = enc.AppendUint64(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *float32:
if val != nil {
dst = enc.AppendFloat32(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *float64:
if val != nil {
dst = enc.AppendFloat64(dst, *val)
} else {
dst = enc.AppendNil(dst)
}
case *time.Time:
if val != nil {
dst = enc.AppendTime(dst, *val, TimeFieldFormat)
} else {
dst = enc.AppendNil(dst)
}
case *time.Duration:
if val != nil {
dst = enc.AppendDuration(dst, *val, DurationFieldUnit, DurationFieldInteger)
} else {
dst = enc.AppendNil(dst)
}
case []string:
dst = enc.AppendStrings(dst, val)
case []bool:
dst = enc.AppendBools(dst, val)
case []int:
dst = enc.AppendInts(dst, val)
case []int8:
dst = enc.AppendInts8(dst, val)
case []int16:
dst = enc.AppendInts16(dst, val)
case []int32:
dst = enc.AppendInts32(dst, val)
case []int64:
dst = enc.AppendInts64(dst, val)
case []uint:
dst = enc.AppendUints(dst, val)
// case []uint8:
// dst = enc.AppendUints8(dst, val)
case []uint16:
dst = enc.AppendUints16(dst, val)
case []uint32:
dst = enc.AppendUints32(dst, val)
case []uint64:
dst = enc.AppendUints64(dst, val)
case []float32:
dst = enc.AppendFloats32(dst, val)
case []float64:
dst = enc.AppendFloats64(dst, val)
case []time.Time:
dst = enc.AppendTimes(dst, val, TimeFieldFormat)
case []time.Duration:
dst = enc.AppendDurations(dst, val, DurationFieldUnit, DurationFieldInteger)
case nil:
dst = enc.AppendNil(dst)
case net.IP:
dst = enc.AppendIPAddr(dst, val)
case net.IPNet:
dst = enc.AppendIPPrefix(dst, val)
case net.HardwareAddr:
dst = enc.AppendMACAddr(dst, val)
case json.RawMessage:
dst = appendJSON(dst, val)
default:
dst = enc.AppendInterface(dst, val)
}
}
return dst
}
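
appendFields accepts either a map (keys are sorted before encoding) or an alternating key/value slice (a trailing element without a partner is dropped). A usage sketch through the public Fields method, assuming zerolog's usual Info/Msg chain:

    package main

    import (
        "os"

        "tuxpa.in/a/zlog"
    )

    func main() {
        log := zlog.New(os.Stdout)

        // Map form: keys are sorted before being encoded.
        log.Info().Fields(map[string]interface{}{
            "status": 200,
            "path":   "/health",
        }).Msg("request")

        // Slice form: alternating key/value pairs.
        log.Info().Fields([]interface{}{"status", 200, "path", "/health"}).Msg("request")
    }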


@ -1,29 +1,7 @@
package zlog package zerolog
import ( import "time"
"encoding/json" import "sync/atomic"
"strconv"
"sync/atomic"
"time"
)
const (
// TimeFormatUnix defines a time format that makes time fields to be
// serialized as Unix timestamp integers.
TimeFormatUnix = ""
// TimeFormatUnixMs defines a time format that makes time fields to be
// serialized as Unix timestamp integers in milliseconds.
TimeFormatUnixMs = "UNIXMS"
// TimeFormatUnixMicro defines a time format that makes time fields to be
// serialized as Unix timestamp integers in microseconds.
TimeFormatUnixMicro = "UNIXMICRO"
// TimeFormatUnixNano defines a time format that makes time fields to be
// serialized as Unix timestamp integers in nanoseconds.
TimeFormatUnixNano = "UNIXNANO"
)
var ( var (
// TimestampFieldName is the field name used for the timestamp field. // TimestampFieldName is the field name used for the timestamp field.
@ -32,61 +10,18 @@ var (
// LevelFieldName is the field name used for the level field. // LevelFieldName is the field name used for the level field.
LevelFieldName = "level" LevelFieldName = "level"
// LevelTraceValue is the value used for the trace level field.
LevelTraceValue = "trace"
// LevelDebugValue is the value used for the debug level field.
LevelDebugValue = "debug"
// LevelInfoValue is the value used for the info level field.
LevelInfoValue = "info"
// LevelWarnValue is the value used for the warn level field.
LevelWarnValue = "warn"
// LevelErrorValue is the value used for the error level field.
LevelErrorValue = "error"
// LevelFatalValue is the value used for the fatal level field.
LevelFatalValue = "fatal"
// LevelPanicValue is the value used for the panic level field.
LevelPanicValue = "panic"
// LevelFieldMarshalFunc allows customization of global level field marshaling.
LevelFieldMarshalFunc = func(l Level) string {
return l.String()
}
// MessageFieldName is the field name used for the message field. // MessageFieldName is the field name used for the message field.
MessageFieldName = "message" MessageFieldName = "message"
// ErrorFieldName is the field name used for error fields. // ErrorFieldName is the field name used for error fields.
ErrorFieldName = "error" ErrorFieldName = "error"
// CallerFieldName is the field name used for caller field. // SampleFieldName is the name of the field used to report sampling.
CallerFieldName = "caller" SampleFieldName = "sample"
// CallerSkipFrameCount is the number of stack frames to skip to find the caller. // TimeFieldFormat defines the time format of the Time field type.
CallerSkipFrameCount = 2 // If set to an empty string, the time is formatted as an UNIX timestamp
// as integer.
// CallerMarshalFunc allows customization of global caller marshaling
CallerMarshalFunc = func(pc uintptr, file string, line int) string {
return file + ":" + strconv.Itoa(line)
}
// ErrorStackFieldName is the field name used for error stacks.
ErrorStackFieldName = "stack"
// ErrorStackMarshaler extract the stack from err if any.
ErrorStackMarshaler func(err error) interface{}
// ErrorMarshalFunc allows customization of global error marshaling
ErrorMarshalFunc = func(err error) interface{} {
return err
}
// InterfaceMarshalFunc allows customization of interface marshaling.
// Default: "encoding/json.Marshal"
InterfaceMarshalFunc = json.Marshal
// TimeFieldFormat defines the time format of the Time field type. If set to
// TimeFormatUnix, TimeFormatUnixMs, TimeFormatUnixMicro or TimeFormatUnixNano, the time is formatted as a UNIX
// timestamp as integer.
TimeFieldFormat = time.RFC3339 TimeFieldFormat = time.RFC3339
// TimestampFunc defines the function called to generate a timestamp. // TimestampFunc defines the function called to generate a timestamp.
@ -99,20 +34,11 @@ var (
// DurationFieldInteger renders Dur fields as integer instead of float if // DurationFieldInteger renders Dur fields as integer instead of float if
// set to true. // set to true.
DurationFieldInteger = false DurationFieldInteger = false
// ErrorHandler is called whenever zlog fails to write an event on its
// output. If not set, an error is printed on the stderr. This handler must
// be thread safe and non-blocking.
ErrorHandler func(err error)
// DefaultContextLogger is returned from Ctx() if there is no logger associated
// with the context.
DefaultContextLogger *Logger
) )
var ( var (
gLevel = new(int32) gLevel = new(uint32)
disableSampling = new(int32) disableSampling = new(uint32)
) )
// SetGlobalLevel sets the global override for log level. If this // SetGlobalLevel sets the global override for log level. If this
@ -120,23 +46,22 @@ var (
// //
// To globally disable logs, set GlobalLevel to Disabled. // To globally disable logs, set GlobalLevel to Disabled.
func SetGlobalLevel(l Level) { func SetGlobalLevel(l Level) {
atomic.StoreInt32(gLevel, int32(l)) atomic.StoreUint32(gLevel, uint32(l))
} }
// GlobalLevel returns the current global log level func globalLevel() Level {
func GlobalLevel() Level { return Level(atomic.LoadUint32(gLevel))
return Level(atomic.LoadInt32(gLevel))
} }
// DisableSampling will disable sampling in all Loggers if true. // DisableSampling will disable sampling in all Loggers if true.
func DisableSampling(v bool) { func DisableSampling(v bool) {
var i int32 var i uint32
if v { if v {
i = 1 i = 1
} }
atomic.StoreInt32(disableSampling, i) atomic.StoreUint32(disableSampling, i)
} }
func samplingDisabled() bool { func samplingDisabled() bool {
return atomic.LoadInt32(disableSampling) == 1 return atomic.LoadUint32(gLevel) == 1
} }
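Taken together, these globals control behavior across every logger in the process. A minimal sketch of how an application might use the level override and the sampling switch (assuming the tuxpa.in/a/zlog import path from the go.mod below; field names are illustrative):

```go
package main

import (
	"os"

	"tuxpa.in/a/zlog"
)

func main() {
	// Only warn and above are written anywhere in the process until the
	// override is relaxed again, regardless of per-logger levels.
	zlog.SetGlobalLevel(zlog.WarnLevel)

	// Sampling can also be switched off process-wide, e.g. while debugging.
	zlog.DisableSampling(true)

	log := zlog.New(os.Stderr).With().Timestamp().Logger()
	log.Info().Msg("suppressed by the global level")
	log.Warn().Str("component", "startup").Msg("this one is written")

	if zlog.GlobalLevel() == zlog.WarnLevel {
		log.Warn().Msg("global override is still warn")
	}
}
```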

12
go.mod
View File

@ -1,12 +0,0 @@
module tuxpa.in/a/zlog
go 1.15
require (
github.com/coreos/go-systemd/v22 v22.5.0
github.com/mattn/go-colorable v0.1.12
github.com/pkg/errors v0.9.1
github.com/rs/xid v1.5.0
github.com/rs/zerolog v1.28.0
golang.org/x/sys v0.0.0-20220915200043-7b5979e65e41 // indirect
)

19
go.sum
View File

@ -1,19 +0,0 @@
github.com/coreos/go-systemd/v22 v22.3.3-0.20220203105225-a9a7ef127534/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/mattn/go-colorable v0.1.12 h1:jF+Du6AlPIjs2BiUiQlKOX0rt3SujHxPnksPKZbaA40=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/rs/xid v1.4.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/xid v1.5.0 h1:mKX4bl4iPYJtEIxp6CYiUuLQ/8DYMoz0PUdtGgMFRVc=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.28.0 h1:MirSo27VyNi7RJYP3078AA1+Cyzd2GB66qy3aUHvsWY=
github.com/rs/zerolog v1.28.0/go.mod h1:NILgTygv/Uej1ra5XxGf82ZFSLk58MFGAUS2o6usyD0=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220915200043-7b5979e65e41 h1:ohgcoMbSofXygzo6AD2I1kz3BFmW1QArPYTtwEM3UXc=
golang.org/x/sys v0.0.0-20220915200043-7b5979e65e41/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=

View File

@ -1,7 +0,0 @@
// +build go1.12
package zlog
// Since go 1.12, some auto generated init functions are hidden from
// runtime.Caller.
const contextCallerSkipFrameCount = 2

View File

@ -1,31 +1,27 @@
// Package hlog provides a set of http.Handler helpers for zlog. // Package hlog provides a set of http.Handler helpers for zerolog.
package hlog package hlog
import ( import (
"context" "context"
"net"
"net/http" "net/http"
"time"
"github.com/rs/xid" "github.com/rs/xid"
"tuxpa.in/a/zlog" "github.com/rs/zerolog"
"tuxpa.in/a/zlog/hlog/internal/mutil" "github.com/rs/zerolog/log"
"tuxpa.in/a/zlog/log"
) )
// FromRequest gets the logger in the request's context. // FromRequest gets the logger in the request's context.
// This is a shortcut for log.Ctx(r.Context()) // This is a shortcut for log.Ctx(r.Context())
func FromRequest(r *http.Request) *zlog.Logger { func FromRequest(r *http.Request) zerolog.Logger {
return log.Ctx(r.Context()) return log.Ctx(r.Context())
} }
// NewHandler injects log into requests context. // NewHandler injects log into requests context.
func NewHandler(log zlog.Logger) func(http.Handler) http.Handler { func NewHandler(log zerolog.Logger) func(http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Create a copy of the logger (including internal context slice) r = r.WithContext(log.WithContext(r.Context()))
// to prevent data race when using UpdateContext.
l := log.With().Logger()
r = r.WithContext(l.WithContext(r.Context()))
next.ServeHTTP(w, r) next.ServeHTTP(w, r)
}) })
} }
@ -36,10 +32,9 @@ func NewHandler(log zlog.Logger) func(http.Handler) http.Handler {
func URLHandler(fieldKey string) func(next http.Handler) http.Handler { func URLHandler(fieldKey string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log := zlog.Ctx(r.Context()) log := zerolog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context { log = log.With().Str(fieldKey, r.URL.String()).Logger()
return c.Str(fieldKey, r.URL.String()) r = r.WithContext(log.WithContext(r.Context()))
})
next.ServeHTTP(w, r) next.ServeHTTP(w, r)
}) })
} }
@ -50,10 +45,9 @@ func URLHandler(fieldKey string) func(next http.Handler) http.Handler {
func MethodHandler(fieldKey string) func(next http.Handler) http.Handler { func MethodHandler(fieldKey string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log := zlog.Ctx(r.Context()) log := zerolog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context { log = log.With().Str(fieldKey, r.Method).Logger()
return c.Str(fieldKey, r.Method) r = r.WithContext(log.WithContext(r.Context()))
})
next.ServeHTTP(w, r) next.ServeHTTP(w, r)
}) })
} }
@ -64,10 +58,9 @@ func MethodHandler(fieldKey string) func(next http.Handler) http.Handler {
func RequestHandler(fieldKey string) func(next http.Handler) http.Handler { func RequestHandler(fieldKey string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log := zlog.Ctx(r.Context()) log := zerolog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context { log = log.With().Str(fieldKey, r.Method+" "+r.URL.String()).Logger()
return c.Str(fieldKey, r.Method+" "+r.URL.String()) r = r.WithContext(log.WithContext(r.Context()))
})
next.ServeHTTP(w, r) next.ServeHTTP(w, r)
}) })
} }
@ -78,11 +71,10 @@ func RequestHandler(fieldKey string) func(next http.Handler) http.Handler {
func RemoteAddrHandler(fieldKey string) func(next http.Handler) http.Handler { func RemoteAddrHandler(fieldKey string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.RemoteAddr != "" { if host, _, err := net.SplitHostPort(r.RemoteAddr); err == nil {
log := zlog.Ctx(r.Context()) log := zerolog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context { log = log.With().Str(fieldKey, host).Logger()
return c.Str(fieldKey, r.RemoteAddr) r = r.WithContext(log.WithContext(r.Context()))
})
} }
next.ServeHTTP(w, r) next.ServeHTTP(w, r)
}) })
@ -95,10 +87,9 @@ func UserAgentHandler(fieldKey string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if ua := r.Header.Get("User-Agent"); ua != "" { if ua := r.Header.Get("User-Agent"); ua != "" {
log := zlog.Ctx(r.Context()) log := zerolog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context { log = log.With().Str(fieldKey, ua).Logger()
return c.Str(fieldKey, ua) r = r.WithContext(log.WithContext(r.Context()))
})
} }
next.ServeHTTP(w, r) next.ServeHTTP(w, r)
}) })
@ -111,51 +102,26 @@ func RefererHandler(fieldKey string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if ref := r.Header.Get("Referer"); ref != "" { if ref := r.Header.Get("Referer"); ref != "" {
log := zlog.Ctx(r.Context()) log := zerolog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context { log = log.With().Str(fieldKey, ref).Logger()
return c.Str(fieldKey, ref) r = r.WithContext(log.WithContext(r.Context()))
})
} }
next.ServeHTTP(w, r) next.ServeHTTP(w, r)
}) })
} }
} }
// ProtoHandler adds the requests protocol version as a field to the context logger
// using fieldKey as field Key.
func ProtoHandler(fieldKey string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log := zlog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context {
return c.Str(fieldKey, r.Proto)
})
next.ServeHTTP(w, r)
})
}
}
type idKey struct{} type idKey struct{}
// IDFromRequest returns the unique id associated to the request if any. // IDFromRequest returns the unique id accociated to the request if any.
func IDFromRequest(r *http.Request) (id xid.ID, ok bool) { func IDFromRequest(r *http.Request) (id xid.ID, ok bool) {
if r == nil { if r == nil {
return return
} }
return IDFromCtx(r.Context()) id, ok = r.Context().Value(idKey{}).(xid.ID)
}
// IDFromCtx returns the unique id associated to the context if any.
func IDFromCtx(ctx context.Context) (id xid.ID, ok bool) {
id, ok = ctx.Value(idKey{}).(xid.ID)
return return
} }
// CtxWithID adds the given xid.ID to the context
func CtxWithID(ctx context.Context, id xid.ID) context.Context {
return context.WithValue(ctx, idKey{}, id)
}
// RequestIDHandler returns a handler setting a unique id to the request which can // RequestIDHandler returns a handler setting a unique id to the request which can
// be gathered using IDFromRequest(req). This generated id is added as a field to the // be gathered using IDFromRequest(req). This generated id is added as a field to the
// logger using the passed fieldKey as field name. The id is also added as a response // logger using the passed fieldKey as field name. The id is also added as a response
@ -168,18 +134,16 @@ func CtxWithID(ctx context.Context, id xid.ID) context.Context {
func RequestIDHandler(fieldKey, headerName string) func(next http.Handler) http.Handler { func RequestIDHandler(fieldKey, headerName string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
id, ok := IDFromRequest(r) id, ok := IDFromRequest(r)
if !ok { if !ok {
id = xid.New() id = xid.New()
ctx = CtxWithID(ctx, id) ctx := context.WithValue(r.Context(), idKey{}, id)
r = r.WithContext(ctx) r = r.WithContext(ctx)
} }
if fieldKey != "" { if fieldKey != "" {
log := zlog.Ctx(ctx) log := zerolog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context { log = log.With().Str(fieldKey, id.String()).Logger()
return c.Str(fieldKey, id.String()) r = r.WithContext(log.WithContext(r.Context()))
})
} }
if headerName != "" { if headerName != "" {
w.Header().Set(headerName, id.String()) w.Header().Set(headerName, id.String())
@ -188,31 +152,3 @@ func RequestIDHandler(fieldKey, headerName string) func(next http.Handler) http.
}) })
} }
} }
// CustomHeaderHandler adds given header from request's header as a field to
// the context's logger using fieldKey as field key.
func CustomHeaderHandler(fieldKey, header string) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if val := r.Header.Get(header); val != "" {
log := zlog.Ctx(r.Context())
log.UpdateContext(func(c zlog.Context) zlog.Context {
return c.Str(fieldKey, val)
})
}
next.ServeHTTP(w, r)
})
}
}
// AccessHandler returns a handler that calls f after each request.
func AccessHandler(f func(r *http.Request, status, size int, duration time.Duration)) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
start := time.Now()
lw := mutil.WrapWriter(w)
next.ServeHTTP(lw, r)
f(r, lw.Status(), lw.BytesWritten(), time.Since(start))
})
}
}
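The handlers above are meant to be stacked: each one enriches the request-scoped logger before the final handler runs, and NewHandler has to sit outermost so the logger is already in the context. A minimal sketch of wiring several of them together with net/http (route, port, and field names are illustrative):

```go
package main

import (
	"net/http"
	"os"
	"time"

	"tuxpa.in/a/zlog"
	"tuxpa.in/a/zlog/hlog"
)

func main() {
	logger := zlog.New(os.Stdout).With().Timestamp().Str("service", "api").Logger()

	var h http.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// The request-scoped logger carries every field added by the middleware.
		hlog.FromRequest(r).Info().Msg("handling request")
		w.WriteHeader(http.StatusNoContent)
	})

	// Innermost first: each wrapper adds its field to the logger in the context.
	h = hlog.AccessHandler(func(r *http.Request, status, size int, d time.Duration) {
		hlog.FromRequest(r).Info().
			Int("status", status).
			Int("size", size).
			Dur("duration", d).
			Msg("request completed")
	})(h)
	h = hlog.RequestIDHandler("req_id", "X-Request-Id")(h)
	h = hlog.MethodHandler("method")(h)
	h = hlog.URLHandler("url")(h)
	h = hlog.NewHandler(logger)(h) // outermost, so the logger is in the context

	if err := http.ListenAndServe(":8080", h); err != nil {
		logger.Fatal().Err(err).Msg("server stopped")
	}
}
```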

View File

@ -1,5 +1,3 @@
// +build !binary_log
package hlog_test package hlog_test
import ( import (
@ -9,8 +7,8 @@ import (
"net/http/httptest" "net/http/httptest"
"tuxpa.in/a/zlog" "github.com/rs/zerolog"
"tuxpa.in/a/zlog/hlog" "github.com/rs/zerolog/hlog"
) )
// fake alice to avoid dep // fake alice to avoid dep
@ -31,13 +29,13 @@ func (a alice) Then(h http.Handler) http.Handler {
} }
func init() { func init() {
zlog.TimestampFunc = func() time.Time { zerolog.TimestampFunc = func() time.Time {
return time.Date(2001, time.February, 3, 4, 5, 6, 7, time.UTC) return time.Date(2001, time.February, 3, 4, 5, 6, 7, time.UTC)
} }
} }
func Example_handler() { func Example_handler() {
log := zlog.New(os.Stdout).With(). log := zerolog.New(os.Stdout).With().
Timestamp(). Timestamp().
Str("role", "my-service"). Str("role", "my-service").
Str("host", "local-hostname"). Str("host", "local-hostname").
@ -48,8 +46,8 @@ func Example_handler() {
// Install the logger handler with default output on the console // Install the logger handler with default output on the console
c = c.Append(hlog.NewHandler(log)) c = c.Append(hlog.NewHandler(log))
// Install some provided extra handlers to set some request's context fields. // Install some provided extra handler to set some request's context fields.
// Thanks to those handlers, all our logs will come with some pre-populated fields. // Thanks to those handler, all our logs will come with some pre-populated fields.
c = c.Append(hlog.RemoteAddrHandler("ip")) c = c.Append(hlog.RemoteAddrHandler("ip"))
c = c.Append(hlog.UserAgentHandler("user_agent")) c = c.Append(hlog.UserAgentHandler("user_agent"))
c = c.Append(hlog.RefererHandler("referer")) c = c.Append(hlog.RefererHandler("referer"))
@ -63,11 +61,11 @@ func Example_handler() {
hlog.FromRequest(r).Info(). hlog.FromRequest(r).Info().
Str("user", "current user"). Str("user", "current user").
Str("status", "ok"). Str("status", "ok").
Msg("Something happened") Msg("Something happend")
})) }))
http.Handle("/", h) http.Handle("/", h)
h.ServeHTTP(httptest.NewRecorder(), &http.Request{}) h.ServeHTTP(httptest.NewRecorder(), &http.Request{})
// Output: {"level":"info","role":"my-service","host":"local-hostname","user":"current user","status":"ok","time":"2001-02-03T04:05:06Z","message":"Something happened"} // Output: {"time":"2001-02-03T04:05:06Z","level":"info","role":"my-service","host":"local-hostname","user":"current user","status":"ok","message":"Something happend"}
} }

View File

@ -1,40 +1,29 @@
//go:build go1.7
// +build go1.7 // +build go1.7
package hlog package hlog
import ( import (
"bytes" "bytes"
"context"
"fmt" "fmt"
"io/ioutil"
"net/http" "net/http"
"net/http/httptest"
"net/url" "net/url"
"reflect"
"testing" "testing"
"github.com/rs/xid" "reflect"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/internal/cbor" "net/http/httptest"
"github.com/rs/zerolog"
) )
func decodeIfBinary(out *bytes.Buffer) string {
p := out.Bytes()
if len(p) == 0 || p[0] < 0x7F {
return out.String()
}
return cbor.DecodeObjectToStr(p) + "\n"
}
func TestNewHandler(t *testing.T) { func TestNewHandler(t *testing.T) {
log := zlog.New(nil).With(). log := zerolog.New(nil).With().
Str("foo", "bar"). Str("foo", "bar").
Logger() Logger()
lh := NewHandler(log) lh := NewHandler(log)
h := lh(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { h := lh(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r) l := FromRequest(r)
if !reflect.DeepEqual(*l, log) { if !reflect.DeepEqual(l, log) {
t.Fail() t.Fail()
} }
})) }))
@ -49,12 +38,12 @@ func TestURLHandler(t *testing.T) {
h := URLHandler("url")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { h := URLHandler("url")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r) l := FromRequest(r)
l.Log().Msg("") l.Log().Msg("")
})) if want, got := `{"url":"/path?foo=bar"}`+"\n", out.String(); want != got {
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"url":"/path?foo=bar"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want) t.Errorf("Invalid log output, got: %s, want: %s", got, want)
} }
}))
h = NewHandler(zerolog.New(out))(h)
h.ServeHTTP(nil, r)
} }
func TestMethodHandler(t *testing.T) { func TestMethodHandler(t *testing.T) {
@ -65,12 +54,12 @@ func TestMethodHandler(t *testing.T) {
h := MethodHandler("method")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { h := MethodHandler("method")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r) l := FromRequest(r)
l.Log().Msg("") l.Log().Msg("")
})) if want, got := `{"method":"POST"}`+"\n", out.String(); want != got {
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"method":"POST"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want) t.Errorf("Invalid log output, got: %s, want: %s", got, want)
} }
}))
h = NewHandler(zerolog.New(out))(h)
h.ServeHTTP(nil, r)
} }
func TestRequestHandler(t *testing.T) { func TestRequestHandler(t *testing.T) {
@ -82,12 +71,12 @@ func TestRequestHandler(t *testing.T) {
h := RequestHandler("request")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { h := RequestHandler("request")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r) l := FromRequest(r)
l.Log().Msg("") l.Log().Msg("")
})) if want, got := `{"request":"POST /path?foo=bar"}`+"\n", out.String(); want != got {
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"request":"POST /path?foo=bar"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want) t.Errorf("Invalid log output, got: %s, want: %s", got, want)
} }
}))
h = NewHandler(zerolog.New(out))(h)
h.ServeHTTP(nil, r)
} }
func TestRemoteAddrHandler(t *testing.T) { func TestRemoteAddrHandler(t *testing.T) {
@ -98,12 +87,12 @@ func TestRemoteAddrHandler(t *testing.T) {
h := RemoteAddrHandler("ip")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { h := RemoteAddrHandler("ip")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r) l := FromRequest(r)
l.Log().Msg("") l.Log().Msg("")
})) if want, got := `{"ip":"1.2.3.4"}`+"\n", out.String(); want != got {
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"ip":"1.2.3.4:1234"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want) t.Errorf("Invalid log output, got: %s, want: %s", got, want)
} }
}))
h = NewHandler(zerolog.New(out))(h)
h.ServeHTTP(nil, r)
} }
func TestRemoteAddrHandlerIPv6(t *testing.T) { func TestRemoteAddrHandlerIPv6(t *testing.T) {
@ -114,12 +103,12 @@ func TestRemoteAddrHandlerIPv6(t *testing.T) {
h := RemoteAddrHandler("ip")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { h := RemoteAddrHandler("ip")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r) l := FromRequest(r)
l.Log().Msg("") l.Log().Msg("")
})) if want, got := `{"ip":"2001:db8:a0b:12f0::1"}`+"\n", out.String(); want != got {
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"ip":"[2001:db8:a0b:12f0::1]:1234"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want) t.Errorf("Invalid log output, got: %s, want: %s", got, want)
} }
}))
h = NewHandler(zerolog.New(out))(h)
h.ServeHTTP(nil, r)
} }
func TestUserAgentHandler(t *testing.T) { func TestUserAgentHandler(t *testing.T) {
@ -132,12 +121,12 @@ func TestUserAgentHandler(t *testing.T) {
h := UserAgentHandler("ua")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { h := UserAgentHandler("ua")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r) l := FromRequest(r)
l.Log().Msg("") l.Log().Msg("")
})) if want, got := `{"ua":"some user agent string"}`+"\n", out.String(); want != got {
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"ua":"some user agent string"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want) t.Errorf("Invalid log output, got: %s, want: %s", got, want)
} }
}))
h = NewHandler(zerolog.New(out))(h)
h.ServeHTTP(nil, r)
} }
func TestRefererHandler(t *testing.T) { func TestRefererHandler(t *testing.T) {
@ -150,12 +139,12 @@ func TestRefererHandler(t *testing.T) {
h := RefererHandler("referer")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { h := RefererHandler("referer")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r) l := FromRequest(r)
l.Log().Msg("") l.Log().Msg("")
})) if want, got := `{"referer":"http://foo.com/bar"}`+"\n", out.String(); want != got {
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"referer":"http://foo.com/bar"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want) t.Errorf("Invalid log output, got: %s, want: %s", got, want)
} }
}))
h = NewHandler(zerolog.New(out))(h)
h.ServeHTTP(nil, r)
} }
func TestRequestIDHandler(t *testing.T) { func TestRequestIDHandler(t *testing.T) {
@ -175,120 +164,10 @@ func TestRequestIDHandler(t *testing.T) {
} }
l := FromRequest(r) l := FromRequest(r)
l.Log().Msg("") l.Log().Msg("")
if want, got := fmt.Sprintf(`{"id":"%s"}`+"\n", id), decodeIfBinary(out); want != got { if want, got := fmt.Sprintf(`{"id":"%s"}`+"\n", id), out.String(); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want) t.Errorf("Invalid log output, got: %s, want: %s", got, want)
} }
})) }))
h = NewHandler(zlog.New(out))(h) h = NewHandler(zerolog.New(out))(h)
h.ServeHTTP(httptest.NewRecorder(), r) h.ServeHTTP(httptest.NewRecorder(), r)
} }
func TestCustomHeaderHandler(t *testing.T) {
out := &bytes.Buffer{}
r := &http.Request{
Header: http.Header{
"X-Request-Id": []string{"514bbe5bb5251c92bd07a9846f4a1ab6"},
},
}
h := CustomHeaderHandler("reqID", "X-Request-Id")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r)
l.Log().Msg("")
}))
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"reqID":"514bbe5bb5251c92bd07a9846f4a1ab6"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want)
}
}
func TestProtoHandler(t *testing.T) {
out := &bytes.Buffer{}
r := &http.Request{
Proto: "test",
}
h := ProtoHandler("proto")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r)
l.Log().Msg("")
}))
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"proto":"test"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want)
}
}
func TestCombinedHandlers(t *testing.T) {
out := &bytes.Buffer{}
r := &http.Request{
Method: "POST",
URL: &url.URL{Path: "/path", RawQuery: "foo=bar"},
}
h := MethodHandler("method")(RequestHandler("request")(URLHandler("url")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r)
l.Log().Msg("")
}))))
h = NewHandler(zlog.New(out))(h)
h.ServeHTTP(nil, r)
if want, got := `{"method":"POST","request":"POST /path?foo=bar","url":"/path?foo=bar"}`+"\n", decodeIfBinary(out); want != got {
t.Errorf("Invalid log output, got: %s, want: %s", got, want)
}
}
func BenchmarkHandlers(b *testing.B) {
r := &http.Request{
Method: "POST",
URL: &url.URL{Path: "/path", RawQuery: "foo=bar"},
}
h1 := URLHandler("url")(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r)
l.Log().Msg("")
}))
h2 := MethodHandler("method")(RequestHandler("request")(h1))
handlers := map[string]http.Handler{
"Single": NewHandler(zlog.New(ioutil.Discard))(h1),
"Combined": NewHandler(zlog.New(ioutil.Discard))(h2),
"SingleDisabled": NewHandler(zlog.New(ioutil.Discard).Level(zlog.Disabled))(h1),
"CombinedDisabled": NewHandler(zlog.New(ioutil.Discard).Level(zlog.Disabled))(h2),
}
for name := range handlers {
h := handlers[name]
b.Run(name, func(b *testing.B) {
for i := 0; i < b.N; i++ {
h.ServeHTTP(nil, r)
}
})
}
}
func BenchmarkDataRace(b *testing.B) {
log := zlog.New(nil).With().
Str("foo", "bar").
Logger()
lh := NewHandler(log)
h := lh(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := FromRequest(r)
l.UpdateContext(func(c zlog.Context) zlog.Context {
return c.Str("bar", "baz")
})
l.Log().Msg("")
}))
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
h.ServeHTTP(nil, &http.Request{})
}
})
}
func TestCtxWithID(t *testing.T) {
ctx := context.Background()
id, _ := xid.FromString(`c0umremcie6smuu506pg`)
want := context.Background()
want = context.WithValue(want, idKey{}, id)
if got := CtxWithID(ctx, id); !reflect.DeepEqual(got, want) {
t.Errorf("CtxWithID() = %v, want %v", got, want)
}
}

View File

@ -1,20 +0,0 @@
Copyright (c) 2014, 2015, 2016 Carl Jackson (carl@avtok.com)
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@ -1,6 +0,0 @@
// Package mutil contains various functions that are helpful when writing http
// middleware.
//
// It has been vendored from Goji v1.0, with the exception of the code for Go 1.8:
// https://github.com/zenazn/goji/
package mutil

View File

@ -1,154 +0,0 @@
package mutil
import (
"bufio"
"io"
"net"
"net/http"
)
// WriterProxy is a proxy around an http.ResponseWriter that allows you to hook
// into various parts of the response process.
type WriterProxy interface {
http.ResponseWriter
// Status returns the HTTP status of the request, or 0 if one has not
// yet been sent.
Status() int
// BytesWritten returns the total number of bytes sent to the client.
BytesWritten() int
// Tee causes the response body to be written to the given io.Writer in
// addition to proxying the writes through. Only one io.Writer can be
// tee'd to at once: setting a second one will overwrite the first.
// Writes will be sent to the proxy before being written to this
// io.Writer. It is illegal for the tee'd writer to be modified
// concurrently with writes.
Tee(io.Writer)
// Unwrap returns the original proxied target.
Unwrap() http.ResponseWriter
}
// WrapWriter wraps an http.ResponseWriter, returning a proxy that allows you to
// hook into various parts of the response process.
func WrapWriter(w http.ResponseWriter) WriterProxy {
_, cn := w.(http.CloseNotifier)
_, fl := w.(http.Flusher)
_, hj := w.(http.Hijacker)
_, rf := w.(io.ReaderFrom)
bw := basicWriter{ResponseWriter: w}
if cn && fl && hj && rf {
return &fancyWriter{bw}
}
if fl {
return &flushWriter{bw}
}
return &bw
}
// basicWriter wraps a http.ResponseWriter that implements the minimal
// http.ResponseWriter interface.
type basicWriter struct {
http.ResponseWriter
wroteHeader bool
code int
bytes int
tee io.Writer
}
func (b *basicWriter) WriteHeader(code int) {
if !b.wroteHeader {
b.code = code
b.wroteHeader = true
b.ResponseWriter.WriteHeader(code)
}
}
func (b *basicWriter) Write(buf []byte) (int, error) {
b.WriteHeader(http.StatusOK)
n, err := b.ResponseWriter.Write(buf)
if b.tee != nil {
_, err2 := b.tee.Write(buf[:n])
// Prefer errors generated by the proxied writer.
if err == nil {
err = err2
}
}
b.bytes += n
return n, err
}
func (b *basicWriter) maybeWriteHeader() {
if !b.wroteHeader {
b.WriteHeader(http.StatusOK)
}
}
func (b *basicWriter) Status() int {
return b.code
}
func (b *basicWriter) BytesWritten() int {
return b.bytes
}
func (b *basicWriter) Tee(w io.Writer) {
b.tee = w
}
func (b *basicWriter) Unwrap() http.ResponseWriter {
return b.ResponseWriter
}
// fancyWriter is a writer that additionally satisfies http.CloseNotifier,
// http.Flusher, http.Hijacker, and io.ReaderFrom. It exists for the common case
// of wrapping the http.ResponseWriter that package http gives you, in order to
// make the proxied object support the full method set of the proxied object.
type fancyWriter struct {
basicWriter
}
func (f *fancyWriter) CloseNotify() <-chan bool {
cn := f.basicWriter.ResponseWriter.(http.CloseNotifier)
return cn.CloseNotify()
}
func (f *fancyWriter) Flush() {
fl := f.basicWriter.ResponseWriter.(http.Flusher)
fl.Flush()
}
func (f *fancyWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
hj := f.basicWriter.ResponseWriter.(http.Hijacker)
return hj.Hijack()
}
func (f *fancyWriter) ReadFrom(r io.Reader) (int64, error) {
if f.basicWriter.tee != nil {
n, err := io.Copy(&f.basicWriter, r)
f.bytes += int(n)
return n, err
}
rf := f.basicWriter.ResponseWriter.(io.ReaderFrom)
f.basicWriter.maybeWriteHeader()
n, err := rf.ReadFrom(r)
f.bytes += int(n)
return n, err
}
type flushWriter struct {
basicWriter
}
func (f *flushWriter) Flush() {
fl := f.basicWriter.ResponseWriter.(http.Flusher)
fl.Flush()
}
var (
_ http.CloseNotifier = &fancyWriter{}
_ http.Flusher = &fancyWriter{}
_ http.Hijacker = &fancyWriter{}
_ io.ReaderFrom = &fancyWriter{}
_ http.Flusher = &flushWriter{}
)
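WrapWriter is what lets AccessHandler report the status code and response size without any cooperation from the wrapped handler. A sketch of the same trick in a hand-rolled middleware; note that mutil sits under hlog/internal, so outside the hlog package tree the wrapper would have to be copied rather than imported:

```go
// Package accesslog is an illustrative sketch only.
package accesslog

import (
	"fmt"
	"net/http"

	"tuxpa.in/a/zlog/hlog/internal/mutil"
)

// WithResponseStats wraps next so the status code and number of bytes
// written can be read once the handler returns, which is the same trick
// AccessHandler uses above.
func WithResponseStats(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		lw := mutil.WrapWriter(w)
		next.ServeHTTP(lw, r)
		fmt.Printf("status=%d bytes=%d\n", lw.Status(), lw.BytesWritten())
	})
}
```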

64
hook.go
View File

@ -1,64 +0,0 @@
package zlog
// Hook defines an interface to a log hook.
type Hook interface {
// Run runs the hook with the event.
Run(e *Event, level Level, message string)
}
// HookFunc is an adaptor to allow the use of an ordinary function
// as a Hook.
type HookFunc func(e *Event, level Level, message string)
// Run implements the Hook interface.
func (h HookFunc) Run(e *Event, level Level, message string) {
h(e, level, message)
}
// LevelHook applies a different hook for each level.
type LevelHook struct {
NoLevelHook, TraceHook, DebugHook, InfoHook, WarnHook, ErrorHook, FatalHook, PanicHook Hook
}
// Run implements the Hook interface.
func (h LevelHook) Run(e *Event, level Level, message string) {
switch level {
case TraceLevel:
if h.TraceHook != nil {
h.TraceHook.Run(e, level, message)
}
case DebugLevel:
if h.DebugHook != nil {
h.DebugHook.Run(e, level, message)
}
case InfoLevel:
if h.InfoHook != nil {
h.InfoHook.Run(e, level, message)
}
case WarnLevel:
if h.WarnHook != nil {
h.WarnHook.Run(e, level, message)
}
case ErrorLevel:
if h.ErrorHook != nil {
h.ErrorHook.Run(e, level, message)
}
case FatalLevel:
if h.FatalHook != nil {
h.FatalHook.Run(e, level, message)
}
case PanicLevel:
if h.PanicHook != nil {
h.PanicHook.Run(e, level, message)
}
case NoLevel:
if h.NoLevelHook != nil {
h.NoLevelHook.Run(e, level, message)
}
}
}
// NewLevelHook returns a new LevelHook.
func NewLevelHook() LevelHook {
return LevelHook{}
}
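A hook runs on every event the logger emits, after level filtering but before the event is written, which makes it a convenient place to inject derived fields. A minimal sketch of both adaptors (field names are illustrative):

```go
package main

import (
	"os"

	"tuxpa.in/a/zlog"
)

func main() {
	// A HookFunc adapts a plain function to the Hook interface.
	severity := zlog.HookFunc(func(e *zlog.Event, level zlog.Level, msg string) {
		if level != zlog.NoLevel {
			e.Str("severity", level.String())
		}
	})

	// A LevelHook routes a different hook per level; here only errors get one.
	lh := zlog.NewLevelHook()
	lh.ErrorHook = zlog.HookFunc(func(e *zlog.Event, level zlog.Level, msg string) {
		e.Bool("alert", true)
	})

	log := zlog.New(os.Stdout).Hook(severity).Hook(lh)
	log.Info().Msg("ping")                  // {"level":"info","severity":"info","message":"ping"}
	log.Error().Msg("something went wrong") // additionally gains "alert":true
}
```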

View File

@ -1,167 +0,0 @@
package zlog
import (
"bytes"
"io/ioutil"
"testing"
)
var (
levelNameHook = HookFunc(func(e *Event, level Level, msg string) {
levelName := level.String()
if level == NoLevel {
levelName = "nolevel"
}
e.Str("level_name", levelName)
})
simpleHook = HookFunc(func(e *Event, level Level, msg string) {
e.Bool("has_level", level != NoLevel)
e.Str("test", "logged")
})
copyHook = HookFunc(func(e *Event, level Level, msg string) {
hasLevel := level != NoLevel
e.Bool("copy_has_level", hasLevel)
if hasLevel {
e.Str("copy_level", level.String())
}
e.Str("copy_msg", msg)
})
nopHook = HookFunc(func(e *Event, level Level, message string) {
})
discardHook = HookFunc(func(e *Event, level Level, message string) {
e.Discard()
})
)
func TestHook(t *testing.T) {
tests := []struct {
name string
want string
test func(log Logger)
}{
{"Message", `{"level_name":"nolevel","message":"test message"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook)
log.Log().Msg("test message")
}},
{"NoLevel", `{"level_name":"nolevel"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook)
log.Log().Msg("")
}},
{"Print", `{"level":"debug","level_name":"debug"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook)
log.Print("")
}},
{"Error", `{"level":"error","level_name":"error"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook)
log.Error().Msg("")
}},
{"Copy/1", `{"copy_has_level":false,"copy_msg":""}` + "\n", func(log Logger) {
log = log.Hook(copyHook)
log.Log().Msg("")
}},
{"Copy/2", `{"level":"info","copy_has_level":true,"copy_level":"info","copy_msg":"a message","message":"a message"}` + "\n", func(log Logger) {
log = log.Hook(copyHook)
log.Info().Msg("a message")
}},
{"Multi", `{"level":"error","level_name":"error","has_level":true,"test":"logged"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook).Hook(simpleHook)
log.Error().Msg("")
}},
{"Multi/Message", `{"level":"error","level_name":"error","has_level":true,"test":"logged","message":"a message"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook).Hook(simpleHook)
log.Error().Msg("a message")
}},
{"Output/single/pre", `{"level":"error","level_name":"error"}` + "\n", func(log Logger) {
ignored := &bytes.Buffer{}
log = New(ignored).Hook(levelNameHook).Output(log.w)
log.Error().Msg("")
}},
{"Output/single/post", `{"level":"error","level_name":"error"}` + "\n", func(log Logger) {
ignored := &bytes.Buffer{}
log = New(ignored).Output(log.w).Hook(levelNameHook)
log.Error().Msg("")
}},
{"Output/multi/pre", `{"level":"error","level_name":"error","has_level":true,"test":"logged"}` + "\n", func(log Logger) {
ignored := &bytes.Buffer{}
log = New(ignored).Hook(levelNameHook).Hook(simpleHook).Output(log.w)
log.Error().Msg("")
}},
{"Output/multi/post", `{"level":"error","level_name":"error","has_level":true,"test":"logged"}` + "\n", func(log Logger) {
ignored := &bytes.Buffer{}
log = New(ignored).Output(log.w).Hook(levelNameHook).Hook(simpleHook)
log.Error().Msg("")
}},
{"Output/mixed", `{"level":"error","level_name":"error","has_level":true,"test":"logged"}` + "\n", func(log Logger) {
ignored := &bytes.Buffer{}
log = New(ignored).Hook(levelNameHook).Output(log.w).Hook(simpleHook)
log.Error().Msg("")
}},
{"With/single/pre", `{"level":"error","with":"pre","level_name":"error"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook).With().Str("with", "pre").Logger()
log.Error().Msg("")
}},
{"With/single/post", `{"level":"error","with":"post","level_name":"error"}` + "\n", func(log Logger) {
log = log.With().Str("with", "post").Logger().Hook(levelNameHook)
log.Error().Msg("")
}},
{"With/multi/pre", `{"level":"error","with":"pre","level_name":"error","has_level":true,"test":"logged"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook).Hook(simpleHook).With().Str("with", "pre").Logger()
log.Error().Msg("")
}},
{"With/multi/post", `{"level":"error","with":"post","level_name":"error","has_level":true,"test":"logged"}` + "\n", func(log Logger) {
log = log.With().Str("with", "post").Logger().Hook(levelNameHook).Hook(simpleHook)
log.Error().Msg("")
}},
{"With/mixed", `{"level":"error","with":"mixed","level_name":"error","has_level":true,"test":"logged"}` + "\n", func(log Logger) {
log = log.Hook(levelNameHook).With().Str("with", "mixed").Logger().Hook(simpleHook)
log.Error().Msg("")
}},
{"Discard", "", func(log Logger) {
log = log.Hook(discardHook)
log.Log().Msg("test message")
}},
{"None", `{"level":"error"}` + "\n", func(log Logger) {
log.Error().Msg("")
}},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
out := &bytes.Buffer{}
log := New(out)
tt.test(log)
if got, want := decodeIfBinaryToString(out.Bytes()), tt.want; got != want {
t.Errorf("invalid log output:\ngot: %v\nwant: %v", got, want)
}
})
}
}
func BenchmarkHooks(b *testing.B) {
logger := New(ioutil.Discard)
b.ResetTimer()
b.Run("Nop/Single", func(b *testing.B) {
log := logger.Hook(nopHook)
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
log.Log().Msg("")
}
})
})
b.Run("Nop/Multi", func(b *testing.B) {
log := logger.Hook(nopHook).Hook(nopHook)
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
log.Log().Msg("")
}
})
})
b.Run("Simple", func(b *testing.B) {
log := logger.Hook(simpleHook)
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
log.Log().Msg("")
}
})
})
}

View File

@ -1,56 +0,0 @@
## Reference:
CBOR Encoding is described in [RFC7049](https://tools.ietf.org/html/rfc7049)
## Comparison of JSON vs CBOR
Two main areas of reduction are:
1. CPU usage to write a log msg
2. Size (in bytes) of log messages.
CPU Usage savings are below:
```
name JSON time/op CBOR time/op delta
Info-32 15.3ns ± 1% 11.7ns ± 3% -23.78% (p=0.000 n=9+10)
ContextFields-32 16.2ns ± 2% 12.3ns ± 3% -23.97% (p=0.000 n=9+9)
ContextAppend-32 6.70ns ± 0% 6.20ns ± 0% -7.44% (p=0.000 n=9+9)
LogFields-32 66.4ns ± 0% 24.6ns ± 2% -62.89% (p=0.000 n=10+9)
LogArrayObject-32 911ns ±11% 768ns ± 6% -15.64% (p=0.000 n=10+10)
LogFieldType/Floats-32 70.3ns ± 2% 29.5ns ± 1% -57.98% (p=0.000 n=10+10)
LogFieldType/Err-32 14.0ns ± 3% 12.1ns ± 8% -13.20% (p=0.000 n=8+10)
LogFieldType/Dur-32 17.2ns ± 2% 13.1ns ± 1% -24.27% (p=0.000 n=10+9)
LogFieldType/Object-32 54.3ns ±11% 52.3ns ± 7% ~ (p=0.239 n=10+10)
LogFieldType/Ints-32 20.3ns ± 2% 15.1ns ± 2% -25.50% (p=0.000 n=9+10)
LogFieldType/Interfaces-32 642ns ±11% 621ns ± 9% ~ (p=0.118 n=10+10)
LogFieldType/Interface(Objects)-32 635ns ±13% 632ns ± 9% ~ (p=0.592 n=10+10)
LogFieldType/Times-32 294ns ± 0% 27ns ± 1% -90.71% (p=0.000 n=10+9)
LogFieldType/Durs-32 121ns ± 0% 33ns ± 2% -72.44% (p=0.000 n=9+9)
LogFieldType/Interface(Object)-32 56.6ns ± 8% 52.3ns ± 8% -7.54% (p=0.007 n=10+10)
LogFieldType/Errs-32 17.8ns ± 3% 16.1ns ± 2% -9.71% (p=0.000 n=10+9)
LogFieldType/Time-32 40.5ns ± 1% 12.7ns ± 6% -68.66% (p=0.000 n=8+9)
LogFieldType/Bool-32 12.0ns ± 5% 10.2ns ± 2% -15.18% (p=0.000 n=10+8)
LogFieldType/Bools-32 17.2ns ± 2% 12.6ns ± 4% -26.63% (p=0.000 n=10+10)
LogFieldType/Int-32 12.3ns ± 2% 11.2ns ± 4% -9.27% (p=0.000 n=9+10)
LogFieldType/Float-32 16.7ns ± 1% 12.6ns ± 2% -24.42% (p=0.000 n=7+9)
LogFieldType/Str-32 12.7ns ± 7% 11.3ns ± 7% -10.88% (p=0.000 n=10+9)
LogFieldType/Strs-32 20.3ns ± 3% 18.2ns ± 3% -10.25% (p=0.000 n=9+10)
LogFieldType/Interface-32 183ns ±12% 175ns ± 9% ~ (p=0.078 n=10+10)
```
Log message size savings is greatly dependent on the number and type of fields in the log message.
Assuming this log message (with an Integer, timestamp and string, in addition to level).
`{"level":"error","Fault":41650,"time":"2018-04-01T15:18:19-07:00","message":"Some Message"}`
Two measurements were done for the log file sizes - one without any compression, second
using [compress/zlib](https://golang.org/pkg/compress/zlib/).
Results for 10,000 log messages:
| Log Format | Plain File Size (in KB) | Compressed File Size (in KB) |
| :--- | :---: | :---: |
| JSON | 920 | 28 |
| CBOR | 550 | 28 |
The example used to calculate the above data is available in [Examples](examples).
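The compressed figures above were produced with compress/zlib on the raw log files. A rough sketch of reproducing that measurement for any log file (file names are illustrative):

```go
package main

import (
	"compress/zlib"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("zlog.cbor.log") // any JSON or CBOR log file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	plain, err := f.Stat()
	if err != nil {
		panic(err)
	}

	out, err := os.Create("zlog.cbor.log.z")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Compress the whole file and compare on-disk sizes.
	zw := zlib.NewWriter(out)
	if _, err := io.Copy(zw, f); err != nil {
		panic(err)
	}
	zw.Close()

	compressed, err := out.Stat()
	if err != nil {
		panic(err)
	}
	fmt.Printf("plain: %d KB, compressed: %d KB\n", plain.Size()/1024, compressed.Size()/1024)
}
```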

View File

@ -1,19 +0,0 @@
package cbor
// JSONMarshalFunc is used to marshal interface to JSON encoded byte slice.
// Making it package level instead of embedded in Encoder brings
// some extra efforts at importing, but avoids value copy when the functions
// of Encoder being invoked.
// DO REMEMBER to set this variable at importing, or
// you might get a nil pointer dereference panic at runtime.
var JSONMarshalFunc func(v interface{}) ([]byte, error)
type Encoder struct{}
// AppendKey adds a key (string) to the binary encoded log message
func (e Encoder) AppendKey(dst []byte, key string) []byte {
if len(dst) < 1 {
dst = e.AppendBeginMarker(dst)
}
return e.AppendString(dst, key)
}
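The warning in that comment matters: the encoder deliberately avoids importing a JSON package, so whichever package selects it must plug a marshaller into JSONMarshalFunc before the first interface value is encoded. A standalone sketch of the pattern, with hypothetical names rather than zlog's actual wiring:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalHook plays the role of cbor.JSONMarshalFunc in this sketch: the
// encoder never imports a JSON package itself; whoever imports the encoder
// plugs a marshaller in once, before the first interface value is logged.
var marshalHook func(v interface{}) ([]byte, error)

// appendInterface is a stand-in for the encoder helpers that rely on the hook.
func appendInterface(dst []byte, v interface{}) []byte {
	b, err := marshalHook(v) // a nil hook would panic here, hence the warning above
	if err != nil {
		return append(dst, `"marshal error"`...)
	}
	return append(dst, b...)
}

func main() {
	marshalHook = json.Marshal // the importer's one-time wiring
	fmt.Println(string(appendInterface(nil, map[string]int{"answer": 42})))
}
```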

View File

@ -1,102 +0,0 @@
// Package cbor provides primitives for storing different data
// in the CBOR (binary) format. CBOR is defined in RFC7049.
package cbor
import "time"
const (
majorOffset = 5
additionalMax = 23
// Non Values.
additionalTypeBoolFalse byte = 20
additionalTypeBoolTrue byte = 21
additionalTypeNull byte = 22
// Integer (+ve and -ve) Sub-types.
additionalTypeIntUint8 byte = 24
additionalTypeIntUint16 byte = 25
additionalTypeIntUint32 byte = 26
additionalTypeIntUint64 byte = 27
// Float Sub-types.
additionalTypeFloat16 byte = 25
additionalTypeFloat32 byte = 26
additionalTypeFloat64 byte = 27
additionalTypeBreak byte = 31
// Tag Sub-types.
additionalTypeTimestamp byte = 01
additionalTypeEmbeddedCBOR byte = 63
// Extended Tags - from https://www.iana.org/assignments/cbor-tags/cbor-tags.xhtml
additionalTypeTagNetworkAddr uint16 = 260
additionalTypeTagNetworkPrefix uint16 = 261
additionalTypeEmbeddedJSON uint16 = 262
additionalTypeTagHexString uint16 = 263
// Unspecified number of elements.
additionalTypeInfiniteCount byte = 31
)
const (
majorTypeUnsignedInt byte = iota << majorOffset // Major type 0
majorTypeNegativeInt // Major type 1
majorTypeByteString // Major type 2
majorTypeUtf8String // Major type 3
majorTypeArray // Major type 4
majorTypeMap // Major type 5
majorTypeTags // Major type 6
majorTypeSimpleAndFloat // Major type 7
)
const (
maskOutAdditionalType byte = (7 << majorOffset)
maskOutMajorType byte = 31
)
const (
float32Nan = "\xfa\x7f\xc0\x00\x00"
float32PosInfinity = "\xfa\x7f\x80\x00\x00"
float32NegInfinity = "\xfa\xff\x80\x00\x00"
float64Nan = "\xfb\x7f\xf8\x00\x00\x00\x00\x00\x00"
float64PosInfinity = "\xfb\x7f\xf0\x00\x00\x00\x00\x00\x00"
float64NegInfinity = "\xfb\xff\xf0\x00\x00\x00\x00\x00\x00"
)
// IntegerTimeFieldFormat indicates the format of timestamp decoded
// from an integer (time in seconds).
var IntegerTimeFieldFormat = time.RFC3339
// NanoTimeFieldFormat indicates the format of timestamp decoded
// from a float value (time in seconds and nanoseconds).
var NanoTimeFieldFormat = time.RFC3339Nano
func appendCborTypePrefix(dst []byte, major byte, number uint64) []byte {
byteCount := 8
var minor byte
switch {
case number < 256:
byteCount = 1
minor = additionalTypeIntUint8
case number < 65536:
byteCount = 2
minor = additionalTypeIntUint16
case number < 4294967296:
byteCount = 4
minor = additionalTypeIntUint32
default:
byteCount = 8
minor = additionalTypeIntUint64
}
dst = append(dst, major|minor)
byteCount--
for ; byteCount >= 0; byteCount-- {
dst = append(dst, byte(number>>(uint(byteCount)*8)))
}
return dst
}
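The helper picks the smallest integer width that can hold the value, then appends it big-endian after the combined major/minor byte (lengths up to additionalMax are normally encoded inline by the callers and never reach it). A standalone sketch mirroring the same logic, for illustration only:

```go
package main

import "fmt"

// Mirrors of the constants and helper shown above, for illustration only.
const (
	majorOffset             = 5
	majorTypeUtf8String     = 3 << majorOffset // CBOR major type 3
	additionalTypeIntUint8  = 24
	additionalTypeIntUint16 = 25
	additionalTypeIntUint32 = 26
	additionalTypeIntUint64 = 27
)

func appendCborTypePrefix(dst []byte, major byte, number uint64) []byte {
	byteCount := 8
	var minor byte
	switch {
	case number < 256:
		byteCount, minor = 1, additionalTypeIntUint8
	case number < 65536:
		byteCount, minor = 2, additionalTypeIntUint16
	case number < 4294967296:
		byteCount, minor = 4, additionalTypeIntUint32
	default:
		byteCount, minor = 8, additionalTypeIntUint64
	}
	dst = append(dst, major|minor)
	for byteCount--; byteCount >= 0; byteCount-- {
		dst = append(dst, byte(number>>(uint(byteCount)*8)))
	}
	return dst
}

func main() {
	// A 300-byte UTF-8 string header: 0x79 (major 3, uint16 length), then 0x01 0x2c.
	fmt.Printf("% x\n", appendCborTypePrefix(nil, majorTypeUtf8String, 300))
	// A 90000-byte string needs a uint32 length: 0x7a 00 01 5f 90.
	fmt.Printf("% x\n", appendCborTypePrefix(nil, majorTypeUtf8String, 90000))
}
```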

View File

@ -1,654 +0,0 @@
package cbor
// This file contains code to decode a stream of CBOR Data into JSON.
import (
"bufio"
"bytes"
"encoding/base64"
"fmt"
"io"
"math"
"net"
"runtime"
"strconv"
"strings"
"time"
"unicode/utf8"
)
var decodeTimeZone *time.Location
const hexTable = "0123456789abcdef"
const isFloat32 = 4
const isFloat64 = 8
func readNBytes(src *bufio.Reader, n int) []byte {
ret := make([]byte, n)
for i := 0; i < n; i++ {
ch, e := src.ReadByte()
if e != nil {
panic(fmt.Errorf("Tried to Read %d Bytes.. But hit end of file", n))
}
ret[i] = ch
}
return ret
}
func readByte(src *bufio.Reader) byte {
b, e := src.ReadByte()
if e != nil {
panic(fmt.Errorf("Tried to Read 1 Byte.. But hit end of file"))
}
return b
}
func decodeIntAdditionalType(src *bufio.Reader, minor byte) int64 {
val := int64(0)
if minor <= 23 {
val = int64(minor)
} else {
bytesToRead := 0
switch minor {
case additionalTypeIntUint8:
bytesToRead = 1
case additionalTypeIntUint16:
bytesToRead = 2
case additionalTypeIntUint32:
bytesToRead = 4
case additionalTypeIntUint64:
bytesToRead = 8
default:
panic(fmt.Errorf("Invalid Additional Type: %d in decodeInteger (expected <28)", minor))
}
pb := readNBytes(src, bytesToRead)
for i := 0; i < bytesToRead; i++ {
val = val * 256
val += int64(pb[i])
}
}
return val
}
func decodeInteger(src *bufio.Reader) int64 {
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeUnsignedInt && major != majorTypeNegativeInt {
panic(fmt.Errorf("Major type is: %d in decodeInteger!! (expected 0 or 1)", major))
}
val := decodeIntAdditionalType(src, minor)
if major == 0 {
return val
}
return (-1 - val)
}
func decodeFloat(src *bufio.Reader) (float64, int) {
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeSimpleAndFloat {
panic(fmt.Errorf("Incorrect Major type is: %d in decodeFloat", major))
}
switch minor {
case additionalTypeFloat16:
panic(fmt.Errorf("float16 is not suppported in decodeFloat"))
case additionalTypeFloat32:
pb := readNBytes(src, 4)
switch string(pb) {
case float32Nan:
return math.NaN(), isFloat32
case float32PosInfinity:
return math.Inf(0), isFloat32
case float32NegInfinity:
return math.Inf(-1), isFloat32
}
n := uint32(0)
for i := 0; i < 4; i++ {
n = n * 256
n += uint32(pb[i])
}
val := math.Float32frombits(n)
return float64(val), isFloat32
case additionalTypeFloat64:
pb := readNBytes(src, 8)
switch string(pb) {
case float64Nan:
return math.NaN(), isFloat64
case float64PosInfinity:
return math.Inf(0), isFloat64
case float64NegInfinity:
return math.Inf(-1), isFloat64
}
n := uint64(0)
for i := 0; i < 8; i++ {
n = n * 256
n += uint64(pb[i])
}
val := math.Float64frombits(n)
return val, isFloat64
}
panic(fmt.Errorf("Invalid Additional Type: %d in decodeFloat", minor))
}
func decodeStringComplex(dst []byte, s string, pos uint) []byte {
i := int(pos)
start := 0
for i < len(s) {
b := s[i]
if b >= utf8.RuneSelf {
r, size := utf8.DecodeRuneInString(s[i:])
if r == utf8.RuneError && size == 1 {
// In case of error, first append previous simple characters to
// the byte slice if any and append a replacement character code
// in place of the invalid sequence.
if start < i {
dst = append(dst, s[start:i]...)
}
dst = append(dst, `\ufffd`...)
i += size
start = i
continue
}
i += size
continue
}
if b >= 0x20 && b <= 0x7e && b != '\\' && b != '"' {
i++
continue
}
// We encountered a character that needs to be encoded.
// Let's append the previous simple characters to the byte slice
// and switch our operation to read and encode the remainder
// characters byte-by-byte.
if start < i {
dst = append(dst, s[start:i]...)
}
switch b {
case '"', '\\':
dst = append(dst, '\\', b)
case '\b':
dst = append(dst, '\\', 'b')
case '\f':
dst = append(dst, '\\', 'f')
case '\n':
dst = append(dst, '\\', 'n')
case '\r':
dst = append(dst, '\\', 'r')
case '\t':
dst = append(dst, '\\', 't')
default:
dst = append(dst, '\\', 'u', '0', '0', hexTable[b>>4], hexTable[b&0xF])
}
i++
start = i
}
if start < len(s) {
dst = append(dst, s[start:]...)
}
return dst
}
func decodeString(src *bufio.Reader, noQuotes bool) []byte {
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeByteString {
panic(fmt.Errorf("Major type is: %d in decodeString", major))
}
result := []byte{}
if !noQuotes {
result = append(result, '"')
}
length := decodeIntAdditionalType(src, minor)
len := int(length)
pbs := readNBytes(src, len)
result = append(result, pbs...)
if noQuotes {
return result
}
return append(result, '"')
}
func decodeStringToDataUrl(src *bufio.Reader, mimeType string) []byte {
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeByteString {
panic(fmt.Errorf("Major type is: %d in decodeString", major))
}
length := decodeIntAdditionalType(src, minor)
l := int(length)
enc := base64.StdEncoding
lEnc := enc.EncodedLen(l)
result := make([]byte, len("\"data:;base64,\"")+len(mimeType)+lEnc)
dest := result
u := copy(dest, "\"data:")
dest = dest[u:]
u = copy(dest, mimeType)
dest = dest[u:]
u = copy(dest, ";base64,")
dest = dest[u:]
pbs := readNBytes(src, l)
enc.Encode(dest, pbs)
dest = dest[lEnc:]
dest[0] = '"'
return result
}
func decodeUTF8String(src *bufio.Reader) []byte {
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeUtf8String {
panic(fmt.Errorf("Major type is: %d in decodeUTF8String", major))
}
result := []byte{'"'}
length := decodeIntAdditionalType(src, minor)
len := int(length)
pbs := readNBytes(src, len)
for i := 0; i < len; i++ {
// Check if the character needs encoding. Control characters, slashes,
// and the double quote need json encoding. Bytes above the ascii
// boundary needs utf8 encoding.
if pbs[i] < 0x20 || pbs[i] > 0x7e || pbs[i] == '\\' || pbs[i] == '"' {
// We encountered a character that needs to be encoded. Switch
// to complex version of the algorithm.
dst := []byte{'"'}
dst = decodeStringComplex(dst, string(pbs), uint(i))
return append(dst, '"')
}
}
// The string has no need for encoding and therefore is directly
// appended to the byte slice.
result = append(result, pbs...)
return append(result, '"')
}
func array2Json(src *bufio.Reader, dst io.Writer) {
dst.Write([]byte{'['})
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeArray {
panic(fmt.Errorf("Major type is: %d in array2Json", major))
}
len := 0
unSpecifiedCount := false
if minor == additionalTypeInfiniteCount {
unSpecifiedCount = true
} else {
length := decodeIntAdditionalType(src, minor)
len = int(length)
}
for i := 0; unSpecifiedCount || i < len; i++ {
if unSpecifiedCount {
pb, e := src.Peek(1)
if e != nil {
panic(e)
}
if pb[0] == majorTypeSimpleAndFloat|additionalTypeBreak {
readByte(src)
break
}
}
cbor2JsonOneObject(src, dst)
if unSpecifiedCount {
pb, e := src.Peek(1)
if e != nil {
panic(e)
}
if pb[0] == majorTypeSimpleAndFloat|additionalTypeBreak {
readByte(src)
break
}
dst.Write([]byte{','})
} else if i+1 < len {
dst.Write([]byte{','})
}
}
dst.Write([]byte{']'})
}
func map2Json(src *bufio.Reader, dst io.Writer) {
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeMap {
panic(fmt.Errorf("Major type is: %d in map2Json", major))
}
len := 0
unSpecifiedCount := false
if minor == additionalTypeInfiniteCount {
unSpecifiedCount = true
} else {
length := decodeIntAdditionalType(src, minor)
len = int(length)
}
dst.Write([]byte{'{'})
for i := 0; unSpecifiedCount || i < len; i++ {
if unSpecifiedCount {
pb, e := src.Peek(1)
if e != nil {
panic(e)
}
if pb[0] == majorTypeSimpleAndFloat|additionalTypeBreak {
readByte(src)
break
}
}
cbor2JsonOneObject(src, dst)
if i%2 == 0 {
// Even position values are keys.
dst.Write([]byte{':'})
} else {
if unSpecifiedCount {
pb, e := src.Peek(1)
if e != nil {
panic(e)
}
if pb[0] == majorTypeSimpleAndFloat|additionalTypeBreak {
readByte(src)
break
}
dst.Write([]byte{','})
} else if i+1 < len {
dst.Write([]byte{','})
}
}
}
dst.Write([]byte{'}'})
}
func decodeTagData(src *bufio.Reader) []byte {
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeTags {
panic(fmt.Errorf("Major type is: %d in decodeTagData", major))
}
switch minor {
case additionalTypeTimestamp:
return decodeTimeStamp(src)
case additionalTypeIntUint8:
val := decodeIntAdditionalType(src, minor)
switch byte(val) {
case additionalTypeEmbeddedCBOR:
pb := readByte(src)
dataMajor := pb & maskOutAdditionalType
if dataMajor != majorTypeByteString {
panic(fmt.Errorf("Unsupported embedded Type: %d in decodeEmbeddedCBOR", dataMajor))
}
src.UnreadByte()
return decodeStringToDataUrl(src, "application/cbor")
default:
panic(fmt.Errorf("Unsupported Additional Tag Type: %d in decodeTagData", val))
}
// Tag value is larger than 256 (so uint16).
case additionalTypeIntUint16:
val := decodeIntAdditionalType(src, minor)
switch uint16(val) {
case additionalTypeEmbeddedJSON:
pb := readByte(src)
dataMajor := pb & maskOutAdditionalType
if dataMajor != majorTypeByteString {
panic(fmt.Errorf("Unsupported embedded Type: %d in decodeEmbeddedJSON", dataMajor))
}
src.UnreadByte()
return decodeString(src, true)
case additionalTypeTagNetworkAddr:
octets := decodeString(src, true)
ss := []byte{'"'}
switch len(octets) {
case 6: // MAC address.
ha := net.HardwareAddr(octets)
ss = append(append(ss, ha.String()...), '"')
case 4: // IPv4 address.
fallthrough
case 16: // IPv6 address.
ip := net.IP(octets)
ss = append(append(ss, ip.String()...), '"')
default:
panic(fmt.Errorf("Unexpected Network Address length: %d (expected 4,6,16)", len(octets)))
}
return ss
case additionalTypeTagNetworkPrefix:
pb := readByte(src)
if pb != majorTypeMap|0x1 {
panic(fmt.Errorf("IP Prefix is NOT of MAP of 1 elements as expected"))
}
octets := decodeString(src, true)
val := decodeInteger(src)
ip := net.IP(octets)
var mask net.IPMask
pfxLen := int(val)
if len(octets) == 4 {
mask = net.CIDRMask(pfxLen, 32)
} else {
mask = net.CIDRMask(pfxLen, 128)
}
ipPfx := net.IPNet{IP: ip, Mask: mask}
ss := []byte{'"'}
ss = append(append(ss, ipPfx.String()...), '"')
return ss
case additionalTypeTagHexString:
octets := decodeString(src, true)
ss := []byte{'"'}
for _, v := range octets {
ss = append(ss, hexTable[v>>4], hexTable[v&0x0f])
}
return append(ss, '"')
default:
panic(fmt.Errorf("Unsupported Additional Tag Type: %d in decodeTagData", val))
}
}
panic(fmt.Errorf("Unsupported Additional Type: %d in decodeTagData", minor))
}
func decodeTimeStamp(src *bufio.Reader) []byte {
pb := readByte(src)
src.UnreadByte()
tsMajor := pb & maskOutAdditionalType
if tsMajor == majorTypeUnsignedInt || tsMajor == majorTypeNegativeInt {
n := decodeInteger(src)
t := time.Unix(n, 0)
if decodeTimeZone != nil {
t = t.In(decodeTimeZone)
} else {
t = t.In(time.UTC)
}
tsb := []byte{}
tsb = append(tsb, '"')
tsb = t.AppendFormat(tsb, IntegerTimeFieldFormat)
tsb = append(tsb, '"')
return tsb
} else if tsMajor == majorTypeSimpleAndFloat {
n, _ := decodeFloat(src)
secs := int64(n)
n -= float64(secs)
n *= float64(1e9)
t := time.Unix(secs, int64(n))
if decodeTimeZone != nil {
t = t.In(decodeTimeZone)
} else {
t = t.In(time.UTC)
}
tsb := []byte{}
tsb = append(tsb, '"')
tsb = t.AppendFormat(tsb, NanoTimeFieldFormat)
tsb = append(tsb, '"')
return tsb
}
panic(fmt.Errorf("TS format is neither int nor float: %d", tsMajor))
}
func decodeSimpleFloat(src *bufio.Reader) []byte {
pb := readByte(src)
major := pb & maskOutAdditionalType
minor := pb & maskOutMajorType
if major != majorTypeSimpleAndFloat {
panic(fmt.Errorf("Major type is: %d in decodeSimpleFloat", major))
}
switch minor {
case additionalTypeBoolTrue:
return []byte("true")
case additionalTypeBoolFalse:
return []byte("false")
case additionalTypeNull:
return []byte("null")
case additionalTypeFloat16:
fallthrough
case additionalTypeFloat32:
fallthrough
case additionalTypeFloat64:
src.UnreadByte()
v, bc := decodeFloat(src)
ba := []byte{}
switch {
case math.IsNaN(v):
return []byte("\"NaN\"")
case math.IsInf(v, 1):
return []byte("\"+Inf\"")
case math.IsInf(v, -1):
return []byte("\"-Inf\"")
}
if bc == isFloat32 {
ba = strconv.AppendFloat(ba, v, 'f', -1, 32)
} else if bc == isFloat64 {
ba = strconv.AppendFloat(ba, v, 'f', -1, 64)
} else {
panic(fmt.Errorf("Invalid Float precision from decodeFloat: %d", bc))
}
return ba
default:
panic(fmt.Errorf("Invalid Additional Type: %d in decodeSimpleFloat", minor))
}
}
func cbor2JsonOneObject(src *bufio.Reader, dst io.Writer) {
pb, e := src.Peek(1)
if e != nil {
panic(e)
}
major := (pb[0] & maskOutAdditionalType)
switch major {
case majorTypeUnsignedInt:
fallthrough
case majorTypeNegativeInt:
n := decodeInteger(src)
dst.Write([]byte(strconv.Itoa(int(n))))
case majorTypeByteString:
s := decodeString(src, false)
dst.Write(s)
case majorTypeUtf8String:
s := decodeUTF8String(src)
dst.Write(s)
case majorTypeArray:
array2Json(src, dst)
case majorTypeMap:
map2Json(src, dst)
case majorTypeTags:
s := decodeTagData(src)
dst.Write(s)
case majorTypeSimpleAndFloat:
s := decodeSimpleFloat(src)
dst.Write(s)
}
}
func moreBytesToRead(src *bufio.Reader) bool {
_, e := src.ReadByte()
if e == nil {
src.UnreadByte()
return true
}
return false
}
// Cbor2JsonManyObjects decodes all the CBOR Objects read from the src
// reader. It keeps decoding until the reader returns EOF (an error when reading).
// The decoded output is written to dst, and a newline is written after
// every CBOR Object.
//
// Returns any error that was encountered during decode.
// The helper functions panic when an error is encountered; this function
// recovers non-runtime errors and returns the reason as an error.
func Cbor2JsonManyObjects(src io.Reader, dst io.Writer) (err error) {
defer func() {
if r := recover(); r != nil {
if _, ok := r.(runtime.Error); ok {
panic(r)
}
err = r.(error)
}
}()
bufRdr := bufio.NewReader(src)
for moreBytesToRead(bufRdr) {
cbor2JsonOneObject(bufRdr, dst)
dst.Write([]byte("\n"))
}
return nil
}
// binaryFmt detects whether the bytes to be printed are in binary format.
func binaryFmt(p []byte) bool {
if len(p) > 0 && p[0] > 0x7F {
return true
}
return false
}
func getReader(str string) *bufio.Reader {
return bufio.NewReader(strings.NewReader(str))
}
// DecodeIfBinaryToString converts a binary-formatted log message to a
// JSON-formatted string log message, suitable for printing to Console/Syslog.
func DecodeIfBinaryToString(in []byte) string {
if binaryFmt(in) {
var b bytes.Buffer
Cbor2JsonManyObjects(strings.NewReader(string(in)), &b)
return b.String()
}
return string(in)
}
// DecodeObjectToStr checks if the input is in binary format; if so,
// it decodes a single Object and returns the decoded string.
func DecodeObjectToStr(in []byte) string {
if binaryFmt(in) {
var b bytes.Buffer
cbor2JsonOneObject(getReader(string(in)), &b)
return b.String()
}
return string(in)
}
// DecodeIfBinaryToBytes checks if the input is in binary format; if so,
// it decodes all Objects and returns the decoded output as a byte array.
func DecodeIfBinaryToBytes(in []byte) []byte {
if binaryFmt(in) {
var b bytes.Buffer
Cbor2JsonManyObjects(bytes.NewReader(in), &b)
return b.Bytes()
}
return in
}
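
To make the round trip concrete, here is a small illustrative Example function (not part of the original tree) that feeds one of the composite vectors from the test file below through Cbor2JsonManyObjects; the input bytes and the expected JSON are taken verbatim from compositeCborTestCases.

```go
package cbor

import (
	"bytes"
	"fmt"
	"strings"
)

// ExampleCbor2JsonManyObjects decodes a hand-built indefinite-length CBOR map
// (an int field plus an array field) into line-delimited JSON.
func ExampleCbor2JsonManyObjects() {
	in := "\xbf\x64IETF\x20\x65Array\x9f\x20\x00\x18\xc8\x14\xff\xff"
	var out bytes.Buffer
	if err := Cbor2JsonManyObjects(strings.NewReader(in), &out); err != nil {
		panic(err)
	}
	fmt.Print(out.String())
	// Output: {"IETF":-1,"Array":[-1,0,200,20]}
}
```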

View File

@ -1,205 +0,0 @@
package cbor
import (
"bytes"
"encoding/hex"
"testing"
"time"
)
func TestDecodeInteger(t *testing.T) {
for _, tc := range integerTestCases {
gotv := decodeInteger(getReader(tc.binary))
if gotv != int64(tc.val) {
t.Errorf("decodeInteger(0x%s)=0x%d, want: 0x%d",
hex.EncodeToString([]byte(tc.binary)), gotv, tc.val)
}
}
}
func TestDecodeString(t *testing.T) {
for _, tt := range encodeStringTests {
got := decodeUTF8String(getReader(tt.binary))
if string(got) != "\""+tt.json+"\"" {
t.Errorf("DecodeString(0x%s)=%s, want:\"%s\"\n", hex.EncodeToString([]byte(tt.binary)), string(got),
hex.EncodeToString([]byte(tt.json)))
}
}
}
func TestDecodeArray(t *testing.T) {
for _, tc := range integerArrayTestCases {
buf := bytes.NewBuffer([]byte{})
array2Json(getReader(tc.binary), buf)
if buf.String() != tc.json {
t.Errorf("array2Json(0x%s)=%s, want: %s", hex.EncodeToString([]byte(tc.binary)), buf.String(), tc.json)
}
}
//Unspecified Length Array
var infiniteArrayTestCases = []struct {
in string
out string
}{
{"\x9f\x20\x00\x18\xc8\x14\xff", "[-1,0,200,20]"},
{"\x9f\x38\xc7\x29\x18\xc8\x19\x01\x90\xff", "[-200,-10,200,400]"},
{"\x9f\x01\x02\x03\xff", "[1,2,3]"},
{"\x9f\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x18\x18\x19\xff",
"[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]"},
}
for _, tc := range infiniteArrayTestCases {
buf := bytes.NewBuffer([]byte{})
array2Json(getReader(tc.in), buf)
if buf.String() != tc.out {
t.Errorf("array2Json(0x%s)=%s, want: %s", hex.EncodeToString([]byte(tc.out)), buf.String(), tc.out)
}
}
for _, tc := range booleanArrayTestCases {
buf := bytes.NewBuffer([]byte{})
array2Json(getReader(tc.binary), buf)
if buf.String() != tc.json {
t.Errorf("array2Json(0x%s)=%s, want: %s", hex.EncodeToString([]byte(tc.binary)), buf.String(), tc.json)
}
}
//TODO add cases for arrays of other types
}
var infiniteMapDecodeTestCases = []struct {
bin []byte
json string
}{
{[]byte("\xbf\x64IETF\x20\xff"), "{\"IETF\":-1}"},
{[]byte("\xbf\x65Array\x84\x20\x00\x18\xc8\x14\xff"), "{\"Array\":[-1,0,200,20]}"},
}
var mapDecodeTestCases = []struct {
bin []byte
json string
}{
{[]byte("\xa2\x64IETF\x20"), "{\"IETF\":-1}"},
{[]byte("\xa2\x65Array\x84\x20\x00\x18\xc8\x14"), "{\"Array\":[-1,0,200,20]}"},
}
func TestDecodeMap(t *testing.T) {
for _, tc := range mapDecodeTestCases {
buf := bytes.NewBuffer([]byte{})
map2Json(getReader(string(tc.bin)), buf)
if buf.String() != tc.json {
t.Errorf("map2Json(0x%s)=%s, want: %s", hex.EncodeToString(tc.bin), buf.String(), tc.json)
}
}
for _, tc := range infiniteMapDecodeTestCases {
buf := bytes.NewBuffer([]byte{})
map2Json(getReader(string(tc.bin)), buf)
if buf.String() != tc.json {
t.Errorf("map2Json(0x%s)=%s, want: %s", hex.EncodeToString(tc.bin), buf.String(), tc.json)
}
}
}
func TestDecodeBool(t *testing.T) {
for _, tc := range booleanTestCases {
got := decodeSimpleFloat(getReader(tc.binary))
if string(got) != tc.json {
t.Errorf("decodeSimpleFloat(0x%s)=%s, want:%s", hex.EncodeToString([]byte(tc.binary)), string(got), tc.json)
}
}
}
func TestDecodeFloat(t *testing.T) {
for _, tc := range float32TestCases {
got, _ := decodeFloat(getReader(tc.binary))
if got != float64(tc.val) {
t.Errorf("decodeFloat(0x%s)=%f, want:%f", hex.EncodeToString([]byte(tc.binary)), got, tc.val)
}
}
}
func TestDecodeTimestamp(t *testing.T) {
decodeTimeZone, _ = time.LoadLocation("UTC")
for _, tc := range timeIntegerTestcases {
tm := decodeTagData(getReader(tc.binary))
if string(tm) != "\""+tc.rfcStr+"\"" {
t.Errorf("decodeFloat(0x%s)=%s, want:%s", hex.EncodeToString([]byte(tc.binary)), tm, tc.rfcStr)
}
}
for _, tc := range timeFloatTestcases {
tm := decodeTagData(getReader(tc.out))
//Since we convert to float and back, the value may be slightly off, so
//we cannot check for exact equality. Instead, we check that the two times
//are within a microsecond of each other (not yet down to nanoseconds).
got, _ := time.Parse(string(tm), string(tm))
want, _ := time.Parse(tc.rfcStr, tc.rfcStr)
if got.Sub(want) > time.Microsecond {
t.Errorf("decodeFloat(0x%s)=%s, want:%s", hex.EncodeToString([]byte(tc.out)), tm, tc.rfcStr)
}
}
}
func TestDecodeNetworkAddr(t *testing.T) {
for _, tc := range ipAddrTestCases {
d1 := decodeTagData(getReader(tc.binary))
if string(d1) != tc.text {
t.Errorf("decodeNetworkAddr(0x%s)=%s, want:%s", hex.EncodeToString([]byte(tc.binary)), d1, tc.text)
}
}
}
func TestDecodeMACAddr(t *testing.T) {
for _, tc := range macAddrTestCases {
d1 := decodeTagData(getReader(tc.binary))
if string(d1) != tc.text {
t.Errorf("decodeNetworkAddr(0x%s)=%s, want:%s", hex.EncodeToString([]byte(tc.binary)), d1, tc.text)
}
}
}
func TestDecodeIPPrefix(t *testing.T) {
for _, tc := range IPPrefixTestCases {
d1 := decodeTagData(getReader(tc.binary))
if string(d1) != tc.text {
t.Errorf("decodeIPPrefix(0x%s)=%s, want:%s", hex.EncodeToString([]byte(tc.binary)), d1, tc.text)
}
}
}
var compositeCborTestCases = []struct {
binary []byte
json string
}{
{[]byte("\xbf\x64IETF\x20\x65Array\x9f\x20\x00\x18\xc8\x14\xff\xff"), "{\"IETF\":-1,\"Array\":[-1,0,200,20]}\n"},
{[]byte("\xbf\x64IETF\x64YES!\x65Array\x9f\x20\x00\x18\xc8\x14\xff\xff"), "{\"IETF\":\"YES!\",\"Array\":[-1,0,200,20]}\n"},
}
func TestDecodeCbor2Json(t *testing.T) {
for _, tc := range compositeCborTestCases {
buf := bytes.NewBuffer([]byte{})
err := Cbor2JsonManyObjects(getReader(string(tc.binary)), buf)
if buf.String() != tc.json || err != nil {
t.Errorf("cbor2JsonManyObjects(0x%s)=%s, want: %s, err:%s", hex.EncodeToString(tc.binary), buf.String(), tc.json, err.Error())
}
}
}
var negativeCborTestCases = []struct {
binary []byte
errStr string
}{
{[]byte("\xb9\x64IETF\x20\x65Array\x9f\x20\x00\x18\xc8\x14"), "Tried to Read 18 Bytes.. But hit end of file"},
{[]byte("\xbf\x64IETF\x20\x65Array\x9f\x20\x00\x18\xc8\x14"), "EOF"},
{[]byte("\xbf\x14IETF\x20\x65Array\x9f\x20\x00\x18\xc8\x14"), "Tried to Read 40736 Bytes.. But hit end of file"},
{[]byte("\xbf\x64IETF"), "EOF"},
{[]byte("\xbf\x64IETF\x20\x65Array\x9f\x20\x00\x18\xc8\xff\xff\xff"), "Invalid Additional Type: 31 in decodeSimpleFloat"},
{[]byte("\xbf\x64IETF\x20\x65Array"), "EOF"},
{[]byte("\xbf\x64"), "Tried to Read 4 Bytes.. But hit end of file"},
}
func TestDecodeNegativeCbor2Json(t *testing.T) {
for _, tc := range negativeCborTestCases {
buf := bytes.NewBuffer([]byte{})
err := Cbor2JsonManyObjects(getReader(string(tc.binary)), buf)
if err == nil || err.Error() != tc.errStr {
t.Errorf("Expected error got:%s, want:%s", err, tc.errStr)
}
}
}

View File

@ -1,55 +0,0 @@
package main
import (
"compress/zlib"
"flag"
"io"
"log"
"os"
"time"
"tuxpa.in/a/zlog"
)
func writeLog(fname string, count int, useCompress bool) {
opFile := os.Stdout
if fname != "<stdout>" {
fil, _ := os.Create(fname)
opFile = fil
defer func() {
if err := fil.Close(); err != nil {
log.Fatal(err)
}
}()
}
var f io.WriteCloser = opFile
if useCompress {
f = zlib.NewWriter(f)
defer func() {
if err := f.Close(); err != nil {
log.Fatal(err)
}
}()
}
zlog.TimestampFunc = func() time.Time { return time.Now().Round(time.Second) }
log := zlog.New(f).With().
Timestamp().
Logger()
for i := 0; i < count; i++ {
log.Error().
Int("Fault", 41650+i).Msg("Some Message")
}
}
func main() {
outFile := flag.String("out", "<stdout>", "Output File to which logs will be written to (WILL overwrite if already present).")
numLogs := flag.Int("num", 10, "Number of log messages to generate.")
doCompress := flag.Bool("compress", false, "Enable inline compressed writer")
flag.Parse()
writeLog(*outFile, *numLogs, *doCompress)
}

View File

@ -1,10 +0,0 @@
all: genLogJSON genLogCBOR
genLogJSON: genLog.go
go build -o genLogJSON genLog.go
genLogCBOR: genLog.go
go build -tags binary_log -o genLogCBOR genLog.go
clean:
rm -f genLogJSON genLogCBOR

View File

@ -1,117 +0,0 @@
package cbor
import "fmt"
// AppendStrings encodes and adds an array of strings to the dst byte array.
func (e Encoder) AppendStrings(dst []byte, vals []string) []byte {
major := majorTypeArray
l := len(vals)
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendString(dst, v)
}
return dst
}
// AppendString encodes and adds a string to the dst byte array.
func (Encoder) AppendString(dst []byte, s string) []byte {
major := majorTypeUtf8String
l := len(s)
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, majorTypeUtf8String, uint64(l))
}
return append(dst, s...)
}
// AppendStringers encodes and adds an array of Stringer values
// to the dst byte array.
func (e Encoder) AppendStringers(dst []byte, vals []fmt.Stringer) []byte {
if len(vals) == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
dst = e.AppendArrayStart(dst)
dst = e.AppendStringer(dst, vals[0])
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = e.AppendStringer(dst, val)
}
}
return e.AppendArrayEnd(dst)
}
// AppendStringer encodes and adds the Stringer value to the dst
// byte array.
func (e Encoder) AppendStringer(dst []byte, val fmt.Stringer) []byte {
if val == nil {
return e.AppendNil(dst)
}
return e.AppendString(dst, val.String())
}
// AppendBytes encodes and adds an array of bytes to the dst byte array.
func (Encoder) AppendBytes(dst, s []byte) []byte {
major := majorTypeByteString
l := len(s)
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
return append(dst, s...)
}
// AppendEmbeddedJSON adds a tag and embeds input JSON as such.
func AppendEmbeddedJSON(dst, s []byte) []byte {
major := majorTypeTags
minor := additionalTypeEmbeddedJSON
// Append the TAG to indicate this is Embedded JSON.
dst = append(dst, major|additionalTypeIntUint16)
dst = append(dst, byte(minor>>8))
dst = append(dst, byte(minor&0xff))
// Append the JSON Object as Byte String.
major = majorTypeByteString
l := len(s)
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
return append(dst, s...)
}
// AppendEmbeddedCBOR adds a tag and embeds input CBOR as such.
func AppendEmbeddedCBOR(dst, s []byte) []byte {
major := majorTypeTags
minor := additionalTypeEmbeddedCBOR
// Append the TAG to indicate this is Embedded CBOR.
dst = append(dst, major|additionalTypeIntUint8)
dst = append(dst, minor)
// Append the CBOR Object as Byte String.
major = majorTypeByteString
l := len(s)
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
return append(dst, s...)
}
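
As a sanity check on the tagging scheme above, the following illustrative snippet (not in the original tree, and relying only on the exported helpers shown elsewhere in this compare view) embeds a JSON payload and decodes it straight back; the tag number itself never has to be spelled out because encoder and decoder share the same constant.

```go
package cbor

import "fmt"

func ExampleAppendEmbeddedJSON() {
	payload := []byte(`{"status":"ok","count":3}`)
	// AppendEmbeddedJSON writes the uint16 tag followed by the payload
	// as a CBOR byte string.
	b := AppendEmbeddedJSON([]byte{}, payload)
	// DecodeObjectToStr recognises the tag and returns the embedded
	// JSON unchanged.
	fmt.Println(DecodeObjectToStr(b))
	// Output: {"status":"ok","count":3}
}
```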

View File

@ -1,118 +0,0 @@
package cbor
import (
"bytes"
"testing"
)
var encodeStringTests = []struct {
plain string
binary string
json string //begin and end quotes are implied
}{
{"", "\x60", ""},
{"\\", "\x61\x5c", "\\\\"},
{"\x00", "\x61\x00", "\\u0000"},
{"\x01", "\x61\x01", "\\u0001"},
{"\x02", "\x61\x02", "\\u0002"},
{"\x03", "\x61\x03", "\\u0003"},
{"\x04", "\x61\x04", "\\u0004"},
{"*", "\x61*", "*"},
{"a", "\x61a", "a"},
{"IETF", "\x64IETF", "IETF"},
{"abcdefghijklmnopqrstuvwxyzABCD", "\x78\x1eabcdefghijklmnopqrstuvwxyzABCD", "abcdefghijklmnopqrstuvwxyzABCD"},
{"<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->",
"\x79\x01\x2c<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->",
"<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->"},
{"emoji \u2764\ufe0f!", "\x6demoji ❤️!", "emoji \u2764\ufe0f!"},
}
var encodeByteTests = []struct {
plain []byte
binary string
}{
{[]byte{}, "\x40"},
{[]byte("\\"), "\x41\x5c"},
{[]byte("\x00"), "\x41\x00"},
{[]byte("\x01"), "\x41\x01"},
{[]byte("\x02"), "\x41\x02"},
{[]byte("\x03"), "\x41\x03"},
{[]byte("\x04"), "\x41\x04"},
{[]byte("*"), "\x41*"},
{[]byte("a"), "\x41a"},
{[]byte("IETF"), "\x44IETF"},
{[]byte("abcdefghijklmnopqrstuvwxyzABCD"), "\x58\x1eabcdefghijklmnopqrstuvwxyzABCD"},
{[]byte("<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->"),
"\x59\x01\x2c<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->" +
"<------------------------------------ This is a 100 character string ----------------------------->"},
{[]byte("emoji \u2764\ufe0f!"), "\x4demoji ❤️!"},
}
func TestAppendString(t *testing.T) {
for _, tt := range encodeStringTests {
b := enc.AppendString([]byte{}, tt.plain)
if got, want := string(b), tt.binary; got != want {
t.Errorf("appendString(%q) = %#q, want %#q", tt.plain, got, want)
}
}
//Test a large string > 65535 length
var buffer bytes.Buffer
for i := 0; i < 0x00011170; i++ { //70,000 character string
buffer.WriteString("a")
}
inp := buffer.String()
want := "\x7a\x00\x01\x11\x70" + inp
b := enc.AppendString([]byte{}, inp)
if got := string(b); got != want {
t.Errorf("appendString(%q) = %#q, want %#q", inp, got, want)
}
}
func TestAppendBytes(t *testing.T) {
for _, tt := range encodeByteTests {
b := enc.AppendBytes([]byte{}, tt.plain)
if got, want := string(b), tt.binary; got != want {
t.Errorf("appendString(%q) = %#q, want %#q", tt.plain, got, want)
}
}
//Test a large string > 65535 length
inp := []byte{}
for i := 0; i < 0x00011170; i++ { //70,000 character string
inp = append(inp, byte('a'))
}
want := "\x5a\x00\x01\x11\x70" + string(inp)
b := enc.AppendBytes([]byte{}, inp)
if got := string(b); got != want {
t.Errorf("appendString(%q) = %#q, want %#q", inp, got, want)
}
}
func BenchmarkAppendString(b *testing.B) {
tests := map[string]string{
"NoEncoding": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingFirst": `"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingMiddle": `aaaaaaaaaaaaaaaaaaaaaaaaa"aaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingLast": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"`,
"MultiBytesFirst": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"MultiBytesMiddle": `aaaaaaaaaaaaaaaaaaaaaaaaa❤aaaaaaaaaaaaaaaaaaaaaaaa`,
"MultiBytesLast": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa❤`,
}
for name, str := range tests {
b.Run(name, func(b *testing.B) {
buf := make([]byte, 0, 120)
for i := 0; i < b.N; i++ {
_ = enc.AppendString(buf, str)
}
})
}
}

View File

@ -1,93 +0,0 @@
package cbor
import (
"time"
)
func appendIntegerTimestamp(dst []byte, t time.Time) []byte {
major := majorTypeTags
minor := additionalTypeTimestamp
dst = append(dst, major|minor)
secs := t.Unix()
var val uint64
if secs < 0 {
major = majorTypeNegativeInt
val = uint64(-secs - 1)
} else {
major = majorTypeUnsignedInt
val = uint64(secs)
}
dst = appendCborTypePrefix(dst, major, val)
return dst
}
func (e Encoder) appendFloatTimestamp(dst []byte, t time.Time) []byte {
major := majorTypeTags
minor := additionalTypeTimestamp
dst = append(dst, major|minor)
secs := t.Unix()
nanos := t.Nanosecond()
var val float64
val = float64(secs)*1.0 + float64(nanos)*1e-9
return e.AppendFloat64(dst, val)
}
// AppendTime encodes and adds a timestamp to the dst byte array.
func (e Encoder) AppendTime(dst []byte, t time.Time, unused string) []byte {
utc := t.UTC()
if utc.Nanosecond() == 0 {
return appendIntegerTimestamp(dst, utc)
}
return e.appendFloatTimestamp(dst, utc)
}
// AppendTimes encodes and adds an array of timestamps to the dst byte array.
func (e Encoder) AppendTimes(dst []byte, vals []time.Time, unused string) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, t := range vals {
dst = e.AppendTime(dst, t, unused)
}
return dst
}
// AppendDuration encodes and adds a duration to the dst byte array.
// useInt field indicates whether to store the duration as seconds (integer) or
// as seconds+nanoseconds (float).
func (e Encoder) AppendDuration(dst []byte, d time.Duration, unit time.Duration, useInt bool) []byte {
if useInt {
return e.AppendInt64(dst, int64(d/unit))
}
return e.AppendFloat64(dst, float64(d)/float64(unit))
}
// AppendDurations encodes and adds an array of durations to the dst byte array.
// useInt field indicates whether to store the duration as seconds (integer) or
// as seconds+nanoseconds (float).
func (e Encoder) AppendDurations(dst []byte, vals []time.Duration, unit time.Duration, useInt bool) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, d := range vals {
dst = e.AppendDuration(dst, d, unit, useInt)
}
return dst
}
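
An illustrative example (not part of the original tests) of the whole-second path: it reuses the first vector from timeIntegerTestcases in the test file that follows, so the expected bytes are tag 1 followed by a 4-byte unsigned integer.

```go
package cbor

import (
	"encoding/hex"
	"fmt"
	"time"
)

func ExampleEncoder_AppendTime() {
	// 2013-02-03T19:54:00-08:00 has no sub-second component, so it is
	// encoded as an integer timestamp (0xc1 tag + uint32 seconds).
	t, _ := time.Parse(time.RFC3339, "2013-02-03T19:54:00-08:00")
	b := Encoder{}.AppendTime([]byte{}, t, "unused")
	fmt.Println(hex.EncodeToString(b))
	// Output: c11a510f30d8
}
```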

View File

@ -1,99 +0,0 @@
package cbor
import (
"encoding/hex"
"fmt"
"math"
"testing"
"time"
)
func TestAppendTimeNow(t *testing.T) {
tm := time.Now()
s := enc.AppendTime([]byte{}, tm, "unused")
got := string(s)
tm1 := float64(tm.Unix()) + float64(tm.Nanosecond())*1E-9
tm2 := math.Float64bits(tm1)
var tm3 [8]byte
for i := uint(0); i < 8; i++ {
tm3[i] = byte(tm2 >> ((8 - i - 1) * 8))
}
want := append([]byte{0xc1, 0xfb}, tm3[:]...)
if got != string(want) {
t.Errorf("Appendtime(%s)=0x%s, want: 0x%s",
"time.Now()", hex.EncodeToString(s),
hex.EncodeToString(want))
}
}
var timeIntegerTestcases = []struct {
txt string
binary string
rfcStr string
}{
{"2013-02-03T19:54:00-08:00", "\xc1\x1a\x51\x0f\x30\xd8", "2013-02-04T03:54:00Z"},
{"1950-02-03T19:54:00-08:00", "\xc1\x3a\x25\x71\x93\xa7", "1950-02-04T03:54:00Z"},
}
func TestAppendTimePastPresentInteger(t *testing.T) {
for _, tt := range timeIntegerTestcases {
tin, err := time.Parse(time.RFC3339, tt.txt)
if err != nil {
fmt.Println("Cannot parse input", tt.txt, ".. Skipping!", err)
continue
}
b := enc.AppendTime([]byte{}, tin, "unused")
if got, want := string(b), tt.binary; got != want {
t.Errorf("appendString(%s) = 0x%s, want 0x%s", tt.txt,
hex.EncodeToString(b),
hex.EncodeToString([]byte(want)))
}
}
}
var timeFloatTestcases = []struct {
rfcStr string
out string
}{
{"2006-01-02T15:04:05.999999-08:00", "\xc1\xfb\x41\xd0\xee\x6c\x59\x7f\xff\xfc"},
{"1956-01-02T15:04:05.999999-08:00", "\xc1\xfb\xc1\xba\x53\x81\x1a\x00\x00\x11"},
}
func TestAppendTimePastPresentFloat(t *testing.T) {
const timeFloatFmt = "2006-01-02T15:04:05.999999-07:00"
for _, tt := range timeFloatTestcases {
tin, err := time.Parse(timeFloatFmt, tt.rfcStr)
if err != nil {
fmt.Println("Cannot parse input", tt.rfcStr, ".. Skipping!")
continue
}
b := enc.AppendTime([]byte{}, tin, "unused")
if got, want := string(b), tt.out; got != want {
t.Errorf("appendString(%s) = 0x%s, want 0x%s", tt.rfcStr,
hex.EncodeToString(b),
hex.EncodeToString([]byte(want)))
}
}
}
func BenchmarkAppendTime(b *testing.B) {
tests := map[string]string{
"Integer": "Feb 3, 2013 at 7:54pm (PST)",
"Float": "2006-01-02T15:04:05.999999-08:00",
}
const timeFloatFmt = "2006-01-02T15:04:05.999999-07:00"
for name, str := range tests {
t, err := time.Parse(time.RFC3339, str)
if err != nil {
t, _ = time.Parse(timeFloatFmt, str)
}
b.Run(name, func(b *testing.B) {
buf := make([]byte, 0, 100)
for i := 0; i < b.N; i++ {
_ = enc.AppendTime(buf, t, "unused")
}
})
}
}

View File

@ -1,486 +0,0 @@
package cbor
import (
"fmt"
"math"
"net"
"reflect"
)
// AppendNil inserts a 'Nil' object into the dst byte array.
func (Encoder) AppendNil(dst []byte) []byte {
return append(dst, majorTypeSimpleAndFloat|additionalTypeNull)
}
// AppendBeginMarker inserts a map start into the dst byte array.
func (Encoder) AppendBeginMarker(dst []byte) []byte {
return append(dst, majorTypeMap|additionalTypeInfiniteCount)
}
// AppendEndMarker inserts a map end into the dst byte array.
func (Encoder) AppendEndMarker(dst []byte) []byte {
return append(dst, majorTypeSimpleAndFloat|additionalTypeBreak)
}
// AppendObjectData takes an object in form of a byte array and appends to dst.
func (Encoder) AppendObjectData(dst []byte, o []byte) []byte {
// The input object o starts with its own BeginMarker, which is skipped so
// it is not duplicated when appending to the existing data in dst.
return append(dst, o[1:]...)
}
// AppendArrayStart adds markers to indicate the start of an array.
func (Encoder) AppendArrayStart(dst []byte) []byte {
return append(dst, majorTypeArray|additionalTypeInfiniteCount)
}
// AppendArrayEnd adds markers to indicate the end of an array.
func (Encoder) AppendArrayEnd(dst []byte) []byte {
return append(dst, majorTypeSimpleAndFloat|additionalTypeBreak)
}
// AppendArrayDelim adds markers to indicate end of a particular array element.
func (Encoder) AppendArrayDelim(dst []byte) []byte {
//No delimiters needed in cbor
return dst
}
// AppendLineBreak is a noop that keeps API compatibility with the json encoder.
func (Encoder) AppendLineBreak(dst []byte) []byte {
// No line breaks needed in binary format.
return dst
}
// AppendBool encodes and inserts a boolean value into the dst byte array.
func (Encoder) AppendBool(dst []byte, val bool) []byte {
b := additionalTypeBoolFalse
if val {
b = additionalTypeBoolTrue
}
return append(dst, majorTypeSimpleAndFloat|b)
}
// AppendBools encodes and inserts an array of boolean values into the dst byte array.
func (e Encoder) AppendBools(dst []byte, vals []bool) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendBool(dst, v)
}
return dst
}
// AppendInt encodes and inserts an integer value into the dst byte array.
func (Encoder) AppendInt(dst []byte, val int) []byte {
major := majorTypeUnsignedInt
contentVal := val
if val < 0 {
major = majorTypeNegativeInt
contentVal = -val - 1
}
if contentVal <= additionalMax {
lb := byte(contentVal)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(contentVal))
}
return dst
}
// AppendInts encodes and inserts an array of integer values into the dst byte array.
func (e Encoder) AppendInts(dst []byte, vals []int) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendInt(dst, v)
}
return dst
}
// AppendInt8 encodes and inserts an int8 value into the dst byte array.
func (e Encoder) AppendInt8(dst []byte, val int8) []byte {
return e.AppendInt(dst, int(val))
}
// AppendInts8 encodes and inserts an array of integer values into the dst byte array.
func (e Encoder) AppendInts8(dst []byte, vals []int8) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendInt(dst, int(v))
}
return dst
}
// AppendInt16 encodes and inserts an int16 value into the dst byte array.
func (e Encoder) AppendInt16(dst []byte, val int16) []byte {
return e.AppendInt(dst, int(val))
}
// AppendInts16 encodes and inserts an array of int16 values into the dst byte array.
func (e Encoder) AppendInts16(dst []byte, vals []int16) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendInt(dst, int(v))
}
return dst
}
// AppendInt32 encodes and inserts an int32 value into the dst byte array.
func (e Encoder) AppendInt32(dst []byte, val int32) []byte {
return e.AppendInt(dst, int(val))
}
// AppendInts32 encodes and inserts an array of int32 values into the dst byte array.
func (e Encoder) AppendInts32(dst []byte, vals []int32) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendInt(dst, int(v))
}
return dst
}
// AppendInt64 encodes and inserts an int64 value into the dst byte array.
func (Encoder) AppendInt64(dst []byte, val int64) []byte {
major := majorTypeUnsignedInt
contentVal := val
if val < 0 {
major = majorTypeNegativeInt
contentVal = -val - 1
}
if contentVal <= additionalMax {
lb := byte(contentVal)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(contentVal))
}
return dst
}
// AppendInts64 encodes and inserts an array of int64 values into the dst byte array.
func (e Encoder) AppendInts64(dst []byte, vals []int64) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendInt64(dst, v)
}
return dst
}
// AppendUint encodes and inserts an unsigned integer value into the dst byte array.
func (e Encoder) AppendUint(dst []byte, val uint) []byte {
return e.AppendInt64(dst, int64(val))
}
// AppendUints encodes and inserts an array of unsigned integer values into the dst byte array.
func (e Encoder) AppendUints(dst []byte, vals []uint) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendUint(dst, v)
}
return dst
}
// AppendUint8 encodes and inserts an unsigned int8 value into the dst byte array.
func (e Encoder) AppendUint8(dst []byte, val uint8) []byte {
return e.AppendUint(dst, uint(val))
}
// AppendUints8 encodes and inserts an array of uint8 values into the dst byte array.
func (e Encoder) AppendUints8(dst []byte, vals []uint8) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendUint8(dst, v)
}
return dst
}
// AppendUint16 encodes and inserts a uint16 value into the dst byte array.
func (e Encoder) AppendUint16(dst []byte, val uint16) []byte {
return e.AppendUint(dst, uint(val))
}
// AppendUints16 encodes and inserts an array of uint16 values into the dst byte array.
func (e Encoder) AppendUints16(dst []byte, vals []uint16) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendUint16(dst, v)
}
return dst
}
// AppendUint32 encodes and inserts a uint32 value into the dst byte array.
func (e Encoder) AppendUint32(dst []byte, val uint32) []byte {
return e.AppendUint(dst, uint(val))
}
// AppendUints32 encodes and inserts an array of uint32 values into the dst byte array.
func (e Encoder) AppendUints32(dst []byte, vals []uint32) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendUint32(dst, v)
}
return dst
}
// AppendUint64 encodes and inserts a uint64 value into the dst byte array.
func (Encoder) AppendUint64(dst []byte, val uint64) []byte {
major := majorTypeUnsignedInt
contentVal := val
if contentVal <= additionalMax {
lb := byte(contentVal)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, contentVal)
}
return dst
}
// AppendUints64 encodes and inserts an array of uint64 values into the dst byte array.
func (e Encoder) AppendUints64(dst []byte, vals []uint64) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendUint64(dst, v)
}
return dst
}
// AppendFloat32 encodes and inserts a single precision float value into the dst byte array.
func (Encoder) AppendFloat32(dst []byte, val float32) []byte {
switch {
case math.IsNaN(float64(val)):
return append(dst, "\xfa\x7f\xc0\x00\x00"...)
case math.IsInf(float64(val), 1):
return append(dst, "\xfa\x7f\x80\x00\x00"...)
case math.IsInf(float64(val), -1):
return append(dst, "\xfa\xff\x80\x00\x00"...)
}
major := majorTypeSimpleAndFloat
subType := additionalTypeFloat32
n := math.Float32bits(val)
var buf [4]byte
for i := uint(0); i < 4; i++ {
buf[i] = byte(n >> ((3 - i) * 8))
}
return append(append(dst, major|subType), buf[0], buf[1], buf[2], buf[3])
}
// AppendFloats32 encodes and inserts an array of single precision float values into the dst byte array.
func (e Encoder) AppendFloats32(dst []byte, vals []float32) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendFloat32(dst, v)
}
return dst
}
// AppendFloat64 encodes and inserts a double precision float value into the dst byte array.
func (Encoder) AppendFloat64(dst []byte, val float64) []byte {
switch {
case math.IsNaN(val):
return append(dst, "\xfb\x7f\xf8\x00\x00\x00\x00\x00\x00"...)
case math.IsInf(val, 1):
return append(dst, "\xfb\x7f\xf0\x00\x00\x00\x00\x00\x00"...)
case math.IsInf(val, -1):
return append(dst, "\xfb\xff\xf0\x00\x00\x00\x00\x00\x00"...)
}
major := majorTypeSimpleAndFloat
subType := additionalTypeFloat64
n := math.Float64bits(val)
dst = append(dst, major|subType)
for i := uint(1); i <= 8; i++ {
b := byte(n >> ((8 - i) * 8))
dst = append(dst, b)
}
return dst
}
// AppendFloats64 encodes and inserts an array of double precision float values into the dst byte array.
func (e Encoder) AppendFloats64(dst []byte, vals []float64) []byte {
major := majorTypeArray
l := len(vals)
if l == 0 {
return e.AppendArrayEnd(e.AppendArrayStart(dst))
}
if l <= additionalMax {
lb := byte(l)
dst = append(dst, major|lb)
} else {
dst = appendCborTypePrefix(dst, major, uint64(l))
}
for _, v := range vals {
dst = e.AppendFloat64(dst, v)
}
return dst
}
// AppendInterface takes an arbitrary object, converts it to JSON, and embeds it in dst.
func (e Encoder) AppendInterface(dst []byte, i interface{}) []byte {
marshaled, err := JSONMarshalFunc(i)
if err != nil {
return e.AppendString(dst, fmt.Sprintf("marshaling error: %v", err))
}
return AppendEmbeddedJSON(dst, marshaled)
}
// AppendType appends the parameter type (as a string) to the input byte slice.
func (e Encoder) AppendType(dst []byte, i interface{}) []byte {
if i == nil {
return e.AppendString(dst, "<nil>")
}
return e.AppendString(dst, reflect.TypeOf(i).String())
}
// AppendIPAddr encodes and inserts an IP Address (IPv4 or IPv6).
func (e Encoder) AppendIPAddr(dst []byte, ip net.IP) []byte {
dst = append(dst, majorTypeTags|additionalTypeIntUint16)
dst = append(dst, byte(additionalTypeTagNetworkAddr>>8))
dst = append(dst, byte(additionalTypeTagNetworkAddr&0xff))
return e.AppendBytes(dst, ip)
}
// AppendIPPrefix encodes and inserts an IP Address Prefix (Address + Mask Length).
func (e Encoder) AppendIPPrefix(dst []byte, pfx net.IPNet) []byte {
dst = append(dst, majorTypeTags|additionalTypeIntUint16)
dst = append(dst, byte(additionalTypeTagNetworkPrefix>>8))
dst = append(dst, byte(additionalTypeTagNetworkPrefix&0xff))
// Prefix is a tuple (aka MAP of 1 pair of elements) -
// first element is prefix, second is mask length.
dst = append(dst, majorTypeMap|0x1)
dst = e.AppendBytes(dst, pfx.IP)
maskLen, _ := pfx.Mask.Size()
return e.AppendUint8(dst, uint8(maskLen))
}
// AppendMACAddr encodes and inserts a Hardware (MAC) address.
func (e Encoder) AppendMACAddr(dst []byte, ha net.HardwareAddr) []byte {
dst = append(dst, majorTypeTags|additionalTypeIntUint16)
dst = append(dst, byte(additionalTypeTagNetworkAddr>>8))
dst = append(dst, byte(additionalTypeTagNetworkAddr&0xff))
return e.AppendBytes(dst, ha)
}
// AppendHex adds a TAG and inserts the given bytes as a hex string.
func (e Encoder) AppendHex(dst []byte, val []byte) []byte {
dst = append(dst, majorTypeTags|additionalTypeIntUint16)
dst = append(dst, byte(additionalTypeTagHexString>>8))
dst = append(dst, byte(additionalTypeTagHexString&0xff))
return e.AppendBytes(dst, val)
}
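
The following illustrative example (not in the original tree) shows the tagged network-address encoding end to end, using the same 10.0.0.1 vector as ipAddrTestCases in the test file below: the output carries tag 260 plus a 4-byte byte string, and the decoder renders it back as a quoted dotted-quad.

```go
package cbor

import (
	"encoding/hex"
	"fmt"
	"net"
)

func ExampleEncoder_AppendIPAddr() {
	b := Encoder{}.AppendIPAddr([]byte{}, net.IP{10, 0, 0, 1})
	fmt.Println(hex.EncodeToString(b))
	fmt.Println(DecodeObjectToStr(b))
	// Output:
	// d90104440a000001
	// "10.0.0.1"
}
```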

View File

@ -1,34 +0,0 @@
// +build !386
package cbor
import (
"encoding/hex"
"testing"
)
var enc2 = Encoder{}
var integerTestCases_64bit = []struct {
val int
binary string
}{
// Value in 8 bytes.
{0xabcd100000000, "\x1b\x00\x0a\xbc\xd1\x00\x00\x00\x00"},
{1000000000000, "\x1b\x00\x00\x00\xe8\xd4\xa5\x10\x00"},
// Value in 8 bytes.
{-0xabcd100000001, "\x3b\x00\x0a\xbc\xd1\x00\x00\x00\x00"},
{-1000000000001, "\x3b\x00\x00\x00\xe8\xd4\xa5\x10\x00"},
}
func TestAppendInt_64bit(t *testing.T) {
for _, tc := range integerTestCases_64bit {
s := enc2.AppendInt([]byte{}, tc.val)
got := string(s)
if got != tc.binary {
t.Errorf("AppendInt(0x%x)=0x%s, want: 0x%s",
tc.val, hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}

View File

@ -1,316 +0,0 @@
package cbor
import (
"encoding/hex"
"net"
"testing"
)
var enc = Encoder{}
func TestAppendNil(t *testing.T) {
s := enc.AppendNil([]byte{})
got := string(s)
want := "\xf6"
if got != want {
t.Errorf("appendNull() = 0x%s, want: 0x%s", hex.EncodeToString(s),
hex.EncodeToString([]byte(want)))
}
}
var booleanTestCases = []struct {
val bool
binary string
json string
}{
{true, "\xf5", "true"},
{false, "\xf4", "false"},
}
func TestAppendBool(t *testing.T) {
for _, tc := range booleanTestCases {
s := enc.AppendBool([]byte{}, tc.val)
got := string(s)
if got != tc.binary {
t.Errorf("AppendBool(%s)=0x%s, want: 0x%s",
tc.json, hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}
var booleanArrayTestCases = []struct {
val []bool
binary string
json string
}{
{[]bool{true, false, true}, "\x83\xf5\xf4\xf5", "[true,false,true]"},
{[]bool{true, false, false, true, false, true}, "\x86\xf5\xf4\xf4\xf5\xf4\xf5", "[true,false,false,true,false,true]"},
}
func TestAppendBoolArray(t *testing.T) {
for _, tc := range booleanArrayTestCases {
s := enc.AppendBools([]byte{}, tc.val)
got := string(s)
if got != tc.binary {
t.Errorf("AppendBools(%s)=0x%s, want: 0x%s",
tc.json, hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}
var integerTestCases = []struct {
val int
binary string
}{
// Value included in the type.
{0, "\x00"},
{1, "\x01"},
{2, "\x02"},
{3, "\x03"},
{8, "\x08"},
{9, "\x09"},
{10, "\x0a"},
{22, "\x16"},
{23, "\x17"},
// Value in 1 byte.
{24, "\x18\x18"},
{25, "\x18\x19"},
{26, "\x18\x1a"},
{100, "\x18\x64"},
{254, "\x18\xfe"},
{255, "\x18\xff"},
// Value in 2 bytes.
{256, "\x19\x01\x00"},
{257, "\x19\x01\x01"},
{1000, "\x19\x03\xe8"},
{0xFFFF, "\x19\xff\xff"},
// Value in 4 bytes.
{0x10000, "\x1a\x00\x01\x00\x00"},
{0x7FFFFFFE, "\x1a\x7f\xff\xff\xfe"},
{1000000, "\x1a\x00\x0f\x42\x40"},
// Negative number test cases.
// Value included in the type.
{-1, "\x20"},
{-2, "\x21"},
{-3, "\x22"},
{-10, "\x29"},
{-21, "\x34"},
{-22, "\x35"},
{-23, "\x36"},
{-24, "\x37"},
// Value in 1 byte.
{-25, "\x38\x18"},
{-26, "\x38\x19"},
{-100, "\x38\x63"},
{-254, "\x38\xfd"},
{-255, "\x38\xfe"},
{-256, "\x38\xff"},
// Value in 2 bytes.
{-257, "\x39\x01\x00"},
{-258, "\x39\x01\x01"},
{-1000, "\x39\x03\xe7"},
// Value in 4 bytes.
{-0x10001, "\x3a\x00\x01\x00\x00"},
{-0x7FFFFFFE, "\x3a\x7f\xff\xff\xfd"},
{-1000000, "\x3a\x00\x0f\x42\x3f"},
}
func TestAppendInt(t *testing.T) {
for _, tc := range integerTestCases {
s := enc.AppendInt([]byte{}, tc.val)
got := string(s)
if got != tc.binary {
t.Errorf("AppendInt(0x%x)=0x%s, want: 0x%s",
tc.val, hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}
var integerArrayTestCases = []struct {
val []int
binary string
json string
}{
{[]int{-1, 0, 200, 20}, "\x84\x20\x00\x18\xc8\x14", "[-1,0,200,20]"},
{[]int{-200, -10, 200, 400}, "\x84\x38\xc7\x29\x18\xc8\x19\x01\x90", "[-200,-10,200,400]"},
{[]int{1, 2, 3}, "\x83\x01\x02\x03", "[1,2,3]"},
{[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25},
"\x98\x19\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x18\x18\x19",
"[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]"},
}
func TestAppendIntArray(t *testing.T) {
for _, tc := range integerArrayTestCases {
s := enc.AppendInts([]byte{}, tc.val)
got := string(s)
if got != tc.binary {
t.Errorf("AppendInts(%s)=0x%s, want: 0x%s",
tc.json, hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}
var float32TestCases = []struct {
val float32
binary string
}{
{0.0, "\xfa\x00\x00\x00\x00"},
{-0.0, "\xfa\x00\x00\x00\x00"},
{1.0, "\xfa\x3f\x80\x00\x00"},
{1.5, "\xfa\x3f\xc0\x00\x00"},
{65504.0, "\xfa\x47\x7f\xe0\x00"},
{-4.0, "\xfa\xc0\x80\x00\x00"},
{0.00006103515625, "\xfa\x38\x80\x00\x00"},
}
func TestAppendFloat32(t *testing.T) {
for _, tc := range float32TestCases {
s := enc.AppendFloat32([]byte{}, tc.val)
got := string(s)
if got != tc.binary {
t.Errorf("AppendFloat32(%f)=0x%s, want: 0x%s",
tc.val, hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}
var ipAddrTestCases = []struct {
ipaddr net.IP
text string // ASCII representation of ipaddr
binary string // CBOR representation of ipaddr
}{
{net.IP{10, 0, 0, 1}, "\"10.0.0.1\"", "\xd9\x01\x04\x44\x0a\x00\x00\x01"},
{net.IP{0x20, 0x01, 0x0d, 0xb8, 0x85, 0xa3, 0x0, 0x0, 0x0, 0x0, 0x8a, 0x2e, 0x03, 0x70, 0x73, 0x34},
"\"2001:db8:85a3::8a2e:370:7334\"",
"\xd9\x01\x04\x50\x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34"},
}
func TestAppendNetworkAddr(t *testing.T) {
for _, tc := range ipAddrTestCases {
s := enc.AppendIPAddr([]byte{}, tc.ipaddr)
got := string(s)
if got != tc.binary {
t.Errorf("AppendIPAddr(%s)=0x%s, want: 0x%s",
tc.ipaddr, hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}
var macAddrTestCases = []struct {
macaddr net.HardwareAddr
text string // ASCII representation of macaddr
binary string // CBOR representation of macaddr
}{
{net.HardwareAddr{0x12, 0x34, 0x56, 0x78, 0x90, 0xab}, "\"12:34:56:78:90:ab\"", "\xd9\x01\x04\x46\x12\x34\x56\x78\x90\xab"},
{net.HardwareAddr{0x20, 0x01, 0x0d, 0xb8, 0x85, 0xa3}, "\"20:01:0d:b8:85:a3\"", "\xd9\x01\x04\x46\x20\x01\x0d\xb8\x85\xa3"},
}
func TestAppendMACAddr(t *testing.T) {
for _, tc := range macAddrTestCases {
s := enc.AppendMACAddr([]byte{}, tc.macaddr)
got := string(s)
if got != tc.binary {
t.Errorf("AppendMACAddr(%s)=0x%s, want: 0x%s",
tc.macaddr.String(), hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}
var IPPrefixTestCases = []struct {
pfx net.IPNet
text string // ASCII representation of pfx
binary string // CBOR representation of pfx
}{
{net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.CIDRMask(0, 32)}, "\"0.0.0.0/0\"", "\xd9\x01\x05\xa1\x44\x00\x00\x00\x00\x00"},
{net.IPNet{IP: net.IP{192, 168, 0, 100}, Mask: net.CIDRMask(24, 32)}, "\"192.168.0.100/24\"",
"\xd9\x01\x05\xa1\x44\xc0\xa8\x00\x64\x18\x18"},
}
func TestAppendIPPrefix(t *testing.T) {
for _, tc := range IPPrefixTestCases {
s := enc.AppendIPPrefix([]byte{}, tc.pfx)
got := string(s)
if got != tc.binary {
t.Errorf("AppendIPPrefix(%s)=0x%s, want: 0x%s",
tc.pfx.String(), hex.EncodeToString(s),
hex.EncodeToString([]byte(tc.binary)))
}
}
}
func BenchmarkAppendInt(b *testing.B) {
type st struct {
sz byte
val int64
}
tests := map[string]st{
"int-Positive": {sz: 0, val: 10000},
"int-Negative": {sz: 0, val: -10000},
"uint8": {sz: 1, val: 100},
"uint16": {sz: 2, val: 0xfff},
"uint32": {sz: 4, val: 0xffffff},
"uint64": {sz: 8, val: 0xffffffffff},
"int8": {sz: 21, val: -120},
"int16": {sz: 22, val: -1200},
"int32": {sz: 23, val: 32000},
"int64": {sz: 24, val: 0xffffffffff},
}
for name, str := range tests {
b.Run(name, func(b *testing.B) {
buf := make([]byte, 0, 100)
for i := 0; i < b.N; i++ {
switch str.sz {
case 0:
_ = enc.AppendInt(buf, int(str.val))
case 1:
_ = enc.AppendUint8(buf, uint8(str.val))
case 2:
_ = enc.AppendUint16(buf, uint16(str.val))
case 4:
_ = enc.AppendUint32(buf, uint32(str.val))
case 8:
_ = enc.AppendUint64(buf, uint64(str.val))
case 21:
_ = enc.AppendInt8(buf, int8(str.val))
case 22:
_ = enc.AppendInt16(buf, int16(str.val))
case 23:
_ = enc.AppendInt32(buf, int32(str.val))
case 24:
_ = enc.AppendInt64(buf, int64(str.val))
}
}
})
}
}
func BenchmarkAppendFloat(b *testing.B) {
type st struct {
sz byte
val float64
}
tests := map[string]st{
"Float32": {sz: 4, val: 10000.12345},
"Float64": {sz: 8, val: -10000.54321},
}
for name, str := range tests {
b.Run(name, func(b *testing.B) {
buf := make([]byte, 0, 100)
for i := 0; i < b.N; i++ {
switch str.sz {
case 4:
_ = enc.AppendFloat32(buf, float32(str.val))
case 8:
_ = enc.AppendFloat64(buf, str.val)
}
}
})
}
}

View File

@ -1,19 +0,0 @@
package json
// JSONMarshalFunc is used to marshal an interface value to a JSON-encoded byte slice.
// Making it package level instead of embedding it in Encoder takes a little
// extra effort at import time, but avoids a value copy when Encoder's
// functions are invoked.
// DO REMEMBER to set this variable when importing this package, or
// you might get a nil pointer dereference panic at runtime.
var JSONMarshalFunc func(v interface{}) ([]byte, error)
type Encoder struct{}
// AppendKey appends a new key to the output JSON.
func (e Encoder) AppendKey(dst []byte, key string) []byte {
if dst[len(dst)-1] != '{' {
dst = append(dst, ',')
}
return append(e.AppendString(dst, key), ':')
}
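
A minimal sketch (not part of the original file) of how AppendKey's comma handling composes with the other encoder primitives when a JSON object is built up by hand; the field names are illustrative only, and JSONMarshalFunc does not need to be set because AppendInterface is never called.

```go
package json

import "fmt"

func ExampleEncoder_AppendKey() {
	var e Encoder
	dst := e.AppendBeginMarker([]byte{}) // {
	dst = e.AppendKey(dst, "level")      // first key: no leading comma
	dst = e.AppendString(dst, "info")
	dst = e.AppendKey(dst, "attempt") // subsequent key: comma added
	dst = e.AppendInt(dst, 2)
	dst = e.AppendEndMarker(dst) // }
	fmt.Println(string(dst))
	// Output: {"level":"info","attempt":2}
}
```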

View File

@ -1,85 +0,0 @@
package json
import "unicode/utf8"
// AppendBytes is a mirror of appendString with []byte arg
func (Encoder) AppendBytes(dst, s []byte) []byte {
dst = append(dst, '"')
for i := 0; i < len(s); i++ {
if !noEscapeTable[s[i]] {
dst = appendBytesComplex(dst, s, i)
return append(dst, '"')
}
}
dst = append(dst, s...)
return append(dst, '"')
}
// AppendHex encodes the input bytes to a hex string and appends
// the encoded string to the input byte slice.
//
// The operation loops through each byte and encodes it as hex using
// the hex lookup table.
func (Encoder) AppendHex(dst, s []byte) []byte {
dst = append(dst, '"')
for _, v := range s {
dst = append(dst, hex[v>>4], hex[v&0x0f])
}
return append(dst, '"')
}
// appendBytesComplex is a mirror of the appendStringComplex
// with []byte arg
func appendBytesComplex(dst, s []byte, i int) []byte {
start := 0
for i < len(s) {
b := s[i]
if b >= utf8.RuneSelf {
r, size := utf8.DecodeRune(s[i:])
if r == utf8.RuneError && size == 1 {
if start < i {
dst = append(dst, s[start:i]...)
}
dst = append(dst, `\ufffd`...)
i += size
start = i
continue
}
i += size
continue
}
if noEscapeTable[b] {
i++
continue
}
// We encountered a character that needs to be encoded.
// Let's append the previous simple characters to the byte slice
// and switch our operation to read and encode the remainder
// characters byte-by-byte.
if start < i {
dst = append(dst, s[start:i]...)
}
switch b {
case '"', '\\':
dst = append(dst, '\\', b)
case '\b':
dst = append(dst, '\\', 'b')
case '\f':
dst = append(dst, '\\', 'f')
case '\n':
dst = append(dst, '\\', 'n')
case '\r':
dst = append(dst, '\\', 'r')
case '\t':
dst = append(dst, '\\', 't')
default:
dst = append(dst, '\\', 'u', '0', '0', hex[b>>4], hex[b&0xF])
}
i++
start = i
}
if start < len(s) {
dst = append(dst, s[start:]...)
}
return dst
}
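
appendBytesComplex copies unescaped runs in bulk and only drops to byte-by-byte work around the characters that must be escaped. The short illustrative example below (not in the original tree) shows the resulting output for a byte slice containing one quote, and for AppendHex.

```go
package json

import "fmt"

func ExampleEncoder_AppendBytes() {
	var e Encoder
	// The quote forces the escape path; the runs around it are copied in bulk.
	fmt.Println(string(e.AppendBytes([]byte{}, []byte(`foo"bar`))))
	// AppendHex writes two hex digits per input byte, wrapped in quotes.
	fmt.Println(string(e.AppendHex([]byte{}, []byte{0x00, 0x0f, 0xf0})))
	// Output:
	// "foo\"bar"
	// "000ff0"
}
```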

View File

@ -1,84 +0,0 @@
package json
import (
"testing"
"unicode"
)
var enc = Encoder{}
func TestAppendBytes(t *testing.T) {
for _, tt := range encodeStringTests {
b := enc.AppendBytes([]byte{}, []byte(tt.in))
if got, want := string(b), tt.out; got != want {
t.Errorf("appendBytes(%q) = %#q, want %#q", tt.in, got, want)
}
}
}
func TestAppendHex(t *testing.T) {
for _, tt := range encodeHexTests {
b := enc.AppendHex([]byte{}, []byte{tt.in})
if got, want := string(b), tt.out; got != want {
t.Errorf("appendHex(%x) = %s, want %s", tt.in, got, want)
}
}
}
func TestStringBytes(t *testing.T) {
t.Parallel()
// Test that enc.AppendString and enc.AppendBytes use the same encoding.
var r []rune
for i := '\u0000'; i <= unicode.MaxRune; i++ {
r = append(r, i)
}
s := string(r) + "\xff\xff\xffhello" // some invalid UTF-8 too
encStr := string(enc.AppendString([]byte{}, s))
encBytes := string(enc.AppendBytes([]byte{}, []byte(s)))
if encStr != encBytes {
i := 0
for i < len(encStr) && i < len(encBytes) && encStr[i] == encBytes[i] {
i++
}
encStr = encStr[i:]
encBytes = encBytes[i:]
i = 0
for i < len(encStr) && i < len(encBytes) && encStr[len(encStr)-i-1] == encBytes[len(encBytes)-i-1] {
i++
}
encStr = encStr[:len(encStr)-i]
encBytes = encBytes[:len(encBytes)-i]
if len(encStr) > 20 {
encStr = encStr[:20] + "..."
}
if len(encBytes) > 20 {
encBytes = encBytes[:20] + "..."
}
t.Errorf("encodings differ at %#q vs %#q", encStr, encBytes)
}
}
func BenchmarkAppendBytes(b *testing.B) {
tests := map[string]string{
"NoEncoding": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingFirst": `"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingMiddle": `aaaaaaaaaaaaaaaaaaaaaaaaa"aaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingLast": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"`,
"MultiBytesFirst": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"MultiBytesMiddle": `aaaaaaaaaaaaaaaaaaaaaaaaa❤aaaaaaaaaaaaaaaaaaaaaaaa`,
"MultiBytesLast": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa❤`,
}
for name, str := range tests {
byt := []byte(str)
b.Run(name, func(b *testing.B) {
buf := make([]byte, 0, 100)
for i := 0; i < b.N; i++ {
_ = enc.AppendBytes(buf, byt)
}
})
}
}

View File

@ -1,93 +0,0 @@
package json
import (
"testing"
)
var encodeStringTests = []struct {
in string
out string
}{
{"", `""`},
{"\\", `"\\"`},
{"\x00", `"\u0000"`},
{"\x01", `"\u0001"`},
{"\x02", `"\u0002"`},
{"\x03", `"\u0003"`},
{"\x04", `"\u0004"`},
{"\x05", `"\u0005"`},
{"\x06", `"\u0006"`},
{"\x07", `"\u0007"`},
{"\x08", `"\b"`},
{"\x09", `"\t"`},
{"\x0a", `"\n"`},
{"\x0b", `"\u000b"`},
{"\x0c", `"\f"`},
{"\x0d", `"\r"`},
{"\x0e", `"\u000e"`},
{"\x0f", `"\u000f"`},
{"\x10", `"\u0010"`},
{"\x11", `"\u0011"`},
{"\x12", `"\u0012"`},
{"\x13", `"\u0013"`},
{"\x14", `"\u0014"`},
{"\x15", `"\u0015"`},
{"\x16", `"\u0016"`},
{"\x17", `"\u0017"`},
{"\x18", `"\u0018"`},
{"\x19", `"\u0019"`},
{"\x1a", `"\u001a"`},
{"\x1b", `"\u001b"`},
{"\x1c", `"\u001c"`},
{"\x1d", `"\u001d"`},
{"\x1e", `"\u001e"`},
{"\x1f", `"\u001f"`},
{"✭", `"✭"`},
{"foo\xc2\x7fbar", `"foo\ufffd\u007fbar"`}, // invalid sequence
{"ascii", `"ascii"`},
{"\"a", `"\"a"`},
{"\x1fa", `"\u001fa"`},
{"foo\"bar\"baz", `"foo\"bar\"baz"`},
{"\x1ffoo\x1fbar\x1fbaz", `"\u001ffoo\u001fbar\u001fbaz"`},
{"emoji \u2764\ufe0f!", `"emoji ❤️!"`},
}
var encodeHexTests = []struct {
in byte
out string
}{
{0x00, `"00"`},
{0x0f, `"0f"`},
{0x10, `"10"`},
{0xf0, `"f0"`},
{0xff, `"ff"`},
}
func TestAppendString(t *testing.T) {
for _, tt := range encodeStringTests {
b := enc.AppendString([]byte{}, tt.in)
if got, want := string(b), tt.out; got != want {
t.Errorf("appendString(%q) = %#q, want %#q", tt.in, got, want)
}
}
}
func BenchmarkAppendString(b *testing.B) {
tests := map[string]string{
"NoEncoding": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingFirst": `"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingMiddle": `aaaaaaaaaaaaaaaaaaaaaaaaa"aaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingLast": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"`,
"MultiBytesFirst": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"MultiBytesMiddle": `aaaaaaaaaaaaaaaaaaaaaaaaa❤aaaaaaaaaaaaaaaaaaaaaaaa`,
"MultiBytesLast": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa❤`,
}
for name, str := range tests {
b.Run(name, func(b *testing.B) {
buf := make([]byte, 0, 100)
for i := 0; i < b.N; i++ {
_ = enc.AppendString(buf, str)
}
})
}
}

View File

@ -1,113 +0,0 @@
package json
import (
"strconv"
"time"
)
const (
// Import from zlog/global.go
timeFormatUnix = ""
timeFormatUnixMs = "UNIXMS"
timeFormatUnixMicro = "UNIXMICRO"
timeFormatUnixNano = "UNIXNANO"
)
// AppendTime formats the input time with the given format
// and appends the encoded string to the input byte slice.
func (e Encoder) AppendTime(dst []byte, t time.Time, format string) []byte {
switch format {
case timeFormatUnix:
return e.AppendInt64(dst, t.Unix())
case timeFormatUnixMs:
return e.AppendInt64(dst, t.UnixNano()/1000000)
case timeFormatUnixMicro:
return e.AppendInt64(dst, t.UnixNano()/1000)
case timeFormatUnixNano:
return e.AppendInt64(dst, t.UnixNano())
}
return append(t.AppendFormat(append(dst, '"'), format), '"')
}
// AppendTimes converts the input times with the given format
// and appends the encoded string list to the input byte slice.
func (Encoder) AppendTimes(dst []byte, vals []time.Time, format string) []byte {
switch format {
case timeFormatUnix:
return appendUnixTimes(dst, vals)
case timeFormatUnixMs:
return appendUnixNanoTimes(dst, vals, 1000000)
case timeFormatUnixMicro:
return appendUnixNanoTimes(dst, vals, 1000)
case timeFormatUnixNano:
return appendUnixNanoTimes(dst, vals, 1)
}
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = append(vals[0].AppendFormat(append(dst, '"'), format), '"')
if len(vals) > 1 {
for _, t := range vals[1:] {
dst = append(t.AppendFormat(append(dst, ',', '"'), format), '"')
}
}
dst = append(dst, ']')
return dst
}
func appendUnixTimes(dst []byte, vals []time.Time) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendInt(dst, vals[0].Unix(), 10)
if len(vals) > 1 {
for _, t := range vals[1:] {
dst = strconv.AppendInt(append(dst, ','), t.Unix(), 10)
}
}
dst = append(dst, ']')
return dst
}
func appendUnixNanoTimes(dst []byte, vals []time.Time, div int64) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendInt(dst, vals[0].UnixNano()/div, 10)
if len(vals) > 1 {
for _, t := range vals[1:] {
dst = strconv.AppendInt(append(dst, ','), t.UnixNano()/div, 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendDuration formats the input duration with the given unit & format
// and appends the encoded string to the input byte slice.
func (e Encoder) AppendDuration(dst []byte, d time.Duration, unit time.Duration, useInt bool) []byte {
if useInt {
return strconv.AppendInt(dst, int64(d/unit), 10)
}
return e.AppendFloat64(dst, float64(d)/float64(unit))
}
// AppendDurations formats the input durations with the given unit & format
// and appends the encoded string list to the input byte slice.
func (e Encoder) AppendDurations(dst []byte, vals []time.Duration, unit time.Duration, useInt bool) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = e.AppendDuration(dst, vals[0], unit, useInt)
if len(vals) > 1 {
for _, d := range vals[1:] {
dst = e.AppendDuration(append(dst, ','), d, unit, useInt)
}
}
dst = append(dst, ']')
return dst
}
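
An illustrative example (not part of the original file) of the format switch above: the empty format constant selects the integer Unix-seconds representation, while any other layout string goes through time.AppendFormat and is quoted. The timestamp itself is arbitrary.

```go
package json

import (
	"fmt"
	"time"
)

func ExampleEncoder_AppendTime() {
	var e Encoder
	t := time.Date(2017, 6, 25, 1, 30, 2, 0, time.UTC)
	fmt.Println(string(e.AppendTime([]byte{}, t, timeFormatUnix))) // Unix seconds
	fmt.Println(string(e.AppendTime([]byte{}, t, time.RFC3339)))   // quoted layout
	// Output:
	// 1498354202
	// "2017-06-25T01:30:02Z"
}
```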

View File

@ -1,414 +0,0 @@
package json
import (
"fmt"
"math"
"net"
"reflect"
"strconv"
)
// AppendNil inserts a 'Nil' object into the dst byte array.
func (Encoder) AppendNil(dst []byte) []byte {
return append(dst, "null"...)
}
// AppendBeginMarker inserts a map start into the dst byte array.
func (Encoder) AppendBeginMarker(dst []byte) []byte {
return append(dst, '{')
}
// AppendEndMarker inserts a map end into the dst byte array.
func (Encoder) AppendEndMarker(dst []byte) []byte {
return append(dst, '}')
}
// AppendLineBreak appends a line break.
func (Encoder) AppendLineBreak(dst []byte) []byte {
return append(dst, '\n')
}
// AppendArrayStart adds markers to indicate the start of an array.
func (Encoder) AppendArrayStart(dst []byte) []byte {
return append(dst, '[')
}
// AppendArrayEnd adds markers to indicate the end of an array.
func (Encoder) AppendArrayEnd(dst []byte) []byte {
return append(dst, ']')
}
// AppendArrayDelim adds markers to indicate end of a particular array element.
func (Encoder) AppendArrayDelim(dst []byte) []byte {
if len(dst) > 0 {
return append(dst, ',')
}
return dst
}
// AppendBool converts the input bool to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendBool(dst []byte, val bool) []byte {
return strconv.AppendBool(dst, val)
}
// AppendBools encodes the input bools to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendBools(dst []byte, vals []bool) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendBool(dst, vals[0])
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendBool(append(dst, ','), val)
}
}
dst = append(dst, ']')
return dst
}
// AppendInt converts the input int to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendInt(dst []byte, val int) []byte {
return strconv.AppendInt(dst, int64(val), 10)
}
// AppendInts encodes the input ints to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendInts(dst []byte, vals []int) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendInt(dst, int64(vals[0]), 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendInt(append(dst, ','), int64(val), 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendInt8 converts the input []int8 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendInt8(dst []byte, val int8) []byte {
return strconv.AppendInt(dst, int64(val), 10)
}
// AppendInts8 encodes the input int8s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendInts8(dst []byte, vals []int8) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendInt(dst, int64(vals[0]), 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendInt(append(dst, ','), int64(val), 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendInt16 converts the input int16 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendInt16(dst []byte, val int16) []byte {
return strconv.AppendInt(dst, int64(val), 10)
}
// AppendInts16 encodes the input int16s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendInts16(dst []byte, vals []int16) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendInt(dst, int64(vals[0]), 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendInt(append(dst, ','), int64(val), 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendInt32 converts the input int32 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendInt32(dst []byte, val int32) []byte {
return strconv.AppendInt(dst, int64(val), 10)
}
// AppendInts32 encodes the input int32s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendInts32(dst []byte, vals []int32) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendInt(dst, int64(vals[0]), 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendInt(append(dst, ','), int64(val), 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendInt64 converts the input int64 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendInt64(dst []byte, val int64) []byte {
return strconv.AppendInt(dst, val, 10)
}
// AppendInts64 encodes the input int64s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendInts64(dst []byte, vals []int64) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendInt(dst, vals[0], 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendInt(append(dst, ','), val, 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendUint converts the input uint to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendUint(dst []byte, val uint) []byte {
return strconv.AppendUint(dst, uint64(val), 10)
}
// AppendUints encodes the input uints to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendUints(dst []byte, vals []uint) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendUint(dst, uint64(vals[0]), 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendUint(append(dst, ','), uint64(val), 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendUint8 converts the input uint8 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendUint8(dst []byte, val uint8) []byte {
return strconv.AppendUint(dst, uint64(val), 10)
}
// AppendUints8 encodes the input uint8s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendUints8(dst []byte, vals []uint8) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendUint(dst, uint64(vals[0]), 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendUint(append(dst, ','), uint64(val), 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendUint16 converts the input uint16 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendUint16(dst []byte, val uint16) []byte {
return strconv.AppendUint(dst, uint64(val), 10)
}
// AppendUints16 encodes the input uint16s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendUints16(dst []byte, vals []uint16) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendUint(dst, uint64(vals[0]), 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendUint(append(dst, ','), uint64(val), 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendUint32 converts the input uint32 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendUint32(dst []byte, val uint32) []byte {
return strconv.AppendUint(dst, uint64(val), 10)
}
// AppendUints32 encodes the input uint32s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendUints32(dst []byte, vals []uint32) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendUint(dst, uint64(vals[0]), 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendUint(append(dst, ','), uint64(val), 10)
}
}
dst = append(dst, ']')
return dst
}
// AppendUint64 converts the input uint64 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendUint64(dst []byte, val uint64) []byte {
return strconv.AppendUint(dst, val, 10)
}
// AppendUints64 encodes the input uint64s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendUints64(dst []byte, vals []uint64) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = strconv.AppendUint(dst, vals[0], 10)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = strconv.AppendUint(append(dst, ','), val, 10)
}
}
dst = append(dst, ']')
return dst
}
func appendFloat(dst []byte, val float64, bitSize int) []byte {
// JSON does not permit NaN or Infinity. A typical JSON encoder would fail
// with an error, but a logging library wants the data to get through so we
// make a tradeoff and store those types as string.
switch {
case math.IsNaN(val):
return append(dst, `"NaN"`...)
case math.IsInf(val, 1):
return append(dst, `"+Inf"`...)
case math.IsInf(val, -1):
return append(dst, `"-Inf"`...)
}
return strconv.AppendFloat(dst, val, 'f', -1, bitSize)
}
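A small sketch (same package) of the tradeoff described above: non-finite values become JSON strings instead of failing the whole event, while ordinary floats encode as numbers.

fmt.Println(string(appendFloat(nil, math.NaN(), 64)))   // "NaN"
fmt.Println(string(appendFloat(nil, math.Inf(-1), 64))) // "-Inf"
fmt.Println(string(appendFloat(nil, 3.14, 64)))         // 3.14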
// AppendFloat32 converts the input float32 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendFloat32(dst []byte, val float32) []byte {
return appendFloat(dst, float64(val), 32)
}
// AppendFloats32 encodes the input float32s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendFloats32(dst []byte, vals []float32) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = appendFloat(dst, float64(vals[0]), 32)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = appendFloat(append(dst, ','), float64(val), 32)
}
}
dst = append(dst, ']')
return dst
}
// AppendFloat64 converts the input float64 to a string and
// appends the encoded string to the input byte slice.
func (Encoder) AppendFloat64(dst []byte, val float64) []byte {
return appendFloat(dst, val, 64)
}
// AppendFloats64 encodes the input float64s to json and
// appends the encoded string list to the input byte slice.
func (Encoder) AppendFloats64(dst []byte, vals []float64) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = appendFloat(dst, vals[0], 64)
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = appendFloat(append(dst, ','), val, 64)
}
}
dst = append(dst, ']')
return dst
}
// AppendInterface marshals the input interface to a string and
// appends the encoded string to the input byte slice.
func (e Encoder) AppendInterface(dst []byte, i interface{}) []byte {
marshaled, err := JSONMarshalFunc(i)
if err != nil {
return e.AppendString(dst, fmt.Sprintf("marshaling error: %v", err))
}
return append(dst, marshaled...)
}
// AppendType appends the parameter type (as a string) to the input byte slice.
func (e Encoder) AppendType(dst []byte, i interface{}) []byte {
if i == nil {
return e.AppendString(dst, "<nil>")
}
return e.AppendString(dst, reflect.TypeOf(i).String())
}
// AppendObjectData takes in an object that is already in a byte array
// and adds it to the dst.
func (Encoder) AppendObjectData(dst []byte, o []byte) []byte {
// Three conditions apply here:
// 1. new content starts with '{' and dst is still empty - the '{' is dropped OR
// 2. new content starts with '{' and dst already has fields - the '{' is
//    replaced with ',' to separate it from the existing content OR
// 3. new content has no leading '{' but dst already has fields - a ','
//    separator is appended first
if o[0] == '{' {
if len(dst) > 1 {
dst = append(dst, ',')
}
o = o[1:]
} else if len(dst) > 1 {
dst = append(dst, ',')
}
return append(dst, o...)
}
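An illustrative sketch (same package) of those cases, matching the Test_appendObjectData cases further down:

var e Encoder
fmt.Println(string(e.AppendObjectData([]byte(`{"qux":"quz"`), []byte(`{"foo":"bar"}`)))) // {"qux":"quz","foo":"bar"}
fmt.Println(string(e.AppendObjectData([]byte{}, []byte(`{"foo":"bar"}`))))               // "foo":"bar"}
fmt.Println(string(e.AppendObjectData([]byte(`{"qux":"quz"`), []byte(`"foo":"bar"`))))   // {"qux":"quz","foo":"bar"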
// AppendIPAddr adds IPv4 or IPv6 address to dst.
func (e Encoder) AppendIPAddr(dst []byte, ip net.IP) []byte {
return e.AppendString(dst, ip.String())
}
// AppendIPPrefix adds IPv4 or IPv6 Prefix (address & mask) to dst.
func (e Encoder) AppendIPPrefix(dst []byte, pfx net.IPNet) []byte {
return e.AppendString(dst, pfx.String())
}
// AppendMACAddr adds MAC address to dst.
func (e Encoder) AppendMACAddr(dst []byte, ha net.HardwareAddr) []byte {
return e.AppendString(dst, ha.String())
}

View File

@ -1,209 +0,0 @@
package json
import (
"math"
"net"
"reflect"
"testing"
)
func TestAppendType(t *testing.T) {
w := map[string]func(interface{}) []byte{
"AppendInt": func(v interface{}) []byte { return enc.AppendInt([]byte{}, v.(int)) },
"AppendInt8": func(v interface{}) []byte { return enc.AppendInt8([]byte{}, v.(int8)) },
"AppendInt16": func(v interface{}) []byte { return enc.AppendInt16([]byte{}, v.(int16)) },
"AppendInt32": func(v interface{}) []byte { return enc.AppendInt32([]byte{}, v.(int32)) },
"AppendInt64": func(v interface{}) []byte { return enc.AppendInt64([]byte{}, v.(int64)) },
"AppendUint": func(v interface{}) []byte { return enc.AppendUint([]byte{}, v.(uint)) },
"AppendUint8": func(v interface{}) []byte { return enc.AppendUint8([]byte{}, v.(uint8)) },
"AppendUint16": func(v interface{}) []byte { return enc.AppendUint16([]byte{}, v.(uint16)) },
"AppendUint32": func(v interface{}) []byte { return enc.AppendUint32([]byte{}, v.(uint32)) },
"AppendUint64": func(v interface{}) []byte { return enc.AppendUint64([]byte{}, v.(uint64)) },
"AppendFloat32": func(v interface{}) []byte { return enc.AppendFloat32([]byte{}, v.(float32)) },
"AppendFloat64": func(v interface{}) []byte { return enc.AppendFloat64([]byte{}, v.(float64)) },
}
tests := []struct {
name string
fn string
input interface{}
want []byte
}{
{"AppendInt8(math.MaxInt8)", "AppendInt8", int8(math.MaxInt8), []byte("127")},
{"AppendInt16(math.MaxInt16)", "AppendInt16", int16(math.MaxInt16), []byte("32767")},
{"AppendInt32(math.MaxInt32)", "AppendInt32", int32(math.MaxInt32), []byte("2147483647")},
{"AppendInt64(math.MaxInt64)", "AppendInt64", int64(math.MaxInt64), []byte("9223372036854775807")},
{"AppendUint8(math.MaxUint8)", "AppendUint8", uint8(math.MaxUint8), []byte("255")},
{"AppendUint16(math.MaxUint16)", "AppendUint16", uint16(math.MaxUint16), []byte("65535")},
{"AppendUint32(math.MaxUint32)", "AppendUint32", uint32(math.MaxUint32), []byte("4294967295")},
{"AppendUint64(math.MaxUint64)", "AppendUint64", uint64(math.MaxUint64), []byte("18446744073709551615")},
{"AppendFloat32(-Inf)", "AppendFloat32", float32(math.Inf(-1)), []byte(`"-Inf"`)},
{"AppendFloat32(+Inf)", "AppendFloat32", float32(math.Inf(1)), []byte(`"+Inf"`)},
{"AppendFloat32(NaN)", "AppendFloat32", float32(math.NaN()), []byte(`"NaN"`)},
{"AppendFloat32(0)", "AppendFloat32", float32(0), []byte(`0`)},
{"AppendFloat32(-1.1)", "AppendFloat32", float32(-1.1), []byte(`-1.1`)},
{"AppendFloat32(1e20)", "AppendFloat32", float32(1e20), []byte(`100000000000000000000`)},
{"AppendFloat32(1e21)", "AppendFloat32", float32(1e21), []byte(`1000000000000000000000`)},
{"AppendFloat64(-Inf)", "AppendFloat64", float64(math.Inf(-1)), []byte(`"-Inf"`)},
{"AppendFloat64(+Inf)", "AppendFloat64", float64(math.Inf(1)), []byte(`"+Inf"`)},
{"AppendFloat64(NaN)", "AppendFloat64", float64(math.NaN()), []byte(`"NaN"`)},
{"AppendFloat64(0)", "AppendFloat64", float64(0), []byte(`0`)},
{"AppendFloat64(-1.1)", "AppendFloat64", float64(-1.1), []byte(`-1.1`)},
{"AppendFloat64(1e20)", "AppendFloat64", float64(1e20), []byte(`100000000000000000000`)},
{"AppendFloat64(1e21)", "AppendFloat64", float64(1e21), []byte(`1000000000000000000000`)},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := w[tt.fn](tt.input); !reflect.DeepEqual(got, tt.want) {
t.Errorf("got %s, want %s", got, tt.want)
}
})
}
}
func Test_appendMAC(t *testing.T) {
MACtests := []struct {
input string
want []byte
}{
{"01:23:45:67:89:ab", []byte(`"01:23:45:67:89:ab"`)},
{"cd:ef:11:22:33:44", []byte(`"cd:ef:11:22:33:44"`)},
}
for _, tt := range MACtests {
t.Run("MAC", func(t *testing.T) {
ha, _ := net.ParseMAC(tt.input)
if got := enc.AppendMACAddr([]byte{}, ha); !reflect.DeepEqual(got, tt.want) {
t.Errorf("appendMACAddr() = %s, want %s", got, tt.want)
}
})
}
}
func Test_appendIP(t *testing.T) {
IPv4tests := []struct {
input net.IP
want []byte
}{
{net.IP{0, 0, 0, 0}, []byte(`"0.0.0.0"`)},
{net.IP{192, 0, 2, 200}, []byte(`"192.0.2.200"`)},
}
for _, tt := range IPv4tests {
t.Run("IPv4", func(t *testing.T) {
if got := enc.AppendIPAddr([]byte{}, tt.input); !reflect.DeepEqual(got, tt.want) {
t.Errorf("appendIPAddr() = %s, want %s", got, tt.want)
}
})
}
IPv6tests := []struct {
input net.IP
want []byte
}{
{net.IPv6zero, []byte(`"::"`)},
{net.IPv6linklocalallnodes, []byte(`"ff02::1"`)},
{net.IP{0x20, 0x01, 0x0d, 0xb8, 0x85, 0xa3, 0x00, 0x00, 0x00, 0x00, 0x8a, 0x2e, 0x03, 0x70, 0x73, 0x34}, []byte(`"2001:db8:85a3::8a2e:370:7334"`)},
}
for _, tt := range IPv6tests {
t.Run("IPv6", func(t *testing.T) {
if got := enc.AppendIPAddr([]byte{}, tt.input); !reflect.DeepEqual(got, tt.want) {
t.Errorf("appendIPAddr() = %s, want %s", got, tt.want)
}
})
}
}
func Test_appendIPPrefix(t *testing.T) {
IPv4Prefixtests := []struct {
input net.IPNet
want []byte
}{
{net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPv4Mask(0, 0, 0, 0)}, []byte(`"0.0.0.0/0"`)},
{net.IPNet{IP: net.IP{192, 0, 2, 200}, Mask: net.IPv4Mask(255, 255, 255, 0)}, []byte(`"192.0.2.200/24"`)},
}
for _, tt := range IPv4Prefixtests {
t.Run("IPv4", func(t *testing.T) {
if got := enc.AppendIPPrefix([]byte{}, tt.input); !reflect.DeepEqual(got, tt.want) {
t.Errorf("appendIPPrefix() = %s, want %s", got, tt.want)
}
})
}
IPv6Prefixtests := []struct {
input net.IPNet
want []byte
}{
{net.IPNet{IP: net.IPv6zero, Mask: net.CIDRMask(0, 128)}, []byte(`"::/0"`)},
{net.IPNet{IP: net.IPv6linklocalallnodes, Mask: net.CIDRMask(128, 128)}, []byte(`"ff02::1/128"`)},
{net.IPNet{IP: net.IP{0x20, 0x01, 0x0d, 0xb8, 0x85, 0xa3, 0x00, 0x00, 0x00, 0x00, 0x8a, 0x2e, 0x03, 0x70, 0x73, 0x34},
Mask: net.CIDRMask(64, 128)},
[]byte(`"2001:db8:85a3::8a2e:370:7334/64"`)},
}
for _, tt := range IPv6Prefixtests {
t.Run("IPv6", func(t *testing.T) {
if got := enc.AppendIPPrefix([]byte{}, tt.input); !reflect.DeepEqual(got, tt.want) {
t.Errorf("appendIPPrefix() = %s, want %s", got, tt.want)
}
})
}
}
func Test_appendMac(t *testing.T) {
MACtests := []struct {
input net.HardwareAddr
want []byte
}{
{net.HardwareAddr{0x12, 0x34, 0x56, 0x78, 0x90, 0xab}, []byte(`"12:34:56:78:90:ab"`)},
{net.HardwareAddr{0x12, 0x34, 0x00, 0x00, 0x90, 0xab}, []byte(`"12:34:00:00:90:ab"`)},
}
for _, tt := range MACtests {
t.Run("MAC", func(t *testing.T) {
if got := enc.AppendMACAddr([]byte{}, tt.input); !reflect.DeepEqual(got, tt.want) {
t.Errorf("appendMAC() = %s, want %s", got, tt.want)
}
})
}
}
func Test_appendType(t *testing.T) {
typeTests := []struct {
label string
input interface{}
want []byte
}{
{"int", 42, []byte(`"int"`)},
{"MAC", net.HardwareAddr{0x12, 0x34, 0x00, 0x00, 0x90, 0xab}, []byte(`"net.HardwareAddr"`)},
{"float64", float64(2.50), []byte(`"float64"`)},
{"nil", nil, []byte(`"<nil>"`)},
{"bool", true, []byte(`"bool"`)},
}
for _, tt := range typeTests {
t.Run(tt.label, func(t *testing.T) {
if got := enc.AppendType([]byte{}, tt.input); !reflect.DeepEqual(got, tt.want) {
t.Errorf("appendType() = %s, want %s", got, tt.want)
}
})
}
}
func Test_appendObjectData(t *testing.T) {
tests := []struct {
dst []byte
obj []byte
want []byte
}{
{[]byte{}, []byte(`{"foo":"bar"}`), []byte(`"foo":"bar"}`)},
{[]byte(`{"qux":"quz"`), []byte(`{"foo":"bar"}`), []byte(`{"qux":"quz","foo":"bar"}`)},
{[]byte{}, []byte(`"foo":"bar"`), []byte(`"foo":"bar"`)},
{[]byte(`{"qux":"quz"`), []byte(`"foo":"bar"`), []byte(`{"qux":"quz","foo":"bar"`)},
}
for _, tt := range tests {
t.Run("ObjectData", func(t *testing.T) {
if got := enc.AppendObjectData(tt.dst, tt.obj); !reflect.DeepEqual(got, tt.want) {
t.Errorf("appendObjectData() = %s, want %s", got, tt.want)
}
})
}
}

View File

@ -1,121 +0,0 @@
//go:build !windows
// +build !windows
// Package journald provides a io.Writer to send the logs
// to journalD component of systemd.
package journald
// This file provides a zlog writer so that logs printed
// using zlog library can be sent to a journalD.
// Zerolog's top-level key/value pairs are translated to
// journald's args - all values are sent to journald as strings,
// and all key strings are converted to uppercase before sending
// to journald (as required by journald).
// In addition, the entire log message (all key/value pairs) is also
// sent to journald under the key "JSON".
import (
"bytes"
"encoding/json"
"fmt"
"io"
"strings"
"github.com/coreos/go-systemd/v22/journal"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/internal/cbor"
)
const defaultJournalDPrio = journal.PriNotice
// NewJournalDWriter returns a zlog log destination
// to be used as parameter to New() calls. Writing logs
// to this writer will send the log messages to journalD
// running in this system.
func NewJournalDWriter() io.Writer {
return journalWriter{}
}
type journalWriter struct {
}
// levelToJPrio converts zlog Level string into
// journalD's priority values. JournalD has more
// priorities than zlog.
func levelToJPrio(zLevel string) journal.Priority {
lvl, _ := zlog.ParseLevel(zLevel)
switch lvl {
case zlog.TraceLevel:
return journal.PriDebug
case zlog.DebugLevel:
return journal.PriDebug
case zlog.InfoLevel:
return journal.PriInfo
case zlog.WarnLevel:
return journal.PriWarning
case zlog.ErrorLevel:
return journal.PriErr
case zlog.FatalLevel:
return journal.PriCrit
case zlog.PanicLevel:
return journal.PriEmerg
case zlog.NoLevel:
return journal.PriNotice
}
return defaultJournalDPrio
}
func (w journalWriter) Write(p []byte) (n int, err error) {
var event map[string]interface{}
origPLen := len(p)
p = cbor.DecodeIfBinaryToBytes(p)
d := json.NewDecoder(bytes.NewReader(p))
d.UseNumber()
err = d.Decode(&event)
jPrio := defaultJournalDPrio
args := make(map[string]string)
if err != nil {
return
}
if l, ok := event[zlog.LevelFieldName].(string); ok {
jPrio = levelToJPrio(l)
}
msg := ""
for key, value := range event {
jKey := strings.ToUpper(key)
switch key {
case zlog.LevelFieldName, zlog.TimestampFieldName:
continue
case zlog.MessageFieldName:
msg, _ = value.(string)
continue
}
switch v := value.(type) {
case string:
args[jKey] = v
case json.Number:
args[jKey] = fmt.Sprint(value)
default:
b, err := zlog.InterfaceMarshalFunc(value)
if err != nil {
args[jKey] = fmt.Sprintf("[error: %v]", err)
} else {
args[jKey] = string(b)
}
}
}
args["JSON"] = string(p)
err = journal.Send(msg, jPrio, args)
if err == nil {
n = origPLen
}
return
}
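Illustrative only: what a typical zlog line turns into once Write has decoded it. Given an input like the one used in the example test below, the call is roughly equivalent to:

// input:  {"level":"info","foo":"bar","message":"Journal Test"}
// effect: journal.Send("Journal Test", journal.PriInfo, map[string]string{
//             "FOO":  "bar",
//             "JSON": `{"level":"info","foo":"bar","message":"Journal Test"}`,
//         })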

View File

@ -1,86 +0,0 @@
// +build linux
package journald_test
import (
"bytes"
"io"
"testing"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/journald"
)
func ExampleNewJournalDWriter() {
log := zlog.New(journald.NewJournalDWriter())
log.Info().Str("foo", "bar").Uint64("small", 123).Float64("float", 3.14).Uint64("big", 1152921504606846976).Msg("Journal Test")
// Output:
}
/*
There is no automated way to verify the output - since the output is sent
to the journald process and the way to retrieve it is journalctl. Will find
a way to automate the process and fix this test.
$ journalctl -o verbose -f
Thu 2018-04-26 22:30:20.768136 PDT [s=3284d695bde946e4b5017c77a399237f;i=329f0;b=98c0dca0debc4b98a5b9534e910e7dd6;m=7a702e35dd4;t=56acdccd2ed0a;x=4690034cf0348614]
PRIORITY=6
_AUDIT_LOGINUID=1000
_BOOT_ID=98c0dca0debc4b98a5b9534e910e7dd6
_MACHINE_ID=926ed67eb4744580948de70fb474975e
_HOSTNAME=sprint
_UID=1000
_GID=1000
_CAP_EFFECTIVE=0
_SYSTEMD_SLICE=-.slice
_TRANSPORT=journal
_SYSTEMD_CGROUP=/
_AUDIT_SESSION=2945
MESSAGE=Journal Test
FOO=bar
BIG=1152921504606846976
_COMM=journald.test
SMALL=123
FLOAT=3.14
JSON={"level":"info","foo":"bar","small":123,"float":3.14,"big":1152921504606846976,"message":"Journal Test"}
_PID=27103
_SOURCE_REALTIME_TIMESTAMP=1524807020768136
*/
func TestWriteReturnsNoOfWrittenBytes(t *testing.T) {
input := []byte(`{"level":"info","time":1570912626,"message":"Starting..."}`)
wr := journald.NewJournalDWriter()
want := len(input)
got, err := wr.Write(input)
if err != nil {
t.Errorf("Unexpected error %v", err)
}
if want != got {
t.Errorf("Expected %d bytes to be written got %d", want, got)
}
}
func TestMultiWrite(t *testing.T) {
var (
w1 = new(bytes.Buffer)
w2 = new(bytes.Buffer)
w3 = journald.NewJournalDWriter()
)
zlog.ErrorHandler = func(err error) {
if err == io.ErrShortWrite {
t.Errorf("Unexpected ShortWriteError")
t.FailNow()
}
}
log := zlog.New(io.MultiWriter(w1, w2, w3)).With().Logger()
for i := 0; i < 10; i++ {
log.Info().Msg("Tick!")
}
}

View File

@ -1,47 +1,19 @@
package json package zerolog
import ( import "unicode/utf8"
"fmt"
"unicode/utf8"
)
const hex = "0123456789abcdef" const hex = "0123456789abcdef"
var noEscapeTable = [256]bool{} // appendJSONString encodes the input string to json and appends
func init() {
for i := 0; i <= 0x7e; i++ {
noEscapeTable[i] = i >= 0x20 && i != '\\' && i != '"'
}
}
// AppendStrings encodes the input strings to json and
// appends the encoded string list to the input byte slice.
func (e Encoder) AppendStrings(dst []byte, vals []string) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = e.AppendString(dst, vals[0])
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = e.AppendString(append(dst, ','), val)
}
}
dst = append(dst, ']')
return dst
}
// AppendString encodes the input string to json and appends
// the encoded string to the input byte slice. // the encoded string to the input byte slice.
// //
// The operation loops through each byte in the string looking // The operation loops through each byte in the string looking
// for characters that need json or utf8 encoding. If the string // for characters that need json or utf8 encoding. If the string
// does not need encoding, then the string is appended in its // does not need encoding, then the string is appended in it's
// entirety to the byte slice. // entirety to the byte slice.
// If we encounter a byte that does need encoding, switch up // If we encounter a byte that does need encoding, switch up
// the operation and perform a byte-by-byte read-encode-append. // the operation and perform a byte-by-byte read-encode-append.
func (Encoder) AppendString(dst []byte, s string) []byte { func appendJSONString(dst []byte, s string) []byte {
// Start with a double quote. // Start with a double quote.
dst = append(dst, '"') dst = append(dst, '"')
// Loop through each character in the string. // Loop through each character in the string.
@ -49,49 +21,24 @@ func (Encoder) AppendString(dst []byte, s string) []byte {
// Check if the character needs encoding. Control characters, slashes, // Check if the character needs encoding. Control characters, slashes,
// and the double quote need json encoding. Bytes above the ascii // and the double quote need json encoding. Bytes above the ascii
// boundary needs utf8 encoding. // boundary needs utf8 encoding.
if !noEscapeTable[s[i]] { if s[i] < 0x20 || s[i] > 0x7e || s[i] == '\\' || s[i] == '"' {
// We encountered a character that needs to be encoded. Switch // We encountered a character that needs to be encoded. Switch
// to complex version of the algorithm. // to complex version of the algorithm.
dst = appendStringComplex(dst, s, i) dst = appendJSONStringComplex(dst, s, i)
return append(dst, '"') return append(dst, '"')
} }
} }
// The string has no need for encoding and therefore is directly // The string has no need for encoding an therefore is directly
// appended to the byte slice. // appended to the byte slice.
dst = append(dst, s...) dst = append(dst, s...)
// End with a double quote // End with a double quote
return append(dst, '"') return append(dst, '"')
} }
// AppendStringers encodes the provided Stringer list to json and // appendJSONStringComplex is used by appendJSONString to take over an in
// appends the encoded Stringer list to the input byte slice.
func (e Encoder) AppendStringers(dst []byte, vals []fmt.Stringer) []byte {
if len(vals) == 0 {
return append(dst, '[', ']')
}
dst = append(dst, '[')
dst = e.AppendStringer(dst, vals[0])
if len(vals) > 1 {
for _, val := range vals[1:] {
dst = e.AppendStringer(append(dst, ','), val)
}
}
return append(dst, ']')
}
// AppendStringer encodes the input Stringer to json and appends the
// encoded Stringer value to the input byte slice.
func (e Encoder) AppendStringer(dst []byte, val fmt.Stringer) []byte {
if val == nil {
return e.AppendInterface(dst, nil)
}
return e.AppendString(dst, val.String())
}
//// appendStringComplex is used by appendString to take over an in
// progress JSON string encoding that encountered a character that needs // progress JSON string encoding that encountered a character that needs
// to be encoded. // to be encoded.
func appendStringComplex(dst []byte, s string, i int) []byte { func appendJSONStringComplex(dst []byte, s string, i int) []byte {
start := 0 start := 0
for i < len(s) { for i < len(s) {
b := s[i] b := s[i]
@ -99,7 +46,7 @@ func appendStringComplex(dst []byte, s string, i int) []byte {
r, size := utf8.DecodeRuneInString(s[i:]) r, size := utf8.DecodeRuneInString(s[i:])
if r == utf8.RuneError && size == 1 { if r == utf8.RuneError && size == 1 {
// In case of error, first append previous simple characters to // In case of error, first append previous simple characters to
// the byte slice if any and append a replacement character code // the byte slice if any and append a remplacement character code
// in place of the invalid sequence. // in place of the invalid sequence.
if start < i { if start < i {
dst = append(dst, s[start:i]...) dst = append(dst, s[start:i]...)
@ -112,7 +59,7 @@ func appendStringComplex(dst []byte, s string, i int) []byte {
i += size i += size
continue continue
} }
if noEscapeTable[b] { if b >= 0x20 && b <= 0x7e && b != '\\' && b != '"' {
i++ i++
continue continue
} }
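To make the fast-path/slow-path split above concrete, an illustrative sketch (same package, using the Encoder side of the diff):

var e Encoder
fmt.Println(string(e.AppendString(nil, "plain ascii text"))) // "plain ascii text" - no escaping needed, appended in one shot
fmt.Println(string(e.AppendString(nil, `say "hi"`)))         // "say \"hi\""       - switches to appendStringComplex at the first quote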

82
json_test.go Normal file
View File

@ -0,0 +1,82 @@
package zerolog
import (
"testing"
)
func TestAppendJSONString(t *testing.T) {
encodeStringTests := []struct {
in string
out string
}{
{"", `""`},
{"\\", `"\\"`},
{"\x00", `"\u0000"`},
{"\x01", `"\u0001"`},
{"\x02", `"\u0002"`},
{"\x03", `"\u0003"`},
{"\x04", `"\u0004"`},
{"\x05", `"\u0005"`},
{"\x06", `"\u0006"`},
{"\x07", `"\u0007"`},
{"\x08", `"\b"`},
{"\x09", `"\t"`},
{"\x0a", `"\n"`},
{"\x0b", `"\u000b"`},
{"\x0c", `"\f"`},
{"\x0d", `"\r"`},
{"\x0e", `"\u000e"`},
{"\x0f", `"\u000f"`},
{"\x10", `"\u0010"`},
{"\x11", `"\u0011"`},
{"\x12", `"\u0012"`},
{"\x13", `"\u0013"`},
{"\x14", `"\u0014"`},
{"\x15", `"\u0015"`},
{"\x16", `"\u0016"`},
{"\x17", `"\u0017"`},
{"\x18", `"\u0018"`},
{"\x19", `"\u0019"`},
{"\x1a", `"\u001a"`},
{"\x1b", `"\u001b"`},
{"\x1c", `"\u001c"`},
{"\x1d", `"\u001d"`},
{"\x1e", `"\u001e"`},
{"\x1f", `"\u001f"`},
{"✭", `"✭"`},
{"foo\xc2\x7fbar", `"foo\ufffd\u007fbar"`}, // invalid sequence
{"ascii", `"ascii"`},
{"\"a", `"\"a"`},
{"\x1fa", `"\u001fa"`},
{"foo\"bar\"baz", `"foo\"bar\"baz"`},
{"\x1ffoo\x1fbar\x1fbaz", `"\u001ffoo\u001fbar\u001fbaz"`},
{"emoji \u2764\ufe0f!", `"emoji ❤️!"`},
}
for _, tt := range encodeStringTests {
b := appendJSONString([]byte{}, tt.in)
if got, want := string(b), tt.out; got != want {
t.Errorf("appendJSONString(%q) = %#q, want %#q", tt.in, got, want)
}
}
}
func BenchmarkAppendJSONString(b *testing.B) {
tests := map[string]string{
"NoEncoding": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingFirst": `"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingMiddle": `aaaaaaaaaaaaaaaaaaaaaaaaa"aaaaaaaaaaaaaaaaaaaaaaaa`,
"EncodingLast": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"`,
"MultiBytesFirst": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa`,
"MultiBytesMiddle": `aaaaaaaaaaaaaaaaaaaaaaaaa❤aaaaaaaaaaaaaaaaaaaaaaaa`,
"MultiBytesLast": `aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa❤`,
}
for name, str := range tests {
b.Run(name, func(b *testing.B) {
buf := make([]byte, 0, 100)
for i := 0; i < b.N; i++ {
_ = appendJSONString(buf, str)
}
})
}
}
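Each benchmark case places the character needing escaping at a different position in the string. Locally these can be run in isolation with something like go test -run=NONE -bench=AppendJSONString -benchmem (exact flags may vary with your Go toolchain).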

359
log.go
View File

@ -1,13 +1,13 @@
// Package zlog provides a lightweight logging library dedicated to JSON logging. // Package zerolog provides a lightweight logging library dedicated to JSON logging.
// //
// A global Logger can be used for simple logging: // A global Logger can be used for simple logging:
// //
// import "tuxpa.in/a/zlog/log" // import "github.com/rs/zerolog/log"
// //
// log.Info().Msg("hello world") // log.Info().Msg("hello world")
// // Output: {"time":1494567715,"level":"info","message":"hello world"} // // Output: {"time":1494567715,"level":"info","message":"hello world"}
// //
// NOTE: To import the global logger, import the "log" subpackage "tuxpa.in/a/zlog/log". // NOTE: To import the global logger, import the "log" subpackage "github.com/rs/zerolog/log".
// //
// Fields can be added to log messages: // Fields can be added to log messages:
// //
@ -16,7 +16,7 @@
// //
// Create logger instance to manage different outputs: // Create logger instance to manage different outputs:
// //
// logger := zlog.New(os.Stderr).With().Timestamp().Logger() // logger := zerolog.New(os.Stderr).With().Timestamp().Logger()
// logger.Info(). // logger.Info().
// Str("foo", "bar"). // Str("foo", "bar").
// Msg("hello world") // Msg("hello world")
@ -30,7 +30,7 @@
// //
// Level logging // Level logging
// //
// zlog.SetGlobalLevel(zlog.InfoLevel) // zerolog.SetGlobalLevel(zerolog.InfoLevel)
// //
// log.Debug().Msg("filtered out message") // log.Debug().Msg("filtered out message")
// log.Info().Msg("routed message") // log.Info().Msg("routed message")
@ -62,53 +62,20 @@
// //
// Sample logs: // Sample logs:
// //
// sampled := log.Sample(&zlog.BasicSampler{N: 10}) // sampled := log.Sample(10)
// sampled.Info().Msg("will be logged every 10 messages") // sampled.Info().Msg("will be logged every 10 messages")
// //
// Log with contextual hooks: package zerolog
//
// // Create the hook:
// type SeverityHook struct{}
//
// func (h SeverityHook) Run(e *zlog.Event, level zlog.Level, msg string) {
// if level != zlog.NoLevel {
// e.Str("severity", level.String())
// }
// }
//
// // And use it:
// var h SeverityHook
// log := zlog.New(os.Stdout).Hook(h)
// log.Warn().Msg("")
// // Output: {"level":"warn","severity":"warn"}
//
// # Caveats
//
// There is no fields deduplication out-of-the-box.
// Using the same key multiple times creates new key in final JSON each time.
//
// logger := zlog.New(os.Stderr).With().Timestamp().Logger()
// logger.Info().
// Timestamp().
// Msg("dup")
// // Output: {"level":"info","time":1494567715,"time":1494567715,"message":"dup"}
//
// In this case, many consumers will take the last value,
// but this is not guaranteed; check yours if in doubt.
package zlog
import ( import (
"errors"
"fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"os" "os"
"strconv" "sync/atomic"
"strings"
) )
// Level defines log levels. // Level defines log levels.
type Level int8 type Level uint8
const ( const (
// DebugLevel defines debug log level. // DebugLevel defines debug log level.
@ -123,100 +90,50 @@ const (
FatalLevel FatalLevel
// PanicLevel defines panic log level. // PanicLevel defines panic log level.
PanicLevel PanicLevel
// NoLevel defines an absent log level.
NoLevel
// Disabled disables the logger. // Disabled disables the logger.
Disabled Disabled
// TraceLevel defines trace log level.
TraceLevel Level = -1
// Values less than TraceLevel are handled as numbers.
) )
func (l Level) String() string { func (l Level) String() string {
switch l { switch l {
case TraceLevel:
return LevelTraceValue
case DebugLevel: case DebugLevel:
return LevelDebugValue return "debug"
case InfoLevel: case InfoLevel:
return LevelInfoValue return "info"
case WarnLevel: case WarnLevel:
return LevelWarnValue return "warn"
case ErrorLevel: case ErrorLevel:
return LevelErrorValue return "error"
case FatalLevel: case FatalLevel:
return LevelFatalValue return "fatal"
case PanicLevel: case PanicLevel:
return LevelPanicValue return "panic"
case Disabled: }
return "disabled"
case NoLevel:
return "" return ""
} }
return strconv.Itoa(int(l))
}
// ParseLevel converts a level string into a zlog Level value. const (
// returns an error if the input string does not match known values. // Often samples log every 10 events.
func ParseLevel(levelStr string) (Level, error) { Often = 10
switch { // Sometimes samples log every 100 events.
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(TraceLevel)): Sometimes = 100
return TraceLevel, nil // Rarely samples log every 1000 events.
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(DebugLevel)): Rarely = 1000
return DebugLevel, nil )
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(InfoLevel)):
return InfoLevel, nil
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(WarnLevel)):
return WarnLevel, nil
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(ErrorLevel)):
return ErrorLevel, nil
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(FatalLevel)):
return FatalLevel, nil
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(PanicLevel)):
return PanicLevel, nil
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(Disabled)):
return Disabled, nil
case strings.EqualFold(levelStr, LevelFieldMarshalFunc(NoLevel)):
return NoLevel, nil
}
i, err := strconv.Atoi(levelStr)
if err != nil {
return NoLevel, fmt.Errorf("Unknown Level String: '%s', defaulting to NoLevel", levelStr)
}
if i > 127 || i < -128 {
return NoLevel, fmt.Errorf("Out-Of-Bounds Level: '%d', defaulting to NoLevel", i)
}
return Level(i), nil
}
// UnmarshalText implements encoding.TextUnmarshaler to allow for easy reading from toml/yaml/json formats var disabledEvent = newEvent(levelWriterAdapter{ioutil.Discard}, 0, false)
func (l *Level) UnmarshalText(text []byte) error {
if l == nil {
return errors.New("can't unmarshal a nil *Level")
}
var err error
*l, err = ParseLevel(string(text))
return err
}
// MarshalText implements encoding.TextMarshaler to allow for easy writing into toml/yaml/json formats
func (l Level) MarshalText() ([]byte, error) {
return []byte(LevelFieldMarshalFunc(l)), nil
}
// A Logger represents an active logging object that generates lines // A Logger represents an active logging object that generates lines
// of JSON output to an io.Writer. Each logging operation makes a single // of JSON output to an io.Writer. Each logging operation makes a single
// call to the Writer's Write method. There is no guarantee on access // call to the Writer's Write method. There is no guaranty on access
// serialization to the Writer. If your Writer is not thread safe, // serialization to the Writer. If your Writer is not thread safe,
// you may consider a sync wrapper. // you may consider a sync wrapper.
type Logger struct { type Logger struct {
w LevelWriter w LevelWriter
level Level level Level
sampler Sampler sample uint32
counter *uint32
context []byte context []byte
hooks []Hook
stack bool
} }
// New creates a root logger with given output writer. If the output writer implements // New creates a root logger with given output writer. If the output writer implements
@ -224,7 +141,7 @@ type Logger struct {
// one. // one.
// //
// Each logging operation makes a single call to the Writer's Write method. There is no // Each logging operation makes a single call to the Writer's Write method. There is no
// guarantee on access serialization to the Writer. If your Writer is not thread safe, // guaranty on access serialization to the Writer. If your Writer is not thread safe,
// you may consider using sync wrapper. // you may consider using sync wrapper.
func New(w io.Writer) Logger { func New(w io.Writer) Logger {
if w == nil { if w == nil {
@ -234,7 +151,7 @@ func New(w io.Writer) Logger {
if !ok { if !ok {
lw = levelWriterAdapter{w} lw = levelWriterAdapter{w}
} }
return Logger{w: lw, level: TraceLevel} return Logger{w: lw}
} }
// Nop returns a disabled logger for which all operation are no-op. // Nop returns a disabled logger for which all operation are no-op.
@ -242,22 +159,6 @@ func Nop() Logger {
return New(nil).Level(Disabled) return New(nil).Level(Disabled)
} }
// Output duplicates the current logger and sets w as its output.
func (l Logger) Output(w io.Writer) Logger {
l2 := New(w)
l2.level = l.level
l2.sampler = l.sampler
l2.stack = l.stack
if len(l.hooks) > 0 {
l2.hooks = append(l2.hooks, l.hooks...)
}
if l.context != nil {
l2.context = make([]byte, len(l.context), cap(l.context))
copy(l2.context, l.context)
}
return l2
}
// With creates a child logger with the field added to its context. // With creates a child logger with the field added to its context.
func (l Logger) With() Context { func (l Logger) With() Context {
context := l.context context := l.context
@ -265,170 +166,94 @@ func (l Logger) With() Context {
if context != nil { if context != nil {
l.context = append(l.context, context...) l.context = append(l.context, context...)
} else { } else {
// This is needed for AppendKey to not check len of input // first byte of context is presence of timestamp or not
// thus making it inlinable l.context = append(l.context, 0)
l.context = enc.AppendBeginMarker(l.context)
} }
return Context{l} return Context{l}
} }
// UpdateContext updates the internal logger's context.
//
// Use this method with caution. If unsure, prefer the With method.
func (l *Logger) UpdateContext(update func(c Context) Context) {
if l == disabledLogger {
return
}
if cap(l.context) == 0 {
l.context = make([]byte, 0, 500)
}
if len(l.context) == 0 {
l.context = enc.AppendBeginMarker(l.context)
}
c := update(Context{*l})
l.context = c.l.context
}
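A sketch of UpdateContext in use; the field order follows the current implementation, where the level is written before the stored context:

logger := zlog.New(os.Stdout)
logger.UpdateContext(func(c zlog.Context) zlog.Context {
	return c.Str("component", "api")
})
logger.Info().Msg("hello")
// {"level":"info","component":"api","message":"hello"}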
// Level creates a child logger with the minimum accepted level set to level. // Level creates a child logger with the minimum accepted level set to level.
func (l Logger) Level(lvl Level) Logger { func (l Logger) Level(lvl Level) Logger {
l.level = lvl return Logger{
return l w: l.w,
level: lvl,
sample: l.sample,
counter: l.counter,
context: l.context,
}
} }
// GetLevel returns the current Level of l. // Sample returns a logger that only let one message out of every to pass thru.
func (l Logger) GetLevel() Level { func (l Logger) Sample(every int) Logger {
return l.level if every == 0 {
// Create a child with no sampling.
return Logger{
w: l.w,
level: l.level,
context: l.context,
} }
// Sample returns a logger with the s sampler.
func (l Logger) Sample(s Sampler) Logger {
l.sampler = s
return l
} }
return Logger{
// Hook returns a logger with the h Hook. w: l.w,
func (l Logger) Hook(h Hook) Logger { level: l.level,
newHooks := make([]Hook, len(l.hooks), len(l.hooks)+1) sample: uint32(every),
copy(newHooks, l.hooks) counter: new(uint32),
l.hooks = append(newHooks, h) context: l.context,
return l
} }
// Trace starts a new message with trace level.
//
// You must call Msg on the returned event in order to send the event.
func (l *Logger) Trace() *Event {
return l.newEvent(TraceLevel, nil)
} }
// Debug starts a new message with debug level. // Debug starts a new message with debug level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func (l *Logger) Debug() *Event { func (l Logger) Debug() *Event {
return l.newEvent(DebugLevel, nil) return l.newEvent(DebugLevel, true, nil)
} }
// Info starts a new message with info level. // Info starts a new message with info level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func (l *Logger) Info() *Event { func (l Logger) Info() *Event {
return l.newEvent(InfoLevel, nil) return l.newEvent(InfoLevel, true, nil)
} }
// Warn starts a new message with warn level. // Warn starts a new message with warn level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func (l *Logger) Warn() *Event { func (l Logger) Warn() *Event {
return l.newEvent(WarnLevel, nil) return l.newEvent(WarnLevel, true, nil)
} }
// Error starts a new message with error level. // Error starts a new message with error level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func (l *Logger) Error() *Event { func (l Logger) Error() *Event {
return l.newEvent(ErrorLevel, nil) return l.newEvent(ErrorLevel, true, nil)
}
// Err starts a new message with error level with err as a field if not nil or
// with info level if err is nil.
//
// You must call Msg on the returned event in order to send the event.
func (l *Logger) Err(err error) *Event {
if err != nil {
return l.Error().Err(err)
}
return l.Info()
} }
// Fatal starts a new message with fatal level. The os.Exit(1) function // Fatal starts a new message with fatal level. The os.Exit(1) function
// is called by the Msg method, which terminates the program immediately. // is called by the Msg method.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func (l *Logger) Fatal() *Event { func (l Logger) Fatal() *Event {
return l.newEvent(FatalLevel, func(msg string) { os.Exit(1) }) return l.newEvent(FatalLevel, true, func(msg string) { os.Exit(1) })
} }
// Panic starts a new message with panic level. The panic() function // Panic starts a new message with panic level. The message is also sent
// is called by the Msg method, which stops the ordinary flow of a goroutine. // to the panic function.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func (l *Logger) Panic() *Event { func (l Logger) Panic() *Event {
return l.newEvent(PanicLevel, func(msg string) { panic(msg) }) return l.newEvent(PanicLevel, true, func(msg string) { panic(msg) })
}
// WithLevel starts a new message with level. Unlike Fatal and Panic
// methods, WithLevel does not terminate the program or stop the ordinary
// flow of a goroutine when used with their respective levels.
//
// You must call Msg on the returned event in order to send the event.
func (l *Logger) WithLevel(level Level) *Event {
switch level {
case TraceLevel:
return l.Trace()
case DebugLevel:
return l.Debug()
case InfoLevel:
return l.Info()
case WarnLevel:
return l.Warn()
case ErrorLevel:
return l.Error()
case FatalLevel:
return l.newEvent(FatalLevel, nil)
case PanicLevel:
return l.newEvent(PanicLevel, nil)
case NoLevel:
return l.Log()
case Disabled:
return nil
default:
return l.newEvent(level, nil)
}
} }
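As a usage note, and unlike calling Fatal or Panic directly, passing those levels here only tags the event (sketch, given a logger as above):

logger.WithLevel(zlog.FatalLevel).Msg("recorded at fatal level; os.Exit is not called")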
// Log starts a new message with no level. Setting GlobalLevel to Disabled // Log starts a new message with no level. Setting GlobalLevel to Disabled
// will still disable events produced by this method. // will still disable events produced by this method.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func (l *Logger) Log() *Event { func (l Logger) Log() *Event {
return l.newEvent(NoLevel, nil) // We use panic level with addLevelField=false to make Log passthrough all
} // levels except Disabled.
return l.newEvent(PanicLevel, false, nil)
// Print sends a log event using debug level and no extra field.
// Arguments are handled in the manner of fmt.Print.
func (l *Logger) Print(v ...interface{}) {
if e := l.Debug(); e.Enabled() {
e.CallerSkipFrame(1).Msg(fmt.Sprint(v...))
}
}
// Printf sends a log event using debug level and no extra field.
// Arguments are handled in the manner of fmt.Printf.
func (l *Logger) Printf(format string, v ...interface{}) {
if e := l.Debug(); e.Enabled() {
e.CallerSkipFrame(1).Msg(fmt.Sprintf(format, v...))
}
} }
// Write implements the io.Writer interface. This is useful to set as a writer // Write implements the io.Writer interface. This is useful to set as a writer
@ -439,40 +264,48 @@ func (l Logger) Write(p []byte) (n int, err error) {
// Trim CR added by stdlog. // Trim CR added by stdlog.
p = p[0 : n-1] p = p[0 : n-1]
} }
l.Log().CallerSkipFrame(1).Msg(string(p)) l.Log().Msg(string(p))
return return
} }
func (l *Logger) newEvent(level Level, done func(string)) *Event { func (l Logger) newEvent(level Level, addLevelField bool, done func(string)) *Event {
enabled := l.should(level) enabled := l.should(level)
if !enabled { if !enabled {
if done != nil { return disabledEvent
done("")
} }
return nil lvl := InfoLevel
if addLevelField {
lvl = level
} }
e := newEvent(l.w, level) e := newEvent(l.w, lvl, enabled)
e.done = done e.done = done
e.ch = l.hooks if l.context != nil && len(l.context) > 0 && l.context[0] > 0 {
if level != NoLevel && LevelFieldName != "" { // first byte of context is ts flag
e.Str(LevelFieldName, LevelFieldMarshalFunc(level)) e.buf = appendTimestamp(e.buf)
}
if addLevelField {
e.Str(LevelFieldName, level.String())
}
if l.sample > 0 && SampleFieldName != "" {
e.Uint32(SampleFieldName, l.sample)
} }
if l.context != nil && len(l.context) > 1 { if l.context != nil && len(l.context) > 1 {
e.buf = enc.AppendObjectData(e.buf, l.context) if len(e.buf) > 1 {
e.buf = append(e.buf, ',')
} }
if l.stack { e.buf = append(e.buf, l.context[1:]...)
e.Stack()
} }
return e return e
} }
// should returns true if the log event should be logged. // should returns true if the log event should be logged.
func (l *Logger) should(lvl Level) bool { func (l Logger) should(lvl Level) bool {
if lvl < l.level || lvl < GlobalLevel() { if lvl < l.level || lvl < globalLevel() {
return false return false
} }
if l.sampler != nil && !samplingDisabled() { if l.sample > 0 && l.counter != nil && !samplingDisabled() {
return l.sampler.Sample(lvl) c := atomic.AddUint32(l.counter, 1)
return c%l.sample == 0
} }
return true return true
} }
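A sketch of how the two checks in should compose, assuming the package-level SetGlobalLevel referenced elsewhere in this diff:

logger := zlog.New(os.Stdout).Level(zlog.InfoLevel)
zlog.SetGlobalLevel(zlog.WarnLevel)
logger.Info().Msg("dropped: below the global level")
logger.Warn().Msg("written: passes both the logger and the global threshold")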

View File

@ -1,168 +1,85 @@
// Package log provides a global logger for zlog. // Package log provides a global logger for zerolog.
package log package log
import ( import (
"context" "context"
"fmt"
"io"
"os" "os"
"time"
"github.com/rs/zerolog" "github.com/rs/zerolog"
"tuxpa.in/a/zlog"
) )
// Logger is the global logger. // Logger is the global logger.
var Logger = zlog.New(nil).Output( var Logger = zerolog.New(os.Stderr).With().Timestamp().Logger()
zerolog.ConsoleWriter{
Out: os.Stderr,
TimeFormat: time.RFC3339,
},
).With().Timestamp().Logger()
// Output duplicates the global logger and sets w as its output.
func Output(w io.Writer) zlog.Logger {
return Logger.Output(w)
}
// With creates a child logger with the field added to its context. // With creates a child logger with the field added to its context.
func With() zlog.Context { func With() zerolog.Context {
return Logger.With() return Logger.With()
} }
// Level creates a child logger with the minimum accepted level set to level. // Level crestes a child logger with the minium accepted level set to level.
func Level(level zlog.Level) zlog.Logger { func Level(level zerolog.Level) zerolog.Logger {
return Logger.Level(level) return Logger.Level(level)
} }
// Sample returns a logger with the s sampler. // Sample returns a logger that only let one message out of every to pass thru.
func Sample(s zlog.Sampler) zlog.Logger { func Sample(every int) zerolog.Logger {
return Logger.Sample(s) return Logger.Sample(every)
}
// Hook returns a logger with the h Hook.
func Hook(h zlog.Hook) zlog.Logger {
return Logger.Hook(h)
}
// Err starts a new message with error level with err as a field if not nil or
// with info level if err is nil.
//
// You must call Msg on the returned event in order to send the event.
func Err(err error) *zlog.Event {
return Logger.Err(err)
}
// Trace starts a new message with trace level.
//
// You must call Msg on the returned event in order to send the event.
func Trace() *zlog.Event {
return Logger.Trace()
} }
// Debug starts a new message with debug level. // Debug starts a new message with debug level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func Debug() *zlog.Event { func Debug() *zerolog.Event {
return Logger.Debug() return Logger.Debug()
} }
// Info starts a new message with info level. // Info starts a new message with info level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func Info() *zlog.Event { func Info() *zerolog.Event {
return Logger.Info() return Logger.Info()
} }
// Warn starts a new message with warn level. // Warn starts a new message with warn level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func Warn() *zlog.Event { func Warn() *zerolog.Event {
return Logger.Warn() return Logger.Warn()
} }
// Error starts a new message with error level. // Error starts a new message with error level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func Error() *zlog.Event { func Error() *zerolog.Event {
return Logger.Error() return Logger.Error()
} }
// Errorf sends a log event using debug level and no extra field.
// Arguments are handled in the manner of fmt.Errorf.
func Errorf(format string, v ...interface{}) {
Logger.Error().CallerSkipFrame(1).Msgf(format, v...)
}
func Errorln(args ...interface{}) {
Logger.Error().Msg(fmt.Sprintln(args...))
}
// Fatal starts a new message with fatal level. The os.Exit(1) function // Fatal starts a new message with fatal level. The os.Exit(1) function
// is called by the Msg method. // is called by the Msg method.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func Fatal() *zlog.Event { func Fatal() *zerolog.Event {
return Logger.Fatal() return Logger.Fatal()
} }
func Fatalf(format string, args ...interface{}) {
Logger.Fatal().Msgf(format, args...)
}
func Fatalln(args ...interface{}) {
Logger.Fatal().Msg(fmt.Sprintln(args...))
}
// Panic starts a new message with panic level. The message is also sent // Panic starts a new message with panic level. The message is also sent
// to the panic function. // to the panic function.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func Panic() *zlog.Event { func Panic() *zerolog.Event {
return Logger.Panic() return Logger.Panic()
} }
func Panicf(format string, args ...interface{}) { // Log starts a new message with no level. Setting zerolog.GlobalLevel to
Logger.Panic().Msgf(format, args...) // zerlog.Disabled will still disable events produced by this method.
}
func Panicln(args ...interface{}) {
Logger.Panic().Msg(fmt.Sprintln(args...))
}
// WithLevel starts a new message with level.
// //
// You must call Msg on the returned event in order to send the event. // You must call Msg on the returned event in order to send the event.
func WithLevel(level zlog.Level) *zlog.Event { func Log() *zerolog.Event {
return Logger.WithLevel(level)
}
// Log starts a new message with no level. Setting zlog.GlobalLevel to
// zlog.Disabled will still disable events produced by this method.
//
// You must call Msg on the returned event in order to send the event.
func Log() *zlog.Event {
return Logger.Log() return Logger.Log()
} }
// Print sends a log event using debug level and no extra field.
// Arguments are handled in the manner of fmt.Print.
func Print(v ...interface{}) {
Logger.Debug().CallerSkipFrame(1).Msg(fmt.Sprint(v...))
}
// Printf sends a log event using debug level and no extra field.
// Arguments are handled in the manner of fmt.Printf.
func Printf(format string, v ...interface{}) {
Logger.Debug().CallerSkipFrame(1).Msgf(format, v...)
}
func Println(args ...interface{}) {
Logger.Debug().Msg(fmt.Sprintln(args...))
}
// Ctx returns the Logger associated with the ctx. If no logger // Ctx returns the Logger associated with the ctx. If no logger
// is associated, a disabled logger is returned. // is associated, a disabled logger is returned.
func Ctx(ctx context.Context) *zlog.Logger { func Ctx(ctx context.Context) zerolog.Logger {
return zlog.Ctx(ctx) return zerolog.Ctx(ctx)
} }
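A sketch of pairing Ctx with a context-carrying logger, assuming the WithContext helper from zlog's ctx.go (not shown in this diff):

ctx := log.Logger.WithContext(context.Background())
log.Ctx(ctx).Info().Msg("logged through the context-scoped logger")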

View File

@ -1,163 +0,0 @@
//go:build !binary_log
// +build !binary_log
package log_test
import (
"errors"
"flag"
"os"
"time"
"tuxpa.in/a/zlog"
"tuxpa.in/a/zlog/log"
)
// setup would normally be an init() function, however, there seems
// to be something awry with the testing framework when we set the
// global Logger from an init()
func setup() {
// UNIX Time is faster and smaller than most timestamps
// If you set zlog.TimeFieldFormat to an empty string,
// logs will write with UNIX time
zlog.TimeFieldFormat = ""
// In order to always output a static time to stdout for these
// examples to pass, we need to override zlog.TimestampFunc
// and log.Logger globals -- you would not normally need to do this
zlog.TimestampFunc = func() time.Time {
return time.Date(2008, 1, 8, 17, 5, 05, 0, time.UTC)
}
log.Logger = zlog.New(os.Stdout).With().Timestamp().Logger()
}
// Simple logging example using the Print function in the log package
// Note that both Print and Printf are at the debug log level by default
func ExamplePrint() {
setup()
log.Print("hello world")
// Output: {"level":"debug","time":1199811905,"message":"hello world"}
}
// Simple logging example using the Printf function in the log package
func ExamplePrintf() {
setup()
log.Printf("hello %s", "world")
// Output: {"level":"debug","time":1199811905,"message":"hello world"}
}
// Example of a log with no particular "level"
func ExampleLog() {
setup()
log.Log().Msg("hello world")
// Output: {"time":1199811905,"message":"hello world"}
}
// Example of a conditional level based on the presence of an error.
func ExampleErr() {
setup()
err := errors.New("some error")
log.Err(err).Msg("hello world")
log.Err(nil).Msg("hello world")
// Output: {"level":"error","error":"some error","time":1199811905,"message":"hello world"}
// {"level":"info","time":1199811905,"message":"hello world"}
}
// Example of a log at a particular "level" (in this case, "trace")
func ExampleTrace() {
setup()
log.Trace().Msg("hello world")
// Output: {"level":"trace","time":1199811905,"message":"hello world"}
}
// Example of a log at a particular "level" (in this case, "debug")
func ExampleDebug() {
setup()
log.Debug().Msg("hello world")
// Output: {"level":"debug","time":1199811905,"message":"hello world"}
}
// Example of a log at a particular "level" (in this case, "info")
func ExampleInfo() {
setup()
log.Info().Msg("hello world")
// Output: {"level":"info","time":1199811905,"message":"hello world"}
}
// Example of a log at a particular "level" (in this case, "warn")
func ExampleWarn() {
setup()
log.Warn().Msg("hello world")
// Output: {"level":"warn","time":1199811905,"message":"hello world"}
}
// Example of a log at a particular "level" (in this case, "error")
func ExampleError() {
setup()
log.Error().Msg("hello world")
// Output: {"level":"error","time":1199811905,"message":"hello world"}
}
// Example of a log at a particular "level" (in this case, "fatal")
func ExampleFatal() {
setup()
err := errors.New("A repo man spends his life getting into tense situations")
service := "myservice"
log.Fatal().
Err(err).
Str("service", service).
Msgf("Cannot start %s", service)
// Outputs: {"level":"fatal","time":1199811905,"error":"A repo man spends his life getting into tense situations","service":"myservice","message":"Cannot start myservice"}
}
// TODO: Panic
// This example uses command-line flags to demonstrate various outputs
// depending on the chosen log level.
func Example() {
setup()
debug := flag.Bool("debug", false, "sets log level to debug")
flag.Parse()
// Default level for this example is info, unless debug flag is present
zlog.SetGlobalLevel(zlog.InfoLevel)
if *debug {
zlog.SetGlobalLevel(zlog.DebugLevel)
}
log.Debug().Msg("This message appears only when log level set to Debug")
log.Info().Msg("This message appears when log level set to Debug or Info")
if e := log.Debug(); e.Enabled() {
// Compute log output only if enabled.
value := "bar"
e.Str("foo", value).Msg("some debug message")
}
// Output: {"level":"info","time":1199811905,"message":"This message appears when log level set to Debug or Info"}
}
// TODO: Output
// TODO: With
// TODO: Level
// TODO: Sample
// TODO: Hook
// TODO: WithLevel
// TODO: Ctx

View File

@ -1,27 +1,24 @@
// +build !binary_log package zerolog_test
package zlog_test
import ( import (
"errors" "errors"
"fmt"
stdlog "log" stdlog "log"
"net"
"os" "os"
"time" "time"
"tuxpa.in/a/zlog" "github.com/rs/zerolog"
) )
func ExampleNew() { func ExampleNew() {
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
log.Info().Msg("hello world") log.Info().Msg("hello world")
// Output: {"level":"info","message":"hello world"} // Output: {"level":"info","message":"hello world"}
} }
func ExampleLogger_With() { func ExampleLogger_With() {
log := zlog.New(os.Stdout). log := zerolog.New(os.Stdout).
With(). With().
Str("foo", "bar"). Str("foo", "bar").
Logger() Logger()
@ -32,7 +29,7 @@ func ExampleLogger_With() {
} }
func ExampleLogger_Level() { func ExampleLogger_Level() {
log := zlog.New(os.Stdout).Level(zlog.WarnLevel) log := zerolog.New(os.Stdout).Level(zerolog.WarnLevel)
log.Info().Msg("filtered out message") log.Info().Msg("filtered out message")
log.Error().Msg("kept message") log.Error().Msg("kept message")
@ -41,73 +38,19 @@ func ExampleLogger_Level() {
} }
func ExampleLogger_Sample() { func ExampleLogger_Sample() {
log := zlog.New(os.Stdout).Sample(&zlog.BasicSampler{N: 2}) log := zerolog.New(os.Stdout).Sample(2)
log.Info().Msg("message 1") log.Info().Msg("message 1")
log.Info().Msg("message 2") log.Info().Msg("message 2")
log.Info().Msg("message 3") log.Info().Msg("message 3")
log.Info().Msg("message 4") log.Info().Msg("message 4")
// Output: {"level":"info","message":"message 1"} // Output: {"level":"info","sample":2,"message":"message 2"}
// {"level":"info","message":"message 3"} // {"level":"info","sample":2,"message":"message 4"}
}
type LevelNameHook struct{}
func (h LevelNameHook) Run(e *zlog.Event, l zlog.Level, msg string) {
if l != zlog.NoLevel {
e.Str("level_name", l.String())
} else {
e.Str("level_name", "NoLevel")
}
}
type MessageHook string
func (h MessageHook) Run(e *zlog.Event, l zlog.Level, msg string) {
e.Str("the_message", msg)
}
func ExampleLogger_Hook() {
var levelNameHook LevelNameHook
var messageHook MessageHook = "The message"
log := zlog.New(os.Stdout).Hook(levelNameHook).Hook(messageHook)
log.Info().Msg("hello world")
// Output: {"level":"info","level_name":"info","the_message":"hello world","message":"hello world"}
}
func ExampleLogger_Print() {
log := zlog.New(os.Stdout)
log.Print("hello world")
// Output: {"level":"debug","message":"hello world"}
}
func ExampleLogger_Printf() {
log := zlog.New(os.Stdout)
log.Printf("hello %s", "world")
// Output: {"level":"debug","message":"hello world"}
}
func ExampleLogger_Trace() {
log := zlog.New(os.Stdout)
log.Trace().
Str("foo", "bar").
Int("n", 123).
Msg("hello world")
// Output: {"level":"trace","foo":"bar","n":123,"message":"hello world"}
} }
func ExampleLogger_Debug() { func ExampleLogger_Debug() {
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
log.Debug(). log.Debug().
Str("foo", "bar"). Str("foo", "bar").
@ -118,7 +61,7 @@ func ExampleLogger_Debug() {
} }
func ExampleLogger_Info() { func ExampleLogger_Info() {
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
log.Info(). log.Info().
Str("foo", "bar"). Str("foo", "bar").
@ -129,7 +72,7 @@ func ExampleLogger_Info() {
} }
func ExampleLogger_Warn() { func ExampleLogger_Warn() {
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
log.Warn(). log.Warn().
Str("foo", "bar"). Str("foo", "bar").
@ -139,7 +82,7 @@ func ExampleLogger_Warn() {
} }
func ExampleLogger_Error() { func ExampleLogger_Error() {
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
log.Error(). log.Error().
Err(errors.New("some error")). Err(errors.New("some error")).
@ -148,17 +91,8 @@ func ExampleLogger_Error() {
// Output: {"level":"error","error":"some error","message":"error doing something"} // Output: {"level":"error","error":"some error","message":"error doing something"}
} }
func ExampleLogger_WithLevel() {
log := zlog.New(os.Stdout)
log.WithLevel(zlog.InfoLevel).
Msg("hello world")
// Output: {"level":"info","message":"hello world"}
}
func ExampleLogger_Write() { func ExampleLogger_Write() {
log := zlog.New(os.Stdout).With(). log := zerolog.New(os.Stdout).With().
Str("foo", "bar"). Str("foo", "bar").
Logger() Logger()
@ -171,7 +105,7 @@ func ExampleLogger_Write() {
} }
func ExampleLogger_Log() { func ExampleLogger_Log() {
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
log.Log(). log.Log().
Str("foo", "bar"). Str("foo", "bar").
@ -182,11 +116,11 @@ func ExampleLogger_Log() {
} }
func ExampleEvent_Dict() { func ExampleEvent_Dict() {
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
log.Log(). log.Log().
Str("foo", "bar"). Str("foo", "bar").
Dict("dict", zlog.Dict(). Dict("dict", zerolog.Dict().
Str("bar", "baz"). Str("bar", "baz").
Int("n", 1), Int("n", 1),
). ).
@ -195,106 +129,8 @@ func ExampleEvent_Dict() {
// Output: {"foo":"bar","dict":{"bar":"baz","n":1},"message":"hello world"} // Output: {"foo":"bar","dict":{"bar":"baz","n":1},"message":"hello world"}
} }
type User struct {
Name string
Age int
Created time.Time
}
func (u User) MarshalZerologObject(e *zlog.Event) {
e.Str("name", u.Name).
Int("age", u.Age).
Time("created", u.Created)
}
type Price struct {
val uint64
prec int
unit string
}
func (p Price) MarshalZerologObject(e *zlog.Event) {
denom := uint64(1)
for i := 0; i < p.prec; i++ {
denom *= 10
}
result := []byte(p.unit)
result = append(result, fmt.Sprintf("%d.%d", p.val/denom, p.val%denom)...)
e.Str("price", string(result))
}
type Users []User
func (uu Users) MarshalZerologArray(a *zlog.Array) {
for _, u := range uu {
a.Object(u)
}
}
func ExampleEvent_Array() {
log := zlog.New(os.Stdout)
log.Log().
Str("foo", "bar").
Array("array", zlog.Arr().
Str("baz").
Int(1).
Dict(zlog.Dict().
Str("bar", "baz").
Int("n", 1),
),
).
Msg("hello world")
// Output: {"foo":"bar","array":["baz",1,{"bar":"baz","n":1}],"message":"hello world"}
}
func ExampleEvent_Array_object() {
log := zlog.New(os.Stdout)
// Users implements zlog.LogArrayMarshaler
u := Users{
User{"John", 35, time.Time{}},
User{"Bob", 55, time.Time{}},
}
log.Log().
Str("foo", "bar").
Array("users", u).
Msg("hello world")
// Output: {"foo":"bar","users":[{"name":"John","age":35,"created":"0001-01-01T00:00:00Z"},{"name":"Bob","age":55,"created":"0001-01-01T00:00:00Z"}],"message":"hello world"}
}
func ExampleEvent_Object() {
log := zlog.New(os.Stdout)
// User implements zlog.LogObjectMarshaler
u := User{"John", 35, time.Time{}}
log.Log().
Str("foo", "bar").
Object("user", u).
Msg("hello world")
// Output: {"foo":"bar","user":{"name":"John","age":35,"created":"0001-01-01T00:00:00Z"},"message":"hello world"}
}
func ExampleEvent_EmbedObject() {
log := zlog.New(os.Stdout)
price := Price{val: 6449, prec: 2, unit: "$"}
log.Log().
Str("foo", "bar").
EmbedObject(price).
Msg("hello world")
// Output: {"foo":"bar","price":"$64.49","message":"hello world"}
}
func ExampleEvent_Interface() { func ExampleEvent_Interface() {
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
obj := struct { obj := struct {
Name string `json:"name"` Name string `json:"name"`
@ -311,9 +147,9 @@ func ExampleEvent_Interface() {
} }
func ExampleEvent_Dur() { func ExampleEvent_Dur() {
d := 10 * time.Second d := time.Duration(10 * time.Second)
log := zlog.New(os.Stdout) log := zerolog.New(os.Stdout)
log.Log(). log.Log().
Str("foo", "bar"). Str("foo", "bar").
@ -323,58 +159,10 @@ func ExampleEvent_Dur() {
// Output: {"foo":"bar","dur":10000,"message":"hello world"} // Output: {"foo":"bar","dur":10000,"message":"hello world"}
} }
func ExampleEvent_Durs() {
d := []time.Duration{
10 * time.Second,
20 * time.Second,
}
log := zlog.New(os.Stdout)
log.Log().
Str("foo", "bar").
Durs("durs", d).
Msg("hello world")
// Output: {"foo":"bar","durs":[10000,20000],"message":"hello world"}
}
func ExampleEvent_Fields_map() {
fields := map[string]interface{}{
"bar": "baz",
"n": 1,
}
log := zlog.New(os.Stdout)
log.Log().
Str("foo", "bar").
Fields(fields).
Msg("hello world")
// Output: {"foo":"bar","bar":"baz","n":1,"message":"hello world"}
}
func ExampleEvent_Fields_slice() {
fields := []interface{}{
"bar", "baz",
"n", 1,
}
log := zlog.New(os.Stdout)
log.Log().
Str("foo", "bar").
Fields(fields).
Msg("hello world")
// Output: {"foo":"bar","bar":"baz","n":1,"message":"hello world"}
}
func ExampleContext_Dict() { func ExampleContext_Dict() {
log := zlog.New(os.Stdout).With(). log := zerolog.New(os.Stdout).With().
Str("foo", "bar"). Str("foo", "bar").
Dict("dict", zlog.Dict(). Dict("dict", zerolog.Dict().
Str("bar", "baz"). Str("bar", "baz").
Int("n", 1), Int("n", 1),
).Logger() ).Logger()
@ -384,64 +172,6 @@ func ExampleContext_Dict() {
// Output: {"foo":"bar","dict":{"bar":"baz","n":1},"message":"hello world"} // Output: {"foo":"bar","dict":{"bar":"baz","n":1},"message":"hello world"}
} }
func ExampleContext_Array() {
log := zlog.New(os.Stdout).With().
Str("foo", "bar").
Array("array", zlog.Arr().
Str("baz").
Int(1),
).Logger()
log.Log().Msg("hello world")
// Output: {"foo":"bar","array":["baz",1],"message":"hello world"}
}
func ExampleContext_Array_object() {
// Users implements zlog.LogArrayMarshaler
u := Users{
User{"John", 35, time.Time{}},
User{"Bob", 55, time.Time{}},
}
log := zlog.New(os.Stdout).With().
Str("foo", "bar").
Array("users", u).
Logger()
log.Log().Msg("hello world")
// Output: {"foo":"bar","users":[{"name":"John","age":35,"created":"0001-01-01T00:00:00Z"},{"name":"Bob","age":55,"created":"0001-01-01T00:00:00Z"}],"message":"hello world"}
}
func ExampleContext_Object() {
// User implements zlog.LogObjectMarshaler
u := User{"John", 35, time.Time{}}
log := zlog.New(os.Stdout).With().
Str("foo", "bar").
Object("user", u).
Logger()
log.Log().Msg("hello world")
// Output: {"foo":"bar","user":{"name":"John","age":35,"created":"0001-01-01T00:00:00Z"},"message":"hello world"}
}
func ExampleContext_EmbedObject() {
price := Price{val: 6449, prec: 2, unit: "$"}
log := zlog.New(os.Stdout).With().
Str("foo", "bar").
EmbedObject(price).
Logger()
log.Log().Msg("hello world")
// Output: {"foo":"bar","price":"$64.49","message":"hello world"}
}
func ExampleContext_Interface() { func ExampleContext_Interface() {
obj := struct { obj := struct {
Name string `json:"name"` Name string `json:"name"`
@ -449,7 +179,7 @@ func ExampleContext_Interface() {
Name: "john", Name: "john",
} }
log := zlog.New(os.Stdout).With(). log := zerolog.New(os.Stdout).With().
Str("foo", "bar"). Str("foo", "bar").
Interface("obj", obj). Interface("obj", obj).
Logger() Logger()
@ -460,9 +190,9 @@ func ExampleContext_Interface() {
} }
func ExampleContext_Dur() { func ExampleContext_Dur() {
d := 10 * time.Second d := time.Duration(10 * time.Second)
log := zlog.New(os.Stdout).With(). log := zerolog.New(os.Stdout).With().
Str("foo", "bar"). Str("foo", "bar").
Dur("dur", d). Dur("dur", d).
Logger() Logger()
@ -471,84 +201,3 @@ func ExampleContext_Dur() {
// Output: {"foo":"bar","dur":10000,"message":"hello world"} // Output: {"foo":"bar","dur":10000,"message":"hello world"}
} }
func ExampleContext_Durs() {
d := []time.Duration{
10 * time.Second,
20 * time.Second,
}
log := zlog.New(os.Stdout).With().
Str("foo", "bar").
Durs("durs", d).
Logger()
log.Log().Msg("hello world")
// Output: {"foo":"bar","durs":[10000,20000],"message":"hello world"}
}
func ExampleContext_IPAddr() {
hostIP := net.IP{192, 168, 0, 100}
log := zlog.New(os.Stdout).With().
IPAddr("HostIP", hostIP).
Logger()
log.Log().Msg("hello world")
// Output: {"HostIP":"192.168.0.100","message":"hello world"}
}
func ExampleContext_IPPrefix() {
route := net.IPNet{IP: net.IP{192, 168, 0, 0}, Mask: net.CIDRMask(24, 32)}
log := zlog.New(os.Stdout).With().
IPPrefix("Route", route).
Logger()
log.Log().Msg("hello world")
// Output: {"Route":"192.168.0.0/24","message":"hello world"}
}
func ExampleContext_MACAddr() {
mac := net.HardwareAddr{0x00, 0x14, 0x22, 0x01, 0x23, 0x45}
log := zlog.New(os.Stdout).With().
MACAddr("hostMAC", mac).
Logger()
log.Log().Msg("hello world")
// Output: {"hostMAC":"00:14:22:01:23:45","message":"hello world"}
}
func ExampleContext_Fields_map() {
fields := map[string]interface{}{
"bar": "baz",
"n": 1,
}
log := zlog.New(os.Stdout).With().
Str("foo", "bar").
Fields(fields).
Logger()
log.Log().Msg("hello world")
// Output: {"foo":"bar","bar":"baz","n":1,"message":"hello world"}
}
func ExampleContext_Fields_slice() {
fields := []interface{}{
"bar", "baz",
"n", 1,
}
log := zlog.New(os.Stdout).With().
Str("foo", "bar").
Fields(fields).
Logger()
log.Log().Msg("hello world")
// Output: {"foo":"bar","bar":"baz","n":1,"message":"hello world"}
}

File diff suppressed because it is too large.

View File

@ -1,5 +0,0 @@
// +build !go1.12
package zlog
const contextCallerSkipFrameCount = 3

View File

@ -1,82 +0,0 @@
package pkgerrors
import (
"github.com/pkg/errors"
)
var (
StackSourceFileName = "source"
StackSourceLineName = "line"
StackSourceFunctionName = "func"
)
type state struct {
b []byte
}
// Write implements fmt.Formatter interface.
func (s *state) Write(b []byte) (n int, err error) {
s.b = b
return len(b), nil
}
// Width implements fmt.Formatter interface.
func (s *state) Width() (wid int, ok bool) {
return 0, false
}
// Precision implements fmt.Formatter interface.
func (s *state) Precision() (prec int, ok bool) {
return 0, false
}
// Flag implements fmt.Formatter interface.
func (s *state) Flag(c int) bool {
return false
}
func frameField(f errors.Frame, s *state, c rune) string {
f.Format(s, c)
return string(s.b)
}
// MarshalStack implements pkg/errors stack trace marshaling.
//
// zlog.ErrorStackMarshaler = MarshalStack
func MarshalStack(err error) interface{} {
type stackTracer interface {
StackTrace() errors.StackTrace
}
var sterr stackTracer
var ok bool
for err != nil {
sterr, ok = err.(stackTracer)
if ok {
break
}
u, ok := err.(interface {
Unwrap() error
})
if !ok {
return nil
}
err = u.Unwrap()
}
if sterr == nil {
return nil
}
st := sterr.StackTrace()
s := &state{}
out := make([]map[string]string, 0, len(st))
for _, frame := range st {
out = append(out, map[string]string{
StackSourceFileName: frameField(frame, s, 's'),
StackSourceLineName: frameField(frame, s, 'd'),
StackSourceFunctionName: frameField(frame, s, 'n'),
})
}
return out
}
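A hedged usage sketch for the stack marshaler above, seen from an application's point of view (the pkgerrors import path is assumed to follow the fork's layout; the wrapped error is illustrative):

package main

import (
	"os"

	"github.com/pkg/errors"

	"tuxpa.in/a/zlog"
	"tuxpa.in/a/zlog/pkgerrors"
)

func main() {
	// Install the stack marshaler once, at startup.
	zlog.ErrorStackMarshaler = pkgerrors.MarshalStack

	logger := zlog.New(os.Stderr)
	err := errors.Wrap(errors.New("connection refused"), "dial backend")

	// Stack() asks the event to serialize the error's stack trace
	// using the marshaler installed above, alongside the "error" field.
	logger.Error().Stack().Err(err).Msg("request failed")
}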

View File

@ -1,58 +0,0 @@
// +build !binary_log
package pkgerrors
import (
"bytes"
"fmt"
"regexp"
"testing"
"github.com/pkg/errors"
"tuxpa.in/a/zlog"
)
func TestLogStack(t *testing.T) {
zlog.ErrorStackMarshaler = MarshalStack
out := &bytes.Buffer{}
log := zlog.New(out)
err := fmt.Errorf("from error: %w", errors.New("error message"))
log.Log().Stack().Err(err).Msg("")
got := out.String()
want := `\{"stack":\[\{"func":"TestLogStack","line":"21","source":"stacktrace_test.go"\},.*\],"error":"from error: error message"\}\n`
if ok, _ := regexp.MatchString(want, got); !ok {
t.Errorf("invalid log output:\ngot: %v\nwant: %v", got, want)
}
}
func TestLogStackFromContext(t *testing.T) {
zlog.ErrorStackMarshaler = MarshalStack
out := &bytes.Buffer{}
log := zlog.New(out).With().Stack().Logger() // calling Stack() on log context instead of event
err := fmt.Errorf("from error: %w", errors.New("error message"))
log.Log().Err(err).Msg("") // not explicitly calling Stack()
got := out.String()
want := `\{"stack":\[\{"func":"TestLogStackFromContext","line":"37","source":"stacktrace_test.go"\},.*\],"error":"from error: error message"\}\n`
if ok, _ := regexp.MatchString(want, got); !ok {
t.Errorf("invalid log output:\ngot: %v\nwant: %v", got, want)
}
}
func BenchmarkLogStack(b *testing.B) {
zlog.ErrorStackMarshaler = MarshalStack
out := &bytes.Buffer{}
log := zlog.New(out)
err := errors.Wrap(errors.New("error message"), "from error")
b.ReportAllocs()
for i := 0; i < b.N; i++ {
log.Log().Stack().Err(err).Msg("")
out.Reset()
}
}

Binary file not shown (image diff; before: 82 KiB).

View File

@ -1,134 +0,0 @@
package zlog
import (
"math/rand"
"sync/atomic"
"time"
)
var (
// Often samples log every ~ 10 events.
Often = RandomSampler(10)
// Sometimes samples log every ~ 100 events.
Sometimes = RandomSampler(100)
// Rarely samples log every ~ 1000 events.
Rarely = RandomSampler(1000)
)
// Sampler defines an interface to a log sampler.
type Sampler interface {
// Sample returns true if the event should be part of the sample, false if
// the event should be dropped.
Sample(lvl Level) bool
}
// RandomSampler use a PRNG to randomly sample an event out of N events,
// regardless of their level.
type RandomSampler uint32
// Sample implements the Sampler interface.
func (s RandomSampler) Sample(lvl Level) bool {
if s <= 0 {
return false
}
if rand.Intn(int(s)) != 0 {
return false
}
return true
}
// BasicSampler is a sampler that will send every Nth events, regardless of
// their level.
type BasicSampler struct {
N uint32
counter uint32
}
// Sample implements the Sampler interface.
func (s *BasicSampler) Sample(lvl Level) bool {
n := s.N
if n == 1 {
return true
}
c := atomic.AddUint32(&s.counter, 1)
return c%n == 1
}
// BurstSampler lets Burst events pass per Period, then passes the decision to
// NextSampler. If NextSampler is not set, all subsequent events are rejected.
type BurstSampler struct {
// Burst is the maximum number of event per period allowed before calling
// NextSampler.
Burst uint32
// Period defines the burst period. If 0, NextSampler is always called.
Period time.Duration
// NextSampler is the sampler used after the burst is reached. If nil,
// events are always rejected after the burst.
NextSampler Sampler
counter uint32
resetAt int64
}
// Sample implements the Sampler interface.
func (s *BurstSampler) Sample(lvl Level) bool {
if s.Burst > 0 && s.Period > 0 {
if s.inc() <= s.Burst {
return true
}
}
if s.NextSampler == nil {
return false
}
return s.NextSampler.Sample(lvl)
}
func (s *BurstSampler) inc() uint32 {
now := time.Now().UnixNano()
resetAt := atomic.LoadInt64(&s.resetAt)
var c uint32
if now > resetAt {
c = 1
atomic.StoreUint32(&s.counter, c)
newResetAt := now + s.Period.Nanoseconds()
reset := atomic.CompareAndSwapInt64(&s.resetAt, resetAt, newResetAt)
if !reset {
// Lost the race with another goroutine trying to reset.
c = atomic.AddUint32(&s.counter, 1)
}
} else {
c = atomic.AddUint32(&s.counter, 1)
}
return c
}
// LevelSampler applies a different sampler for each level.
type LevelSampler struct {
TraceSampler, DebugSampler, InfoSampler, WarnSampler, ErrorSampler Sampler
}
func (s LevelSampler) Sample(lvl Level) bool {
switch lvl {
case TraceLevel:
if s.TraceSampler != nil {
return s.TraceSampler.Sample(lvl)
}
case DebugLevel:
if s.DebugSampler != nil {
return s.DebugSampler.Sample(lvl)
}
case InfoLevel:
if s.InfoSampler != nil {
return s.InfoSampler.Sample(lvl)
}
case WarnLevel:
if s.WarnSampler != nil {
return s.WarnSampler.Sample(lvl)
}
case ErrorLevel:
if s.ErrorSampler != nil {
return s.ErrorSampler.Sample(lvl)
}
}
return true
}
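To tie the samplers above to a logger, a minimal sketch (the burst, period, and N values are arbitrary):

package main

import (
	"os"
	"time"

	"tuxpa.in/a/zlog"
)

func main() {
	sampled := zlog.New(os.Stdout).Sample(&zlog.BurstSampler{
		Burst:       5,                          // let the first 5 events of each period through
		Period:      time.Second,                // then hand the decision to NextSampler
		NextSampler: &zlog.BasicSampler{N: 100}, // ...which keeps roughly 1 in every 100 events
	})
	for i := 0; i < 1000; i++ {
		sampled.Info().Int("i", i).Msg("noisy event")
	}
}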

View File

@ -1,84 +0,0 @@
// +build !binary_log
package zlog
import (
"testing"
"time"
)
var samplers = []struct {
name string
sampler func() Sampler
total int
wantMin int
wantMax int
}{
{
"BasicSampler_1",
func() Sampler {
return &BasicSampler{N: 1}
},
100, 100, 100,
},
{
"BasicSampler_5",
func() Sampler {
return &BasicSampler{N: 5}
},
100, 20, 20,
},
{
"RandomSampler",
func() Sampler {
return RandomSampler(5)
},
100, 10, 30,
},
{
"BurstSampler",
func() Sampler {
return &BurstSampler{Burst: 20, Period: time.Second}
},
100, 20, 20,
},
{
"BurstSamplerNext",
func() Sampler {
return &BurstSampler{Burst: 20, Period: time.Second, NextSampler: &BasicSampler{N: 5}}
},
120, 40, 40,
},
}
func TestSamplers(t *testing.T) {
for i := range samplers {
s := samplers[i]
t.Run(s.name, func(t *testing.T) {
sampler := s.sampler()
got := 0
for t := s.total; t > 0; t-- {
if sampler.Sample(0) {
got++
}
}
if got < s.wantMin || got > s.wantMax {
t.Errorf("%s.Sample(0) == true %d on %d, want [%d, %d]", s.name, got, s.total, s.wantMin, s.wantMax)
}
})
}
}
func BenchmarkSamplers(b *testing.B) {
for i := range samplers {
s := samplers[i]
b.Run(s.name, func(b *testing.B) {
sampler := s.sampler()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
sampler.Sample(0)
}
})
})
}
}

View File

@ -1,15 +1,8 @@
// +build !windows // +build !windows
// +build !binary_log
package zlog package zerolog
import ( import "io"
"io"
)
// See http://cee.mitre.org/language/1.0-beta1/clt.html#syslog
// or https://www.rsyslog.com/json-elasticsearch/
const ceePrefix = "@cee:"
// SyslogWriter is an interface matching a syslog.Writer struct. // SyslogWriter is an interface matching a syslog.Writer struct.
type SyslogWriter interface { type SyslogWriter interface {
@ -24,57 +17,36 @@ type SyslogWriter interface {
type syslogWriter struct { type syslogWriter struct {
w SyslogWriter w SyslogWriter
prefix string
} }
// SyslogLevelWriter wraps a SyslogWriter and call the right syslog level // SyslogLevelWriter wraps a SyslogWriter and call the right syslog level
// method matching the zlog level. // method matching the zerolog level.
func SyslogLevelWriter(w SyslogWriter) LevelWriter { func SyslogLevelWriter(w SyslogWriter) LevelWriter {
return syslogWriter{w, ""} return syslogWriter{w}
}
// SyslogCEEWriter wraps a SyslogWriter with a SyslogLevelWriter that adds a
// MITRE CEE prefix for JSON syslog entries, compatible with rsyslog
// and syslog-ng JSON logging support.
// See https://www.rsyslog.com/json-elasticsearch/
func SyslogCEEWriter(w SyslogWriter) LevelWriter {
return syslogWriter{w, ceePrefix}
} }
func (sw syslogWriter) Write(p []byte) (n int, err error) { func (sw syslogWriter) Write(p []byte) (n int, err error) {
var pn int return sw.w.Write(p)
if sw.prefix != "" {
pn, err = sw.w.Write([]byte(sw.prefix))
if err != nil {
return pn, err
}
}
n, err = sw.w.Write(p)
return pn + n, err
} }
// WriteLevel implements LevelWriter interface. // WriteLevel implements LevelWriter interface.
func (sw syslogWriter) WriteLevel(level Level, p []byte) (n int, err error) { func (sw syslogWriter) WriteLevel(level Level, p []byte) (n int, err error) {
switch level { switch level {
case TraceLevel:
case DebugLevel: case DebugLevel:
err = sw.w.Debug(sw.prefix + string(p)) err = sw.w.Debug(string(p))
case InfoLevel: case InfoLevel:
err = sw.w.Info(sw.prefix + string(p)) err = sw.w.Info(string(p))
case WarnLevel: case WarnLevel:
err = sw.w.Warning(sw.prefix + string(p)) err = sw.w.Warning(string(p))
case ErrorLevel: case ErrorLevel:
err = sw.w.Err(sw.prefix + string(p)) err = sw.w.Err(string(p))
case FatalLevel: case FatalLevel:
err = sw.w.Emerg(sw.prefix + string(p)) err = sw.w.Emerg(string(p))
case PanicLevel: case PanicLevel:
err = sw.w.Crit(sw.prefix + string(p)) err = sw.w.Crit(string(p))
case NoLevel:
err = sw.w.Info(sw.prefix + string(p))
default: default:
panic("invalid level") panic("invalid level")
} }
// Any CEE prefix is not part of the message, so we don't include its length
n = len(p) n = len(p)
return return
} }
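A hedged sketch of wiring the syslog writers above to the standard log/syslog package (facility and tag are arbitrary; log/syslog is unavailable on Windows, matching the build tag above):

package main

import (
	"log/syslog"

	"tuxpa.in/a/zlog"
)

func main() {
	w, err := syslog.New(syslog.LOG_DAEMON|syslog.LOG_INFO, "myapp")
	if err != nil {
		panic(err)
	}
	// SyslogCEEWriter adds the @cee: prefix for rsyslog/syslog-ng JSON parsing;
	// use SyslogLevelWriter(w) instead if no prefix is wanted.
	logger := zlog.New(zlog.SyslogCEEWriter(w))
	logger.Info().Str("component", "worker").Msg("started")
}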

View File

@ -1,14 +1,7 @@
// +build !binary_log package zerolog
// +build !windows
package zlog import "testing"
import "reflect"
import (
"bytes"
"reflect"
"strings"
"testing"
)
type syslogEvent struct { type syslogEvent struct {
level string level string
@ -21,10 +14,6 @@ type syslogTestWriter struct {
func (w *syslogTestWriter) Write(p []byte) (int, error) { func (w *syslogTestWriter) Write(p []byte) (int, error) {
return 0, nil return 0, nil
} }
func (w *syslogTestWriter) Trace(m string) error {
w.events = append(w.events, syslogEvent{"Trace", m})
return nil
}
func (w *syslogTestWriter) Debug(m string) error { func (w *syslogTestWriter) Debug(m string) error {
w.events = append(w.events, syslogEvent{"Debug", m}) w.events = append(w.events, syslogEvent{"Debug", m})
return nil return nil
@ -53,56 +42,17 @@ func (w *syslogTestWriter) Crit(m string) error {
func TestSyslogWriter(t *testing.T) { func TestSyslogWriter(t *testing.T) {
sw := &syslogTestWriter{} sw := &syslogTestWriter{}
log := New(SyslogLevelWriter(sw)) log := New(SyslogLevelWriter(sw))
log.Trace().Msg("trace")
log.Debug().Msg("debug") log.Debug().Msg("debug")
log.Info().Msg("info") log.Info().Msg("info")
log.Warn().Msg("warn") log.Warn().Msg("warn")
log.Error().Msg("error") log.Error().Msg("error")
log.Log().Msg("nolevel")
want := []syslogEvent{ want := []syslogEvent{
{"Debug", `{"level":"debug","message":"debug"}` + "\n"}, {"Debug", `{"level":"debug","message":"debug"}` + "\n"},
{"Info", `{"level":"info","message":"info"}` + "\n"}, {"Info", `{"level":"info","message":"info"}` + "\n"},
{"Warning", `{"level":"warn","message":"warn"}` + "\n"}, {"Warning", `{"level":"warn","message":"warn"}` + "\n"},
{"Err", `{"level":"error","message":"error"}` + "\n"}, {"Err", `{"level":"error","message":"error"}` + "\n"},
{"Info", `{"message":"nolevel"}` + "\n"},
} }
if got := sw.events; !reflect.DeepEqual(got, want) { if got := sw.events; !reflect.DeepEqual(got, want) {
t.Errorf("Invalid syslog message routing: want %v, got %v", want, got) t.Errorf("Invalid syslog message routing: want %v, got %v", want, got)
} }
} }
type testCEEwriter struct {
buf *bytes.Buffer
}
// Only implement one method as we're just testing the prefixing
func (c testCEEwriter) Debug(m string) error { return nil }
func (c testCEEwriter) Info(m string) error {
_, err := c.buf.Write([]byte(m))
return err
}
func (c testCEEwriter) Warning(m string) error { return nil }
func (c testCEEwriter) Err(m string) error { return nil }
func (c testCEEwriter) Emerg(m string) error { return nil }
func (c testCEEwriter) Crit(m string) error { return nil }
func (c testCEEwriter) Write(b []byte) (int, error) {
return c.buf.Write(b)
}
func TestSyslogWriter_WithCEE(t *testing.T) {
var buf bytes.Buffer
sw := testCEEwriter{&buf}
log := New(SyslogCEEWriter(sw))
log.Info().Str("key", "value").Msg("message string")
got := string(buf.Bytes())
want := "@cee:{"
if !strings.HasPrefix(got, want) {
t.Errorf("Bad CEE message start: want %v, got %v", want, got)
}
}

View File

@ -1,12 +1,7 @@
package zlog package zerolog
import ( import (
"bytes"
"io" "io"
"path"
"runtime"
"strconv"
"strings"
"sync" "sync"
) )
@ -31,9 +26,11 @@ type syncWriter struct {
} }
// SyncWriter wraps w so that each call to Write is synchronized with a mutex. // SyncWriter wraps w so that each call to Write is synchronized with a mutex.
// This syncer can be used to wrap the call to writer's Write method if it is // This syncer can be the call to writer's Write method is not thread safe.
// not thread safe. Note that you do not need this wrapper for os.File Write // Note that os.File Write operation is using write() syscall which is supposed
// operations on POSIX and Windows systems as they are already thread-safe. // to be thread-safe on POSIX systems. So there is no need to use this with
// os.File on such systems as zerolog guaranties to issue a single Write call
// per log event.
func SyncWriter(w io.Writer) io.Writer { func SyncWriter(w io.Writer) io.Writer {
if lw, ok := w.(LevelWriter); ok { if lw, ok := w.(LevelWriter); ok {
return &syncWriter{lw: lw} return &syncWriter{lw: lw}
@ -61,30 +58,30 @@ type multiLevelWriter struct {
func (t multiLevelWriter) Write(p []byte) (n int, err error) { func (t multiLevelWriter) Write(p []byte) (n int, err error) {
for _, w := range t.writers { for _, w := range t.writers {
if _n, _err := w.Write(p); err == nil { n, err = w.Write(p)
n = _n if err != nil {
if _err != nil { return
err = _err }
} else if _n != len(p) { if n != len(p) {
err = io.ErrShortWrite err = io.ErrShortWrite
return
} }
} }
} return len(p), nil
return n, err
} }
func (t multiLevelWriter) WriteLevel(l Level, p []byte) (n int, err error) { func (t multiLevelWriter) WriteLevel(l Level, p []byte) (n int, err error) {
for _, w := range t.writers { for _, w := range t.writers {
if _n, _err := w.WriteLevel(l, p); err == nil { n, err = w.WriteLevel(l, p)
n = _n if err != nil {
if _err != nil { return
err = _err }
} else if _n != len(p) { if n != len(p) {
err = io.ErrShortWrite err = io.ErrShortWrite
return
} }
} }
} return len(p), nil
return n, err
} }
// MultiLevelWriter creates a writer that duplicates its writes to all the // MultiLevelWriter creates a writer that duplicates its writes to all the
@ -101,54 +98,3 @@ func MultiLevelWriter(writers ...io.Writer) LevelWriter {
} }
return multiLevelWriter{lwriters} return multiLevelWriter{lwriters}
} }
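For context, a minimal sketch of fanning one logger out through MultiLevelWriter (the file path is illustrative):

package main

import (
	"os"

	"tuxpa.in/a/zlog"
)

func main() {
	f, err := os.OpenFile("app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Every event is written both to the console writer and to the file.
	multi := zlog.MultiLevelWriter(zlog.NewConsoleWriter(), f)
	logger := zlog.New(multi).With().Timestamp().Logger()
	logger.Info().Msg("written to both console and app.log")
}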
// TestingLog is the logging interface of testing.TB.
type TestingLog interface {
Log(args ...interface{})
Logf(format string, args ...interface{})
Helper()
}
// TestWriter is a writer that writes to testing.TB.
type TestWriter struct {
T TestingLog
// Frame skips caller frames to capture the original file and line numbers.
Frame int
}
// NewTestWriter creates a writer that logs to the testing.TB.
func NewTestWriter(t TestingLog) TestWriter {
return TestWriter{T: t}
}
// Write to testing.TB.
func (t TestWriter) Write(p []byte) (n int, err error) {
t.T.Helper()
n = len(p)
// Strip trailing newline because t.Log always adds one.
p = bytes.TrimRight(p, "\n")
// Try to correct the log file and line number to the caller.
if t.Frame > 0 {
_, origFile, origLine, _ := runtime.Caller(1)
_, frameFile, frameLine, ok := runtime.Caller(1 + t.Frame)
if ok {
erase := strings.Repeat("\b", len(path.Base(origFile))+len(strconv.Itoa(origLine))+3)
t.T.Logf("%s%s:%d: %s", erase, path.Base(frameFile), frameLine, p)
return n, err
}
}
t.T.Log(string(p))
return n, err
}
// ConsoleTestWriter creates an option that correctly sets the file frame depth for testing.TB log.
func ConsoleTestWriter(t TestingLog) func(w *ConsoleWriter) {
return func(w *ConsoleWriter) {
w.Out = TestWriter{T: t, Frame: 6}
}
}
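A hedged sketch of using the test helpers above from a Go test (package and test names are illustrative):

package mypkg

import (
	"testing"

	"tuxpa.in/a/zlog"
)

func TestSomething(t *testing.T) {
	// Route JSON output through t.Log so it only shows on failure or with -v.
	logger := zlog.New(zlog.NewTestWriter(t))
	logger.Info().Msg("plain JSON via TestWriter")

	// Or use the console writer with caller frames corrected for testing.TB.
	pretty := zlog.New(zlog.NewConsoleWriter(zlog.ConsoleTestWriter(t)))
	pretty.Debug().Msg("pretty output via ConsoleTestWriter")
}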

View File

@ -1,13 +1,6 @@
//go:build !binary_log && !windows package zerolog
// +build !binary_log,!windows
package zlog
import ( import (
"bytes"
"errors"
"fmt"
"io"
"reflect" "reflect"
"testing" "testing"
) )
@ -19,159 +12,13 @@ func TestMultiSyslogWriter(t *testing.T) {
log.Info().Msg("info") log.Info().Msg("info")
log.Warn().Msg("warn") log.Warn().Msg("warn")
log.Error().Msg("error") log.Error().Msg("error")
log.Log().Msg("nolevel")
want := []syslogEvent{ want := []syslogEvent{
{"Debug", `{"level":"debug","message":"debug"}` + "\n"}, {"Debug", `{"level":"debug","message":"debug"}` + "\n"},
{"Info", `{"level":"info","message":"info"}` + "\n"}, {"Info", `{"level":"info","message":"info"}` + "\n"},
{"Warning", `{"level":"warn","message":"warn"}` + "\n"}, {"Warning", `{"level":"warn","message":"warn"}` + "\n"},
{"Err", `{"level":"error","message":"error"}` + "\n"}, {"Err", `{"level":"error","message":"error"}` + "\n"},
{"Info", `{"message":"nolevel"}` + "\n"},
} }
if got := sw.events; !reflect.DeepEqual(got, want) { if got := sw.events; !reflect.DeepEqual(got, want) {
t.Errorf("Invalid syslog message routing: want %v, got %v", want, got) t.Errorf("Invalid syslog message routing: want %v, got %v", want, got)
} }
} }
var writeCalls int
type mockedWriter struct {
wantErr bool
}
func (c mockedWriter) Write(p []byte) (int, error) {
writeCalls++
if c.wantErr {
return -1, errors.New("Expected error")
}
return len(p), nil
}
// Tests that every configured writer is attempted for each event, even when some writers return errors.
func TestResilientMultiWriter(t *testing.T) {
tests := []struct {
name string
writers []io.Writer
}{
{
name: "All valid writers",
writers: []io.Writer{
mockedWriter{
wantErr: false,
},
mockedWriter{
wantErr: false,
},
},
},
{
name: "All invalid writers",
writers: []io.Writer{
mockedWriter{
wantErr: true,
},
mockedWriter{
wantErr: true,
},
},
},
{
name: "First invalid writer",
writers: []io.Writer{
mockedWriter{
wantErr: true,
},
mockedWriter{
wantErr: false,
},
},
},
{
name: "First valid writer",
writers: []io.Writer{
mockedWriter{
wantErr: false,
},
mockedWriter{
wantErr: true,
},
},
},
}
for _, tt := range tests {
writers := tt.writers
multiWriter := MultiLevelWriter(writers...)
logger := New(multiWriter).With().Timestamp().Logger().Level(InfoLevel)
logger.Info().Msg("Test msg")
if len(writers) != writeCalls {
t.Errorf("Expected %d writers to have been called but only %d were.", len(writers), writeCalls)
}
writeCalls = 0
}
}
type testingLog struct {
testing.TB
buf bytes.Buffer
}
func (t *testingLog) Log(args ...interface{}) {
if _, err := t.buf.WriteString(fmt.Sprint(args...)); err != nil {
t.Error(err)
}
}
func (t *testingLog) Logf(format string, args ...interface{}) {
if _, err := t.buf.WriteString(fmt.Sprintf(format, args...)); err != nil {
t.Error(err)
}
}
func TestTestWriter(t *testing.T) {
tests := []struct {
name string
write []byte
want []byte
}{{
name: "newline",
write: []byte("newline\n"),
want: []byte("newline"),
}, {
name: "oneline",
write: []byte("oneline"),
want: []byte("oneline"),
}, {
name: "twoline",
write: []byte("twoline\n\n"),
want: []byte("twoline"),
}}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tb := &testingLog{TB: t} // Capture TB log buffer.
w := TestWriter{T: tb}
n, err := w.Write(tt.write)
if err != nil {
t.Error(err)
}
if n != len(tt.write) {
t.Errorf("Expected %d write length but got %d", len(tt.write), n)
}
p := tb.buf.Bytes()
if !bytes.Equal(tt.want, p) {
t.Errorf("Expected %q, got %q.", tt.want, p)
}
log := New(NewConsoleWriter(ConsoleTestWriter(t)))
log.Info().Str("name", tt.name).Msg("Success!")
tb.buf.Reset()
})
}
}