lumberjack is a log rolling package for Go

Lumberjack is a Go package for writing logs to rolling files.

Package lumberjack provides a rolling logger.

Note that this is v2.0 of lumberjack, and it should be imported using gopkg.in as follows:

import "gopkg.in/natefinch/lumberjack.v2"

The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch.

Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written.

Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package.

Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior.

Example

To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.

Code:

log.SetOutput(&lumberjack.Logger{
    Filename:   "/var/log/myapp/foo.log",
    MaxSize:    500, // megabytes
    MaxBackups: 3,
    MaxAge:     28, // days
    Compress:   true, // disabled by default
})
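
A complete, minimal program wiring this into the standard library's log package might look like the sketch below (the path and limits are placeholders, not recommendations):

package main

import (
    "log"

    "gopkg.in/natefinch/lumberjack.v2"
)

func main() {
    log.SetOutput(&lumberjack.Logger{
        Filename:   "/var/log/myapp/foo.log", // placeholder path
        MaxSize:    500,                      // megabytes
        MaxBackups: 3,
        MaxAge:     28,   // days
        Compress:   true, // disabled by default
    })

    log.Println("application started") // goes to the rolling log file
}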

type Logger

type Logger struct {
    // Filename is the file to write logs to.  Backup log files will be retained
    // in the same directory.  It uses <processname>-lumberjack.log in
    // os.TempDir() if empty.
    Filename string `json:"filename" yaml:"filename"`

    // MaxSize is the maximum size in megabytes of the log file before it gets
    // rotated. It defaults to 100 megabytes.
    MaxSize int `json:"maxsize" yaml:"maxsize"`

    // MaxAge is the maximum number of days to retain old log files based on the
    // timestamp encoded in their filename.  Note that a day is defined as 24
    // hours and may not exactly correspond to calendar days due to daylight
    // savings, leap seconds, etc. The default is not to remove old log files
    // based on age.
    MaxAge int `json:"maxage" yaml:"maxage"`

    // MaxBackups is the maximum number of old log files to retain.  The default
    // is to retain all old log files (though MaxAge may still cause them to get
    // deleted.)
    MaxBackups int `json:"maxbackups" yaml:"maxbackups"`

    // LocalTime determines if the time used for formatting the timestamps in
    // backup files is the computer's local time.  The default is to use UTC
    // time.
    LocalTime bool `json:"localtime" yaml:"localtime"`

    // Compress determines if the rotated log files should be compressed
    // using gzip. The default is not to perform compression.
    Compress bool `json:"compress" yaml:"compress"`
    // contains filtered or unexported fields
}

Logger is an io.WriteCloser that writes to the specified filename.

Logger opens or creates the logfile on first Write. If the file exists and is less than MaxSize megabytes, lumberjack will open and append to that file. If the file exists and its size is >= MaxSize megabytes, the file is renamed by putting the current time in a timestamp in the name immediately before the file's extension (or the end of the filename if there's no extension). A new log file is then created using the original filename.

Whenever a write would cause the current log file to exceed MaxSize megabytes, the current file is closed, renamed, and a new log file is created with the original name. Thus, the filename you give Logger is always the "current" log file.

Backups use the log file name given to Logger, in the form name-timestamp.ext, where name is the filename without the extension, timestamp is the time at which the log was rotated formatted with the time.Time layout 2006-01-02T15-04-05.000, and ext is the original extension. For example, if your Logger.Filename is /var/log/foo/server.log, a backup created at 6:30pm on Nov 11 2016 would use the filename /var/log/foo/server-2016-11-11T18-30-00.000.log.
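
The naming scheme can be illustrated with a short, self-contained sketch (backupName here is illustrative only and is not part of the lumberjack API):

package main

import (
    "fmt"
    "path/filepath"
    "time"
)

// backupName derives a backup filename the way described above: the rotation
// timestamp is inserted between the base name and the extension.
func backupName(name string, t time.Time) string {
    ext := filepath.Ext(name)
    prefix := name[:len(name)-len(ext)]
    return prefix + "-" + t.Format("2006-01-02T15-04-05.000") + ext
}

func main() {
    t := time.Date(2016, time.November, 11, 18, 30, 0, 0, time.UTC)
    fmt.Println(backupName("/var/log/foo/server.log", t))
    // Output: /var/log/foo/server-2016-11-11T18-30-00.000.log
}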

Cleaning Up Old Log Files

Whenever a new logfile gets created, old log files may be deleted. The most recent files according to the encoded timestamp will be retained, up to a number equal to MaxBackups (or all of them if MaxBackups is 0). Any files with an encoded timestamp older than MaxAge days are deleted, regardless of MaxBackups. Note that the time encoded in the timestamp is the rotation time, which may differ from the last time that file was written to.

If MaxBackups and MaxAge are both 0, no old log files will be deleted.

func (*Logger) Close

func (l *Logger) Close() error

Close implements io.Closer, and closes the current logfile.

func (*Logger) Rotate

func (l *Logger) Rotate() error

Rotate causes Logger to close the existing log file and immediately create a new one. This is a helper function for applications that want to initiate rotations outside of the normal rotation rules, such as in response to SIGHUP. After rotating, this initiates a cleanup of old log files according to the normal rules.

Example

Example of how to rotate in response to SIGHUP.

Code:

l := &lumberjack.Logger{}
log.SetOutput(l)
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGHUP)

go func() {
    for {
        <-c
        l.Rotate()
    }
}()

func (*Logger) Write

func (l *Logger) Write(p []byte) (n int, err error)

Write implements io.Writer. If a write would cause the log file to be larger than MaxSize, the file is closed, renamed to include a timestamp of the current time, and a new log file is created using the original log file name. If the length of the write is greater than MaxSize, an error is returned.


Generated by godoc2md

Owner
Nate Finch
Author of gorram, lumberjack, pie, gnorm, mage, and others. https://twitter.com/natethefinch
Comments
  • Adds gzip compression to backup log files

    This PR adds gzip compression to backup log files via the CompressBackups config option. Tests added and passed. Please let me know if I missed anything. Note: we're using this library in production (including gzipped backups) with a max size of 100 MB. Compression doesn't have any noticeable performance impact aside from saving a lot of drive space.

  • Add support for log file compression

    This change adds support for compressing rotated log files.

    Several other clean ups and specifically test improvements are included as separate commits.

    Fixes issue #13

  • v3 Work Thread

    I think Lumberjack needs a v3. This package was written a long time ago, and while it's functional, there were decisions I made that were poor in hindsight. This thread will be a list of what I think needs to be done.

    1. Switch from a struct literal to a constructor function for creating a new logger (see the sketch after this list). This has several advantages: we can sanity-check the log file name and return an error if you've given us an invalid name; it lets us do some setup before the first write, like starting the mill goroutine; and it means there's no way to change things after creation that really shouldn't be changed, like the max size.
    2. Switch from a size in megabytes to a size in bytes (many people have asked for this).
    3. Stop using the yaml and toml struct tags. If people want to write a config they can do that in their own code; we don't need to import those packages for no reason when people don't use them.
    4. We can change the size calculation to simply sum the sizes of the current file and the backup files, so it won't matter if they're not all the same size; then, if people want to rotate on a time-based schedule, they can do so without worrying that they'll accidentally blow up their disk.
    5. I think we can clean up the code some, so it's a bit easier to reason about.
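
    Purely as an illustration of point 1, a constructor-based API could look something like the hypothetical sketch below (none of these identifiers exist in lumberjack today; this is not a committed design):

    package lumberjackv3 // hypothetical package, for illustration only

    import "errors"

    // Logger sketches a v3-style logger whose settings are fixed at construction.
    type Logger struct {
        filename string
        maxBytes int64 // rotation threshold in bytes rather than megabytes (point 2)
    }

    // Option configures a Logger at construction time.
    type Option func(*Logger)

    // MaxBytes sets the rotation threshold in bytes.
    func MaxBytes(n int64) Option { return func(l *Logger) { l.maxBytes = n } }

    // New validates the filename up front and applies options (point 1).
    func New(filename string, opts ...Option) (*Logger, error) {
        if filename == "" {
            return nil, errors.New("lumberjack: filename must not be empty")
        }
        l := &Logger{filename: filename, maxBytes: 100 << 20} // default 100 MB
        for _, opt := range opts {
            opt(l)
        }
        return l, nil
    }
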
  • RFC: Option to not move rotated files

    I was wondering if an option to not move rotated files would be a patch you'd consider. I have external tooling that moves and compresses the file (thereby avoiding #124), and I'd rather not have errors show up in stdout. (lumberjack.go:223)

    Looks like pretty straightforward work!

  • Log Rotator not working when passing file as variable

    Hi Team,

    I am passing the filename as a variable into the function, but it seems the log rotator is not working. Here is a snippet of my code:

    logfile := logdir + "/monitoring.log"
    log.SetOutput(&lumberjack.Logger{
        Filename:   logfile,
        MaxSize:    5, // megabytes
        MaxBackups: 3,
    })

  • Write a file header on each new file created

    I need a file header to be written each time a new file is created. I don't see a way to do this, especially on rotate. Will add a PR with a suggested solution.
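
    One way to approximate this today, without changing lumberjack itself, is to wrap the Logger in a writer that emits the header before the first write and again after each explicit Rotate() call. This is only a sketch; it does not detect size-triggered rotations that happen inside lumberjack:

    package logutil // illustrative helper package, not part of lumberjack

    import (
        "io"
        "sync"

        "gopkg.in/natefinch/lumberjack.v2"
    )

    // headerWriter writes the header before the first write and after each
    // explicit Rotate(). Rotations triggered by MaxSize are not detected.
    type headerWriter struct {
        mu     sync.Mutex
        l      *lumberjack.Logger
        header []byte
        wrote  bool
    }

    func (h *headerWriter) Write(p []byte) (int, error) {
        h.mu.Lock()
        defer h.mu.Unlock()
        if !h.wrote {
            if _, err := h.l.Write(h.header); err != nil {
                return 0, err
            }
            h.wrote = true
        }
        return h.l.Write(p)
    }

    func (h *headerWriter) Rotate() error {
        h.mu.Lock()
        defer h.mu.Unlock()
        h.wrote = false // the next Write re-emits the header into the new file
        return h.l.Rotate()
    }

    var _ io.Writer = (*headerWriter)(nil)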

  • Rotation based on day

    Hello,

    Would you be interested in adding the possibility to rotate the log file based on the day? If I'm not mistaken, rotation is only triggered by size. The idea would be to also trigger it by day: rotate every day, for example.

    Thanks a lot.
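
    Until something like that is built in, time-based rotation can be layered on top of the current API by calling Rotate() from a ticker goroutine, in addition to the size-based rules (a sketch; the filename and the 24-hour interval are placeholders):

    package main

    import (
        "log"
        "time"

        "gopkg.in/natefinch/lumberjack.v2"
    )

    func main() {
        l := &lumberjack.Logger{
            Filename: "/var/log/myapp/foo.log", // placeholder
            MaxSize:  100,                      // size-based rotation still applies
        }
        log.SetOutput(l)

        // Rotate once a day regardless of size.
        go func() {
            for range time.Tick(24 * time.Hour) {
                if err := l.Rotate(); err != nil {
                    log.Printf("rotate failed: %v", err)
                }
            }
        }()

        // ... application code ...
        select {} // placeholder to keep the sketch running
    }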

  • panic: runtime error: slice bounds out of range

    I like the idea. I tested it and ran into an issue. Here is the code to reproduce it.

    package main
    
    import (
        "log"
        "net/http"
        "time"
    
        "github.com/natefinch/lumberjack"
    )
    
    func main() {
        log.SetOutput(&lumberjack.Logger{
            Dir:        "log",
            NameFormat: "2006-01-02T01-01-01.000.log",
            MaxSize:    lumberjack.Megabyte,
            MaxBackups: 3,
            MaxAge:     28,
        })
    
        for {
            log.Println("----")
            time.Sleep(time.Microsecond)
        }
    }
    

    The output is:

    panic: runtime error: slice bounds out of range
    
    goroutine 16 [running]:
    runtime.panic(0x6d14c0, 0x8ba0af)
        /usr/local/go/src/pkg/runtime/panic.c:279 +0xf5
    github.com/natefinch/lumberjack.(*Logger).cleanup(0xc208004300, 0x0, 0x0)
        .../github.com/natefinch/lumberjack/lumberjack.go:269 +0x500
    github.com/natefinch/lumberjack.(*Logger).rotate(0xc208004300, 0x0, 0x0)
        .../github.com/natefinch/lumberjack/lumberjack.go:179 +0xaf
    github.com/natefinch/lumberjack.(*Logger).Write(0xc208004300, 0xc20803faa0, 0x19, 0x20, 0x0, 0x0, 0x0)
    

    The reason is that oldLogFiles() returns an empty slice.

  • Strange behavior in truncate

    Well, I have been using it for quite some time, but I now have a new requirement to generate a file containing a few comma-separated values. I truncate it every 15 minutes using cron. This worked for a few days, but lately I am seeing a strange issue where the file jumps back to its old size once the app starts writing to it again after truncation. So if the file was 700 MB at truncation time, cron truncates it, but when the app starts writing again a few seconds later, the file is back at 700 MB, with head -n1 file.log showing no output. vim doesn't work on the file either; only tail -f file.log works, confirming that writes are landing at the end, but I'm not sure what is wrong with the start/head of the file, since it shows no output.

    I am using it with Zap Logger

    writer := zapcore.AddSync(&lumberjack.Logger{
        Filename: "file.log",
        MaxSize:  10000, // in MB
        Compress: false, // gzip
        MaxAge:   28,    // days
    })
    core := zapcore.NewCore(encoder, writer, level)
    cores = append(cores, core)

    I even tried to truncate the file from the app itself by creating a custom URL like this and calling it from cron instead of a bash command.

    r.GET("/logs/truncate", func(ctx *fasthttp.RequestCtx) {
        file, err := os.OpenFile("file.log", os.O_CREATE|os.O_WRONLY, 0666)
        if err == nil {
            _ = file.Truncate(0)
            _, _ = fmt.Fprint(ctx, "success")
        } else {
            _, _ = fmt.Fprint(ctx, "error")
        }
    })

    But this also didn't work; the file size sometimes kept increasing. I have this code on 3 servers and the issue has been happening randomly on 2 of them. I know #25 and the docs say that writing from multiple processes results in improper behavior, but what do you think could be the issue here? Any workaround...? Also, is there a way to rotate the file based on time instead of size?
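
    One workaround that stays within the single-writer assumption is to have cron call an endpoint that rotates the log from inside the process, instead of truncating the file externally. A sketch using net/http (the port and paths are placeholders; the same idea applies to the fasthttp handler above):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "gopkg.in/natefinch/lumberjack.v2"
    )

    func main() {
        l := &lumberjack.Logger{
            Filename: "file.log",
            MaxSize:  10000, // MB, as in the zap configuration above
            MaxAge:   28,    // days
        }
        log.SetOutput(l) // or hand l to zapcore.AddSync as above

        // cron can hit this endpoint instead of truncating the file itself,
        // so rotation happens inside the only process writing the file.
        http.HandleFunc("/logs/rotate", func(w http.ResponseWriter, r *http.Request) {
            if err := l.Rotate(); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            fmt.Fprint(w, "success")
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }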

  • Log file rotation is failing with exception

    We are using lumberjack v2. Log file rotation is failing with a slice-out-of-bounds exception: panic: runtime error: slice bounds out of range.

    Stack trace of the exception:

    goroutine 1 [running]:
    github.com/natefinch/lumberjack.(*Logger).cleanup(0xc0820122a0, 0x0, 0x0)
        D:/ADMWorkspace/Cloud Workspace/SDL_operations/src/Godeps/_workspace/src/github.com/natefinch/lumberjack/lumberjack.go:269 +0x692
    github.com/natefinch/lumberjack.(*Logger).rotate(0xc0820122a0, 0x0, 0x0)
        D:/ADMWorkspace/Cloud Workspace/SDL_operations/src/Godeps/_workspace/src/github.com/natefinch/lumberjack/lumberjack.go:179 +0xbc
    github.com/natefinch/lumberjack.(*Logger).Write(0xc0820122a0, 0xc082036d00, 0x7a, 0xca, 0x0, 0x0, 0x0)
        D:/ADMWorkspace/Cloud Workspace/SDL_operations/src/Godeps/_workspace/src/github.com/natefinch/lumberjack/lumberjack.go:131 +0x405
    bytes.(*Buffer).WriteTo(0xc082030460, 0xc94520, 0xc0820122a0, 0x0, 0x0, 0x0)
        c:/go/src/bytes/buffer.go:206 +0xcf
    io.copyBuffer(0xc94520, 0xc0820122a0, 0xc945a8, 0xc082030460, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)

    Please help us to resolve this issue.

  • panic: runtime error: slice bounds out of range in lumberjack.go:269

    Hi,

    After about 20 hours of running, my application crashes with the following panic() in lumberjack.

    Could there be some kind of out-of-order condition that could cause the files[] slicing to panic?

    Using version:

                        "ImportPath": "github.com/natefinch/lumberjack",
                        "Comment": "v1.0-2-ga6f35ba",
                        "Rev": "a6f35bab25c9df007f78aa90c441922062451979"
    

    panic: runtime error: slice bounds out of range

    goroutine 61 [running]:
    github.com/natefinch/lumberjack.(*Logger).cleanup(0xc208082d20, 0x0, 0x0)
        /obfuscated/Godeps/_workspace/src/github.com/natefinch/lumberjack/lumberjack.go:269 +0x513
    github.com/natefinch/lumberjack.(*Logger).rotate(0xc208082d20, 0x0, 0x0)
        /obfuscated/Godeps/_workspace/src/github.com/natefinch/lumberjack/lumberjack.go:179 +0xb0
    github.com/natefinch/lumberjack.(*Logger).Write(0xc208082d20, 0xc208e45900, 0x1cf, 0x247, 0x0, 0x0, 0x0)
        /obfuscated/Godeps/_workspace/src/github.com/natefinch/lumberjack/lumberjack.go:131 +0x316
    bytes.(*Buffer).WriteTo(0xc20b172e00, 0x7f5f3f4a57e0, 0xc208082d20, 0x0, 0x0, 0x0)
        /usr/src/go/src/bytes/buffer.go:202 +0xda
    io.Copy(0x7f5f3f4a57e0, 0xc208082d20, 0x7f5f3f4a5790, 0xc20b172e00, 0x0, 0x0, 0x0)
        /usr/src/go/src/io/io.go:354 +0xb2
    github.com/Sirupsen/logrus.(*Entry).log(0xc2097d4140, 0x4, 0xc209018030, 0x2c)
        /obfuscated/Godeps/_workspace/src/github.com/Sirupsen/logrus/entry.go:94 +0x4d1
    github.com/Sirupsen/logrus.(*Entry).Info(0xc2097d4140, 0xc217d53ba8, 0x1, 0x1)
        /obfuscated/Godeps/_workspace/src/github.com/Sirupsen/logrus/entry.go:119 +0x7f
    

    This appears to correspond to this area of the code:

        if l.MaxBackups > 0 {
            deletes = files[l.MaxBackups:]
            files = files[:l.MaxBackups]
        }
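
    A defensive bounds check along these lines would avoid slicing past the end of files when fewer backups exist than MaxBackups (a self-contained sketch of the idea, not necessarily the fix that was applied upstream):

    package main

    import "fmt"

    func main() {
        files := []string{"a.log", "b.log"} // only two backups exist
        maxBackups := 3                     // MaxBackups is larger than len(files)

        var deletes []string
        // Guarding on len(files) prevents the out-of-range panic seen above.
        if maxBackups > 0 && len(files) > maxBackups {
            deletes = files[maxBackups:]
            files = files[:maxBackups]
        }
        fmt.Println(deletes, files) // [] [a.log b.log]
    }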
    
  • fix(sec): upgrade gopkg.in/yaml.v2 to 2.2.8

    What happened?

    There is 1 security vulnerability found in gopkg.in/yaml.v2 v2.2.2.

    What did I do?

    Upgraded gopkg.in/yaml.v2 from v2.2.2 to v2.2.8 to fix the vulnerability.

    What did you expect to happen?

    Ideally, no insecure libs should be used.

    The specification of the pull request

    PR Specification from OSCS

  • When will the V3 official version be released?

    The official v2 release has been out for several years. When will the official v3 version be released?

    https://github.com/natefinch/lumberjack/issues/170#tasklist-block-a0e535d2-9fdf-4593-8943-1a7d0ca4068a

  • extend logger to include compression rate and capacity

    Add compression rate and capacity settings to the logger configuration, to avoid consuming all disk IO, which might lead to unpredictable behavior.

    Signed-off-by: cardy.tang [email protected]
