
Deadlocks with version 1.9.3 #1440

@mdantonio

Description

Hey, we are using logrus in a service deployed on Kubernetes that produces a fairly large volume of logs (around 4 million per hour). We never observed anything like this in several months, but today we hit several deadlocks without any change to our stack: the service had been stable and running for many days, then it started to freeze completely every 15-20 minutes.
We are using the latest version (1.9.3). Here is one of the blocked goroutines from the traceback:

goroutine 848 gp=0xc003eb4000 m=nil [sync.Mutex.Lock, 218 minutes]:
runtime.gopark(0x3631905e71dab8?, 0x11?, 0x10?, 0x0?, 0xc0001cb600?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc0001cb5b0 sp=0xc0001cb590 pc=0x44716e
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:408
runtime.semacquire1(0x24228f4, 0x0, 0x3, 0x1, 0x15)
	/usr/local/go/src/runtime/sema.go:160 +0x225 fp=0xc0001cb618 sp=0xc0001cb5b0 pc=0x459fe5
sync.runtime_SemacquireMutex(0x0?, 0x0?, 0xc0059b9c00?)
	/usr/local/go/src/runtime/sema.go:77 +0x25 fp=0xc0001cb650 sp=0xc0001cb618 pc=0x47b065
sync.(*Mutex).lockSlow(0x24228f0)
	/usr/local/go/src/sync/mutex.go:171 +0x15d fp=0xc0001cb6a0 sp=0xc0001cb650 pc=0x48ad5d
sync.(*Mutex).Lock(...)
	/usr/local/go/src/sync/mutex.go:90
github.com/sirupsen/logrus.(*MutexWrap).Lock(...)
	/go/pkg/mod/github.com/sirupsen/logrus@v1.9.3/logger.go:61
github.com/sirupsen/logrus.(*Entry).log(0xc0059b9b90, 0x4, {0xc004dbbc98, 0x11})
	/go/pkg/mod/github.com/sirupsen/logrus@v1.9.3/entry.go:233 +0x2d1 fp=0xc0001cb798 sp=0xc0001cb6a0 pc=0x7dba91
github.com/sirupsen/logrus.(*Entry).Log(0xc0059b9b90, 0x4, {0xc0001cb7f8?, 0xc0001cb808?, 0x41535b?})
	/go/pkg/mod/github.com/sirupsen/logrus@v1.9.3/entry.go:304 +0x48 fp=0xc0001cb7c8 sp=0xc0001cb798 pc=0x7dc1e8
github.com/sirupsen/logrus.(*Entry).Info(...)
	/go/pkg/mod/github.com/sirupsen/logrus@v1.9.3/entry.go:321
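
The parked goroutine is waiting on the logrus logger mutex inside Entry.log. One scenario that would produce exactly this pattern (only a guess on our side; stuckWriter below is a hypothetical stand-in, not our real output) is the output writer blocking while another goroutine holds that mutex during the write, so every further log call parks on sync.Mutex.Lock:

package main

import (
	"time"

	"github.com/sirupsen/logrus"
)

// stuckWriter is a hypothetical io.Writer standing in for a sink that has
// stopped draining (e.g. a stalled stdout pipe); Write simply never completes.
type stuckWriter struct{}

func (stuckWriter) Write(p []byte) (int, error) {
	time.Sleep(time.Hour) // simulate a write that never finishes
	return len(p), nil
}

func main() {
	log := logrus.New()
	log.SetOutput(stuckWriter{})

	// This goroutine acquires the logger mutex and blocks inside the output write.
	go log.Info("first message")
	time.Sleep(100 * time.Millisecond)

	// Every later call parks on the same mutex, matching the
	// sync.Mutex.Lock frames in the trace above.
	log.Info("second message") // hangs here
}

If that is what is happening to us, we would expect the full goroutine dump to also show a single goroutine holding the lock and stuck inside the formatter or the output writer.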

I see similar issues reported in the past (#1201, which is closed but apparently not resolved).

What is the reason for such deadlocks? Could they be caused by some dirty data being logged? Any hints on how to prevent similar deadlocks?
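
To make the last question more concrete: would wrapping the logger output in a non-blocking writer along these lines be a reasonable way to avoid the pile-up? This is only a rough sketch; nonBlockingWriter, the buffer size, and the drop-on-full behaviour are our own hypothetical choices, not anything logrus provides.

package main

import (
	"io"
	"os"

	"github.com/sirupsen/logrus"
)

// nonBlockingWriter is a hypothetical wrapper: writes are copied into a
// bounded channel and drained by one background goroutine, so a stalled sink
// leads to dropped log lines instead of goroutines piling up on the logger mutex.
type nonBlockingWriter struct {
	ch chan []byte
}

func newNonBlockingWriter(w io.Writer, size int) *nonBlockingWriter {
	nb := &nonBlockingWriter{ch: make(chan []byte, size)}
	go func() {
		for p := range nb.ch {
			w.Write(p) // write errors are ignored in this sketch
		}
	}()
	return nb
}

func (nb *nonBlockingWriter) Write(p []byte) (int, error) {
	// The formatter's buffer may be reused after Write returns, so copy first.
	c := append([]byte(nil), p...)
	select {
	case nb.ch <- c:
	default: // buffer full: drop the entry rather than block
	}
	return len(p), nil
}

func main() {
	log := logrus.New()
	log.SetOutput(newNonBlockingWriter(os.Stdout, 10000))
	log.Info("logging no longer blocks on a slow sink")
}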
