Lockgate is a cross-platform locking library for Go. It provides distributed locks backed by Kubernetes or by the lockgate HTTP lock server, as well as OS file locks.

Lockgate

Lockgate is a locking library for Go.

  • Classical interface:
    • 2 types of locks: shared and exclusive;
    • 2 modes of locking: blocking and non-blocking.
  • File locks on a single host are supported.
  • Kubernetes-based distributed locks are supported:
    • the Kubernetes locker is configured by an arbitrary Kubernetes resource;
    • locks are stored in the annotations of the specified resource;
    • native Kubernetes optimistic locking is used to handle simultaneous access to the resource.
  • Locks using an HTTP server are supported:
    • the lockgate lock server may be run as a standalone process or as multiple Kubernetes-backed processes:
      • the lockgate lock server uses in-memory or Kubernetes key-value storage with optimistic locking;
    • the user specifies the URL of a lock server instance in the client code to use locks over HTTP.

This library is used in the werf CI/CD tool to synchronize multiple werf build and deploy processes running on a single host or on multiple hosts, using Kubernetes or local file locks.

If you have an Open Source project using lockgate, feel free to list it here via PR.

Installation

go get -u github.com/werf/lockgate

Usage

Select a locker

The main interface of the library that the user interacts with is lockgate.Locker. There are multiple implementations of this locker available:
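For orientation, the core of that interface, as inferred from the usage examples below, looks roughly like this (a simplified sketch, not the verbatim declarations from the package):

import "time"

// LockHandle identifies an acquired lock and is passed back to Release.
type LockHandle struct{ /* opaque */ }

// AcquireOptions controls how a lock is taken (fields inferred from the examples below).
type AcquireOptions struct {
	Shared      bool          // shared lock instead of an exclusive one
	NonBlocking bool          // return immediately instead of waiting
	Timeout     time.Duration // stop waiting after this duration
}

type Locker interface {
	// Acquire takes the named lock; acquired reports whether the lock was actually taken.
	Acquire(lockName string, opts AcquireOptions) (acquired bool, lock LockHandle, err error)
	// Release frees a lock previously returned by Acquire.
	Release(lock LockHandle) error
}

// WithAcquire is a helper that acquires the lock, runs the callback, and releases the lock:
// func WithAcquire(locker Locker, lockName string, opts AcquireOptions, fn func(acquired bool) error) error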

File locker

This is a simple locker based on OS file locks. It can be used by multiple processes on a single host filesystem.

Create a file locker as follows:

import "github.com/werf/lockgate"

...

locker, err := lockgate.NewFileLocker("/var/lock/myapp")

All cooperating processes should use the same locks directory.
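As an illustration of the shared mode mentioned above, several processes can hold a shared lock on the same name at once, while an exclusive lock waits for all of them. A minimal sketch reusing the locker created above (error handling shortened):

// Take a shared (read) lock: other shared holders are allowed at the same time,
// but this call blocks while someone holds the exclusive lock on the same name.
acquired, readLock, err := locker.Acquire("mydata", lockgate.AcquireOptions{Shared: true})
if err != nil {
	// handle the error
}
_ = acquired // whether the lock was taken (expected to be true for a blocking call)

// ... read the protected data ...

if err := locker.Release(readLock); err != nil {
	// handle the error
}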

Kubernetes locker

This locker uses the specified Kubernetes resource as storage for lock data. Multiple processes that use this locker should have access to the same Kubernetes cluster.

This locker allows distributed locking over multiple hosts.

Create a Kubernetes locker as follows:

import "github.com/werf/lockgate"

...

// Initialize kubeDynamicClient from https://github.com/kubernetes/client-go.
locker, err := lockgate.NewKubernetesLocker(
	kubeDynamicClient, schema.GroupVersionResource{
		Group:    "",
		Version:  "v1",
		Resource: "configmaps",
	}, "mycm", "myns",
)

All cooperating processes should use the same Kubernetes parameters. In this example, lock data will be stored in the mycm ConfigMap in the myns namespace.
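The kubeDynamicClient referenced above is a client-go dynamic client. A minimal sketch of initializing it from a kubeconfig file (assuming the standard client-go packages; adapt to your own configuration loading):

import (
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

...

// Build a REST config from a kubeconfig file (use rest.InClusterConfig() inside a cluster).
config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
if err != nil {
	// handle the error
}

// Create the dynamic client that is passed to lockgate.NewKubernetesLocker.
kubeDynamicClient, err := dynamic.NewForConfig(config)
if err != nil {
	// handle the error
}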

HTTP locker

This locker uses the lockgate HTTP server to coordinate locks and allows distributed locking over multiple hosts.

Create an HTTP locker as follows:

import "github.com/werf/lockgate"

...

locker := lockgate.NewHttpLocker("http://localhost:55589")

All cooperating processes should use the same URL endpoint of the lockgate HTTP lock server. In this example, a lockgate HTTP lock server should be available at the localhost:55589 address. See below how to run such a server.

Lockgate HTTP lock server

The lockgate HTTP server can use in-memory storage or Kubernetes storage:

  • there can be only 1 instance of the lockgate server that uses in-memory storage;
  • there can be an arbitrary number of server instances using Kubernetes storage.

Run a lockgate HTTP lock server as follows:

import "github.com/werf/lockgate"
import "github.com/werf/lockgate/pkg/distributed_locker"
import "github.com/werf/lockgate/pkg/distributed_locker/optimistic_locking_store"

...
store := optimistic_locking_store.NewInMemoryStore()
// OR
// store := optimistic_locking_store.NewKubernetesResourceAnnotationsStore(
//	kube.DynamicClient, schema.GroupVersionResource{
//		Group:    "",
//		Version:  "v1",
//		Resource: "configmaps",
//	}, "mycm", "myns",
//)
backend := distributed_locker.NewOptimisticLockingStorageBasedBackend(store)
distributed_locker.RunHttpBackendServer("0.0.0.0", "55589", backend)

Locker usage example

In the following example, a locker object instance is created using one of the ways documented above; the user should select the required locker implementation. The rest of the sample uses the generic lockgate.Locker interface to acquire and release locks.

import "github.com/werf/lockgate"

func main() {
	// Create Kubernetes based locker in ns/mynamespace cm/myconfigmap.
	// Initialize kubeDynamicClient using https://github.com/kubernetes/client-go.
        locker := lockgate.NewKubernetesLocker(
                kubeDynamicClient, schema.GroupVersionResource{
                        Group:    "",
                        Version:  "v1",
                        Resource: "configmaps",
                }, "myconfigmap", "mynamespace",
        )
	
	// OR create file based locker backed by /var/locks/mylocks_service_dir directory
    	locker, err := lockgate.NewFileLocker("/var/locks/mylocks_service_dir")
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: failed to create file locker: %s\n", err)
		os.Exit(1)
	}

	// Case 1: simple blocking lock

	acquired, lock, err := locker.Acquire("myresource", lockgate.AcquireOptions{Shared: false, Timeout: 30*time.Second}
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: failed to lock myresource: %s\n", err)
		os.Exit(1)
	}

	// ...

	if err := locker.Release(lock); err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: failed to unlock myresource: %s\n", err)
		os.Exit(1)
	}

	// Case 2: WithAcquire wrapper

	if err := lockgate.WithAcquire(locker, "myresource", lockgate.AcquireOptions{Shared: false, Timeout: 30*time.Second}, func(acquired bool) error {
		// ...
	}); err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: failed to perform an operation with locker myresource: %s\n", err)
		os.Exit(1)
	}
	
	// Case 3: non-blocking

	acquired, lock, err := locker.Acquire("myresource", lockgate.AcquireOptions{Shared: false, NonBlocking: true})
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: failed to lock myresource: %s\n", err)
		os.Exit(1)
	}

	if acquired {
		// ...

		if err := locker.Release(lock); err != nil {
			fmt.Fprintf(os.Stderr, "ERROR: failed to unlock myresource: %s\n", err)
			os.Exit(1)
		}
	} else {
		// ...
	}
}
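A small optional pattern, not required by the library: releasing the lock with defer right after a successful acquire keeps the lock from being leaked on early returns. A sketch building on the example above:

acquired, lock, err := locker.Acquire("myresource", lockgate.AcquireOptions{Shared: false, NonBlocking: true})
if err != nil {
	return err
}
if !acquired {
	return nil // someone else holds the lock
}
// Release runs on every return path of the enclosing function.
defer func() {
	if err := locker.Release(lock); err != nil {
		fmt.Fprintf(os.Stderr, "WARNING: failed to unlock myresource: %s\n", err)
	}
}()

// ... do the protected work ...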

Community

Please feel free to reach out to us via the Flant Open Source forums (based on Discourse). They have a special category dedicated to werf and its subprojects.

You're also welcome to follow @werf_io to stay informed about all important news, articles, etc.

Comments
  • Looking for documentation for the effects/load of using the Kubernetes resource lock

    Hey all,

    This project looks really neat and conceptually, I love the idea of just making a ConfigMap or similar as my locked resource.

    Has there been any testing/verification on the scalability of an approach like this? How long does it take to acquire a lock? How does the control plane behave when you have multiple services trying to ask for the same lock?

    Has the feature been extensively tested?

    Thanks!

  • Example not working

    Not sure what the issue is, but the example doesn't work; I have to do this:

    import (
    	"github.com/werf/lockgate/pkg/distributed_locker"
    )

    func Init() {
    	locker := distributed_locker.NewKubernetesLocker(
    		k8s.ClientsetDynamic, schema.GroupVersionResource{
    			Group:    "",
    			Version:  "v1",
    			Resource: "configmaps",
    		}, "locker", "default",
    	)
    }

    otherwise I get:

    undefined: lockgate.NewKubernetesLocker
    
  • The new http distributed locker and big refactor

    • distributed_locker package contains kubernetes or http locker implementations;
    • distributed_locker.DistributedLocker is a generic Locker interface implementation for any distributed locker;
    • the user may select one of the following implementations of the distributed locker:
      • kubernetes locker — locker connects to the kubernetes directly;
      • http locker with in-memory store — locker connects to the special lockgate locker server, which stores locks data in memory (server cannot be horizontally scaled);
      • http locker with kubernetes store — locker connects to the special lockgate locker server, which stores locks data in kubernetes (server can be horizontally scaled, but needs a connection to the kubernetes cluster);
    • lockgate locker http server implementation included;
    • distributed locker uses the same protocol to connect to any backend (kubernetes locker or http server locker): acquire, renew-lease and release operations — all operations are non-blocking;
    • the distributed locker periodically calls renew-lease on the backend, otherwise the lock lease will be lost (a rough sketch of this renewal loop follows the list);
    • the distributed locker implements a polling procedure when waiting for a lock to be released.
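
    A rough, hypothetical illustration of the lease-renewal loop described above (not the actual lockgate internals; the 10-second lease and 3-second renew interval are the defaults mentioned elsewhere on this page):

    import "time"

    // keepLeaseAlive keeps renewing the lease of an acquired lock until stop is closed.
    func keepLeaseAlive(renewLease func() error, stop <-chan struct{}) {
    	ticker := time.NewTicker(3 * time.Second) // renew well before the ~10s lease expires
    	defer ticker.Stop()
    	for {
    		select {
    		case <-ticker.C:
    			if err := renewLease(); err != nil {
    				return // lease lost; the lock is no longer held
    			}
    		case <-stop:
    			return // the lock was released normally
    		}
    	}
    }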

    README: Update info about locker creation and fix usage examples

  • Fix kubernetes locker hanging when there are many 'optimistic locking' conflicts

    The hanging was caused by double locking of a mutex (a classic) in certain situations when sending a signal to a channel. The channel is unbuffered (zero capacity), which caused blocking when sending the done-chan signal. Because of the blocked channel send, the mutex remained locked. Lockgate now releases the mutex first, before sending a signal to this done-chan.

    Also:

    • Added more debug messages.
    • Added throttling for the lease-renew operation: do not perform a lease renew more than once in a 3-second period (this was possible due to the lag/unlag in the golang ticker).
    • Added a sleep before retrying to change the resource when an optimistic locking error occurred.
  • Acquire OnWaitFunc callback support for kubernetes locker

    The OnWaitFunc signature changed from func(lock LockHandle, doWait func() error) error to func(lockName string, doWait func() error) error, because at the moment of waiting the lock-handle is not yet fully available for the Kubernetes locker.
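
    For illustration, assuming the callback is set via lockgate.AcquireOptions, using the new signature might look roughly like this (a hedged sketch, not taken from the lockgate docs):

    opts := lockgate.AcquireOptions{
    	Shared: false,
    	OnWaitFunc: func(lockName string, doWait func() error) error {
    		// Log that we are about to wait, then perform the actual waiting.
    		fmt.Printf("waiting for lock %q...\n", lockName)
    		return doWait()
    	},
    }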

  • Refactor and fix kubernetes cm key name

    • Renamed LockHandle.ID to LockHandle.UUID to be more clear.
    • Use "lockgate.io/SHA3_224(lock-name)" for ConfigMap key names due to ConfigMap names restrictions.
    • Mv file_lock package into pkg/, add pkg/util.
  • Implement kubernetes based locker, refactor Locker interface

    • Implemented Kubernetes based locker using annotations and optimistic locking.
    • Changed Locker interface: Acquire method returns lock-handle which should be passed to the Release method.
    • Both the Kubernetes and file locker implementations are now thread-safe.
  • Rename project to lockgate and refactor

    • define a new Locker interface for all lock implementations;
    • use lockgate.NewFileLocker for file-locks;
    • use lockgate.NewKubernetesLocker for kubernetes-locks;
    • KubernetesLocker not implemented yet.
  • Simple CLI integration

    Thank you for your hard work. I've been reading through this and the few other implementations of distributed locking systems, and this one has a uniquely simple design. Effectively you take the lock for 10 seconds (by default) and keep renewing it every 3 seconds to prevent expiration. It's a simple design, and K8S services, while they can be a bit jittery, should be reliable on that scale. Competition between acquiring workers is handled by a race to insert.

    I don't do anything normally with golang so correct me if I am wrong.

    Any chance that an (example?) CLI application could be developed that would be usable from scripting languages? This would make your work accessible from sh/bash (or any other scripting language supporting shell commands, i.e. most of them). I suspect even languages like PHP and Python could make use of it via proc_open-type interfaces.

    My thoughts regarding API:

    #!/bin/bash
    
    keyname="our key"
    
    # take lock $keyname
    k8lock $keyname &
    lockpid=$!
    
    do_stuff
    
    # signal for clean exit
    kill -sHUP $lockpid
    wait $lockpid
    

    distlock should also monitor its parent pid for exit (and release the lock accordingly, i.e. in case of a crash).

    In C I would do this with a signal handler on SIGHUP and use PR_SET_PDEATHSIG to ensure a SIGHUP is received on parent death for that graceful cleanup.

  • Try-lock for file-locks may not be working properly

    Multiple calls to try-lock from multiple goroutines results in exclusive locks being taken in multiple goroutines simultaneously.

    Steps to reproduce needed.
