Bigmachine is a library for self-managing serverless computing in Go

Bigmachine

Bigmachine is a toolkit for building self-managing serverless applications in Go. Bigmachine provides an API that lets a driver process form an ad-hoc cluster of machines to which user code is transparently distributed.

User code is exposed through services, which are stateful Go objects associated with each machine. Services expose one or more Go methods that may be dispatched remotely. User services can call remote user services; the driver process may also make service calls.

Programs built using Bigmachine are agnostic to the underlying machine implementation, allowing distributed systems to be easily tested through an in-process implementation, or inspected during development using local Unix processes.

Bigmachine currently supports instantiating clusters of EC2 machines; other systems may be implemented with a relatively compact Go interface.
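To give a flavor of that interface, here is a sketch pieced together from the Start and Shutdown signatures quoted in the comments below; it is illustrative only, not the actual definition:

type System interface {
	// Start provisions count new machines.
	Start(ctx context.Context, count int) ([]*bigmachine.Machine, error)
	// Shutdown tears down any resources created by Start.
	Shutdown()
	// ... plus methods for bootstrapping machines and serving RPC.
}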

Help wanted!

A walkthrough of a simple Bigmachine program

Command bigpi is a relatively silly use of cluster computing, but illustrative nonetheless. Bigpi estimates the value of $\pi$ by sampling $N$ random points inside the unit square and counting the number $C \le N$ that fall inside the circle inscribed in the square. Since that circle covers $\pi/4$ of the square's area, our estimate is $\pi \approx 4C/N$.

This is inherently parallelizable: we can generate samples across a large number of nodes and, when they are done, sum the per-node counts to produce our estimate of $\pi$.

To do this in Bigmachine, we first define a service that samples some $n$ points and reports how many fell inside the unit circle.

type circlePI struct{}

// Sample generates n points inside the unit square and reports
// how many of these fall inside the unit circle.
func (circlePI) Sample(ctx context.Context, n uint64, m *uint64) error {
	r := rand.New(rand.NewSource(rand.Int63()))
	for i := uint64(0); i < n; i++ {
		if i%1e7 == 0 {
			log.Printf("%d/%d", i, n)
		}
		x, y := r.Float64(), r.Float64()
		if (x-0.5)*(x-0.5)+(y-0.5)*(y-0.5) < 0.25 {
			*m++
		}
	}
	return nil
}

The only notable aspect of this code is the signature of Sample, which follows the schema below; methods with this signature may be dispatched remotely by Bigmachine, as we shall see shortly.

func (service) Name(ctx context.Context, arg argtype, reply *replytype) error
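Argument and reply values are serialized for transport, so struct types work as well as scalars. As a purely hypothetical illustration (none of these names appear in the library), here is a variant of Sample that takes an explicit seed so that results are reproducible:

type sampleArgs struct {
	N    uint64
	Seed int64
}

type sampleReply struct {
	Inside uint64
}

// SampleSeeded is like Sample, but derives its random number
// generator from a seed supplied by the caller.
func (circlePI) SampleSeeded(ctx context.Context, arg sampleArgs, reply *sampleReply) error {
	r := rand.New(rand.NewSource(arg.Seed))
	for i := uint64(0); i < arg.N; i++ {
		x, y := r.Float64(), r.Float64()
		if (x-0.5)*(x-0.5)+(y-0.5)*(y-0.5) < 0.25 {
			reply.Inside++
		}
	}
	return nil
}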

Next follows the program's func main. First, we do the regular kind of setup a main might: define some flags, parse them, set up logging. Afterwards, a driver must call driver.Start, which initializes Bigmachine and sets up the process so that it may be bootstrapped properly on remote nodes. (Package driver provides high-level facilities for configuring and bootstrapping Bigmachine; adventurous users may use the lower-level facilities in package bigmachine to accomplish the same.) driver.Start returns a *bigmachine.B, which can be used to start new machines.

func main() {
	var (
		nsamples = flag.Int("n", 1e10, "number of samples to make")
		nmachine = flag.Int("nmach", 5, "number of machines to provision for the task")
	)
	log.AddFlags()
	flag.Parse()
	b := driver.Start()
	defer b.Shutdown()
	ctx := context.Background()

Next, we start a number of machines (as configured by flag nmach), wait for them to finish launching, and then distribute our sampling among them, using a simple "scatter-gather" RPC pattern. First, let's look at the code that starts the machines and waits for them to be ready.

// Start the desired number of machines,
// each with the circlePI service.
machines, err := b.Start(ctx, *nmachine, bigmachine.Services{
	"PI": circlePI{},
})
if err != nil {
	log.Fatal(err)
}
log.Print("waiting for machines to come online")
for _, m := range machines {
	<-m.Wait(bigmachine.Running)
	log.Printf("machine %s %s", m.Addr, m.State())
	if err := m.Err(); err != nil {
		log.Fatal(err)
	}
}
log.Print("all machines are ready")

Machines are started with (*B).Start, to which we provide the set of services that should be installed on each machine. (The service object provided is serialized and initialized on the remote machine, so it may include any desired parameters.) Start returns a slice of Machine instances representing each machine that was launched. Machines can be in a number of states. In this case, we keep it simple and just wait for them to enter the running state, after which the underlying machines are fully bootstrapped and the services have been installed and initialized. At this point, all of the machines are ready to receive RPC calls.
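Because Wait returns a channel, a driver can also bound how long it is willing to wait for bootstrapping. Here is a minimal sketch using time.After; the timeout and its five-minute value are our own choice, not part of the API:

select {
case <-m.Wait(bigmachine.Running):
	if err := m.Err(); err != nil {
		log.Fatal(err)
	}
case <-time.After(5 * time.Minute): // arbitrary budget
	log.Fatalf("machine %s did not come online in time", m.Addr)
}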

The remainder of main distributes a portion of the total samples to each machine, waits for them to complete, and then prints the estimate with the precision warranted by the number of samples taken. Note that this code further subdivides the work by calling PI.Sample once for each processor available on the underlying machines, as reported by Machine.Maxprocs, which depends on the physical machine configuration.

// Number of samples per machine
numPerMachine := uint64(*nsamples) / uint64(*nmachine)

// Divide the total number of samples among all the processors on
// each machine. Aggregate the counts and then report the estimate.
var total uint64
var cores int
g, ctx := errgroup.WithContext(ctx)
for _, m := range machines {
	m := m
	for i := 0; i < m.Maxprocs; i++ {
		cores++
		g.Go(func() error {
			var count uint64
			err := m.Call(ctx, "PI.Sample", numPerMachine/uint64(m.Maxprocs), &count)
			if err == nil {
				atomic.AddUint64(&total, count)
			}
			return err
		})
	}
}
log.Printf("distributing work among %d cores", cores)
if err := g.Wait(); err != nil {
	log.Fatal(err)
}
log.Printf("total=%d nsamples=%d", total, *nsamples)
var (
	pi   = big.NewRat(int64(4*total), int64(*nsamples))
	prec = int(math.Log(float64(*nsamples)) / math.Log(10))
)
fmt.Printf("π = %s\n", pi.FloatString(prec))

We can now build and run our binary like an ordinary Go binary.

$ go build
$ ./bigpi
2019/10/01 16:31:20 waiting for machines to come online
2019/10/01 16:31:24 machine https://localhost:42409/ RUNNING
2019/10/01 16:31:24 machine https://localhost:44187/ RUNNING
2019/10/01 16:31:24 machine https://localhost:41618/ RUNNING
2019/10/01 16:31:24 machine https://localhost:41134/ RUNNING
2019/10/01 16:31:24 machine https://localhost:34078/ RUNNING
2019/10/01 16:31:24 all machines are ready
2019/10/01 16:31:24 distributing work among 5 cores
2019/10/01 16:32:05 total=7853881995 nsamples=10000000000
π = 3.1415527980
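As a rough sanity check on this result: each sample is a Bernoulli trial with success probability $p = \pi/4$, so the standard error of the estimator $\hat{\pi} = 4C/N$ is $4\sqrt{p(1-p)/N} \approx 1.6 \times 10^{-5}$ at $N = 10^{10}$; the observed error $|3.1415527980 - \pi| \approx 4 \times 10^{-5}$ is within a few standard errors of the true value.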

Here, Bigmachine distributed computation across logical machines, each corresponding to a single core on the host system. Each machine ran in its own Unix process (with its own address space), and RPC happened through mutually authenticated HTTP/2 connections.

Package driver provides some convenient flags that help configure the Bigmachine runtime. Using these, we can configure Bigmachine to launch machines into EC2 instead:

$ ./bigpi -bigm.system=ec2
2019/10/01 16:38:10 waiting for machines to come online
2019/10/01 16:38:43 machine https://ec2-54-244-211-104.us-west-2.compute.amazonaws.com/ RUNNING
2019/10/01 16:38:43 machine https://ec2-54-189-82-173.us-west-2.compute.amazonaws.com/ RUNNING
2019/10/01 16:38:43 machine https://ec2-34-221-143-119.us-west-2.compute.amazonaws.com/ RUNNING
...
2019/10/01 16:38:43 all machines are ready
2019/10/01 16:38:43 distributing work among 5 cores
2019/10/01 16:40:19 total=7853881995 nsamples=10000000000
π = 3.1415527980

Once the program is running, we can use standard Go tooling to examine its behavior. For example, expvars are aggregated across all of the machines managed by Bigmachine, and the various profiles (CPU, memory, contention, etc.) are available as merged profiles through /debug/bigmachine/pprof. In the first version of bigpi, the CPU profile highlighted a problem: we were using the global rand.Float64, which requires a lock; the resulting contention was easily identifiable through the CPU profile:

$ go tool pprof localhost:3333/debug/bigmachine/pprof/profile
Fetching profile over HTTP from http://localhost:3333/debug/bigmachine/pprof/profile
Saved profile in /Users/marius/pprof/pprof.045821636.samples.cpu.001.pb.gz
File: 045821636
Type: cpu
Time: Mar 16, 2018 at 3:17pm (PDT)
Duration: 2.51mins, Total samples = 16.80mins (669.32%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 779.47s, 77.31% of 1008.18s total
Dropped 51 nodes (cum <= 5.04s)
Showing top 10 nodes out of 58
      flat  flat%   sum%        cum   cum%
   333.11s 33.04% 33.04%    333.11s 33.04%  runtime.procyield
   116.71s 11.58% 44.62%    469.55s 46.57%  runtime.lock
    76.35s  7.57% 52.19%    347.21s 34.44%  sync.(*Mutex).Lock
    65.79s  6.53% 58.72%     65.79s  6.53%  runtime.futex
    41.48s  4.11% 62.83%    202.05s 20.04%  sync.(*Mutex).Unlock
    34.10s  3.38% 66.21%    364.36s 36.14%  runtime.findrunnable
       33s  3.27% 69.49%        33s  3.27%  runtime.cansemacquire
    32.72s  3.25% 72.73%     51.01s  5.06%  runtime.runqgrab
    24.88s  2.47% 75.20%     57.72s  5.73%  runtime.unlock
    21.33s  2.12% 77.31%     21.33s  2.12%  math/rand.(*rngSource).Uint64

And after the fix, it looks much healthier:

$ go tool pprof localhost:3333/debug/bigmachine/pprof/profile
...
      flat  flat%   sum%        cum   cum%
    29.09s 35.29% 35.29%     82.43s   100%  main.circlePI.Sample
    22.95s 27.84% 63.12%     52.16s 63.27%  math/rand.(*Rand).Float64
    16.09s 19.52% 82.64%     16.09s 19.52%  math/rand.(*rngSource).Uint64
     9.05s 10.98% 93.62%     25.14s 30.49%  math/rand.(*rngSource).Int63
     4.07s  4.94% 98.56%     29.21s 35.43%  math/rand.(*Rand).Int63
     1.17s  1.42%   100%      1.17s  1.42%  math/rand.New
         0     0%   100%     82.43s   100%  github.com/grailbio/bigmachine/rpc.(*Server).ServeHTTP
         0     0%   100%     82.43s   100%  github.com/grailbio/bigmachine/rpc.(*Server).ServeHTTP.func2
         0     0%   100%     82.43s   100%  golang.org/x/net/http2.(*serverConn).runHandler
         0     0%   100%     82.43s   100%  net/http.(*ServeMux).ServeHTTP

GOOS, GOARCH, and Bigmachine

When using Bigmachine's EC2 machine implementation, the process is bootstrapped onto remote EC2 instances. Currently, the only supported GOOS/GOARCH combination for these is linux/amd64, so the driver program must also be linux/amd64. However, Bigmachine also understands the fatbin format, so users can compile fat binaries using the gofat tool. For example, the above can be run from a macOS driver if the binary is built using gofat instead of go:

macOS $ GO111MODULE=on go get github.com/grailbio/base/cmd/gofat
go: finding github.com/grailbio/base/cmd/gofat latest
go: finding github.com/grailbio/base/cmd latest
macOS $ gofat build
macOS $ ./bigpi -bigm.system=ec2
...
Owner

Source code created or maintained by GRAIL, Inc.
Comments
  • ec2system: change handling of default ec2boot binaries

    Instead of hardcoding ec2boot in the config (which materializes to, e.g., $HOME/.bigslice/config), we set the default to an empty string, and fill in the default binary associated with the current release.

    We also rewrite binaries with the official prefix to match the current version.

  • Migrate from CoreOS to Flatcar

  • Value of a state machine or flow diagram

    Please only consider this if you've received interest from others (and they support this). I'm mostly undertaking this as a pet project and don't wish to unduly burden you for my sake.

    I've been away for 10 days and, returning to bigmachine, I'm having to rely upon extensive debugging statements in an ongoing attempt to grok the underlying mechanism of the solution. I still have only an admittedly loose grasp of it and am fumbling through.

    The GCE implementation is able to create remote (containerized) nodes and, I believe, basic (HTTP, non-TLS) RPC is working. I'm finding debugging challenging (particularly the goroutines), in part because I don't have a good overall perspective.

    Are there state diagrams for bigmachine? I think there are two or three different diagrams of interest:

    1. The internal state diagram of an individual machine
    2. The networking state diagram for the process by which the local node drives the remotes
  • bigmachine: use ec2 ssh keys

    EC2 instances, in some environments, need to have an ssh key pair specified when they are created so that the key appears in the instance metadata. This PR scans the set of scavenged ssh keys (e.g., those in the ssh agent) and those available in EC2, finds the first overlap, and uses that key. A future PR will allow setting this key explicitly, since this heuristic may fail in setups where the user has multiple overlapping keys.

  • Add (System).Serve to simplify serving

    Add a (System).Serve interface method to allow serving on a given listener. This allows us to get rid of the getFreeTCPPort method and mirrors the net/http Serve and ListenAndServe functions.

    I did this while I was trying to fix a non-deterministically failing test, and I think it makes the code a bit easier to understand, as it eliminates the somewhat odd listen-then-close thing that getFreeTCPPort did.
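    Presumably the new method has a shape along the following lines, mirroring (net/http).Serve; the signature here is a guess for illustration, not the actual interface:

    // Serve serves machine RPCs on the provided listener, returning
    // only on error, in the manner of (net/http).Serve.
    Serve(ln net.Listener) error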

  • Prevent reboot of EC2 instances

    Prevent reboot of EC2 instances. There are a few scenarios in which the instance may try to reboot:

    • EC2 rebooting when encountering underlying hardware issues.
    • EC2 scheduled maintenance.
    • Services running that cause reboot, e.g. locksmithd. (This one is avoidable by using an appropriate AMI).

    We prevent rebooting because bringing the instance back to a working state would add a lot of complexity: we would need to restore both internal state and the state of defined services.

  • Include GUID in machine addresses

    Currently Bigmachine machines are named by an opaque "Addr". These are https URLs, e.g., https://ec2-.../. However, Addrs do not necessarily correspond uniquely to a logical machine: in systems like AWS EC2, DNS names may be recycled, and thus Addrs need not be stable. The solution to this can be fairly simple: just suffix the URL with some sort of GUID, e.g., https://ec2-.../unique-id. We should think about whether we want this unique id to have other properties as well.
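    A sketch of what such an address might look like; the GUID scheme here (a random 128-bit ID from crypto/rand) is illustrative only, not what the issue specifies:

    // newAddr suffixes a machine's base URL with a random 128-bit ID.
    func newAddr(base string) (string, error) {
    	var id [16]byte
    	if _, err := rand.Read(id[:]); err != nil {
    		return "", err
    	}
    	return fmt.Sprintf("%s/%x", strings.TrimRight(base, "/"), id), nil
    }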

  • Consider SemVer for releases

    The initial release v0.5 does not use SemVer and -- IIUC -- this is problematic for Go Modules:

    go get github.com/grailbio/bigmachine@v0.5
    go get github.com/grailbio/bigmachine@v0.5: no matching versions for query "v0.5"
    

    Whereas getting it by its commit works:

    go get github.com/grailbio/bigmachine@6ab40c5aa9bcc7030513289106a2fa20697ed731
    go: finding github.com/grailbio/bigmachine 6ab40c5aa9bcc7030513289106a2fa20697ed731
    

    and:

    go list -f "{{ .Dir }}" -m github.com/grailbio/bigmachine
    ${GOPATH}/pkg/mod/github.com/grailbio/[email protected]
    
  • Fail fast when service cannot be gob-encoded

    Fail fast when service cannot be gob-encoded. In general, we should fail fast if service registration fails fatally. The current behavior is that we will retry for several minutes.
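    One way to fail fast is to verify encodability up front, before registration is attempted. A minimal sketch, assuming plain gob encoding of the service value (checkGob is a hypothetical helper, and bigmachine's actual encoding may differ):

    // checkGob reports immediately whether svc can be gob-encoded,
    // instead of letting registration fail and retry for minutes.
    // Uses encoding/gob, fmt, and io from the standard library.
    func checkGob(svc interface{}) error {
    	if err := gob.NewEncoder(io.Discard).Encode(svc); err != nil {
    		return fmt.Errorf("service not gob-encodable: %v", err)
    	}
    	return nil
    }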

  • Use in-memory PEM instead of re-reading from file

    Use in-memory PEM instead of re-reading PEM from file. This eliminates a race when multiple bigmachine programs are reading/writing to the cached authority, e.g.:

    1. A sees no file at /tmp/bigmachine.pem.
    2. B sees no file at /tmp/bigmachine.pem.
    3. A generates a new authority and writes it to /tmp/bigmachine.pem.
    4. B generates a new authority and writes it to /tmp/bigmachine.pem, overwriting A's authority.
    5. A re-reads the authority contents, which are then passed to machines.
    6. A's client certificate cannot be used to connect to the machines.
  • Close all connections when resetting clients

    Close all connections when "resetting" clients. We reset clients by disposing of the clients and backing transports on error. Transports are only garbage collected when all connections, including idle connections, are closed. The existing code was not handling the case in which successful requests followed failed requests. We would eject the client from our cache, but subsequent successful requests would not close the remaining transport connections. The end result is that client resets could cause transport leaks. This is problematic for very long-running computations.

    Change this to close idle connections on clients/transports that have been reset.
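    In the standard library, the relevant call is (*http.Transport).CloseIdleConnections. A sketch of the shape of the fix, using a hypothetical stand-in for the real client cache:

    // clientCache is a hypothetical stand-in for the real cache.
    type clientCache struct {
    	mu         sync.Mutex
    	transports map[string]*http.Transport // keyed by machine address
    }

    // evict drops a failed machine's transport from the cache and
    // closes its idle connections so the transport can be collected.
    func (c *clientCache) evict(addr string) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	if t, ok := c.transports[addr]; ok {
    		t.CloseIdleConnections()
    		delete(c.transports, addr)
    	}
    }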

  • build(deps): bump github.com/aws/aws-sdk-go from 1.29.24 to 1.33.0

    Bumps github.com/aws/aws-sdk-go from 1.29.24 to 1.33.0.

  • (*System) Shutdown signature && bigmachine.Machine unique IDs

    The signature of Start is:

    (*System) Start(ctx context.Context, count int) ([]*bigmachine.Machine, error)
    

    Whereas (its converse) Shutdown is:

    (*System) Shutdown()
    

    It feels as though it would be more consistent if Shutdown's signature included both a context.Context and the []*bigmachine.Machine, and also returned an error.

    Even then, bigmachine.Machine's type does not include a unique ID for the machine (beyond an IP address, which is often not usable as a key); would it make sense to add one?

    I'm not retaining the list of machines created by Start in the GCE implementation, so when asked to Shutdown I must first enumerate all the instances that (I think) have been created (I do this by tag; I could potentially use IP) and then make a call to delete them.
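    Concretely, the suggestion amounts to something like the following (hypothetical, mirroring Start):

    (*System) Shutdown(ctx context.Context, machines []*bigmachine.Machine) error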

  • circlePI example problems

    I'm attempting to use bigmachine with your circlePI example, but:

    go run main.go
    2019/10/04 10:07:44 waiting for machines to come online
    2019/10/04 10:07:44 resetting http client https://localhost:46237/ while calling to Supervisor.Ping: temporary network error
    2019/10/04 10:07:45 https://localhost:46237/ Supervisor.Ping: succeeded after 1 retries
    2019/10/04 10:07:45 https://localhost:46237/: zip: not a valid zip file
    2019/10/04 10:07:45 machine https://localhost:46237/ STOPPED
    2019/10/04 10:07:45 zip: not a valid zip file
    

    Will try digging into this myself but it's discouraging :-)

    I was unable to find this example published in the repo. It would be useful as I could then determine more quickly whether this is my error.

    Perhaps "Getting Started..."?

  • Kubernetes System

    I think it would be interesting to have a Kubernetes-backed implementation.

    This would provide a more generic solution than the per-cloud implementations and could facilitate cross-cloud deployments too.
