gnark is a fast, open-source library for zero-knowledge proof protocols written in Go

gnark


gnark is a framework to execute (and verify) algorithms in zero-knowledge. It offers a high-level API to easily design circuits, and fast implementations of state-of-the-art ZKP schemes.

gnark has not been audited and is provided as-is, use at your own risk. In particular, gnark makes no security guarantees such as constant time implementation or side-channel attack resistance.

gnark is optimized for amd64 targets (x86 64bits) and tested on Unix (Linux / macOS).

Get in touch: [email protected]

Proving systems

  • Groth16

Curves

  • BLS377
  • BLS381
  • BN256
  • BW761

Getting started

Prerequisites

You'll need to install Go.

Install gnark

go get github.com/consensys/gnark

Note that if you use Go modules, the module path in go.mod is case-sensitive (use consensys, not ConsenSys).

Workflow

Our blog post is a good place to start. In short:

  1. Implement the algorithm using gnark API (written in Go)
  2. r1cs, err := frontend.Compile(gurvy.BN256, &circuit) to compile the circuit into a R1CS
  3. pk, vk := groth16.Setup(r1cs) to generate proving and verifying keys
  4. groth16.Prove(...) to generate a proof
  5. groth16.Verify(...) to verify a proof

Documentation

You can find the documentation here.

Examples and gnark usage

Examples are located in /examples.

/examples/cubic

  1. To define a circuit, one must implement the frontend.Circuit interface:
// Circuit must be implemented by user-defined circuits
type Circuit interface {
	// Define declares the circuit's Constraints
	Define(curveID gurvy.ID, cs *ConstraintSystem) error
}
  2. Here is what x**3 + x + 5 = y looks like:
// CubicCircuit defines a simple circuit
// x**3 + x + 5 == y
type CubicCircuit struct {
	// struct tags on a variable are optional
	// the default uses the variable name and secret visibility.
	X frontend.Variable `gnark:"x"`
	Y frontend.Variable `gnark:",public"`
}

// Define declares the circuit constraints
// x**3 + x + 5 == y
func (circuit *CubicCircuit) Define(curveID gurvy.ID, cs *frontend.ConstraintSystem) error {
	x3 := cs.Mul(circuit.X, circuit.X, circuit.X)
	cs.AssertIsEqual(circuit.Y, cs.Add(x3, circuit.X, 5))
	return nil
}
  3. The circuit is then compiled (into a R1CS):
var circuit CubicCircuit

// compiles our circuit into a R1CS
r1cs, err := frontend.Compile(gurvy.BN256, &circuit)

Using struct tag attributes (similar to the json or xml encoders in Go), frontend.Compile() will parse the circuit structure and allocate the user's secret and public inputs [TODO add godoc link for details].
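
For instance, under these rules a circuit struct might declare its inputs as follows (an illustrative sketch, not taken from the repository; the combined "name,public" tag form is an assumption consistent with the tags shown above):

type TagsExample struct {
	// no tag: secret visibility, named "A"
	A frontend.Variable
	// explicit name, still secret
	B frontend.Variable `gnark:"b"`
	// default name "C", public visibility
	C frontend.Variable `gnark:",public"`
	// explicit name and public visibility (assumed combined form)
	D frontend.Variable `gnark:"d,public"`
}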

  4. The circuit can be tested like so:
assert := groth16.NewAssert(t)

{
	var witness CubicCircuit
	witness.X.Assign(42)
	witness.Y.Assign(42)

	assert.ProverFailed(r1cs, &witness)
}

{
	var witness CubicCircuit
	witness.X.Assign(3)
	witness.Y.Assign(35)
	assert.ProverSucceeded(r1cs, &witness)
}
  5. The APIs to call the Groth16 algorithms:
pk, vk := groth16.Setup(r1cs)
proof, err := groth16.Prove(r1cs, pk, solution)
err := groth16.Verify(proof, vk, solution)
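
Putting the snippets above together, a minimal end-to-end run might look like this (a sketch based on the calls shown above; error handling elided):

var circuit CubicCircuit

// compile the circuit into a R1CS
r1cs, _ := frontend.Compile(gurvy.BN256, &circuit)

// one-time setup: proving and verifying keys
pk, vk := groth16.Setup(r1cs)

// assign a witness satisfying x**3 + x + 5 == y
var witness CubicCircuit
witness.X.Assign(3)
witness.Y.Assign(35)

// prove, then verify
proof, _ := groth16.Prove(r1cs, pk, &witness)
err := groth16.Verify(proof, vk, &witness)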

API vs DSL

While several ZKP projects chose to develop their own language and compiler for the frontend, we designed a high-level API, in plain Go.

Relying on Go ---a mature and widely used language--- and its toolchain, has several benefits.

Developers can debug, document, test and benchmark their circuits as they would any other Go program. Circuits can be versioned, unit tested and used in standard continuous delivery workflows. IDE integration (we use VSCode) comes for free, and all these features are stable across platforms.

Moreover, gnark is not a black box and exposes APIs like a conventional cryptographic library (think aes.encrypt([]byte)). Complex solutions need this flexibility --- gRPC/REST APIs, serialization protocols, monitoring, logging, ... are all a few lines of code away.

Designing your circuit

Caveats

Three points to keep in mind when designing a circuit (which is close to constraint system programming):

  1. Under the hood, there is only one variable type (a field element). TODO
  2. A for loop must have fixed bounds. TODO
  3. There are no if statements; conditional logic uses cs.Select() (as in Prolog). TODO. (See the sketch after this list.)
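
A short sketch illustrating points 2 and 3, assuming cs.Select(b, i1, i2) returns i1 when b is 1 and i2 otherwise, and cs.AssertIsBoolean with the obvious meaning (the circuit and its names below are hypothetical):

type SelectCircuit struct {
	B    frontend.Variable // selector, expected to be 0 or 1
	A, C frontend.Variable
	Y    frontend.Variable `gnark:",public"`
}

func (circuit *SelectCircuit) Define(curveID gurvy.ID, cs *frontend.ConstraintSystem) error {
	// point 2: loop bounds are fixed at compile time
	acc := circuit.A
	for i := 0; i < 4; i++ { // 4 is a Go constant, not a circuit Variable
		acc = cs.Mul(acc, acc)
	}

	// point 3: no if statement; select between the two branches instead
	cs.AssertIsBoolean(circuit.B)
	y := cs.Select(circuit.B, acc, circuit.C) // "if B { acc } else { C }"
	cs.AssertIsEqual(circuit.Y, y)
	return nil
}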

gnark standard library

Currently gnark provides the following components (see gnark/std):

  • The MiMC hash function
  • Merkle tree (binary, without domain separation)
  • Twisted Edwards curve arithmetic (for bn256 and bls381)
  • Signature (EdDSA Algorithm, following https://tools.ietf.org/html/rfc8032)
  • Groth16 verifier (1 layer recursive SNARK with BW761)

Benchmarks

It is difficult to fairly and precisely compare benchmarks between libraries. Some implementations may excel in conditions where others may not (available CPUs, RAM or instruction set, WebAssembly target, ...). Nonetheless, it appears that gnark is about three times faster than the existing state of the art.

Here are our measurements for the Prover. These benchmarks ran on an AWS c5a.24xlarge instance, with hyperthreading disabled.

The same circuit (computing 2^(2^x)) is benchmarked using gnark, bellman (bls381, ZCash), bellman_ce (bn256, matterlabs).
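
Such a relation boils down to a fixed-length chain of squarings, so the number of constraints is set directly by the loop bound. A rough illustration in the API used above (not the exact benchmark code; the circuit and names below are hypothetical):

type BenchCircuit struct {
	X frontend.Variable
	Y frontend.Variable `gnark:",public"`
}

func (circuit *BenchCircuit) Define(curveID gurvy.ID, cs *frontend.ConstraintSystem) error {
	const n = 100000 // one multiplication constraint per iteration
	x := circuit.X
	for i := 0; i < n; i++ {
		x = cs.Mul(x, x)
	}
	cs.AssertIsEqual(circuit.Y, x)
	return nil
}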

BN256

| number of constraints | 100000 | 32000000 | 64000000 |
| --------------------- | ------ | -------- | -------- |
| bellman_ce (s/op)     | 0.43   | 106      | 214.8    |
| gnark (s/op)          | 0.16   | 33.9     | 63.4     |
| speedup               | x2.6   | x3.1     | x3.4     |

On large circuits, that's over 1M constraints per second.

BLS381

| number of constraints | 100000 | 32000000 | 64000000 |
| --------------------- | ------ | -------- | -------- |
| bellman (s/op)        | 0.6    | 158      | 316.8    |
| gnark (s/op)          | 0.23   | 47.6     | 90.7     |
| speedup               | x2.7   | x3.3     | x3.5     |

Resources requirements

Depending on the topology of your circuit(s), you'll need from 1 to 2GB of RAM per million constraints. The algorithms are very memory intensive, so hyperthreading won't help. Many physical cores will help, but past a point, per-core throughput decreases.


Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Versioning

We use SemVer for versioning. For the versions available, see the tags on this repository.

License

This project is licensed under the Apache 2 License - see the LICENSE file for details

Comments
  • feat: split field in field emulation into Field and FieldAPI

    Previously there was a single type for exposing field emulation which implemented frontend.API. For writing circuits it was good enough, but it was slightly inconvenient for writing gadgets and defining types on top of non-native Element, as we had to type assert the variables and didn't have some specialised methods (multiplication by constant, for example). It is now separated into two types, Field[T] and FieldAPI[T]. FieldAPI[T] should work as previously (plus I fixed some small bugs here and there on the go) and Field[T] allows working directly with *Element[T] types.

    Internally, changed the implementation such that FieldAPI[T] depends on Field[T] by doing type checking/asserting on the fly. Added a few methods to Field[T] such as MulConst (multiplication by a small constant, very useful for elliptic curves), MulMod (mul+reduce; while testing the circuits it looked like this gives a lower number of constraints on average), MulModMutable (mul+reduce+mutable reduction of the inputs) and corresponding analogues for addition and subtraction.

    While I was at it, I updated the documentation and added a few documentation examples (how to use Field[T], FieldAPI[T]).

    I'll update ECDSA on top of it and then I think I'm finally done :)

  • refactor: std/math/nonnative -> std/math/emulated

    This PR refactors std/math/nonnative -> std/math/emulated.

    emulated.Field has no pointer-receiver operations, and the only usage is through NewField. Emulated arithmetic is now parametrized with the emulated field constants; usage is changed to:

    type Circuit struct {
    	// Limbs of non-native elements X, Y and Res
    	X, Y, Res emulated.Element[emulated.Secp256k1]
    }
    
    func (circuit *Circuit) Define(api frontend.API) error {
    	// wrap API to work in SECP256k1 scalar field
    	secp256k1, err := emulated.NewField[emulated.Secp256k1](api)
    	if err != nil {
    		return err
    	}
    
    	tmp := secp256k1.Mul(circuit.X, circuit.Y)
    	secp256k1.AssertIsEqual(tmp, circuit.Res)
    	return nil
    }
    
    // test file:
    func TestEmulatedArithmetic(t *testing.T) {
    	assert := test.NewAssert(t)
    	std.RegisterHints()
    
    	var circuit, witness Circuit
    
    	witness.X.Assign("26959946673427741531515197488526605382048662297355296634326893985793")
    	witness.Y.Assign("53919893346855483063030394977053210764097324594710593268653787971586")
    	witness.Res.Assign("485279052387156144224396168012515269674445015885648619762653195154800")
    
    	assert.ProverSucceeded(&circuit, &witness, test.WithCurves(ecc.BN254), test.WithBackends(backend.GROTH16), test.NoSerialization())
    }
    
    
  • groth16.Verify should be able to verify bellman-generated proof

    See #29 for more context.

    Need to parse bellman verifying key and proof data structures, and ensure 100% compatibility with groth16.Verify (and the pairing check).

  • Max limit on `big.Int` input to circuit?

    Hi there,

    I'm doing some work with large inputs, and seem to have bumped into a limit on the input size.

    I can't seem to pass in inputs larger than 32 bytes through the circuit witness and get it to show up correctly.

    Minimal reproducible code:

    type Circuit struct {
    	Input frontend.Variable
    }
    
    func (c *Circuit) Define(curveID ecc.ID, api frontend.API) error {
    	api.Println("After:")
    	api.Println(c.Input)
    
    	api.AssertIsEqual(c.Input, 0)
    
    	return nil
    }
    
    func main() {
    	var circuit Circuit
    	r1cs, err := frontend.Compile(ecc.BN254, backend.GROTH16, &circuit)
    
    	if err != nil {
    		panic(err)
    	}
    
    	pk, _, _ := groth16.Setup(r1cs)
    
    	largeInput := []byte{}
    	// OK at 32
    	// Not OK at 33
    	for i := 0; i < 33; i++ {
    		largeInput = append(largeInput, 1)
    	}
    
    	b := big.Int{}
    	b.SetBytes(largeInput)
    
    	var witness Circuit
    	witness.Input.Assign(b)
    
    	fmt.Println("Before:")
    	fmt.Println(b.String())
    
    	_, _ = groth16.Prove(r1cs, pk, &witness)
    }
    

    Output with 33 bytes:

    Before:
    116246175861776258935035969263623938864459278723152879976867221592257887011073
    main.go:62 After:
    main.go:63 6804961502579882823803940537337563421717456721072708258376200659378844532988
    

    Output with 32 bytes:

    Before:
    454086624460063511464984254936031011189294057512315937409637584344757371137
    main.go:62 After:
    main.go:63 454086624460063511464984254936031011189294057512315937409637584344757371137
    

    Any ideas on how I can get around this limit?

  • Frontend refactoring

    This PR is long overdue; it cleans up the way the plonk constraint system is added, and makes the code upgradable for different new constraint systems (custom gates, generalised arithmetic circuits, square constraints, etc.).

    Architecture

    The frontend is now organised like this:

    frontend/
    ├── compiler
    ├── cs
    │   ├── plonk
    │   └── r1cs
    └── utils

    frontend/

    api.go provides the API interface that a constraint system should implement (with the usual functions like Add, Mul, etc.). It also provides a System interface, which inherits from API and provides the following functions:

    type System interface {
    	API
    	NewPublicVariable(name string) Variable
    	NewSecretVariable(name string) Variable
    	Compile(curveID ecc.ID) (compiled.CompiledConstraintSystem, error)
    }
    

    circuit.go provides the interface that a circuit should implement (as before).

    compiler/

    compile.go provides the logic to build a constraint system to its target form (r1cs or sparse r1cs). It provides the same function Compile as in develop. The difference is that buildCS has been renamed bootStrap and takes as parameter the System interface. It acts exactly as in develop, recursively instantiating the inputs and calling Define afterwards.

    cs/

    cs.go provides the common data shared by each constraint system:

    type ConstraintSystem struct {
    	compiled.CS
    
    	// input wires
    	Public, Secret []string
    
    	CurveID ecc.ID
    	// BackendID backend.ID
    
    	// Coefficients in the constraints
    	Coeffs         []big.Int      // list of unique coefficients.
    	CoeffsIDsLarge map[string]int // map to check existence of a coefficient (key = coeff.Bytes())
    	CoeffsIDsInt64 map[int64]int  // map to check existence of a coefficient (key = int64 value)
    
    	// map for recording boolean constrained variables (to not constrain them twice)
    	MTBooleans map[int]struct{}
    }
    

    It is essentially a list of coefficients to be assigned to wires; it is agnostic of the constraints. It inherits from CS, defined in the compiled package.

    r1cs/, plonk/

    Those folders contain the actual instantiations of the plonk constraint system and the Groth16 constraint system (sparse_r1cs and r1cs respectively, in our naming).

    r1cs/ contains the same API as in develop; the files were merely moved from frontend/ to r1cs/. plonk/ contains the equivalent but for SparseR1CS. In particular, an api.go has been created for handling plonk constraints, so there is no longer a conversion from r1cs to sparse r1cs.

    Both constraint systems inherit from ConstraintSystem, with an additional field which is the slice of the actual constraints (ex:

    type SparseR1CS struct {
    	cs.ConstraintSystem
    
    	Constraints []compiled.SparseR1C
    }
    

    ).

    In both r1cs/ and plonk/ there is a conversion.go file providing a Compile function, which essentially shifts the IDs of the variables in the logs, etc. after a circuit is built. This function is called internally by Compile defined in frontend/compiler/compile.go.

    API breaking changes

    • frontend.Compile --> compiler.Compile
    • type Hint struct {ID hint.ID, Inputs []Variable} --> type Hint struct {ID hint.ID, Inputs []interface{}}

    Status

    Tests pass, except the circuits_stats tests, as several of the circuits used in integration tests have been extended to cover more cases. There is no change in Groth16 in terms of the number of constraints (the Groth16 logic hasn't been modified at all; the files were merely moved around).

  • Add hint registry

    Added a hint registry for registering hint functions used in gadgets. Improved the documentation to help explain the usage of hints.

    Todo:

    • [x] Prover should define all registered hint functions.
    • [x] Find usages of hint functions and document a bit more.
  • Need an example/documentation of large variable support and  integer overflow verification

    Hello,

    I am trying to define a circuit which can deal with large variables and might need to split the terms used in the constraint definition into smaller terms modulo q where q<p. I have a number of questions which might be better solved by a working example. I can also help build the example with some guidance. My questions are:

    • When checking for overflow, can I use the comparison operator defined by the compiler? Cmp(i1, i2 [Variable](https://pkg.go.dev/github.com/consensys/gnark/frontend#Variable)) [Variable](https://pkg.go.dev/github.com/consensys/gnark/frontend#Variable)
    • Same question for splitting the terms: can that be done using the operations defined here: https://pkg.go.dev/github.com/consensys/[email protected]/frontend
    • Does this (splitting + checking overflow) need to be done after each new constraint added to the circuit? Any guidance will be appreciated!

    Thanks!

  • Is there mimc hash implemented by solidity language

    We are constructing a gnark proof system on Ethereum. The public input part needs to be verified with a MiMC hash in a Solidity contract, matching the circuit implemented with gnark. Could you tell me if there is a corresponding Solidity implementation of the MiMC hash in gnark? Many thanks!

  • feat: parametrise frontend

    A quick try at parametrising the frontend types.

    I ran into problems when working with large linear expressions which appear in zk-unfriendly circuits (for example #401). The problem is that the linear expressions are huge, and in additions we have to add coefficients wire by wire. This created two issues:

    1. the coefficients were stored in a lookup table to avoid redundancy and keep terms compact (in linear expressions). However, when reducing LEs, some coefficients became unused. But we cannot remove them from the lookup table, as the compiler has no knowledge of whether the inputs are used in other operations. In the context of #401, this meant that the size of the lookup table grew to 10M entries very quickly (after 15K constraints) and lookups became very expensive.

    2. we perform a lot of reductions which incur super-linear cost in the size of linear expressions.

    So, I was trying to solve both of the issues. To get rid of the lookup tables, I redefined Term to include the coefficient directly (not through ID and table). Additionally, I parametrised everything so that the coefficients would be the actual underlying type (fr.Element for scalar fields).

    For the second problem, I implemented a feature to record a linear expression in the circuit and replace it with a single term. I did this by creating a new variable h, defining the constraint (\sum_{i=0}^{huge} c_i a_i) * 1 = h and then returning h instead. The threshold for doing this is configurable (the default behaviour is not to "compress" LEs) and can be provided to the compiler as the frontend.WithCompressThreshold option. This gives a satisfactory result: for the keccak-f permutation with threshold 500, the incurred overhead (in the number of constraints) is around 9%. Compared to the previous situation, where I was unable to compile the circuit at all (it now compiles in 30s), this is satisfactory (at least for now).

    PR is not yet ready:

    • [ ] some tests fail (but they seem easy to fix),
    • [ ] I leak the type parameters slightly into the backend (I'm not sure if it actually hurts, because the type parameters are fixed for the curves. In any case it would be easy to fix with code generation)
    • [ ] I lost some optimisations in the backend (I use the same Term definition as for the frontend, but for the backend we can actually build the coefficient table and use it. I also lost batch-invert in the plonk backend because of the missing coefficient table)
    • [ ] lacking documentation for new definitions
    • [ ] could use more pooling for element reuse (but need to look at the assembly to figure out when it is allocated on the heap and when not)
    • [ ] the frontend still has a lot of *big.Ints, but I want to get rid of them (for constants etc.).

    I haven't benchmarked on larger circuits yet, but the tests seem to run not-too-slowly (subjectively, at least). @gbotrel, can you give some feedback on whether this is worth fixing?
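
    For reference, passing the compile option mentioned above might look roughly like this (a sketch; the Compile signature follows the ecc.ID/backend.ID form used elsewhere on this page and may differ between gnark versions):

    // sketch: replace linear expressions longer than 500 terms with a single term
    ccs, err := frontend.Compile(ecc.BN254, backend.GROTH16, &circuit,
    	frontend.WithCompressThreshold(500))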

  • feat: implement non-native field emulation

    This PR adds a new standard gadget std/math/nonnative for performing non-native field operations in a circuit. It tries to follow a big.Int-like interface, but is not completely compatible.

    The PR is mostly ready, but still some things to do:

    • [x] add tests for more complex computations
    • [x] implement Select and Lookup2
    • [x] write nice documentation listing all the assumptions the library makes to ensure the validity of the computation
  • feat(std): add LessThan() and IsEqual()

    Hi guys,

    I've found some time to give this a go :).

    I have implemented LessThan() and IsEqual() as a new math package under std. Currently, this implementation gives it great isolation and builds on top of the current frontend.api without touching it.

    I was hoping to simplify LessThan() to compute a < b; however, I found that I still needed the full functionality of Compare(). If b is greater than a, that needs to be recorded too and persisted to the end of the loop. Without this, if a has a less significant bit set where b's is not, it will incorrectly appear as though a > b.

    As such, I wanted to ask whether you guys want to relax the feature from LessThan() to Compare(), given that both require the same amount of work but Compare() gives more functionality to the user. I can also see the benefit of simplifying the interface and keeping it as LessThan(), so I'm also happy to keep it just as is and include the reduction done in the final lines of the current implementation:

    	// Now, convert output of Convert() into LessThan()
    	return m.IsEqual(output, -1)
    

    Another thing I was hoping to do was to add it directly into frontend.api. However, I had some issues getting the tests to run. Here is the link to my attempted commit on my fork: LINK

    An example of the error is:

    === RUN   TestCompare/fuzz/bw6_761/plonk
        assert.go:356:
                    Error Trace:    assert.go:356
                                                            assert.go:70
                    Error:          Received unexpected error:
                                    compilation is not deterministic
                    Test:           TestCompare/fuzz/bw6_761/plonk
    

    However, this implementation also means that the code has to be duplicated thrice across r1cs, plonk and engine. I could not work out a way to prevent code duplication here if I added it straight into frontend.api.

    What do you guys think? I like the idea of having api.LessThan() and api.IsEqual(); however it may be cleaner to have these in a new std/math package.

    Open to suggestions and feedback!
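
    As an aside, the "most significant differing bit" behaviour described above could be sketched like this (a hypothetical illustration, not the PR's implementation; it assumes the standard frontend.API methods ToBinary, IsZero, Sub, Mul and Select):

    // compare returns 1 if a > b, -1 if a < b and 0 if a == b,
    // for values known to fit in nbBits bits.
    func compare(api frontend.API, a, b frontend.Variable, nbBits int) frontend.Variable {
    	aBits := api.ToBinary(a, nbBits)
    	bBits := api.ToBinary(b, nbBits)
    	res := frontend.Variable(0)
    	// walk from least to most significant bit: the most significant
    	// differing bit decides the result, overriding earlier positions.
    	for i := 0; i < nbBits; i++ {
    		eq := api.IsZero(api.Sub(aBits[i], bBits[i]))
    		gt := api.Mul(aBits[i], api.Sub(1, bBits[i])) // a_i == 1 && b_i == 0
    		res = api.Select(eq, res, api.Select(gt, 1, -1))
    	}
    	return res
    }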

  • feat: adds `api.MAC(..)`

    	// MAC sets and return a = a + (b*c)
    	// ! may mutate a without allocating a new result
    	// ! always use MAC(...) result for correctness
    	MAC(a, b, c Variable) Variable
    

    Fixes #416. The impact in std/math/emulated for rsh is significant (~40% fewer memallocs).

    	for i := 0; i < len(bits); i++ {
    		Σbi = api.MAC(Σbi, bits[i], c)
    		ΣbiRShift = api.MAC(ΣbiRShift, bits[i], cRShift)
    
    		c.Lsh(c, 1)
    		cRShift.Lsh(cRShift, 1)
    		api.AssertIsBoolean(bits[i])
    	}
    

    Also, this new API would result in fewer constraints in a PlonKish arithmetization, since it creates one constraint instead of 2.

  • bug: PackLimbs in field emulation assumes input is less than emulated modulus

    PackLimbs is used to construct a new emulated element and enforce the limb widths. However, it is also used to enforce the output of the QuoHint hint function, whose output may be larger than the modulus.

    It would be better to have two versions of this method, PackElementLimbs and PackFullLimbs, where the former assumes that the input is smaller than the modulus and the latter only that the limbs are smaller than the NbBits parameter.

    Ref: https://github.com/ConsenSys/gnark/discussions/420

  • GKR as API

    It would be nice for the user to construct GKR circuits the same way as they construct SNARKs. To make that as seamless as possible, I propose GKR API objects that work more or less the same as regular APIs, with an extra bit of setup and teardown. The following is an example of a circuit that computes x, y -> x^2 + y:

    type xSqPlusYCircuit struct {
    	X, Y []frontend.Variable
    }
    
    func (c *xSqPlusYCircuit) Define(api frontend.API) error {
    	_gkr := gkr.NewApi()
    	var x, y frontend.Variable
    	var err error
    	if x, err = _gkr.Input(c.X); err != nil {
    		return err
    	}
    	if y, err = _gkr.Input(c.Y); err != nil {
    		return err
    	}
    	t := _gkr.Mul(x, x)
    	_gkr.Add(y, t)
    	var gkrOuts [][]frontend.Variable
    	if gkrOuts, err = _gkr.Output(api); err != nil {
    		return err
    	}
    	Z := gkrOuts[0]
    
    	for i := range c.X {
    		api.AssertIsEqual(Z[i], api.Add(api.Mul(c.X[i], c.X[i]), c.Y[i]))
    	}
    	return nil
    }
    

    The following functions are introduced:

    1. NewApi(): Creates a new GKR API, not much to it.
    2. Input([]frontend.Variable): (Import?) Creates a GKR input variable with the slice being its values across all instances.
    3. Output(frontend.API): (Compile? Export?) Compiles and finalizes the GKR circuit and spits out assignments ([]frontend.Variable) for the output variables in the order created. That is why the output variable resulting from _gkr.Add(y, t) is not captured in that line.
    4. Mul: is the part that's supposed to resemble a regular API. Except that the user has no need to capture the result in case of an output variable, instead getting its assignments from the Output function.

    Remarks

    • Not all functionalities of an API are actually implemented by gkr.API, only those directly used for constructing circuits. For example, writing _gkr.Compiler() would result in a panic.
    • It may be desirable to do away with the Input and Output functions and treat slices of frontend.Variable as GKR variables. I see three potential difficulties with this:
      1. For the user to have access to all of the GKR circuit's internal variables, a hint should be run for each variable which may incur a performance penalty.
      2. Consider a case where we are proving correctness of a Merkle tree root. Then, the GKR circuit would consist of a hash H(x_i, x'_i)=y_i and 2^d instances of it for a tree of depth d. However, due to the Merkle tree's structure there are value dependencies between inputs and outputs across different instances such as x'_3 = y_2. It is unreasonable to expect the user to compute y_2 independently just so that they can provide the input value x'_3 when defining the input wire x'. (An interface for marking these dependencies will be outlined later.)
      3. It may get a bit messy trying to use slices as map keys.
    • Under the hood, the Output method also computes the GKR proof and encodes a GKR verifier.
  • feat: add `api.AddInPlace()` and `api.MulInPlace`

    Issue: when dealing with very large linear expressions, api.Add and api.Mul perform a lot of memory allocations and memory moves, since the result is always a new variable. Consider the following snippet, though:

    // generate the roots of unity <1,ω,ω²,..,ωⁿ⁻¹>
    	rous := make([]fr.Element, cardinality)
    	rous[0].Set(&cardInverse)
    	for i := 1; i < int(cardinality); i++ {
    		rous[i].Mul(&rous[i-1], &genInv)
    	}
    	var acc frontend.Variable
    	for i := 0; i < int(cardinality); i++ {
    		acc = 0
    		for j := 0; j < int(cardinality); j++ {
    			e := (j * i) % int(cardinality)
    			tmp := api.Mul(rous[e], p[j])
    			acc = api.Add(acc, tmp)
    		}
    		api.AssertIsEqual(acc, res[i])
    	}
    

    This section:

    			tmp := api.Mul(rous[e], p[j])
    			acc = api.Add(acc, tmp)
    

    If cardinality is large, this is going to perform millions of memory allocations and moves; pre-allocating a linear expression with the expected capacity would help tremendously.

  • clean, feat: `DebugInfo` and `LogEntry` need `Printf` style APIs & option to expand or not linear expressions

    Functionally these are similar. A nice feature to have may be something like:

    d := ....Printf("foo bar %lx + %l == %lx")
    // %lx would expand a linear expression as (2+3+7) in the trace
    // %l would just evaluate the linear expression as 12
    