Go Implementation of the Spacemesh protocol full node. 💾⏰💪

A Programmable Cryptocurrency


go-spacemesh

💾 💪 Thanks for your interest in this open source project. This repo is the Go implementation of the Spacemesh p2p full node software.

Spacemesh is a decentralized blockchain computer using a new race-free consensus protocol that doesn't involve energy-wasteful proof of work.

We aim to create a secure and scalable decentralized computer formed by a large number of desktop PCs at home.

We are designing and coding a modern blockchain platform from the ground up for scale, security, and speed, informed by the achievements and mistakes of previous projects in this space.

To learn more about Spacemesh head over to https://spacemesh.io.

To learn more about the Spacemesh protocol watch this video.

Motivation

Spacemesh is designed to create a decentralized blockchain smart contracts computer and a cryptocurrency, formed by connecting the home PCs of people from around the world into one virtual computer, without the massive energy waste and mining pool issues inherent in other blockchain computers. It aims to provide a provably-secure and incentive-compatible smart contracts execution environment.

Spacemesh is designed to be ASIC-resistant and to avoid giving an unfair advantage to rich parties who can afford to set up dedicated computers on the network. We achieve this with a novel consensus protocol and by optimizing the software to run effectively on home PCs that are also used for interactive apps.

What is this good for?

Provide dapp and app developers with a robust way to add value exchange and other value-related features to their apps at scale. Our goal is to create a truly decentralized cryptocurrency that fulfills the original vision behind Bitcoin: a secure, trustless store of value as well as a transactional currency with extremely low transaction fees.

Target Users

go-spacemesh is designed to be installed and operated on users' home PCs to form one decentralized computer. It will be distributed with the Spacemesh App, but you can also build and run it from source code.

Project Status

We are working hard towards our first major milestone - a public permissionless testnet running the Spacemesh consensus protocol.

Contributing

Thank you for considering contributing to the go-spacemesh open source project!

We welcome and actively accept contributions, large and small.

Diggin' Deeper

Please read the Spacemesh full FAQ.

go-spacemesh Architecture

High Level Design

Client Software Architecture

Getting

git clone [email protected]:spacemeshos/go-spacemesh.git

-- or --

Fork the project from https://github.com/spacemeshos/go-spacemesh

Since the project uses Go Modules it is best to place the code outside your $GOPATH. Read this for alternatives.

Setting Up Local Dev Environment

Building is supported on OS X, Linux, FreeBSD, and Windows.

Install Go 1.15 or later for your platform, if you haven't already.

On Windows you need to install make via msys2, MinGW-w64, or mingw (https://chocolatey.org/packages/mingw).

Ensure that $GOPATH is set correctly and that the $GOPATH/bin directory appears in $PATH.

Before building we need to set up the Go environment. Do this by running:

make install

Building

To build go-spacemesh for your current system architecture, from the project root directory, use:

make build

(On FreeBSD, you should instead use gmake build. You can install gmake with pkg install gmake if it isn't already installed.)

This will build the go-spacemesh binary, saving it in the build/ directory.

To build a binary for a specific target platform, use:

make darwin | linux | freebsd | windows

Platform-specific binaries are saved to the build/ directory.

Using go build and go test without make

To build code without using make the CGO_LDFLAGS environment variable must be set appropriately. The required value can be obtained by running make print-ldflags or make print-test-ldflags.

This can be done in 3 ways:

  1. Setting the variable in the shell environment (e.g., in bash run CGO_LDFLAGS=$(make print-ldflags)).
  2. Prefixing the key and value to the go command (e.g., CGO_LDFLAGS=$(make print-ldflags) go build).
  3. Using go env -w CGO_LDFLAGS=$(make print-ldflags), which persistently adds this value to Go's environment for any future runs.

There's a handy shortcut for the 3rd method: make go-env or make go-env-test.
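The three methods above could look like this in practice (a sketch; the exact flags printed by make print-ldflags depend on your platform, and the package paths here are just examples):

```shell
# Option 1: export the variable for the whole shell session
export CGO_LDFLAGS="$(make print-ldflags)"
go build ./...

# Option 2: set the variable for a single command only
CGO_LDFLAGS="$(make print-ldflags)" go test ./mesh/

# Option 3: persist the value in Go's own environment for future runs
go env -w CGO_LDFLAGS="$(make print-ldflags)"
go build ./...
```

Option 2 is handy for one-off commands; options 1 and 3 save retyping during a longer session.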


Running

go-spacemesh is p2p software which is designed to form a decentralized network by connecting to other instances of go-spacemesh running on remote computers.

To run go-spacemesh you need to specify the parameters shared between all instances on a specific network.

You specify these parameters by providing go-spacemesh with a JSON config file. Other CLI flags control local node behavior and override default values.

Joining a Testnet (without mining)

  1. Build go-spacemesh from source code.
  2. Download the testnet's JSON config file. Make sure your local config file suffix is .json.
  3. Start go-spacemesh with the following arguments:
./go-spacemesh --tcp-port [a_port] --config [configFileLocation] -d [nodeDataFilesPath]
Example

Assuming tn1.json is a testnet config file saved in the same directory as go-spacemesh, use the following command to join the testnet. The data folder will be created in the same directory as go-spacemesh. The node will use TCP port 7513 and UDP port 7513 for p2p connections:

./go-spacemesh --tcp-port 7513 --config ./tn1.json -d ./sm_data
  4. Build the CLI Wallet from source code and run it:

./cli_wallet

  5. Use the CLI Wallet commands to set up accounts, start smeshing, and execute transactions.

Joining a Testnet (with mining)

  1. Run go-spacemesh to join a testnet without mining (see above).
  2. Run the CLI Wallet to create a coinbase account. Save your coinbase account public address - you'll need it later.
  3. Stop go-spacemesh and start it with the following params:
./go-spacemesh --tcp-port [a_port] --config [configFileLocation] -d [nodeDataFilesPath] --coinbase [coinbase_account] --start-mining --post-datadir [dir_for_post_data]
Example
./go-spacemesh --tcp-port 7513 --config ./tn1.json -d ./sm_data --coinbase 0x36168c60e06abbb4f5df6d1dd6a1b15655d71e75 --start-mining --post-datadir ./post_data
  4. Use the CLI Wallet to check your coinbase account balance and to transact.

Joining Spacemesh (TweedleDee) Testnet

Find the latest Testnet release in the releases list and download the precompiled binary for your platform of choice (or compile go-spacemesh yourself, from source, using the release tag). The release notes contain a link to a config.json file that you'll need to join the testnet.

Note that you must download (or build) precisely this version (latest Testnet release) of go-spacemesh, and the compatible config file, in order to join the current testnet. Older versions of the code may be incompatible with this testnet, and a different config file will not work.


Testing

NOTE: If tests hang, try running ulimit -n 400; some tests require a higher open-file limit to work.

make test

or

make cover
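Putting the notes above together, a test session run directly with go test (rather than make) might look like this (a sketch; it assumes you are in the repo root and that make print-test-ldflags is available as described in the building section):

```shell
# Raise the open-file limit for the current shell; some tests need it
ulimit -n 400

# Run one package's tests with the linker flags make would normally set
CGO_LDFLAGS="$(make print-test-ldflags)" go test ./mesh/ -v
```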

Continuous Integration

We've enabled continuous integration on this repository in GitHub. You can read more about our CI workflows.

Docker

A Dockerfile is included in the project allowing anyone to build and run a docker image:

docker build -t spacemesh .
docker run -d --name=spacemesh spacemesh
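To run a containerized node on a testnet, you can publish the p2p port and bind-mount a host directory for config and data. This is a sketch using the same flags as the examples above; the in-container path /data is illustrative, so adjust it to wherever you want the node to read its config and keep its data:

```shell
# Publish TCP and UDP port 7513 and keep node data on the host
docker run -d --name=spacemesh \
  -p 7513:7513/tcp -p 7513:7513/udp \
  -v "$(pwd):/data" \
  spacemesh --tcp-port 7513 --config /data/tn1.json -d /data/sm_data
```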

Windows

On Windows you will need the following prerequisites:

  • Powershell - included in Windows by default since Windows 7 and Windows Server 2008 R2
  • Git for Windows - after installation remove C:\Program Files\Git\bin from System PATH (if present) and add C:\Program Files\Git\cmd to System PATH (if not already present)
  • Make - after installation add C:\Program Files (x86)\GnuWin32\bin to System PATH
  • Golang
  • GCC. There are several ways to install GCC on Windows, including Cygwin, but we recommend tdm-gcc, which we've tested.

Close and reopen powershell to load the new PATH. You can then run the command make install followed by make build as on UNIX-based systems.

Running a Local Testnet

  • You can run a local Spacemesh Testnet with 6 full nodes, 6 user accounts, and 1 PoET support service on your computer using Docker.
  • The local testnet full nodes are built from this repo.
  • This is a great way to get a feel for the protocol and the platform and to start hacking on Spacemesh.
  • Follow the steps in our Local Testnet Guide.

Next Steps...

Got Questions?
