
MCC

The MESH Companion Container (MCC) is a p2p layer and modified Kademlia DHT that provides functionality for service discovery, data storage (including DID documents), and encrypted communication channels.


Quickstart

This section shows how to build and run MCC.

Build

The prerequisites for building MCC are as follows:

  • Go >= 1.17
  • make
  • libsodium and libsodium-dev. Install them with your favourite package manager.

Open the terminal window and run:

git clone <ADDR>
cd mcc
make build

A binary named mcc is created.

List of flags

-loglevel [0-6] (default: 4)
    Set the logging level: 0 is lowest (fewest logs), 6 is highest (most logs).

-boot <addr>,<addr> (env: BOOT)
    A list of bootnodes that MCC contacts to populate its routing table and register itself in the network. For example: -boot 127.0.0.1:8000,127.0.0.1:8001.

-cert <path_to_cert> (env: CERTIFICATE)
    Path to a file containing a base64-encoded certificate. If not provided, or if "random" is passed, a random certificate is generated. See the Certificates section for details.

-privkey <path_to_private_key> (env: PRIVKEY)
    Path to a file containing a base64-encoded private key. If not provided, random certificate mode is required. See the Certificates section for details.

-listen <addr> (default: 127.0.0.1:9376, env: LISTEN)
    The UDP address on which MCC listens for new packets.

-http <addr> (default: 127.0.0.1:8080, env: HTTP)
    The address used to serve the REST API.

-enable-cors <cors> (env: CORS)
    List of addresses for which CORS is enabled. Pass * to enable CORS for any domain.

-data-folder <folder> (default: ./.mcc/)
    Set a folder for storing binary data.

-keep-blobs <bool> (default: true)
    The node claims itself a seeder after getting a BLOB from another node.

-cpu-profile <N> (default: 0)
    CPU profile write interval in seconds. 0 disables profiling.

-goroutine-profile <N> (default: 0)
    Goroutine profile write interval in seconds. 0 disables profiling.

-heap-profile <N> (default: 0)
    Heap profile write interval in seconds. 0 disables profiling.

-profile-dir <folder> (default: traces)
    A directory to write profiles to.

-enable-monitoring
    Enable Prometheus monitoring at /metrics.

-dht-alpha <N> (default: 3)
    The concurrency factor. Larger values reduce the time consumed by DHT operations but may significantly increase CPU and network load.

-version
    Print the MCC version.

-help
    Print the usage message.

-newcert
    Generate and save a new random certificate and certificate request, then exit.

-trust-ca <path_to_root_ca_cert> (env: TRUSTCACERT)
    Path to a file containing a base64-encoded root CA server certificate. See the Certificates section for details.

-label <node_label> (default: random name)
    Set a human-readable name for the MCC node. The label is normalized to lowercase, characters that are not letters or digits are trimmed, and it cannot be longer than 30 bytes (simplified: 30 "single-byte" ASCII characters). If not provided, a random label is generated.

-config <config_file>
    A config file with the initial setup. See the Config file section.

Certificates

By default, MCC runs with a randomly generated certificate if no parameter is passed; you can also request this explicitly with -cert random. If a certificate path is set, use -privkey to provide the path to the corresponding private key. To generate a random certificate with all permissions, use -newcert. To create a customized certificate, use the https://github.com/staex-mcc/certlib library.

Node label

The node label is part of the node certificate. You can set your own label with the -label flag on the first start of the MCC node. If a node certificate is already defined, the -label flag is ignored. Combine -label with -newcert to set a custom label in a new random certificate. If -label is not provided, a random label is generated.

Config file

A config file provides some initial setup for services and port listeners.

services:
  - name: service3
    address: "tcp://localhost:1111"


listen:
  - service: service
    address: "tcp://localhost:5000"

  - service: service2
    address: "tcp://localhost:5001"

Starting local nodes

Start one MCC node with automatic configuration by executing the main binary:

$ ./mcc 

Start two nodes from two shell instances that connect to each other on your local machine. Run the first node in the first shell with a specified socket for incoming UDP packets and an HTTP server on 127.0.0.1:8080. The second node must listen on a different port; in the example below, the first node is specified as its bootnode. To run two MCC instances from the same binary (and the same directory), you must explicitly specify different data folders (-data-folder).

Shell 1: $ ./mcc -listen 127.0.0.1:9376 -http 127.0.0.1:8080 -data-folder ./.data/node1

Shell 2: $ ./mcc -listen 127.0.0.1:9375 -http 127.0.0.1:8081 -boot 127.0.0.1:9376 -data-folder ./.data/node2

Interacting with nodes

Communication and interaction with client nodes via REST

Interaction with MCC nodes is enabled via the REST API. You can access the REST API with tools like curl or wget, or you can use the Swagger UI. Follow the steps below to try it out:

Start a node with the REST API on 127.0.0.1:8080 and Cross-Origin Resource Sharing (CORS) enabled:

$ ./mcc -listen 127.0.0.1:9375 -http 127.0.0.1:8080 -enable-cors '*'

Then from the root directory build and run the Swagger container.

$ cd swagger
$ docker build -t mcc-swagger .
$ docker run -d -p 80:8080 -v $(pwd):/usr/share/nginx/html/swagger mcc-swagger

Then open your browser and navigate to localhost:80. The Swagger UI should appear. Select the node (localhost:8080) in the Servers dropdown.
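Outside the Swagger UI, the REST API can also be queried programmatically. A minimal Go sketch follows; note that the /version path is a placeholder, so consult the OpenAPI specification for the actual endpoint names:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiURL builds a URL for an MCC REST endpoint served on the -http address.
func apiURL(host, path string) string {
	return "http://" + host + path
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// /version is a placeholder endpoint used for illustration only.
	resp, err := client.Get(apiURL("127.0.0.1:8080", "/version"))
	if err != nil {
		fmt.Println("request failed (is a node running?):", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```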

Swagger OpenAPI documentation

The API specification is available here.

Client library

The Golang client library for MCC provides access to the REST API via the lib.MCCInterface interface (see pkg/lib/lib.go). You can find the implementation in pkg/client/mcc_client.go.

Embedded MCC

MCC can be run embedded in another Golang application. See pkg/mcc/mcc.go for the implementation.

Logging

MCC uses the staxlog library for logging. Logging parameters can be changed via the staxlog.Init() method. For example, to change the log level to debug, make the following call:

staxlog.Init(&staxlog.Options{
    Level: "debug",
})

You can also use the build tag mcclogprefix to append the word mcc to the component name, which clearly marks each MCC log message. To do so, add the tag to the build command: go build ... -tags mcclogprefix.

Tests

The test reference is available here.

Known Issues

Unable to reuse the same connection for another request after forwarding.

If you make a forwarding request (either HTTP or TCP) to some service, do not reuse the same connection to send another request to a different service or to MCC itself. Open a new HTTP connection instead.

It is OK to send several requests to a single service over the same connection.

Using nginx as a reverse proxy.

The default nginx settings do not work well with MCC. ngx_http_proxy_module should be configured to use HTTP/1.1, and keep-alive connections should be disabled (see the previous known issue).

Here is the nginx location configuration to correctly proxy requests to MCC:

proxy_pass http://mcc-node-address:8080;
proxy_http_version 1.1;
keepalive_requests 0;
keepalive_timeout 0;

"sendto: invalid argument" error

You might be running out of ARP cache entries if you see a lot of sendto: invalid argument messages. In that case, consider increasing the limits:

sysctl -w net.ipv4.neigh.default.gc_thresh1=1024
sysctl -w net.ipv4.neigh.default.gc_thresh2=4096
sysctl -w net.ipv4.neigh.default.gc_thresh3=8192
sysctl -w net.ipv6.neigh.default.gc_thresh1=1024
sysctl -w net.ipv6.neigh.default.gc_thresh2=4096
sysctl -w net.ipv6.neigh.default.gc_thresh3=8192

Additional Info

For more information please check the Specification.

Owner
Staex GmbH
Staex is a deeptech startup based in Berlin enabling distributed service orchestration for IoT.