A Binance Smart Chain client based on the go-ethereum fork

Binance Smart Chain

The goal of Binance Smart Chain is to bring programmability and interoperability to Binance Chain. To embrace the existing popular community and advanced technology, it stays compatible with all existing smart contracts on Ethereum and with Ethereum tooling, which brings huge benefits. The easiest way to achieve that is to develop on the basis of a go-ethereum fork, as we greatly respect the work of Ethereum.

Binance Smart Chain started its development from a go-ethereum fork, so you may notice that many tools, binaries and docs are based on the Ethereum ones, such as the name "geth".


Starting from that EVM-compatible baseline, Binance Smart Chain introduces a system of 21 validators with Proof of Staked Authority (PoSA) consensus that supports short block times and lower fees. The validator candidates with the most bonded stake become validators and produce blocks. Double-sign detection and other slashing logic guarantee security, stability, and chain finality.

Cross-chain transfers and other communication are possible thanks to native support for interoperability. Relayers and on-chain contracts are developed to support this. Binance DEX remains a liquid venue for the exchange of assets on both chains. This dual-chain architecture is ideal for users who want to take advantage of fast trading on one side and build their decentralized apps on the other. Binance Smart Chain will be:

  • A self-sovereign blockchain: Provides security and safety with elected validators.
  • EVM-compatible: Supports all the existing Ethereum tooling along with faster finality and cheaper transaction fees.
  • Interoperable: Comes with efficient native dual-chain communication; optimized for scaling high-performance dApps that require a fast and smooth user experience.
  • Distributed with on-chain governance: Proof of Staked Authority brings in decentralization and community participation. As the native token, BNB serves both as the gas for smart contract execution and as the token for staking.

More details in White Paper.

Key features

Proof of Staked Authority

Although Proof-of-Work (PoW) has been proven to be a practical mechanism for implementing a decentralized network, it is not friendly to the environment and requires a large number of participants to maintain security.

Proof-of-Authority (PoA) provides some defense against 51% attacks, with improved efficiency and tolerance to certain levels of Byzantine players (malicious or hacked). Meanwhile, the PoA protocol is most criticized for being less decentralized than PoW, as the validators, i.e. the nodes that take turns producing blocks, hold all the authority and are prone to corruption and security attacks.

Other blockchains, such as EOS and Cosmos, introduce different types of Delegated Proof of Stake (DPoS) to allow token holders to vote and elect the validator set. This increases decentralization and favors community governance.

To combine DPoS and PoA for consensus, Binance Smart Chain implements a novel consensus engine called Parlia in which:

  1. Blocks are produced by a limited set of validators.
  2. Validators take turns producing blocks in a PoA manner, similar to Ethereum's Clique consensus engine (see the sketch after this list).
  3. The validator set is elected in and out based on staking-based governance on Binance Chain.
  4. Validator set changes are relayed via a cross-chain communication mechanism.
  5. The Parlia consensus engine interacts with a set of system contracts to perform liveness slashing, revenue distribution, and validator set renewal.
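
As a rough illustration of the PoA-style rotation in point 2, a Clique-like engine derives the "in-turn" block producer from the block height and the current validator set. The sketch below is only a minimal illustration under that assumption; it is not the actual Parlia implementation, which also handles epochs, off-turn backup producers and slashing.

package parlia_sketch

import "github.com/ethereum/go-ethereum/common"

// inTurnValidator returns the validator expected to produce the block at the
// given height, rotating round-robin through the sorted validator set.
func inTurnValidator(validators []common.Address, blockNumber uint64) common.Address {
	return validators[blockNumber%uint64(len(validators))]
}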

Light Client of Binance Chain

To achieve cross-chain communication from Binance Chain to Binance Smart Chain, an on-chain light client verification algorithm needs to be introduced. It contains two parts:

  1. Stateless precompiled contracts to do Tendermint header verification and Merkle proof verification (see the sketch after this list).
  2. Stateful Solidity contracts to store the validator set and trusted appHash.
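
As a hedged sketch of part 1 (names, gas cost and structure are illustrative assumptions, not the actual BSC precompiles), a stateless verification precompile implements go-ethereum's PrecompiledContract interface: it takes encoded input, performs a pure check, and returns a result without touching contract storage.

package lightclient_sketch

import "errors"

// tmHeaderVerify is an illustrative stateless precompile: given an encoded
// Tendermint header and the currently trusted validator set, it verifies the
// commit signatures and returns the new consensus state on success.
type tmHeaderVerify struct{}

// RequiredGas prices the call; the constant is an assumption.
func (c *tmHeaderVerify) RequiredGas(input []byte) uint64 { return 3000 }

// Run performs the pure verification. It holds no state of its own; the
// trusted validator set and appHash live in the stateful Solidity contracts.
func (c *tmHeaderVerify) Run(input []byte) ([]byte, error) {
	if len(input) == 0 {
		return nil, errors.New("empty input")
	}
	// ... decode the header, check the +2/3 commit signatures, and return the
	// updated validator set / appHash ...
	return nil, errors.New("not implemented in this sketch")
}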

Native Token

BNB will run on Binance Smart Chain in the same way that ETH runs on Ethereum, so it remains the native token for BSC. This means BNB will be used to:

  1. pay gas to deploy or invoke a smart contract on BSC
  2. perform cross-chain operations, such as transferring token assets across Binance Smart Chain and Binance Chain.

Building the source

Many of the below are the same as or similar to go-ethereum.

For prerequisites and detailed build instructions please read the Installation Instructions.

Building geth requires both a Go (version 1.14 or later) and a C compiler. You can install them using your favourite package manager. Once the dependencies are installed, run

make geth

or, to build the full suite of utilities:

make all

Executables

The bsc project comes with several wrappers/executables found in the cmd directory.

  • geth: Main Binance Smart Chain client binary. It is the entry point into the BSC network (main-, test- or private net), capable of running as a full node (default), an archive node (retaining all historical state) or a light node (retrieving data live). It offers the same (and more) RPC and other interfaces as go-ethereum and can be used by other processes as a gateway into the BSC network via JSON-RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. See geth --help and the CLI page for command line options.
  • clef: Stand-alone signing tool, which can be used as a backend signer for geth.
  • devp2p: Utilities to interact with nodes on the networking layer, without running a full blockchain.
  • abigen: Source code generator to convert Ethereum contract definitions into easy-to-use, compile-time type-safe Go packages. It operates on plain Ethereum contract ABIs with expanded functionality if the contract bytecode is also available. It also accepts Solidity source files, making development much more streamlined. Please see our Native DApps page for details.
  • bootnode: Stripped-down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks.
  • evm: Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. evm --code 60ff60ff --debug run).
  • rlpdump: Developer utility tool to convert binary RLP (Recursive Length Prefix) dumps (the data encoding used by the Ethereum protocol both network- and consensus-wise) to a user-friendlier hierarchical representation (e.g. rlpdump --hex CE0183FFFFFFC4C304050583616263).

Running geth

Going through all the possible command line flags is out of scope here (please consult our CLI Wiki page), but we've enumerated a few common parameter combos to get you up to speed quickly on how you can run your own geth instance.

Hardware Requirements

The hardware must meet certain requirements to run a full node.

  • VPS running recent versions of Mac OS X or Linux.
  • 1 TB of SSD storage for mainnet, 500 GB of SSD storage for testnet.
  • 8 cores of CPU and 32 gigabytes of memory (RAM) for mainnet.
  • 4 cores of CPU and 8 gigabytes of memory (RAM) for testnet.
  • A broadband Internet connection with upload/download speeds of at least 10 megabytes per second

To run a full node on the main Binance Smart Chain network, start geth with the interactive console:

$ geth console

This command will:

  • Start geth in fast sync mode (default, can be changed with the --syncmode flag), causing it to download more data in exchange for avoiding processing the entire history of the Binance Smart Chain network, which is very CPU intensive.
  • Start up geth's built-in interactive JavaScript console, (via the trailing console subcommand) through which you can interact using web3 methods (note: the web3 version bundled within geth is very old, and not up to date with official docs), as well as geth's own management APIs. This tool is optional and if you leave it out you can always attach to an already running geth instance with geth attach.

A Full node on the Rialto test network

Steps:

  1. Download the binary, config and genesis files from release, or compile the binary by make geth.
  2. Init genesis state: ./geth --datadir node init genesis.json.
  3. Start your fullnode: ./geth --config ./config.toml --datadir ./node.
  4. Or start a validator node: ./geth --config ./config.toml --datadir ./node --unlock ${validatorAddr} --mine --allow-insecure-unlock. The ${validatorAddr} is the wallet account address of your running validator node.

Note: The default p2p port is 30311 and the RPC port is 8575 which is different from Ethereum.

More details about running a node and becoming a validator.

Note: Although there are some internal protective measures to prevent transactions from crossing over between the main network and test network, you should make sure to always use separate accounts for play-money and real-money. Unless you manually move accounts, geth will by default correctly separate the two networks and will not make any accounts available between them.

Configuration

As an alternative to passing the numerous flags to the geth binary, you can also pass a configuration file via:

$ geth --config /path/to/your_config.toml

To get an idea of what the file should look like, you can use the dumpconfig subcommand to export your existing configuration:

$ geth --your-favourite-flags dumpconfig

Programmatically interfacing geth nodes

As a developer, sooner rather than later you'll want to start interacting with geth and the Binance Smart Chain network via your own programs and not manually through the console. To aid this, geth has built-in support for JSON-RPC based APIs (standard APIs and geth specific APIs). These can be exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based platforms, and named pipes on Windows).

The IPC interface is enabled by default and exposes all the APIs supported by geth, whereas the HTTP and WS interfaces need to manually be enabled and only expose a subset of APIs due to security reasons. These can be turned on/off and configured as you'd expect.

HTTP based JSON-RPC API options:

  • --http Enable the HTTP-RPC server
  • --http.addr HTTP-RPC server listening interface (default: localhost)
  • --http.port HTTP-RPC server listening port (default: 8545)
  • --http.api API's offered over the HTTP-RPC interface (default: eth,net,web3)
  • --http.corsdomain Comma separated list of domains from which to accept cross origin requests (browser enforced)
  • --ws Enable the WS-RPC server
  • --ws.addr WS-RPC server listening interface (default: localhost)
  • --ws.port WS-RPC server listening port (default: 8546)
  • --ws.api API's offered over the WS-RPC interface (default: eth,net,web3)
  • --ws.origins Origins from which to accept websockets requests
  • --ipcdisable Disable the IPC-RPC server
  • --ipcapi API's offered over the IPC-RPC interface (default: admin,debug,eth,miner,net,personal,shh,txpool,web3)
  • --ipcpath Filename for IPC socket/pipe within the datadir (explicit paths escape it)

You'll need to use your own programming environments' capabilities (libraries, tools, etc) to connect via HTTP, WS or IPC to a geth node configured with the above flags and you'll need to speak JSON-RPC on all transports. You can reuse the same connection for multiple requests!
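
For example, using Go and the go-ethereum ethclient package, you can connect to a node's HTTP endpoint and issue standard calls over a single connection (the URL below assumes a node started with --http on the default port; swap in a ws:// URL or the IPC path for the other transports):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Connect to a geth node exposing HTTP JSON-RPC.
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := context.Background()

	// eth_chainId and eth_blockNumber over the same connection.
	chainID, err := client.ChainID(ctx)
	if err != nil {
		log.Fatal(err)
	}
	head, err := client.BlockNumber(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("chain id:", chainID, "latest block:", head)
}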

Note: Please understand the security implications of opening up an HTTP/WS based transport before doing so! Hackers on the internet are actively trying to subvert BSC nodes with exposed APIs! Further, all browser tabs can access locally running web servers, so malicious web pages could try to subvert locally available APIs!

Contribution

Thank you for considering to help out with the source code! We welcome contributions from anyone on the internet, and are grateful for even the smallest of fixes!

If you'd like to contribute to bsc, please fork, fix, commit and send a pull request for the maintainers to review and merge into the main code base. If you wish to submit more complex changes though, please check up with the core devs first on our discord channel to ensure those changes are in line with the general philosophy of the project and/or get some early feedback which can make both your efforts much lighter as well as our review and merge procedures quick and simple.

Please make sure your contributions adhere to our coding guidelines:

  • Code must adhere to the official Go formatting guidelines (i.e. uses gofmt).
  • Code must be documented adhering to the official Go commentary guidelines.
  • Pull requests need to be based on and opened against the master branch.
  • Commit messages should be prefixed with the package(s) they modify.
    • E.g. "eth, rpc: make trace configs optional"

Please see the Developers' Guide for more details on configuring your environment, managing project dependencies, and testing procedures.

License

The bsc library (i.e. all code outside of the cmd directory) is licensed under the GNU Lesser General Public License v3.0, also included in our repository in the COPYING.LESSER file.

The bsc binaries (i.e. all code inside of the cmd directory) is licensed under the GNU General Public License v3.0, also included in our repository in the COPYING file.

Comments
  • BSC synchronization issues


    Description

    In the 24 hours of July 28, Binance Smart Chain (BSC) processed 12.9 million transactions. This number and the numbers below all come from the great BSC network explorer bscscan.com, powered by the Etherscan team.

    This means about 150 transactions per second (TPS) were processed on mainnet, not in isolated environment tests or a white paper. If we zoom in, we also notice that these were not light transactions such as BNB or BEP20 transfers, but heavy transactions, as many users were "fighting" each other in "Play and Earn", largely driven by GameFi dApps from MVBII.

    The total gas used on July 28 was 2,052,084 million. If all of it had gone to simple BEP20 transfers, which typically cost 50k gas each, it could have covered 41 million transactions, corresponding to about 470 TPS.

    On the other hand, with the flood of volume, the network experienced congestion on July 28 for about 4 hours, and many low spec or old version nodes could not catch up with processing blocks in time.

    Updates

    A new version of the beta client has been released, with better performance to handle the high volume. Please feel free to upgrade and raise bug reports if you encounter any issues. Please note this is just a beta version; fixes for some known bugs are on the way. Click here to download the beta client.

    To improve the performance of nodes and achieve faster block times, we recommend the following specifications.

    • validator:
      • 2 TB of free disk space, solid-state drive (SSD), gp3, 8k IOPS, 250 MB/s throughput, read latency <1ms.
      • 12 cores of CPU and 48 gigabytes of memory (RAM)
      • m5zn.3xlarge instance type on AWS, or c2-standard-8 on Google Cloud.
      • A broadband Internet connection with upload/download speeds of 10 megabytes per second
    • fullnode:
      • 1 TB of free disk space, solid-state drive (SSD), gp3, 3k IOPS, 125 MB/s throughput, read latency <1ms. (if starting with snap/fast sync, an NVMe SSD is needed)
      • 8 cores of CPU and 32 gigabytes of memory (RAM).
      • c5.4xlarge instance type on AWS, c2-standard-8 on Google Cloud.
      • A broadband Internet connection with upload/download speeds of 5 megabytes per second

    If you don’t need an archive node, choose the latest snapshot and rerun from scratch from there.

    Problems

    • Fast/snap sync mode cannot catch up with the current state data.
    • Full sync cannot catch up with the current block.
    • High CPU usage.

    Suggestions

    • Use the latest released binary version.
    • Don't use fast/snap sync for now; use the snapshot we provide to run a full sync.
    • Confirm your hardware is sufficient; you can refer to our official documents (we will update them if there are new discoveries).
    • Regularly prune data to reduce disk pressure.
    • Make sure the peer you connect to is not too slow.

    Reference PRs

    • #257
    • #333

    We will update this board if there are any updates. If you have a suggestion or want to propose some improvements, please visit our GitHub. If you encounter any synchronization issues, please report them here.

  • All BSC nodes are OFF SYNC


    Well, I have tried to sync my own node and failed. It has been syncing for a week already. OK, so I decided to buy access to a node on the internet.

    I have tried ankr, getblock and quiknode so far, and they ALL are OFF SYNC!!!

    Please don't tell me that my hardware is weak or that I did something wrong. Just figure out what is going on and fix it. A month ago everything was alright.

  • core: change ordering of txs of equal gas price from arrival time to hash


    Description

    This PR changes the ordering of equal-gas-priced txs to be lexicographical by hash rather than by arrival time.

    Rationale

    About a year ago, version 1.10 silently merged in a change made by the Ethereum devs in a bid to mitigate spam by arb bots. It changed the sorting of equal-gas-price txs from random (map key order) to the time they were first seen by the node. This change has very negatively affected BSC, due to the huge number of fake spammer nodes that each bot owner now has to run in order to beat the competition.

    Before 1.10, there was indeed tx spam by arb bots on BSC, but now not only is that spam now far worse (a single bot owner very often sends hundreds of txs per block to grab the high value arbs), but a very high proportion of the "nodes" on the network aren't actual full nodes at all - they are fakes that only exist to flood the network with their own arb txs and censor all other txs and block data. This is all easily seen with some basic per-peer analysis of the txs that pass through eth/fetcher/tx_fetcher.go - the majority of "nodes" are in fact bad actors - so many of the syncing problems honest full node operators experience could well be influenced by the fact that only a small fraction of the MaxPeers they have set in the config are actually legit peers.

    Another issue with the "first seen" tx sorting method is that we have to take it on faith that the validators are indeed ordering by the time seen and not giving preferential treatment to friends. Sorting the txs by hash would eliminate a lot of doubt, as violations of the rule would be easily spotted. Yes, they could delay "unwanted" txs by a full block, but that would look very obvious.

    Sorting txs by hash would create a negative feedback loop as far as spam is concerned. As each bot hashes, the probability per second of beating the current best hash diminishes, and therefore so does the overall rate of bot txs per arb. It would significantly reduce block size and the demands on I/O, storage and bandwidth, going a long way towards solving the problems that have plagued full node operators for a long time.

    As discussed in #269 and #911.

    Changes

    Notable changes:

    • add TxByPriceAndHash type
    • replace TxByPriceAndTime with TxByPriceAndHash in TransactionsByPriceAndNonce

    N.B. commit is tagged "eth" by mistake - it should be "core"
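
    A minimal sketch of what such an ordering type could look like (the type name follows the PR title, but the body is an illustrative assumption, not the actual diff):

    package core_sketch

    import (
    	"bytes"

    	"github.com/ethereum/go-ethereum/core/types"
    )

    // TxByPriceAndHash orders transactions by gas price (highest first) and
    // breaks ties lexicographically by hash instead of by arrival time.
    type TxByPriceAndHash []*types.Transaction

    func (s TxByPriceAndHash) Len() int { return len(s) }

    func (s TxByPriceAndHash) Less(i, j int) bool {
    	if c := s[i].GasPrice().Cmp(s[j].GasPrice()); c != 0 {
    		return c > 0 // higher gas price wins
    	}
    	// Equal price: deterministic, arrival-time-independent tie-break.
    	return bytes.Compare(s[i].Hash().Bytes(), s[j].Hash().Bytes()) < 0
    }

    func (s TxByPriceAndHash) Swap(i, j int) { s[i], s[j] = s[j], s[i] }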

  • Concern: Validators may be selling special treatment and bypassing Bruno burn


    Take a look at this transaction: 0xc5fff8dfd621b964fabcd9e240d3a6d72927edb5f5195d9d41d6eb0a1859c92a

    Things to note:

    1. Gas price is 0
    2. It is the sell transaction of a front-run operation, so it was prioritized even though gas price is 0.
    3. Block was validated by MathWallet
    4. There is a BNB transfer going to MathWallet validator (Possibly as payment for the special treatment).

    There are many more of these transactions going on to the same contract address with the same behavior. At least 3 validators seem to be involved (MathWallet, TwStaking and NodeReal). The transactions are also not publicly visible from the transaction pool (Possibly they are submitted directly to the validators). Clearly, there is an agreement between the originator of the transactions and the validators and the goal is to perform front-running without risk.

    MathWallet denied this, but we can all see what's going on.

    Needless to say, because the gas price is 0, there is no burn (Introduced in Bruno upgrade). It's possible that this is another thing these validators are starting to explore to bypass the burn feature and maximize their profits.

    Is this behavior allowed from validators? I don't believe that it's healthy for the network if validators do such things for profit. Validators are held to high standards and are supposed to be trustworthy.

    For completion, here is also the buy transaction associated with the above sell transaction: 0x762e097ab15fbefeee0e91c6a444e492ec7e9e00c84cad2a4c4765e1f85efe47

  • Fast Sync State Entries Statistics


    Hi there,

    Creating this issue thread to provide information similar to https://github.com/ethereum/go-ethereum/issues/15616

    This is technically not an issue but just a thread in case people wonder why their fast sync takes 'forever'. Once you see Deallocated fast sync bloom items in your log, it means fast sync has stopped and full sync has started and the node has reached its peak sync state at that time.

    Fast Sync state entry as of 3 March 2021 ~ 163237136

    INFO [03-03|10:28:21.538] Committed new head block number=5355700 hash=0148bf…0ec9bd
    INFO [03-03|10:28:21.569] Imported new block headers count=1 elapsed=443.41µs number=5355815 hash=e3b9bb…dcd325
    INFO [03-03|10:28:21.572] Deallocated fast sync bloom items=163237136 errorrate=0.001
    

    Fast Sync Disk Usage - 3 March 2021

    /dev/sdc 491.2G 34.6G 456.5G 7% /root
    /dev/sdd 491.2G 26.5G 464.6G 5% /ancient
    

    Full Sync Disk Usage - 4 March 2021

    /dev/sdd 491.2G 189.7G 301.5G 39% /root
    /dev/sdc 491.2G 27.8G 463.3G 6% /ancient
    
  • Geth 1.10 txpool ordering


    Rationale

    With the latest update of geth 1.10, transactions with the same gas price are ordered by receive time, rather than by random order.

    This receive-time ordering may not be suitable for Binance Smart Chain. Since BSC's validation mechanism is 21 validators taking turns deterministically to package transactions, deterministic transaction ordering can lead to the problems below:

    1. validators are GUARANTEED to be able to frontrun/backrun pending pool transactions, as validators do not need to compete via PoW. For example, a validator can guarantee front-running an IDO that starts at a pre-determined block number.
    2. since latency becomes critical after the update, nodes have less incentive to include light nodes and remotely located nodes (like in New Zealand) as peers. Nodes become concentrated in location and connectivity. This makes the network more and more centralized and defeats the purpose of decentralization.

    Also, I think this change is significant and wonder whether there is any reason for not stating it in the change list?

    Implementation

    The origin of this change on Ethereum mainnet was to avoid spam back-running transactions occupying block space. To achieve the same goal on BSC, could we propose the change below instead?

    1. sort the transaction order by receive time, with noise added so that the order is sorted by T + N(0, sigma) (see the sketch after this list)
    2. revert the change if the spam issue is not serious. It seems to me that the 60M gas limit has not been fully utilized recently.
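
    A tiny sketch of the jitter idea in point 1 (the function name and the choice of sigma are hypothetical):

    package txpool_sketch

    import (
    	"math/rand"
    	"time"
    )

    // perturbedTime returns the timestamp used for ordering a transaction: its
    // receive time plus zero-mean Gaussian noise, so that equal-gas-price
    // ordering is only loosely correlated with network latency.
    func perturbedTime(received time.Time, sigma time.Duration) time.Time {
    	noise := time.Duration(rand.NormFloat64() * float64(sigma))
    	return received.Add(noise)
    }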

    Welcome any thoughts!

  • My node always showing 100 blocks behind


    > eth.syncing

    {
      currentBlock: 7738573,
      highestBlock: 7738689,
      knownStates: 370745564,
      pulledStates: 370690961,
      startingBlock: 7716912
    }
    

    Is there any way to boost the syncing speed?

  • BSC node can't sync the new block


    > eth.syncing
    {
      currentBlock: 6784288,
      highestBlock: 6784407,
      knownStates: 133873248,
      pulledStates: 133865063,
      startingBlock: 6783739
    }
    > eth.blockNumber
    0

  • Syncing 3 days behind every time


    System information

    AWS Instance

    Instance type : c5.4xlarge
    Ram Size : 32 GB
    CPU : 16
    

    Disk Configurations :

    Disk type : GP3
    Disk Size : 4000 GiB
    IOPS : 16000
    Throughput : 500 MB/s
    

    Geth version:

    Geth
    Version: 1.1.2
    Git Commit: c4f931212903b3ee8495c36ac374340aec4ac269
    Git Commit Date: 20210825
    Architecture: amd64
    Go Version: go1.15.14
    Operating System: linux
    GOPATH=/home/ubuntu/go
    GOROOT=/usr/local/go
    

    The blockchain was synced before, but after I restarted it, it is now 3 days behind. I don't know what is wrong with the BSC network. My entire system is suffering because of this. Kindly, can anyone help me resolve this?

    I've done everything, as you can see from the configurations. It feels like I'm setting up a NASA project.

    Here is the output of eth.syncing

    instance: Geth/v1.1.2-c4f93121-20210825/linux-amd64/go1.15.14
    coinbase: 0xff82b410814736762246d9382aeccc13cacb85ce
    at block: 11428396 (Sat Oct 02 2021 18:04:03 GMT+0000 (UTC))
     datadir: /home/ubuntu/node
    
    
    > eth.syncing
    {
      currentBlock: 11428399,
      highestBlock: 11475954,
      knownStates: 297473485,
      pulledStates: 297473485,
      startingBlock: 11392128
    }
    > 
    
  • Release 1.1.0 stable cannot full sync on fast mode


    System information

    Geth version:
    geth version
    Geth
    Version: 1.1.0
    Git Commit: 7822e9e2a1c11e5e9f989b740ba0166a9cd96db1
    Architecture: amd64
    Go Version: go1.16.3
    Operating System: linux
    GOPATH=
    GOROOT=go

    OS & Version: Linux Ubuntu 20.04

    Expected behaviour

    To be fully synced and not be ~100 blocks behind.

    Actual behaviour

    Pivot stale, receipts behind by ~100 blocks. The issue was also present in 1.1.0-beta; it actually started about a week and a half ago, after I saw that my node was getting behind and restarted it. I deleted and resynced, same issue. I tried 2-3 different server providers, all with NVMe, a lot of RAM and different locations, same issue: eth.blockNumber stays at 0 and the pivot gets stalled, so I end up about 3 minutes (~100 blocks) behind.

    Steps to reproduce the behaviour

    Start a sync (personally I've used servers from EU countries) using fast as the syncmode.

    Backtrace

    [backtrace]
    


  • I am tired of this


    A month ago I've pruned my node because my server was low on storage. At that time, I was using Hetzner AX51-NVME with Ryzen 7 3700X, 64GB RAM and 1TB NVMe which worked just fine, until I stopped it. Since then it never synced again. So I followed some "smart" recommendations to use snapshot. As the server had only 1TB of storage, I needed to upgrade to AX61-NVME which has Ryzen 9 3900, 128GB RAM and 1.92 TB NVMe. I've followed the snapshot instructions, downloaded it, replaced the data, started the node, and guess what. Here I am 3 days later stuck in the same fckin state, where the node imports billions of states without ever stopping. To be honest, I am sick of this. And please, don't tell me I need better hardware. That's complete bullshit. I could've run half of my country's internet from that server. Does anyone know the exact procedure to get a new node 100% synced? If not, devs should really think about it, as this is the start of the end of this network.

  • Do the validators run their own MEV bots ?


    Hey,

    We're trying to find out whether there are some fair MEV opportunities on BSC. Fair means the MEV opportunities are equally accessible to anyone running some nodes and not captured by a small group.

    To check whether there are suspicious behaviors, we've sent thousands of batches of 2 transactions. The 2 transactions differ only in the "to" and the "from" parameters. All the other ones are exactly the same (gas, gasPrice, data ...). Here is how we make sure we send similar transactions of similar size at the same time:

    // build a slice of the 2 transactions to be sent
    func buildTxs(tx *types.Transaction, txTwo *types.Transaction) {
    	AllTxsToSendBundle = make([]*types.Transaction, 2)
    	AllTxsToSendBundle[0] = tx
    	AllTxsToSendBundle[1] = txTwo
    	go sendToPeers(AllTxsToSendBundle)
    }

    // send the 2 transactions as goroutines to all our peers
    // (peers is a global var updated every 60 seconds)
    func sendToPeers(AllTxsToSend []*types.Transaction) {
    	// iterate over every peer (the original loop stopped at lenPeers-1
    	// and silently skipped the last peer)
    	for j := range peers {
    		go func(k int) {
    			peers[k].SendTransactions(AllTxsToSend)
    		}(j)
    	}
    }

    In most cases, our two transactions should be close to each other in the block (same gas, same size, sent at the exact same time to the same peers). When there are MEV opportunities (arbs / sandwiches), that is not the case for many validators. For validators like Legend / Legend II / Legend III, our two transactions have always been one after the other. But for many others, the transaction sorting is very different. There are even some validators where the distance between our transactions is always > 4, with, of course, some other MEV bots winning the opportunities (see details by validator below).

    We guess the validators are allowed to customize the TransactionsByPriceAndNonce function. Why not, but when we check the MEV bot performance on Eigenphi, it's always the same bots that catch most of the opportunities. Although we can't prove these MEV bots are linked to the validators, we can be very suspicious.

    Please note that this analysis excludes the validators using the 48Club Puissant API, as the transaction sorting there is directly linked to the gas price paid by the searcher for the first transaction.

    So, is this situation acknowledged or accepted?

    Some other issues talking about MEV being run by validators: https://github.com/bnb-chain/bsc/issues/1101 https://github.com/bnb-chain/bsc/issues/1170 https://github.com/bnb-chain/bsc/issues/911

  • gasPrice is always 5 gwei.


    I set blocks to 200 and percentile to 99 in gpo in config.toml but it didn't have any effect. gasPrice still always returns 5 gwei.

    How to enable gpo custom param?

  • ci: disable CGO_ENABLED when building binary


    Description

    ci: disable CGO_ENABLED when building binary

    Rationale

    The current release binary depends on GLIBC, which may make it difficult for some users to run, so CGO_ENABLED is disabled in the official release build to remove the dependency on the C library.


    Example

    https://github.com/j75689/bsc/releases/tag/v1.1.19-rc3

    Changes

    Notable changes:

    • ci(release)
  • Syncing not working


    Current block: 22960161

    Log:
    WARN [12-15|23:35:55.863] Synchronisation failed, retrying      err="peer is unknown or unhealthy"
    WARN [12-15|23:36:04.044] Synchronisation failed, dropping peer peer=7aaf2c89de4b84792e7b9f0d92d345f373fbc9cf258b798f862f8823be5b836b err=timeout
    WARN [12-15|23:36:43.567] Ancestor below allowance              peer=05928829 number=0 hash=0d2184..d57b5b allowance=22,870,161
    WARN [12-15|23:36:43.567] Synchronisation failed, dropping peer peer=059288294e3777e19692a53580a9f816184a3e30184e92c60c604b56948e5787 err="retrieved ancestor is invalid"
    WARN [12-15|23:36:43.568] Synchronisation failed, retrying      err="peer is unknown or unhealthy"
    WARN [12-15|23:37:25.939] Multiple headers for single request   peer=fb1b13da headers=0
    WARN [12-15|23:37:25.939] Synchronisation failed, dropping peer peer=fb1b13da5aaa43456100e8ce5b4d8481fb98578c55cc263c6c5b517c549c08d0 err="action from bad peer ignored: multiple headers (0) for single request"
    WARN [12-15|23:37:25.940] Synchronisation failed, retrying      err="peer is unknown or unhealthy"

  • Why the BSC block gas limit change ?


    The BSC block gas limit is always very close to 140,000,000, but it can vary a little for some blocks.

    For example, block 23,934,046 has a gas limit of 140,000,000: https://bscscan.com/block/23934046 and block 23,934,047 has a gas limit of 139,453,126: https://bscscan.com/block/23934047

    Why does it change and what's the logic behind this? Why not just make it constant at 140M?
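
    One plausible explanation, stated as an assumption based on upstream geth behaviour rather than anything confirmed for BSC: each block producer nudges the gas limit toward its own configured target, and the per-block step is capped at parentGasLimit/GasLimitBoundDivisor - 1. With a parent limit of 140,000,000 and a bound divisor of 256, the maximum step is 140,000,000/256 - 1 = 546,874, and 140,000,000 - 546,874 = 139,453,126, which is exactly the limit observed in block 23,934,047. Under that reading, the small variations simply reflect validators running with different gas limit targets.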

  • [R4R] implement BEP-176: Validator Reward Model 2.0


    BEP-176: Validator Reward Model 2.0

    1. Summary

    This BEP will introduce a new validator reward model on BNB Smart Chain. As a result of this BEP, the validator staking reward policy is going to be modified such that it is more accurately representative of the validator's contribution to the overall network.

    2. Abstract

    The mechanism for validator income distribution will be altered when this BEP is implemented. The specifics are still being worked out, but in general, we may divide the overall rewards from gas fees into two parts:

    • Network Verification Reward: will be distributed among all validators more equitably, according to their contributions to the whole network.
    • Block Reward: goes to the single block producer for its work on this block.

    The proportion of each part is up for debate, and validators will have the possibility to alter the value through governance.

    The idea suggests that we may favor a slightly greater ratio of the Network Verification Reward vs Block Rewards, given that the primary objective is to validate the network to enable real and secure transactions. Our viewpoint is that transactions are resources shared among validators; a validator should not consider them to be their property.

    3. Status

    Draft

    4. Motivation

    Before this BEP, a validator's reward relied mostly on the transaction gas fees it received, as its only source of rewards is the gas fees for the transactions in the blocks it produces. A validator gets more rewards if it packs more transactions into the blocks it produces. This was designed to encourage validators to include more transactions, but it does not represent the overall contribution a validator makes. In the general opinion, a validator should be rewarded even if it only produces empty blocks, because it still helps build the network, e.g. verifying blocks, keeping a copy of the latest network storage state, and broadcasting blocks and transactions. A validator will still lose its dedicated block reward if it produces an empty block, so it remains encouraged to include more transactions in its blocks.

    The rewards logic will serve multiple objectives. In addition to transaction inclusion, it is necessary to consider efforts to validate blocks and maintain network security.

    Absolute fairness is not an easy goal to accomplish, but we do our best to bring about some improvements with this BEP. We name it Validator Reward Model 2.0, it could have further revisions to keep improving it.

    5. Specification

    Currently, as defined by the Parlia consensus, block producers will collect the gas fee of each transaction as its reward and distribute it to a system contract temporarily. The system contract will forward the reward information to Beacon Chain periodically to settle the reward along with the staking information kept on Beacon Chain.

    The general procedure will not be changed in this BEP, but 2 steps will be added:

    • For each block, part of the block reward will be put into the shared network contribution funding pool. Initially, the ratio is set to 50%, i.e. half of the rewards will be shared with the other validators.
    • Before the system contracts forward the reward information to Beacon Chain, the funding pool will be distributed according to the network contribution of each validator. The contribution could simply be determined by the number of blocks it produces.

    5.1 Mechanism & Governance

    A governable parameter, validatorRewardRatio, will be introduced in the ValidatorSet contract. At the end of each block, the validator signs a transaction that invokes the deposit function of the contract to transfer the gas fees. The validator reward model logic is implemented within the deposit function so that validatorRewardRatio * (gasFee - burnRatio * gasFee) is transferred to the validator contribution funding pool address.
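
    A quick worked example of that formula, taking burnRatio = 10% purely for illustration: if a block collects 1 BNB in gas fees, 0.1 BNB is burned, the shared funding pool receives validatorRewardRatio * (1 - 0.1) * 1 = 0.45 BNB, and the remaining 0.45 BNB follows the existing path as the producer's own block reward.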

    The initial setting:

    • validatorRewardRatio = 50%

    This process will be carried out on BNB Beacon Chain; every community member can propose a change to the params. The proposal needs to receive a minimum deposit of BNB (2000 BNB on mainnet for now, refundable after the proposal has passed) so that the validators can vote on it. The validators of BSC can vote for or against it based on their staked amount of BNB.

    If the total voting power of the bonded validators that vote for it reaches the quorum (50% on mainnet), the proposal passes and the corresponding change of the params is passed on to BSC via cross-chain communication and takes effect immediately. Votes from unbonded validators are not counted in the tally.

    6. License

    The content is licensed under CC0.

    Changes

    Notable changes:

    • add a new hard fork to upgrade validatorSet contract containing reward 2.0 logic;
    • add empty block trigger sharing reward redistribution;