Implementation of the Filecoin protocol, written in Go


Project Lotus - 莲


Lotus is an implementation of the Filecoin Distributed Storage Network. For more details about Filecoin, check out the Filecoin Spec.

Building & Documentation

For instructions on how to build, install, and set up Lotus, please visit https://docs.filecoin.io/get-started/lotus.

Reporting a Vulnerability

Please send an email to [email protected]. See our security policy for more details.

Related packages

These repos are independent and reusable modules, but are tightly integrated into Lotus to make up a fully featured Filecoin implementation:

Contribute

Lotus is a universally open project and welcomes contributions of all kinds: code, docs, and more. However, before making a contribution, we ask you to heed these recommendations:

  1. If the proposal entails a protocol change, please first submit a Filecoin Improvement Proposal.
  2. If the change is complex and requires prior discussion, open an issue or a discussion to request feedback before you start working on a pull request. This is to avoid disappointment and sunk costs, in case the change is not actually needed or accepted.
  3. Please refrain from submitting PRs to adapt existing code to subjective preferences. The changeset should contain functional or technical improvements/enhancements, bug fixes, new features, or some other clear material contribution. Simple stylistic changes are likely to be rejected in order to reduce code churn.

When implementing a change:

  1. Adhere to the standard Go formatting guidelines, e.g. Effective Go. Run go fmt.
  2. Stick to the idioms and patterns used in the codebase. Familiar-looking code has a higher chance of being accepted than eerie code. Pay attention to commonly used variable and parameter names, avoidance of naked returns, error handling patterns, etc.
  3. Comments: follow the advice on the Commentary section of Effective Go.
  4. Minimize code churn. Modify only what is strictly necessary. Well-encapsulated changesets will get a quicker response from maintainers.
  5. Lint your code with golangci-lint (CI will reject your PR if unlinted).
  6. Add tests.
  7. Title the PR in a meaningful way and describe the rationale and the thought process in the PR description.
  8. Write clean, thoughtful, and detailed commit messages. This is even more important than the PR description, because commit messages are stored inside the Git history. One good rule is: if you are happy posting the commit message as the PR description, then it's a good commit message.

License

Dual-licensed under MIT + Apache 2.0

Comments
  • Arm64 Raspberry Pi Build Failure


Describe the bug When trying to build the lotus project from source, I run into this error:

        error[E0433]: failed to resolve: use of undeclared type or module `cc`
         --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/fil-sapling-crypto-0.6.0/build.rs:6:9
          |
        6 |         cc::Build::new()
          |         ^^ use of undeclared type or module `cc`
        
        error: aborting due to previous error
    

    To Reproduce Steps to reproduce the behavior:

    1. followed the steps in this issue: https://github.com/filecoin-project/lotus/issues/1779
    export RUSTFLAGS="-C target-cpu=native -g"
    export FFI_BUILD_FROM_SOURCE=1
    make clean deps bench
    

    Expected behavior The lotus project to build without errors

    Screenshots Someone already reported the issue on an upstream repo. https://github.com/zcash-hackworks/sapling-crypto/issues/104

    Version (run lotus --version): unable to compile the latest version

Additional context Arm64 architecture.

  • ChainGetTipSetByHeight method synchronization lacks message



    The ChainGetTipSetByHeight method cannot synchronize the messages under all miners, and there is a bug in the references: when I used the ChainGetTipSetByHeight RPC, it could not sync to that transaction.


  • updating to new datastore/blockstore code with contexts


    Status

    The following deps need to be tagged, as we have go mod versions pointing at unstable things:

    • github.com/drand/drand

    The following deps also exist in the lotus-soup go.mod:

    • github.com/drand/drand
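For context on "pointing at unstable things": an untagged dependency appears in go.mod as a pseudo-version (a base version plus a commit timestamp and short hash), while a tagged release is plain semver. A hypothetical before/after illustration (the versions shown are made up):

```
// Before tagging: go.mod pins an untagged commit via a pseudo-version
require github.com/drand/drand v1.2.1-0.20201110123456-0123456789ab

// After tagging: a plain semver release
require github.com/drand/drand v1.2.1
```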
  • GPU stuck at P2


    Describe the bug

    This problem occurs intermittently. When the worker is started for the first time, it is very likely to occur; after a restart, the probability of hitting it decreases.

    Our GPU: Geforce RTX 2080 Ti

    worker log:

    2020-11-24T01:18:01.561 INFO filcrypto::proofs::api > seal_pre_commit_phase2: start
    2020-11-24T01:18:01.575 INFO filecoin_proofs::api > validate_cache_for_precommit_phase2:start
    2020-11-24T01:18:01.603 INFO filecoin_proofs::api > validate_cache_for_precommit_phase2:finish
    2020-11-24T01:18:01.637 INFO filecoin_proofs::api::seal > seal_pre_commit_phase2:start
    2020-11-24T01:18:01.708 INFO storage_proofs_porep::stacked::vanilla::proof > replicate_phase2
    2020-11-24T01:18:01.708 INFO storage_proofs_porep::stacked::vanilla::proof > generating tree c using the GPU
    2020-11-24T01:18:01.708 INFO storage_proofs_porep::stacked::vanilla::proof > Building column hashes
    2020-11-24T01:18:01.736 INFO neptune::cl > getting context for ~Index(0)
    2020-11-24T01:18:01.793 WARN neptune::cl > Cannot get device list for platform: Clover!
    2020-11-24T01:18:01.793 WARN neptune::cl > Cannot get device list for platform: Clover!
    2020-11-24T01:19:55.835 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 1/8 of length 153391689
    2020-11-24T01:21:31.248 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 2/8 of length 153391689
    2020-11-24T01:23:04.417 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 3/8 of length 153391689
    2020-11-24T01:24:38.587 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 4/8 of length 153391689
    2020-11-24T01:26:13.404 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 5/8 of length 153391689
    2020-11-24T01:27:46.782 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 6/8 of length 153391689
    2020-11-24T01:29:27.555 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 7/8 of length 153391689
    2020-11-24T01:31:13.127 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 8/8 of length 153391689
    2020-11-24T01:31:18.598 INFO storage_proofs_porep::stacked::vanilla::proof > tree_c done
    2020-11-24T01:31:18.598 INFO storage_proofs_porep::stacked::vanilla::proof > building tree_r_last
    2020-11-24T01:31:18.598 INFO storage_proofs_porep::stacked::vanilla::proof > generating tree r last using the GPU
    2020-11-24T01:31:19.224 INFO neptune::cl > getting context for ~Index(0)
    2020-11-24T01:31:19.369 WARN neptune::cl > Cannot get device list for platform: Clover!
    2020-11-24T10:43:56.641+0800    ^[[33mWARN^[[0m main    lotus-seal-worker/main.go:421   Shutting down...
    2020-11-24T10:43:56.655+0800    ^[[33mWARN^[[0m main    lotus-seal-worker/main.go:98    http: Server closed
    

    dmesg log:

    [Tue Nov 24 01:18:26 2020] NVRM: GPU at PCI:0000:81:00: GPU-77feb6df-eb6f-ae6d-8f89-5f84fb7c3e40
    [Tue Nov 24 01:18:26 2020] NVRM: GPU Board Serial Number:
    [Tue Nov 24 01:18:26 2020] NVRM: Xid (PCI:0000:81:00): 13, pid=30010, Graphics SM Warp Exception on (GPC 2, TPC 0, SM 0): Out Of Range Address
    [Tue Nov 24 01:18:26 2020] NVRM: Xid (PCI:0000:81:00): 13, pid=30010, Graphics Exception: ESR 0x514730=0xc01000e 0x514734=0x20 0x514728=0x4c1eb72 0x51472c=0x174
    [Tue Nov 24 01:18:26 2020] NVRM: Xid (PCI:0000:81:00): 43, pid=30398, Ch 00000008
    

    Comparing the worker logs from the first startup and from the restart, I found one difference:

    first startup:

    2020-11-24T01:18:01.561 INFO filcrypto::proofs::api > seal_pre_commit_phase2: start
    2020-11-24T01:18:01.575 INFO filecoin_proofs::api > validate_cache_for_precommit_phase2:start
    2020-11-24T01:18:01.603 INFO filecoin_proofs::api > validate_cache_for_precommit_phase2:finish
    2020-11-24T01:18:01.637 INFO filecoin_proofs::api::seal > seal_pre_commit_phase2:start
    2020-11-24T01:18:01.708 INFO storage_proofs_porep::stacked::vanilla::proof > replicate_phase2
    2020-11-24T01:18:01.708 INFO storage_proofs_porep::stacked::vanilla::proof > generating tree c using the GPU
    2020-11-24T01:18:01.708 INFO storage_proofs_porep::stacked::vanilla::proof > Building column hashes
    2020-11-24T01:18:01.736 INFO neptune::cl > getting context for ~Index(0)
    
    ----------------------------------------------------------------------------------------------------------------
    
    2020-11-24T01:18:01.793 WARN neptune::cl > Cannot get device list for platform: Clover!
    2020-11-24T01:18:01.793 WARN neptune::cl > Cannot get device list for platform: Clover!
    
    ## Appeared here twice
    ----------------------------------------------------------------------------------------------------------------
    
    2020-11-24T01:19:55.835 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 1/8 of length 153391689
    2020-11-24T01:21:31.248 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 2/8 of length 153391689
    2020-11-24T01:23:04.417 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 3/8 of length 153391689
    2020-11-24T01:24:38.587 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 4/8 of length 153391689
    2020-11-24T01:26:13.404 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 5/8 of length 153391689
    2020-11-24T01:27:46.782 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 6/8 of length 153391689
    2020-11-24T01:29:27.555 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 7/8 of length 153391689
    2020-11-24T01:31:13.127 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 8/8 of length 153391689
    2020-11-24T01:31:18.598 INFO storage_proofs_porep::stacked::vanilla::proof > tree_c done
    2020-11-24T01:31:18.598 INFO storage_proofs_porep::stacked::vanilla::proof > building tree_r_last
    2020-11-24T01:31:18.598 INFO storage_proofs_porep::stacked::vanilla::proof > generating tree r last using the GPU
    2020-11-24T01:31:19.224 INFO neptune::cl > getting context for ~Index(0)
    2020-11-24T01:31:19.369 WARN neptune::cl > Cannot get device list for platform: Clover!
    
    

    restart:

    2020-11-24T15:42:33.077 INFO filecoin_proofs::api::seal > seal_pre_commit_phase2:start
    2020-11-24T15:42:33.079 INFO storage_proofs_porep::stacked::vanilla::proof > replicate_phase2
    2020-11-24T15:42:33.079 INFO storage_proofs_porep::stacked::vanilla::proof > generating tree c using the GPU
    2020-11-24T15:42:33.079 INFO storage_proofs_porep::stacked::vanilla::proof > Building column hashes
    2020-11-24T15:42:33.079 INFO neptune::cl > getting context for ~Index(0)
    
    ----------------------------------------------------------------------------------------------------------------
    
    2020-11-24T15:42:33.084 WARN neptune::cl > Cannot get device list for platform: Clover!
    
    ## Appeared here once
    ----------------------------------------------------------------------------------------------------------------
    
    2020-11-24T15:44:44.044 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 1/8 of length 153391689
    2020-11-24T15:47:00.733 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 2/8 of length 153391689
    2020-11-24T15:49:14.234 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 3/8 of length 153391689
    2020-11-24T15:51:29.922 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 4/8 of length 153391689
    2020-11-24T15:53:44.894 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 5/8 of length 153391689
    2020-11-24T15:55:59.994 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 6/8 of length 153391689
    2020-11-24T15:58:15.629 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 7/8 of length 153391689
    2020-11-24T16:00:31.432 INFO storage_proofs_porep::stacked::vanilla::proof > persisting base tree_c 8/8 of length 153391689
    2020-11-24T16:00:36.050 INFO storage_proofs_porep::stacked::vanilla::proof > tree_c done
    2020-11-24T16:00:36.050 INFO storage_proofs_porep::stacked::vanilla::proof > building tree_r_last
    2020-11-24T16:00:36.050 INFO storage_proofs_porep::stacked::vanilla::proof > generating tree r last using the GPU
    2020-11-24T16:00:36.632 INFO neptune::cl > getting context for ~Index(0)
    2020-11-24T16:00:36.632 WARN neptune::cl > Cannot get device list for platform: Clover!
    2020-11-24T16:00:45.490 INFO storage_proofs_porep::stacked::vanilla::proof > building base tree_r_last with GPU 1/8
     ......... snip ..............
    
  • Lotus-bench results thread (v20 params)


    This issue is a place to put lotus-bench results for v20 params.

    To best help us, run four tests:

    Start by installing build dependencies from https://docs.lotu.sh/en+getting-started

    git clone https://github.com/filecoin-project/lotus.git
    cd lotus
    make build bench
    
    ./bench --sector-size=1073741824
    ./bench --sector-size=1073741824 --no-gpu
    
    # Only run these with > 64GiB of ram, recommended 128G
    ./bench --sector-size=34359738368
    ./bench --sector-size=34359738368 --no-gpu
    

    Additionally, please tell us what CPU, GPU, and memory (including speed) you have in your setup.
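For reference, the --sector-size flags above are raw byte counts: 1073741824 bytes is 1 GiB and 34359738368 bytes is 32 GiB, which a quick Go check confirms:

```go
package main

import "fmt"

func main() {
	const gib = 1 << 30 // bytes in one GiB
	fmt.Println(1073741824/gib, "GiB")  // the small bench sector size
	fmt.Println(34359738368/gib, "GiB") // the large bench sector size
}
```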

    Previous (v19) thread - https://github.com/filecoin-project/lotus/issues/694

  • Cannot perform any retrieval for 32GB files


    Describe the bug I tried several retrievals to several different miners for the deals I made. None of them went through. I do not believe they are all offline; I suspect it has something to do with the deal size. These are all 32GB offline deals, made with --fast-retrieval=false.

    Example 1 - stuck after DealStatusOngoing. The provider is unsealing, but it has taken hours without progressing further.

    lotus client retrieve --miner f024008 bafykbzaced6noeziyglm2frycaknydeijbimvhpk72eimhfupoxd72dehp2k4 abc

    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    Recv: 0 B, Paid 0 FIL, ClientEventDealProposed (DealStatusWaitForAcceptance)
    Recv: 0 B, Paid 0 FIL, ClientEventDealAccepted (DealStatusAccepted)
    Recv: 0 B, Paid 0 FIL, ClientEventPaymentChannelCreateInitiated (DealStatusPaymentChannelCreating)
    Recv: 0 B, Paid 0 FIL, ClientEventPaymentChannelReady (DealStatusPaymentChannelAllocatingLane)
    Recv: 0 B, Paid 0 FIL, ClientEventLaneAllocated (DealStatusOngoing)

    Example 2 - unmarshalling error. The provider told me his miner crashed upon receiving my retrieval request.

    lotus client retrieve --miner f083550 bafykbzacecu4qt4tlr5vqojgtlt553rbskqremq67z37x5cufaonjqfh5y7no mysql-2016-04-19.tar.gz.partbl

    2021-01-25T04:59:20.145Z WARN rpc [email protected]/client.go:541 unmarshaling failed {"message": "{"Err":"exhausted 5 attempts but failed to open stream, err: failed to dial 12D3KooWFJ6iPAiW82pR7REB8pJfKAsoKoEAhgWKYduoQT1734a9: all dials failed\n * [/ip4/127.0.0.1/tcp/45989] dial tcp4 127.0.0.1:45989: connect: connection refused\n * [/ip6/::1/tcp/41337] dial tcp6 [::1]:41337: connect: connection refused\n * [/ip4/59.12.56.212/tcp/45989] dial tcp4 59.12.56.212:45989: connect: connection refused\n * [/ip4/59.12.56.215/tcp/45989] dial tcp4 0.0.0.0:33463-\u003e59.12.56.215:45989: i/o timeout","Root":null,"Piece":null,"Size":0,"MinPrice":"\u003cnil\u003e","UnsealPrice":"\u003cnil\u003e","PaymentInterval":0,"PaymentIntervalIncrease":0,"Miner":"f083550","MinerPeer":{"Address":"f083550","ID":"12D3KooWFJ6iPAiW82pR7REB8pJfKAsoKoEAhgWKYduoQT1734a9","PieceCID":null}}"}
    ERROR: RPC client error: unmarshaling result: failed to parse big string: '"\u003cnil\u003e"'

    Example 3 - incomplete response. One of the providers told me his miner is set to allowUnseal=false while his worker is set to allowUnseal=true, yet the retrieval failed before the worker picked up the unseal.

    lotus client retrieve --miner f064218 baga6ea4seaqgsbrsupr6az5zf2yjqai5t4xtgn7lf2zd62wnxxq2mgyhlvbxyda publicdomainmovies.tar.05

    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    Recv: 0 B, Paid 0 FIL, ClientEventDealProposed (DealStatusWaitForAcceptance)
    Recv: 0 B, Paid 0 FIL, ClientEventUnsealPaymentRequested (DealStatusAccepted)
    Recv: 0 B, Paid 0 FIL, ClientEventPaymentChannelAddingFunds (DealStatusPaymentChannelAllocatingLane)
    Recv: 0 B, Paid 0 FIL, ClientEventLaneAllocated (DealStatusOngoing)
    Recv: 0 B, Paid 0 FIL, ClientEventPaymentRequested (DealStatusFundsNeeded)
    Recv: 0 B, Paid 0 FIL, ClientEventSendFunds (DealStatusSendFunds)
    Recv: 0 B, Paid 0.1 FIL, ClientEventPaymentSent (DealStatusOngoing)
    Recv: 0 B, Paid 0.1 FIL, ClientEventDataTransferError (DealStatusErrored)
    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    ERROR: retrieval failed: Retrieve: Retrieval Error: error generated by data transfer: deal data transfer failed: incomplete response

    lotus client retrieve --miner f01278 bafykbzaceaj7ube4k2vhniqgdb6vq7ggcoebf4c6punse7ike2almme6esdxi mysql-2018-11-01.tar.gz.partcy

    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    Recv: 0 B, Paid 0 FIL, ClientEventDealProposed (DealStatusWaitForAcceptance)
    Recv: 0 B, Paid 0 FIL, ClientEventUnsealPaymentRequested (DealStatusAccepted)
    Recv: 0 B, Paid 0 FIL, ClientEventPaymentChannelCreateInitiated (DealStatusPaymentChannelCreating)
    Recv: 0 B, Paid 0 FIL, ClientEventPaymentChannelReady (DealStatusPaymentChannelAllocatingLane)
    Recv: 0 B, Paid 0 FIL, ClientEventLaneAllocated (DealStatusOngoing)
    Recv: 0 B, Paid 0 FIL, ClientEventPaymentRequested (DealStatusFundsNeeded)
    Recv: 0 B, Paid 0 FIL, ClientEventSendFunds (DealStatusSendFunds)
    Recv: 0 B, Paid 0.000000000000000002 FIL, ClientEventPaymentSent (DealStatusOngoing)
    Recv: 0 B, Paid 0.000000000000000002 FIL, ClientEventDataTransferError (DealStatusErrored)
    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    ERROR: retrieval failed: Retrieve: Retrieval Error: error generated by data transfer: deal data transfer failed: incomplete response

    Example 4 - miner is not accepting online retrieval deals

    lotus client retrieve --miner f047419 baga6ea4seaqgsbrsupr6az5zf2yjqai5t4xtgn7lf2zd62wnxxq2mgyhlvbxyda publicdomainmovies.tar.05

    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    Recv: 0 B, Paid 0 FIL, ClientEventDealProposed (DealStatusWaitForAcceptance)
    Recv: 0 B, Paid 0 FIL, ClientEventDealRejected (DealStatusRetryLegacy)
    Recv: 0 B, Paid 0 FIL, ClientEventDealProposed (DealStatusWaitForAcceptanceLegacy)
    Recv: 0 B, Paid 0 FIL, ClientEventDealRejected (DealStatusRejected)
    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    ERROR: retrieval failed: Retrieve: Retrieval Proposal Rejected: deal rejected: miner is not accepting online retrieval deals

    Example 5 - normal shutdown of state machine. This provider is able to serve my retrieval for an 8GB file, but not 32GB ones.

    lotus client retrieve --miner f022352 baga6ea4seaqoyyc52ajdq2v7723d3uixeem5ulwvbtv7h45776vrrg53nnf2gjq trusted_setup_phase2.tar.47

    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    Recv: 0 B, Paid 0 FIL, ClientEventDealProposed (DealStatusWaitForAcceptance)
    Recv: 0 B, Paid 0 FIL, ClientEventDealAccepted (DealStatusAccepted)
    Recv: 0 B, Paid 0 FIL, ClientEventPaymentChannelAddingFunds (DealStatusPaymentChannelAllocatingLane)
    Recv: 0 B, Paid 0 FIL, ClientEventLaneAllocated (DealStatusOngoing)
    Recv: 0 B, Paid 0 FIL, ClientEventProviderCancelled (DealStatusCancelling)
    Recv: 0 B, Paid 0 FIL, ClientEventDataTransferError (DealStatusErrored)
    Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
    ERROR: retrieval failed: Retrieve: Retrieval Error: error generated by data transfer: unable to send cancel to channel FSM: normal shutdown of state machine

    lotus version 1.4.1+git.d6c06881e

  • Importing chain ends with chain validation failed


    Running lotus daemon --import-chain:

    lotus daemon --import-chain minimal_finality_stateroots_336340_2020-12-19_17-00-00.car

    v1.4.0 ends with:

     15.54 GiB / 15.58 GiB [======================================================================================================================================]  99.74% 115.68 MiB/s2020-12-21T07:06:20.325Z        INFO    badgerbs        [email protected]/levels.go:962      LOG Compact 1->2, del 3 tables, add 3 tables, took 658.043122ms
    
    2020-12-21T07:06:20.325Z        INFO    badgerbs        [email protected]/levels.go:1010     [Compactor: 0] Compaction for level: 1 DONE
    2020-12-21T07:06:20.325Z        INFO    badgerbs        [email protected]/levels.go:1000     [Compactor: 0] Running compaction: {level:1 score:1.5613754019141197 dropPrefixes:[]} for level: 1
    
     15.58 GiB / 15.58 GiB [================================================================================================================================] 100.00% 115.67 MiB/s 2m17s
    2020-12-21T07:06:20.609Z        INFO    chainstore      store/store.go:527      clearing block validation cache...
    2020-12-21T07:06:20.609Z        INFO    chainstore      store/store.go:566      0 block validation entries cleared.
    2020-12-21T07:06:22.436Z        INFO    badgerbs        [email protected]/levels.go:962      LOG Compact 1->2, del 8 tables, add 8 tables, took 2.110439345s
    
    2020-12-21T07:06:22.436Z        INFO    badgerbs        [email protected]/levels.go:1010     [Compactor: 0] Compaction for level: 1 DONE
    2020-12-21T07:06:22.436Z        INFO    badgerbs        [email protected]/levels.go:1000     [Compactor: 0] Running compaction: {level:1 score:1.301148410886526 dropPrefixes:[]} for level: 1
    
    2020-12-21T07:06:23.041Z        INFO    badgerbs        [email protected]/levels.go:962      LOG Compact 1->2, del 2 tables, add 2 tables, took 604.994671ms
    
    2020-12-21T07:06:23.041Z        INFO    badgerbs        [email protected]/levels.go:1010     [Compactor: 0] Compaction for level: 1 DONE
    2020-12-21T07:06:23.041Z        INFO    badgerbs        [email protected]/levels.go:1000     [Compactor: 0] Running compaction: {level:1 score:1.0409185104072094 dropPrefixes:[]} for level: 1
    
    2020-12-21T07:06:24.137Z        INFO    badgerbs        [email protected]/levels.go:962      LOG Compact 1->2, del 4 tables, add 4 tables, took 1.09613443s
    
    2020-12-21T07:06:24.137Z        INFO    badgerbs        [email protected]/levels.go:1010     [Compactor: 0] Compaction for level: 1 DONE
    2020-12-21T07:06:42.565Z        WARN    chainstore      store/store.go:508      no heaviest tipset found, using [bafy2bzacecnamqgqmifpluoeldx7zzglxcljo6oja4vrmtj7432rphldpdmm2]
    2020-12-21T07:06:42.565Z        INFO    chainstore      store/store.go:513      New heaviest tipset! [bafy2bzacecnamqgqmifpluoeldx7zzglxcljo6oja4vrmtj7432rphldpdmm2] (height=0)
    2020-12-21T07:06:42.566Z        INFO    main    lotus/daemon.go:470     validating imported chain...
    2020-12-21T07:07:04.375Z        INFO    statemgr        stmgr/stmgr.go:878      computing state (height: 0, ts=[bafy2bzacecnamqgqmifpluoeldx7zzglxcljo6oja4vrmtj7432rphldpdmm2])
    2020-12-21T07:07:04.375Z        INFO    statemgr        stmgr/stmgr.go:878      computing state (height: 1, ts=[bafy2bzacechdx6xd62lcyy7rnyc4uxcxhuwqslcxfvj77fxlwafij3nhzchpy])
    2020-12-21T07:07:04.375Z        WARN    chainstore      store/store.go:485      reorgWorker quit
    2020-12-21T07:07:04.421Z        INFO    badgerbs        [email protected]/db.go:1030 Storing value log head: {Fid:21 Len:33 Offset:513553269}
    
    2020-12-21T07:07:04.574Z        INFO    badgerbs        [email protected]/levels.go:1000     [Compactor: 173] Running compaction: {level:0 score:1.73 dropPrefixes:[]} for level: 0
    
    2020-12-21T07:07:05.444Z        INFO    badgerbs        [email protected]/levels.go:962      LOG Compact 0->1, del 4 tables, add 4 tables, took 870.449062ms
    
    2020-12-21T07:07:05.445Z        INFO    badgerbs        [email protected]/levels.go:1010     [Compactor: 173] Compaction for level: 0 DONE
    2020-12-21T07:07:05.445Z        INFO    badgerbs        [email protected]/db.go:553  Force compaction on level 0 done
    ERROR: chain validation failed: getting block messages for tipset: failed to get messages for block: failed to load msgmeta (bafy2bzacecmwp4imjqhdg2zvc7j2s4xxahnn5jnudtrt335re24i4zim7ccfi): blockstore: block not found
    
  • [Thread] Documentation Requests


    If you have a request for documentation on lotus, lotus-miner, or any related ecosystem tooling, please leave a comment here with what you want documented.

    An example of a good request would be:

    I would like documentation on how to operate multiple lotus-workers on different machines on my local network, with different machines for different jobs

    or

    I would like some documentation on how to examine exactly why my message failed on chain.

    Please read through existing requests, and give a thumbs-up to any requests you also want (think of it like voting) instead of posting a duplicate. If you want to add something to an existing request, link to it in your comment and let us know what you want to add.

  • Feat/datamodel selector retrieval


    ~( This PR depends on and includes https://github.com/filecoin-project/lotus/pull/6375 )~ ✅

    Introduce a new RetrievalOrder-struct field and a CLI option that takes a string representation as understood by https://pkg.go.dev/github.com/ipld/go-ipld-selector-text-lite#SelectorSpecFromPath . Allows for partial retrieval of any sub-DAG of a deal provided the user knows the exact low-level shape of the deal contents.

    As an example with this patch one can retrieve the first entry of a UnixFS directory by executing: lotus client retrieve --miner f0XXXXX --datamodel-path-selector 'Links/0/Hash' bafyROOTCID ~/output

    See top of itests/deals_partial_retrieval_test.go for a more elaborate example.

  • Various problems on arm64(aarch64)


    Describe the bug The build fails on arm64 (aarch64).

    To Reproduce Steps to reproduce the behavior:

    1. Run 'git checkout v0.8.1;export RUSTFLAGS="-C target-cpu=native -g";export FFI_BUILD_FROM_SOURCE=1 ;make clean deps all'
    2. See error

    Expected behavior build success.

    Version (run lotus version):v0.8.1

    Additional context I have resolved many problems while building on arm64 (aarch64), but new ones keep coming up one by one. Why is there no official binary for arm64? Storage nodes are better suited to running on ARM than on x86.

    make: Nothing to be done for 'deps'.
    rm -f lotus
    go build  -ldflags="-X=github.com/filecoin-project/lotus/build.CurrentCommit=+git.1ebad94d.dirty" -o lotus ./cmd/lotus
    go build github.com/supranational/blst/bindings/go: build constraints exclude all Go files in /media/sda1/Lotus/lotus/extern/fil-blst/blst/bindings/go
    go build github.com/ipsn/go-secp256k1: build constraints exclude all Go files in /storage/go/pkg/mod/github.com/ipsn/[email protected]
    # github.com/filecoin-project/go-fil-markets/pieceio
    /storage/go/pkg/mod/github.com/filecoin-project/[email protected]/pieceio/pieceio.go:215:19: undefined: ffi.GeneratePieceCIDFromFile
    # github.com/filecoin-project/lotus/extern/sector-storage
    extern/sector-storage/localworker.go:98:9: undefined: ffiwrapper.New
    extern/sector-storage/localworker.go:266:15: undefined: ffi.GetGPUDevices
    extern/sector-storage/manager.go:98:17: undefined: ffiwrapper.New
    # github.com/filecoin-project/lotus/chain/vm
    chain/vm/syscalls.go:66:16: undefined: ffiwrapper.GenerateUnsealedCID
    make: *** [Makefile:68: lotus] Error 2
    
  • proof validation failed, sector not found in sector set after cron: sector not found


    Describe the problem

    Sector is stuck in a loop (I have 2 of these).

    Sectors status

    SectorID:	16
    Status:		CommitFailed
    CIDcommD:	baga6ea4seaqnuflxnoeju5ilz7b4ry3anpavfm4lqxlinhx6d734wrzrq7dswnq
    CIDcommR:	bagboea4b5abcakxgvcybzkocifp7vosysppkaltm75l2mo3n6gemecwnfci25sc7
    Ticket:		ff8ef989be532ef30d70b495b7e9f1c96dc32356eb7b4db5b8e0bbe326ed73b4
    TicketH:	3552
    Seed:		3f2286632086d0638b482179a4f59df132a0423f740fa7b2beb15ae05fcbfdb2
    SeedH:		5952
    Proof:		a180cd8ae1f8cbd93790049aca35d39e84724b4b4d660495e4a569e134a6a4319a3948e8e278d99413754acbc3112bccb07279ba7ab3035f73d5dc4bf4fa4c90bbd3d05579345819594f3741d228b6f06eeb9c44b3bb5fb3a01a7f0d51dc258d0bf7425a719ecc6986878ff9cef637bd22343963338bac97ec10fe1eed8a574ecda565fbd5e37e9af262422cf96a9c3db2e32d12dfebc89836916f08422c61d401b97bbc8890f02da53a495bea18699a973f5179d726811d2e5eb01a0ce82e678dad41c292e0639ab2c2a73eaff6d079b89a2341fc3a7c3401223d32e9eb4357d7183e9c12255cdf98ff70d91a6733eb823e940e50bf9bd8641a4d1260b41b8f4790b92119ae1840e66b822c943b8654d48b913c165ab3803735c971ac8d0ae402bfa362f2d2be506078a53c3822e6072c9c7c2133aff2d320cc7013e78010c8a12f267880dc134c518de5dd498b530481e8180bf5c632e6f864e1a11e885334d034f105815bb7938ef42d9bd3540df74343b4aa1a342bf7d527f94d7d4aa5b68ac0f6d07fbf77cd7dbaf7ec7cc5033f8e56cc69f3a83f687af1e2e30010aff3288828f568da44c9e206c7bac0d908f3a7ab15ab896b25019c3f8c5027de90cfce8fe0eb23f8d6dd21755077ac98e011f89c68e020b3d06a761fd002c88102cd114bbb67cddf15c414b823c64c8c221ba8fa8e0d7c60e01d9349276c35fb293be4555aaacfef2be307cfeab3027682a78c57da5e5b7f385991b427053e9d96706c9874179e72a58bb9465d6667d66093a323a7a5adf2a73f3c102ca259d895ccb4f80520504e423d532d6cfb81ae3f47f151a88982ed5b7da61ed59d0a16c89131ce298d0c2d3c9de34527e5b93b5af58729ead0ed8472508b626c7fc4b5250291b9af8b240d71e78e755c463906579faf9b69f75cfad96331cebb2df385981711b4c5420073fc3e10a51707b1641c8ea89ecf5859edcb647c5f645580557c67c60125453ab9dfa97a1a561424ee09798ef880ac9529faac51460b908e282b047ef5cea15169f60172f12f071472cf49285d3a44694706e2332a694410667f4394b46c8422c4cd799d4c2537a41eea354b945edf301d862b4cf51199e377cd5c128abfba8eebd6aa4e8a7a9c42e8087c87e7116166ee8c27eed8692ad92df8d88d1c2ffef99590d0795b27658c9d2e7f000b903445c5bf4fdb77353c9bb4fb690244a6797e075d4afff411d264fb3005de5f775132523bc60b0bbed5fd17d2e755e9113c9a387846c7c4f8bbce274a049067e9c7641ff8b37f21451f7544b85fa5ac957370e8ecb4c976daedae735af9efe485aecefc251233007cdf2cd8aa4ba7ed5a16358de00abdab60a0780e1e503c18cbe1a529abedab1d6016c67478d0e492
456cbfb7ce21c9383d3e65f9cdb8b34b37cad03687bd41cff14fb519a660ffea14b1e65750a752651e61535348d75d8e157108671f2fa0f0170a73b25aea13e20bbef2b9c0526c6a3d66c2c26b3dcdc3194be124f469e54c34d910f65ff0efe55f77a42aa0bbc75daac7313e629caaa7a8ed52825793f3dc095ae0b2509956ecb3d0807f0d3711bd2a0aca88a4b0e7e13e2b3117ae4301bd1c01b3d1f474a5ecb91da6802a6d8fff744e45a132d11589ba15ab51d39973a7fc6ed57dbdcd008e1d32705a2c89921b94790ed1cd23a37f3e9d0ca01605b0f8ca0a93478d9c3b251c6caa848948efbea0200b1fd8c6a827f7111d7830aa53b687689a53b43e171f1c72d307f85d4ac3fb3b38b07fc699dfb8ff35d04921af0b535c28df5971cfc067958b2ac2f306a309e441302640ad291273d47103c793f33180626d9ce409f369380b39c11ccc8e67637150d84830787d5db93e15b6168edc5ad9e8ba46934379a65da43dac93d8910c5e547f6c03b34e6ac4622678d2c0755e1a45fed3dd7825bb5737b56849e448f1b16eabbb827f9783612be0e6f5aad92162c102867378a18c3ad7c3e9afa1c4d2ee51728711c45a4126e0515b364002fba97471b500c8cb833cf29c2e7b77c45edc0c23b14ba2647663f33cd2a63490fdc072db7949bad21b8bd61f1702164ef7092a4e0493e117c75a2d697b93cd2a256ff870896aaa4c111db4043438b07ad74810deb98c44fb191327f78eed8970c555bc58a8b525dac7e824dccd3cf1685f2d28c762b679efbd5366062aa0541556c654111c77ecc6ea3b94416beea1c5007e6191b3ae463c4fa753d5200deb2a834b2d5a284df6ee4e8f5a4db596f7e105d50ea9a89678c8b2aa59ce5d16c6278334c67e7f1168af65e16e94be6655f32e9f2a97c09b5179bfed942e8e59b55264a164daab0b4fbf3e015729cc5d82667565b36c12863ace507cc7d82ee72d92a9c96f6626328f2d1e4b9287aefef6fe045c9c7fd027aab745b287cd5333a7743b1681912bb0494b30f554b516d4747de8ea8731a883ac915933d4b0eb16dfa2e224c39ff08e560e93f4240fdbc7925507260665ce97bc4b671437e1f5207b9af4b40a26a1f6cad10d339e8843caf6d981171c26f087abd964191b9593c3e5220efb9555780bebcbb7c47c9e093fdf569b1349b09fea2470c9092172953d8cfc29c83e646552807140128249a52b58152773d6d15caf7b2dd9802cb6b09e44086c3f99c29e0202b3af82cfeee8e54ecdba5eec198b50c9743cb6ff51503b60b499a17906c8
    Deals:		[9459 0 0 0 0]
    Retries:	312
    --------
    Event Log:
    0.	2020-08-26 04:36:59 +0000 UTC:	[event;sealing.SectorStart]	{"User":{"ID":16,"SectorType":3}}
    1.	2020-08-26 05:36:59 +0000 UTC:	[event;sealing.SectorStartPacking]	{"User":{}}
    2.	2020-08-26 08:22:34 +0000 UTC:	[event;sealing.SectorAddPiece]	{"User":{"NewPiece":{"Piece":{"Size":2147483648,"PieceCID":{"/":"baga6ea4seaqifnolooacsrnd2x5tcmjd3nzcxd7mpy6ho2eddvy6wa6fdjg2uhq"}},"DealInfo":{"DealID":9459,"DealSchedule":{"StartEpoch":6532,"EndEpoch":709369},"KeepUnsealed":true}}}}
    3.	2020-08-26 10:36:02 +0000 UTC:	[event;sealing.SectorStartPacking]	{"User":{}}
    4.	2020-08-26 11:06:16 +0000 UTC:	[event;sealing.SectorPacked]	{"User":{"FillerPieces":[{"Size":2147483648,"PieceCID":{"/":"baga6ea4seaqh34u3nf3tdgpi6k2aw54rtucikcpo25uofrzjpmprinydj7b4mla"}},{"Size":4294967296,"PieceCID":{"/":"baga6ea4seaqgntqfunthkuwpixacxtcoqojjdg66vq254l7vmjyyjdu7pntvcby"}},{"Size":8589934592,"PieceCID":{"/":"baga6ea4seaqnqyicdbbfvnpjlmokmi45fgroiigxa2uw6nz6f6ojveoxlhizwai"}},{"Size":17179869184,"PieceCID":{"/":"baga6ea4seaqg2nsld34emra2ljfgrbrdcswmbjdpaftrpzjuipudt3w7qpbikpa"}}]}}
    5.	2020-08-26 15:11:04 +0000 UTC:	[event;sealing.SectorPreCommit1]	{"User":{"PreCommit1Out":"eyJyZWdpc3RlcmVkX3Byb29mIjoiU3RhY2tlZERyZzMyR2lCVjEiLCJsYWJlbHMiOnsiU3RhY2tlZERyZzMyR2lCVjEiOnsibGFiZWxzIjpbeyJwYXRoIjoiL21udC9udm1lXzJ0Yl9zbG93L2xvdHVzLWNhbGlicmF0aW9uL2NhY2hlL3MtdDAyMzg4LTE2IiwiaWQiOiJsYXllci0xIiwic2l6ZSI6MTA3Mzc0MTgyNCwicm93c190b19kaXNjYXJkIjo3fSx7InBhdGgiOiIvbW50L252bWVfMnRiX3Nsb3cvbG90dXMtY2FsaWJyYXRpb24vY2FjaGUvcy10MDIzODgtMTYiLCJpZCI6ImxheWVyLTIiLCJzaXplIjoxMDczNzQxODI0LCJyb3dzX3RvX2Rpc2NhcmQiOjd9LHsicGF0aCI6Ii9tbnQvbnZtZV8ydGJfc2xvdy9sb3R1cy1jYWxpYnJhdGlvbi9jYWNoZS9zLXQwMjM4OC0xNiIsImlkIjoibGF5ZXItMyIsInNpemUiOjEwNzM3NDE4MjQsInJvd3NfdG9fZGlzY2FyZCI6N30seyJwYXRoIjoiL21udC9udm1lXzJ0Yl9zbG93L2xvdHVzLWNhbGlicmF0aW9uL2NhY2hlL3MtdDAyMzg4LTE2IiwiaWQiOiJsYXllci00Iiwic2l6ZSI6MTA3Mzc0MTgyNCwicm93c190b19kaXNjYXJkIjo3fSx7InBhdGgiOiIvbW50L252bWVfMnRiX3Nsb3cvbG90dXMtY2FsaWJyYXRpb24vY2FjaGUvcy10MDIzODgtMTYiLCJpZCI6ImxheWVyLTUiLCJzaXplIjoxMDczNzQxODI0LCJyb3dzX3RvX2Rpc2NhcmQiOjd9LHsicGF0aCI6Ii9tbnQvbnZtZV8ydGJfc2xvdy9sb3R1cy1jYWxpYnJhdGlvbi9jYWNoZS9zLXQwMjM4OC0xNiIsImlkIjoibGF5ZXItNiIsInNpemUiOjEwNzM3NDE4MjQsInJvd3NfdG9fZGlzY2FyZCI6N30seyJwYXRoIjoiL21udC9udm1lXzJ0Yl9zbG93L2xvdHVzLWNhbGlicmF0aW9uL2NhY2hlL3MtdDAyMzg4LTE2IiwiaWQiOiJsYXllci03Iiwic2l6ZSI6MTA3Mzc0MTgyNCwicm93c190b19kaXNjYXJkIjo3fSx7InBhdGgiOiIvbW50L252bWVfMnRiX3Nsb3cvbG90dXMtY2FsaWJyYXRpb24vY2FjaGUvcy10MDIzODgtMTYiLCJpZCI6ImxheWVyLTgiLCJzaXplIjoxMDczNzQxODI0LCJyb3dzX3RvX2Rpc2NhcmQiOjd9LHsicGF0aCI6Ii9tbnQvbnZtZV8ydGJfc2xvdy9sb3R1cy1jYWxpYnJhdGlvbi9jYWNoZS9zLXQwMjM4OC0xNiIsImlkIjoibGF5ZXItOSIsInNpemUiOjEwNzM3NDE4MjQsInJvd3NfdG9fZGlzY2FyZCI6N30seyJwYXRoIjoiL21udC9udm1lXzJ0Yl9zbG93L2xvdHVzLWNhbGlicmF0aW9uL2NhY2hlL3MtdDAyMzg4LTE2IiwiaWQiOiJsYXllci0xMCIsInNpemUiOjEwNzM3NDE4MjQsInJvd3NfdG9fZGlzY2FyZCI6N30seyJwYXRoIjoiL21udC9udm1lXzJ0Yl9zbG93L2xvdHVzLWNhbGlicmF0aW9uL2NhY2hlL3MtdDAyMzg4LTE2IiwiaWQiOiJsYXllci0xMSIsInNpemUiOjEwNzM3NDE4MjQsInJvd3NfdG9fZGlzY2FyZCI6N31dLCJfaCI6bnVsbH19LCJjb25maWciOnsicGF0aCI6Ii9tbnQ
vbnZtZV8ydGJfc2xvdy9sb3R1cy1jYWxpYnJhdGlvbi9jYWNoZS9zLXQwMjM4OC0xNiIsImlkIjoidHJlZS1kIiwic2l6ZSI6MjE0NzQ4MzY0Nywicm93c190b19kaXNjYXJkIjo3fSwiY29tbV9kIjpbMjE4LDIxLDExOSwxMDcsMTM2LDE1NCwxMTcsMTEsMjA3LDE5NSwyMDAsMjI3LDk2LDEwNywxOTMsODIsMTc5LDEzOSwxMzMsMjE0LDEzNCwxNTgsMjU0LDMxLDI0NywyMDMsNzEsNDksMTM1LDE5OSw0Myw1NF19","TicketValue":"/475ib5TLvMNcLSVt+nxyW3DI1bre021uOC74ybtc7Q=","TicketEpoch":3552}}
    6.	2020-08-26 22:20:34 +0000 UTC:	[event;sealing.SectorPreCommit2]	{"User":{"Sealed":{"/":"bagboea4b5abcakxgvcybzkocifp7vosysppkaltm75l2mo3n6gemecwnfci25sc7"},"Unsealed":{"/":"baga6ea4seaqnuflxnoeju5ilz7b4ry3anpavfm4lqxlinhx6d734wrzrq7dswnq"}}}
    7.	2020-08-26 22:20:41 +0000 UTC:	[event;sealing.SectorPreCommitted]	{"User":{"Message":{"/":"bafy2bzacecfd6jgqno3otvronyg5vsfgjuatc4aokkvrycxexrpmnfolxwura"},"PreCommitDeposit":"1085841941932828268","PreCommitInfo":{"SealProof":3,"SectorNumber":16,"SealedCID":{"/":"bagboea4b5abcakxgvcybzkocifp7vosysppkaltm75l2mo3n6gemecwnfci25sc7"},"SealRandEpoch":3552,"DealIDs":[9459],"Expiration":712249,"ReplaceCapacity":false,"ReplaceSectorDeadline":0,"ReplaceSectorPartition":0,"ReplaceSectorNumber":0}}}
    8.	2020-08-26 22:24:00 +0000 UTC:	[event;sealing.SectorPreCommitLanded]	{"User":{"TipSet":"AXGg5AIgIS0dL6bQD5SrcrIjKhYpggnmbqmOkiLvLEOqs1q/yzoBcaDkAiBuHw2b0j6EzpnoquSzNvVgwNFe95D7QNAG2YikZG2ZZw=="}}
    9.	2020-08-26 23:24:26 +0000 UTC:	[event;sealing.SectorRestart]	{"User":{}}
    10.	2020-08-26 23:48:28 +0000 UTC:	[event;sealing.SectorRestart]	{"User":{}}
    11.	2020-08-26 23:48:29 +0000 UTC:	[event;sealing.SectorSeedReady]	{"User":{"SeedValue":"PyKGYyCG0GOLSCF5pPWd8TKgQj90D6eyvrFa4F/L/bI=","SeedEpoch":5952}}
    12.	2020-08-27 07:03:08 +0000 UTC:	[event;sealing.SectorCommitted]	{"User":{"Message":{"/":"bafy2bzacecjkir7xzxn77pwzpmj376wzzprdtxlnbtv4zkapizjsa4phgunza"},"Proof":"oYDNiuH4y9k3kASayjXTnoRyS0tNZgSV5KVp4TSmpDGaOUjo4njZlBN1SsvDESvMsHJ5unqzA19z1dxL9PpMkLvT0FV5NFgZWU83QdIotvBu65xEs7tfs6Aafw1R3CWNC/dCWnGezGmGh4/5zvY3vSI0OWMzi6yX7BD+Hu2KV07NpWX71eN+mvJiQiz5apw9suMtEt/ryJg2kW8IQixh1AG5e7yIkPAtpTpJW+oYaZqXP1F51yaBHS5esBoM6C5nja1BwpLgY5qywqc+r/bQebiaI0H8Onw0ASI9MunrQ1fXGD6cEiVc35j/cNkaZzPrgj6UDlC/m9hkGk0SYLQbj0eQuSEZrhhA5muCLJQ7hlTUi5E8FlqzgDc1yXGsjQrkAr+jYvLSvlBgeKU8OCLmByycfCEzr/LTIMxwE+eAEMihLyZ4gNwTTFGN5d1Ji1MEgegYC/XGMub4ZOGhHohTNNA08QWBW7eTjvQtm9NUDfdDQ7SqGjQr99Un+U19SqW2isD20H+/d819uvfsfMUDP45WzGnzqD9oevHi4wAQr/MoiCj1aNpEyeIGx7rA2Qjzp6sVq4lrJQGcP4xQJ96Qz86P4Osj+NbdIXVQd6yY4BH4nGjgILPQanYf0ALIgQLNEUu7Z83fFcQUuCPGTIwiG6j6jg18YOAdk0knbDX7KTvkVVqqz+8r4wfP6rMCdoKnjFfaXlt/OFmRtCcFPp2WcGyYdBeecqWLuUZdZmfWYJOjI6elrfKnPzwQLKJZ2JXMtPgFIFBOQj1TLWz7ga4/R/FRqImC7Vt9ph7VnQoWyJExzimNDC08neNFJ+W5O1r1hynq0O2EclCLYmx/xLUlApG5r4skDXHnjnVcRjkGV5+vm2n3XPrZYzHOuy3zhZgXEbTFQgBz/D4QpRcHsWQcjqiez1hZ7ctkfF9kVYBVfGfGASVFOrnfqXoaVhQk7gl5jviArJUp+qxRRguQjigrBH71zqFRafYBcvEvBxRyz0koXTpEaUcG4jMqaUQQZn9DlLRshCLEzXmdTCU3pB7qNUuUXt8wHYYrTPURmeN3zVwSir+6juvWqk6KepxC6Ah8h+cRYWbujCfu2Gkq2S342I0cL/75lZDQeVsnZYydLn8AC5A0RcW/T9t3NTybtPtpAkSmeX4HXUr/9BHSZPswBd5fd1EyUjvGCwu+1f0X0udV6RE8mjh4RsfE+LvOJ0oEkGfpx2Qf+LN/IUUfdUS4X6WslXNw6Oy0yXba7a5zWvnv5IWuzvwlEjMAfN8s2KpLp+1aFjWN4Aq9q2CgeA4eUDwYy+GlKavtqx1gFsZ0eNDkkkVsv7fOIck4PT5l+c24s0s3ytA2h71Bz/FPtRmmYP/qFLHmV1CnUmUeYVNTSNddjhVxCGcfL6DwFwpzslrqE+ILvvK5wFJsaj1mwsJrPc3DGUvhJPRp5Uw02RD2X/Dv5V93pCqgu8ddqscxPmKcqqeo7VKCV5Pz3Ala4LJQmVbss9CAfw03Eb0qCsqIpLDn4T4rMReuQwG9HAGz0fR0pey5HaaAKm2P/3RORaEy0RWJuhWrUdOZc6f8btV9vc0Ajh0ycFosiZIblHkO0c0jo38+nQygFgWw+MoKk0eNnDslHGyqhIlI776gIAsf2MaoJ/cRHXgwqlO2h2iaU7Q+Fx8cctMH+F1Kw/s7OLB/xpnfuP810EkhrwtTXCjfWXHPwGeViyrC8wajCeRBMCZArSkSc9RxA8eT8zGAYm2c5AnzaTgLOcEczI5nY3FQ2EgweH1duT4VthaO3FrZ6LpGk0N5pl2kPayT2JEMXlR/bAOzTmrEYiZ40sB1Xh
pF/tPdeCW7Vze1aEnkSPGxbqu7gn+Xg2Er4Ob1qtkhYsEChnN4oYw618Ppr6HE0u5RcocRxFpBJuBRWzZAAvupdHG1AMjLgzzynC57d8Re3AwjsUuiZHZj8zzSpjSQ/cBy23lJutIbi9YfFwIWTvcJKk4Ek+EXx1otaXuTzSolb/hwiWqqTBEdtAQ0OLB610gQ3rmMRPsZEyf3ju2JcMVVvFiotSXax+gk3M088WhfLSjHYrZ5771TZgYqoFQVVsZUERx37MbqO5RBa+6hxQB+YZGzrkY8T6dT1SAN6yqDSy1aKE327k6PWk21lvfhBdUOqaiWeMiyqlnOXRbGJ4M0xn5/EWivZeFulL5mVfMunyqXwJtReb/tlC6OWbVSZKFk2qsLT78+AVcpzF2CZnVls2wShjrOUHzH2C7nLZKpyW9mJjKPLR5Lkoeu/vb+BFycf9AnqrdFsofNUzOndDsWgZErsElLMPVUtRbUdH3o6ocxqIOskVkz1LDrFt+i4iTDn/COVg6T9CQP28eSVQcmBmXOl7xLZxQ34fUge5r0tAomofbK0Q0znohDyvbZgRccJvCHq9lkGRuVk8PlIg77lVV4C+vLt8R8ngk/31abE0mwn+okcMkJIXKVPYz8Kcg+ZGVSgHFAEoJJpStYFSdz1tFcr3st2YAstrCeRAhsP5nCngICs6+Cz+7o5U7Nul7sGYtQyXQ8tv9RUDtgtJmheQbI"}}
    13.	2020-08-27 07:06:29 +0000 UTC:	[event;sealing.SectorCommitFailed]	{"User":{}}
    	proof validation failed, sector not found in sector set after cron: sector not found
    14.	2020-08-27 07:07:29 +0000 UTC:	[event;sealing.SectorRetryComputeProof]	{"User":{}}
    15.	2020-08-27 07:07:29 +0000 UTC:	[event;sealing.SectorRetryCommitWait]	{"User":{}}
    16.	2020-08-27 07:07:29 +0000 UTC:	[event;sealing.SectorCommitFailed]	{"User":{}}
    	proof validation failed, sector not found in sector set after cron: sector not found
    17.	2020-08-27 07:08:29 +0000 UTC:	[event;sealing.SectorRetryComputeProof]	{"User":{}}
    18.	2020-08-27 07:08:29 +0000 UTC:	[event;sealing.SectorRetryCommitWait]	{"User":{}}
    19.	2020-08-27 07:08:29 +0000 UTC:	[event;sealing.SectorCommitFailed]	{"User":{}}
    	proof validation failed, sector not found in sector set after cron: sector not found
    20.	2020-08-27 07:09:29 +0000 UTC:	[event;sealing.SectorRetryComputeProof]	{"User":{}}
    21.	2020-08-27 07:09:29 +0000 UTC:	[event;sealing.SectorRetryCommitWait]	{"User":{}}
    22.	2020-08-27 07:09:29 +0000 UTC:	[event;sealing.SectorCommitFailed]	{"User":{}}
    	proof validation failed, sector not found in sector set after cron: sector not found
    (.... more of the same ....)
    	proof validation failed, sector not found in sector set after cron: sector not found
    941.	2020-08-27 13:22:03 +0000 UTC:	[event;sealing.SectorRetryComputeProof]	{"User":{}}
    942.	2020-08-27 13:22:17 +0000 UTC:	[event;sealing.SectorRetryCommitWait]	{"User":{}}
    943.	2020-08-27 13:22:30 +0000 UTC:	[event;sealing.SectorCommitFailed]	{"User":{}}
    	proof validation failed, sector not found in sector set after cron: sector not found
    944.	2020-08-27 13:23:30 +0000 UTC:	[event;sealing.SectorRetryComputeProof]	{"User":{}}
    945.	2020-08-27 13:23:44 +0000 UTC:	[event;sealing.SectorRetryCommitWait]	{"User":{}}
    946.	2020-08-27 13:23:58 +0000 UTC:	[event;sealing.SectorCommitFailed]	{"User":{}}
    	proof validation failed, sector not found in sector set after cron: sector not found
    947.	2020-08-27 13:24:58 +0000 UTC:	[event;sealing.SectorRetryComputeProof]	{"User":{}}
    948.	2020-08-27 13:25:13 +0000 UTC:	[event;sealing.SectorRetryCommitWait]	{"User":{}}
    949.	2020-08-27 13:25:28 +0000 UTC:	[event;sealing.SectorCommitFailed]	{"User":{}}
    	proof validation failed, sector not found in sector set after cron: sector not found
    
    

    Lotus miner logs

    2020-08-27T13:26:55.074Z	WARN	sectors	storage-sealing/fsm.go:403	sector 16 got error event sealing.SectorCommitFailed: proof validation failed, sector not found in sector set after cron: sector not found
    2020-08-27T13:26:55.088 INFO filcrypto::proofs::api > verify_seal: start
    2020-08-27T13:26:55.088 INFO filecoin_proofs::api::seal > verify_seal:start
    2020-08-27T13:26:55.088 INFO filecoin_proofs::caches > trying parameters memory cache for: STACKED[34359738368]-verifying-key
    2020-08-27T13:26:55.088 INFO filecoin_proofs::caches > found params in memory cache for STACKED[34359738368]-verifying-key
    2020-08-27T13:26:55.088 INFO filecoin_proofs::api::seal > got verifying key (34359738368) while verifying seal
    2020-08-27T13:26:55.121 INFO filecoin_proofs::api::seal > verify_seal:finish
    2020-08-27T13:26:55.121 INFO filcrypto::proofs::api > verify_seal: finish
    2020-08-27T13:26:55.121Z	INFO	sectors	storage-sealing/states_failed.go:19	CommitFailed(16), waiting 59.878920207s before retrying
    

    Lotus miner diagnostic info

    Please collect the following diagnostic information, and share a link here

    • https://zerobin.net/?31f306b142be4092#ZOh7SERp0vgjCXbmuID1lKKIXIcAHpwkASSDZyIAL1Q=

    Code modifications

    None

    Version

    lotus version 0.5.4+git.d4fef1b5

  • Task scheduler ignores multiple AP workers when doing Snap Deals

    Task scheduler ignores multiple AP workers when doing Snap Deals

    Checklist

    • [X] This is not a security-related bug/issue. If it is, please follow the security policy.
    • [X] This is not a question or a support request. If you have any lotus related questions, please ask in the lotus forum.
    • [X] This is not a new feature request. If it is, please file a feature request instead.
    • [X] This is not an enhancement request. If it is, please file an improvement suggestion instead.
    • [X] I have searched on the issue tracker and the lotus forum, and there is no existing related issue or discussion.
    • [X] I am running the latest release, the most recent RC (release candidate) for the upcoming release, or the dev branch (master), or have an issue updating to any of these.
    • [X] I did not make any code changes to lotus.

    Lotus component

    • [ ] lotus daemon - chain sync
    • [ ] lotus miner - mining and block production
    • [ ] lotus miner/worker - sealing
    • [ ] lotus miner - proving(WindowPoSt)
    • [ ] lotus miner/market - storage deal
    • [ ] lotus miner/market - retrieval deal
    • [ ] lotus miner/market - data transfer
    • [ ] lotus client
    • [ ] lotus JSON-RPC API
    • [ ] lotus message management (mpool)
    • [ ] Other

    Lotus Version

    Daemon:  1.16.0+mainnet+git.01254ab32+api1.5.0
    Local: lotus version 1.16.0+mainnet+git.01254ab32
    
    Daemon:  1.16.0+mainnet+git.01254ab32+api1.5.0
    Local: lotus-miner version 1.16.0+mainnet+git.01254ab3
    

    Describe the Bug

    Even if Lotus has more than one AP worker, it ignores the others and does all AP tasks on a single worker (when doing Snap Deals).

    It looks like the StorageDealStaged state doesn't trigger the task scheduler. Only once an AP task has finished does the scheduler start looking for another deal ready for AP, and it assigns it again to the same worker (which became free a moment ago).

    Issue is quite similar to https://github.com/filecoin-project/lotus/issues/8913
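The dispatch behaviour described above can be illustrated with a minimal sketch (the types and function names below are hypothetical, not Lotus internals): a scan over all idle workers would spread AP tasks out, but if dispatch only re-runs when a worker frees up, every AP task lands on the worker that just finished.

```go
package main

import "fmt"

// Hypothetical sketch (not Lotus internals) of the dispatch behaviour
// described above.
type worker struct {
	id   string
	busy bool
}

// assignAP hands an AP task to the first idle worker, if any.
func assignAP(workers []*worker) *worker {
	for _, w := range workers {
		if !w.busy {
			w.busy = true
			return w
		}
	}
	return nil
}

func main() {
	ws := []*worker{{id: "ap-1"}, {id: "ap-2"}}
	// Two staged deals dispatched together use both idle workers.
	fmt.Println(assignAP(ws).id, assignAP(ws).id) // ap-1 ap-2
	// But if the scheduler only re-runs dispatch on "worker freed"
	// events, the second deal waits for ap-1 instead of taking ap-2.
}
```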

    Logging Information

    no logs
    
  • Lotus-worker tasks queued up in v1.16.0-rc3

    Lotus-worker tasks queued up in v1.16.0-rc3

    Checklist

    • [X] This is not a security-related bug/issue. If it is, please follow the security policy.
    • [X] This is not a question or a support request. If you have any lotus related questions, please ask in the lotus forum.
    • [X] This is not a new feature request. If it is, please file a feature request instead.
    • [X] This is not an enhancement request. If it is, please file an improvement suggestion instead.
    • [X] I have searched on the issue tracker and the lotus forum, and there is no existing related issue or discussion.
    • [X] I am running the latest release, the most recent RC (release candidate) for the upcoming release, or the dev branch (master), or have an issue updating to any of these.
    • [X] I did not make any code changes to lotus.

    Lotus component

    • [ ] lotus daemon - chain sync
    • [ ] lotus miner - mining and block production
    • [X] lotus miner/worker - sealing
    • [ ] lotus miner - proving(WindowPoSt)
    • [ ] lotus miner/market - storage deal
    • [ ] lotus miner/market - retrieval deal
    • [ ] lotus miner/market - data transfer
    • [ ] lotus client
    • [ ] lotus JSON-RPC API
    • [ ] lotus message management (mpool)
    • [ ] Other

    Lotus Version

    lotus-miner version
    Daemon:  1.16.0-rc3+mainnet+git.824da5ea5+api1.5.0
    Local: lotus-miner version 1.16.0-rc3+mainnet+git.824da5ea5
    
     lotus-worker info
    Worker version:  1.6.0
    CLI version: lotus-worker version 1.16.0-rc3+mainnet+git.824da5ea5
    
    Session: c4a57112-1d67-481d-8806-fde3a9adf1b5
    Enabled: true
    Hostname: sealer
    CPUs: 128; GPUs: [GeForce RTX 3090]
    RAM: 222.2 GiB/251.6 GiB; Swap: 201.4 GiB/526 GiB
    Task types: FIN GET FRU C1 C2 PC2 PC1 PR1 PR2 RU AP DC GSK 
    
    536385c7-a3d1-479f-affc-9961eb10c052:
    	Weight: 10; Use: Seal 
    	Local: /seal2/worker
    c45a6c01-b823-4813-92b9-fa2841ea3523:
    	Weight: 10; Use: Seal 
    	Local: /seal/worker
    

    Describe the Bug

    The scheduler is very inefficient now. My sealing worker has 256 GB of RAM and used to seal 4x PC1, or 3x PC1 plus other tasks (AP, GET, PC2, or C2) concurrently. Since the upgrade, the worker will do 4x PC1 and then, when one of the PC1s completes, run a single C2 and no other task. The C2 task now takes 60 minutes instead of 12. Once all of the PC1s finish, the worker will only handle a single C2 or PC2, and other simple tasks like AP and PC1 queue up.

    Notice in the logging section below that the scheduler claims there are not enough threads, even though only 6 of the 128 threads are in use (Threadripper 3990X):

    2022-06-26T01:33:08.598Z DEBUG advmgr sector-storage/sched_resources.go:98 sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
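A minimal sketch of what that log line suggests is happening (the names and the predicate below are assumptions, not the actual Lotus code): if the computed thread target is 0, an admission check of the form `need+inUse <= target` rejects the task for any worker with tasks in flight, no matter how many threads are idle.

```go
package main

import "fmt"

// Hypothetical sketch of the admission check the log line suggests.
// With "need 0, 6 in use, target 0", this predicate rejects the task
// even though 122 of the 128 threads are idle, because the target was
// computed as 0.
func canSchedule(need, inUse, target, totalThreads int) bool {
	_ = totalThreads // free capacity is never consulted in this check
	return need+inUse <= target
}

func main() {
	fmt.Println(canSchedule(0, 6, 0, 128))   // false: rejected, as logged
	fmt.Println(canSchedule(0, 6, 128, 128)) // true with a sane target
}
```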

    lotus-miner sealing jobs

    ID | Sector | Worker | Hostname | Task | State | Time
    -- | -- | -- | -- | -- | -- | --
    7bc24c52 | 4546 | c4a57112 | sealer | PC2 | running | 3m31.7s
    00000000 | 4547 | c4a57112 | sealer | PC2 | prepared | 3m31.6s
    00000000 | 4548 | c4a57112 | sealer | PC2 | assigned(1) | 32m30.6s
    00000000 | 4549 | c4a57112 | sealer | PC2 | assigned(2) | 4h31m28.5s

    Boost GUI tasks waiting

    Start | Deal ID | Size | Client | State
    -- | -- | -- | -- | --
    41m | 45acb7b1… | 29 GiB | f144zep4… | Adding to Sector
    2h | 21413f38… | 29.8 GiB | f144zep4… | Adding to Sector
    2h | 9925c70e… | 25.9 GiB | f144zep4… | Adding to Sector
    3h | 215a35ed… | 29.3 GiB | f144zep4… | Adding to Sector
    5h | 8d4c13e6… | 31.6 GiB | f144zep4… | Adding to Sector
    6h | d7ec5bfe… | 30.8 GiB | f144zep4… | Adding to Sector
    7h | 4785e707… | 349.3 MiB | f3vnq2cm… | Adding to Sector
    7h | aa651bc3… | 422.3 MiB | f3vnq2cm… | Announcing
    8h | 15b60750… | 22.5 GiB | f144zep4… | Sealer: PreCommit1
    9h | ab5cba7a… | 438.2 MiB | f3vnq2cm… | Sealer: AddPiece
    9h | 95605188… | 288.4 MiB | f3vnq2cm… | Sealer: AddPiece
    10h | bab58273… | 31 GiB | f144zep4… | Sealer: PreCommit2
    12h | 2dc49072… | 31.3 GiB | f144zep4… | Sealer: PreCommit2
    15h | c54738cb… | 30.7 GiB | f144zep4… | Sealer: PreCommit2
    15h | e05f6f04… | 31.4 GiB | f144zep4… | Sealer: PreCommit2
    15h | 4a220393… | 23.6 GiB | f144zep4… | Sealer: PreCommit2
    16h | a8bbbd74… | 31.5 GiB | f144zep4… | Sealer: PreCommit2
    18h | 7b2449bd… | 20.7 GiB | f144zep4… | Sealer: WaitSeed
    19h | 2a068d3f… | 29 GiB | f144zep4… | Sealer: WaitSeed

    lotus-miner sealing sched-diag

    {
      "CallToWork": {
        "1278-4545-8fe67f8d-4dea-4796-98f3-783e9f57c638": "seal/v0/precommit/2(8e6dd3fb4aec651f18b989424e1260d577661ddc0231f36046840a6519dd59d9)"
      },
      "EarlyRet": ["1278-4543-1e25e027-3c22-43e0-aef3-09944b9204e0"],
      "ReturnedWork": null,
      "SchedInfo": {
        "OpenWindows": [
          "2f7504d6-3059-4cc5-93b6-0df5ef990431",
          "2f7504d6-3059-4cc5-93b6-0df5ef990431"
        ],
        "Requests": [
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4549}, "TaskType": "seal/v0/precommit/2"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4550}, "TaskType": "seal/v0/precommit/2"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4552}, "TaskType": "seal/v0/precommit/2"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4553}, "TaskType": "seal/v0/precommit/1"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4551}, "TaskType": "seal/v0/addpiece"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4554}, "TaskType": "seal/v0/addpiece"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4555}, "TaskType": "seal/v0/addpiece"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4556}, "TaskType": "seal/v0/addpiece"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4557}, "TaskType": "seal/v0/addpiece"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4558}, "TaskType": "seal/v0/addpiece"},
          {"Priority": 1024, "Sector": {"Miner": 1278, "Number": 4559}, "TaskType": "seal/v0/addpiece"}
        ]
      },
      "Waiting": ["seal/v0/precommit/2(8e6dd3fb4aec651f18b989424e1260d577661ddc0231f36046840a6519dd59d9)"]
    }

    Logging Information

    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:458	SCHED Acceptable win: [[2] [2] [2] [2] [2] [2] [2] [2] [2] [2] [2]]
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:0 sector 4549 to window 2 (awi:0)
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:521	SCHED ASSIGNED	{"sqi": 0, "sector": "4549", "task": "seal/v0/precommit/2", "window": 2, "worker": "c4a57112-1d67-481d-8806-fde3a9adf1b5", "utilization": 3}
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:1 sector 4550 to window 2 (awi:0)
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:2 sector 4552 to window 2 (awi:0)
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:3 sector 4553 to window 2 (awi:0)
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:4 sector 4551 to window 2 (awi:0)
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:5 sector 4554 to window 2 (awi:0)
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:6 sector 4555 to window 2 (awi:0)
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:7 sector 4556 to window 2 (awi:0)
    2022-06-26T01:33:08.597Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.598Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:8 sector 4557 to window 2 (awi:0)
    2022-06-26T01:33:08.598Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.598Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:9 sector 4558 to window 2 (awi:0)
    2022-06-26T01:33:08.598Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.598Z	DEBUG	advmgr	sector-storage/sched.go:480	SCHED try assign sqi:10 sector 4559 to window 2 (awi:0)
    2022-06-26T01:33:08.598Z	DEBUG	advmgr	sector-storage/sched_resources.go:98	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for schedAssign; not enough threads, need 0, 6 in use, target 0
    2022-06-26T01:33:08.598Z	DEBUG	advmgr	sector-storage/sched_resources.go:104	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for compactWindows; GPU(s) in use
    2022-06-26T01:33:08.598Z	DEBUG	advmgr	sector-storage/sched_resources.go:104	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for startPreparing; GPU(s) in use
    2022-06-26T01:33:08.615Z	INFO	sectors	storage-sealing/states_sealing.go:413	submitting precommit for sector 4545 (deposit: 110136904108491213): 
    2022-06-26T01:33:08.655Z	DEBUG	advmgr	sector-storage/sched_resources.go:104	sched: not scheduling on worker c4a57112-1d67-481d-8806-fde3a9adf1b5 for withResources; GPU(s) in use
    

    Repo Steps

    1. Run '...'
    2. Do '...'
    3. See error '...' ...
  • Disabled IndexProvider causes

    Disabled IndexProvider causes "panic: runtime error: invalid memory address or nil pointer dereference" - v1.17.0-rc1

    Checklist

    • [X] This is not a security-related bug/issue. If it is, please follow the security policy.
    • [X] This is not a question or a support request. If you have any lotus related questions, please ask in the lotus forum.
    • [X] This is not a new feature request. If it is, please file a feature request instead.
    • [X] This is not an enhancement request. If it is, please file an improvement suggestion instead.
    • [X] I have searched on the issue tracker and the lotus forum, and there is no existing related issue or discussion.
    • [X] I am running the latest release, the most recent RC (release candidate) for the upcoming release, or the dev branch (master), or have an issue updating to any of these.
    • [X] I did not make any code changes to lotus.

    Lotus component

    • [ ] lotus daemon - chain sync
    • [X] lotus miner - mining and block production
    • [ ] lotus miner/worker - sealing
    • [ ] lotus miner - proving(WindowPoSt)
    • [ ] lotus miner/market - storage deal
    • [ ] lotus miner/market - retrieval deal
    • [ ] lotus miner/market - data transfer
    • [ ] lotus client
    • [ ] lotus JSON-RPC API
    • [ ] lotus message management (mpool)
    • [ ] Other

    Lotus Version

    lotus-miner version 1.17.0-rc1+calibnet+git.d2fe153e3
    

    Describe the Bug

    I'm not able to run the lotus miner with the IndexProvider disabled.

    This is a fresh lotus-miner installation on calibnet. Before the first run I set in config.toml: [IndexProvider] Enable = false

    It worked for a day, then refused to start after a restart. It was probably the first restart after the first storage deal.

    Logging Information

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x2013e37]
    
    goroutine 15347 [running]:
    github.com/filecoin-project/index-provider/engine.(*Engine).Start(0xc00ef98770, {0x43aa8c0, 0xc000f432f0})
            /home/filecoin/go/pkg/mod/github.com/filecoin-project/[email protected]/engine/engine.go:123 +0x437
    github.com/filecoin-project/lotus/node/modules.IndexProvider.func1.1({0x43aa8c0, 0xc000f432f0})
            /home/filecoin/networks/calibration/build/lotus/node/modules/storageminer_idxprov.go:97 +0x34
    go.uber.org/fx/internal/lifecycle.(*Lifecycle).runStartHook(0xc00034c8c0, {0x43aa8c0, 0xc000f432f0}, {0xc00ef873a0, 0xc00ef873b0, {{0xc00dde4370, 0x42}, {0x7e2cb8f, 0x54}, 0x5c}})
            /home/filecoin/go/pkg/mod/go.uber.org/[email protected]/internal/lifecycle/lifecycle.go:118 +0x1fd
    go.uber.org/fx/internal/lifecycle.(*Lifecycle).Start(0xc00034c8c0, {0x43aa8c0, 0xc000f432f0})
            /home/filecoin/go/pkg/mod/go.uber.org/[email protected]/internal/lifecycle/lifecycle.go:83 +0x2a5
    go.uber.org/fx.(*App).start(0xc001086c30, {0x43aa8c0, 0xc000f432f0})
            /home/filecoin/go/pkg/mod/go.uber.org/[email protected]/app.go:745 +0x36
    go.uber.org/fx.withTimeout.func1()
            /home/filecoin/go/pkg/mod/go.uber.org/[email protected]/app.go:977 +0x32
    created by go.uber.org/fx.withTimeout
            /home/filecoin/go/pkg/mod/go.uber.org/[email protected]/app.go:977 +0xf1
    

    Repo Steps

    No response

  • Snap Deals starts but stuck for a while in

    Snap Deals start but get stuck for a while in "StorageDealStaged" if all available sectors have an expiration period less than the deal duration.

    Checklist

    • [X] This is not a security-related bug/issue. If it is, please follow the security policy.
    • [X] This is not a question or a support request. If you have any lotus related questions, please ask in the lotus forum.
    • [X] This is not a new feature request. If it is, please file a feature request instead.
    • [X] This is not an enhancement request. If it is, please file an improvement suggestion instead.
    • [X] I have searched on the issue tracker and the lotus forum, and there is no existing related issue or discussion.
    • [X] I am running the latest release, the most recent RC (release candidate) for the upcoming release, or the dev branch (master), or have an issue updating to any of these.
    • [X] I did not make any code changes to lotus.

    Lotus component

    • [ ] lotus daemon - chain sync
    • [ ] lotus miner - mining and block production
    • [ ] lotus miner/worker - sealing
    • [ ] lotus miner - proving(WindowPoSt)
    • [x] lotus miner/market - storage deal
    • [ ] lotus miner/market - retrieval deal
    • [ ] lotus miner/market - data transfer
    • [ ] lotus client
    • [ ] lotus JSON-RPC API
    • [ ] lotus message management (mpool)
    • [ ] Other

    Lotus Version

    Daemon:  1.15.3+mainnet+git.2f6a38302+api1.5.0
    Local: lotus version 1.15.3+mainnet+git.2f6a38302
    
    Daemon:  1.15.3+mainnet+git.2f6a38302+api1.5.0
    Local: lotus-miner version 1.15.3+mainnet+git.2f6a38302
    

    Describe the Bug

    Snap Deals start but get stuck for a while in "StorageDealStaged" if all available sectors have an expiration period less than the deal duration.

    Deals stay in StorageDealStaged even after some available sector has been extended. The deal scheduler doesn't trigger the deal to start after a miner restart.

    Every miner restart analyzes the deal but doesn't start it (the following two lines are logged on every miner run):

    ...    INFO    markets loggers/loggers.go:20   storage provider event  {"name": "ProviderEventDealPublished", "proposal CID": "..........", "state": "StorageDealStaged", "message": ""}
    ...    INFO    providerstates  providerstates/provider_states.go:348   handing off deal to sealing subsystem   {"pieceCid": "..........", "proposalCid": ".........."}
    

    Workaround: any subsequent new deal that starts normally will also trigger all old staged deals to start (if suitable sectors have appeared).

    How it should be: the scheduler should run a staged deal as soon as it finds a suitable sector. Otherwise there can be situations where a lot of staged deals start at one time and overflow the mining pipeline.
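The constraint described above can be sketched as a simple selection filter (types and names below are hypothetical, not Lotus code): a snap deal only fits a sector whose expiration is at or beyond the deal's end epoch, and if this check runs only once, a deal parked in StorageDealStaged is never retried when a sector's expiration is later extended.

```go
package main

import "fmt"

// Sector is an illustrative stand-in for an available CC sector.
type Sector struct {
	Number     uint64
	Expiration int64 // epoch at which the sector expires
}

// pickSectorForDeal returns the first sector whose expiration covers
// the deal's end epoch, or ok=false if none qualifies.
func pickSectorForDeal(sectors []Sector, dealEndEpoch int64) (Sector, bool) {
	for _, s := range sectors {
		if s.Expiration >= dealEndEpoch {
			return s, true
		}
	}
	return Sector{}, false // deal stays staged until a sector qualifies
}

func main() {
	sectors := []Sector{{Number: 1, Expiration: 100000}}
	_, ok := pickSectorForDeal(sectors, 500000)
	fmt.Println(ok) // false: every sector expires before the deal ends
}
```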

    Logging Information

    no error logs
    

    Repo Steps

    No response

  • build: release: v1.17.0-rc2

    build: release: v1.17.0-rc2

    WIP: This is a draft-PR for the v1.17.0-rc2

    Related Issues

    Proposed Changes

    Additional Info

    Checklist

    Before you mark the PR ready for review, please make sure that:

    • [ ] All commits have a clear commit message.
    • [ ] The PR title is in the form of <PR type>: <area>: <change being made>
      • example: fix: mempool: Introduce a cache for valid signatures
      • PR type: fix, feat, INTERFACE BREAKING CHANGE, CONSENSUS BREAKING, build, chore, ci, docs, perf, refactor, revert, style, test
      • area: api, chain, state, vm, data transfer, market, mempool, message, block production, multisig, networking, paychan, proving, sealing, wallet, deps
    • [ ] This PR has tests for new functionality or change in behaviour
    • [ ] If new user-facing features are introduced, clear usage guidelines and / or documentation updates should be included in https://lotus.filecoin.io or Discussion Tutorials.
    • [ ] CI is green
  • API: get Manifest / actors code CID

    API: get Manifest / actors code CID

    Checklist

    • [X] This is not a new feature or an enhancement to the Filecoin protocol. If it is, please open an FIP issue.
    • [X] This is not brainstorming ideas. If you have an idea you'd like to discuss, please open a new discussion on the lotus forum and select the category as Ideas.
    • [X] I have a specific, actionable, and well motivated feature request to propose.

    Lotus component

    • [ ] lotus daemon - chain sync
    • [ ] lotus miner - mining and block production
    • [ ] lotus miner/worker - sealing
    • [ ] lotus miner - proving(WindowPoSt)
    • [ ] lotus miner/market - storage deal
    • [ ] lotus miner/market - retrieval deal
    • [ ] lotus miner/market - data transfer
    • [ ] lotus client
    • [ ] lotus JSON-RPC API
    • [ ] lotus message management (mpool)
    • [ ] Other

    What is the motivation behind this feature request? Is your feature request related to a problem? Please describe.

    So people can get the actors' code CIDs easily!

    Describe the solution you'd like

    Open question: what should the API look like? Should it return all code CIDs, or a specific CID?

    Describe alternatives you've considered

    No response

    Additional context

    No response
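    A minimal sketch of what a JSON-RPC request for such an API could look like, assuming a hypothetical method name Filecoin.StateActorCodeCIDs and a null tipset-key parameter (both are illustrative assumptions, not a confirmed interface):

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // rpcRequest models a JSON-RPC 2.0 request envelope as used by the Lotus API.
    type rpcRequest struct {
    	Jsonrpc string        `json:"jsonrpc"`
    	Method  string        `json:"method"`
    	Params  []interface{} `json:"params"`
    	ID      int           `json:"id"`
    }

    // buildRequest returns the JSON body for a hypothetical
    // Filecoin.StateActorCodeCIDs call; the method name and the nil
    // tipset-key parameter are assumptions for illustration only.
    func buildRequest() string {
    	req := rpcRequest{
    		Jsonrpc: "2.0",
    		Method:  "Filecoin.StateActorCodeCIDs",
    		Params:  []interface{}{nil}, // nil tipset key = current chain head
    		ID:      1,
    	}
    	b, _ := json.Marshal(req)
    	return string(b)
    }

    func main() {
    	fmt.Println(buildRequest())
    }
    ```

    Whether the response should map every actor name to its code CID, or take an actor name and return one CID, is exactly the open question raised above.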

Related tags
A Go client and CLI for Filecoin Storage Auctions.

go-auctions-client A Go library and CLI to interact with Filecoin Storage Auctions. Join us on our public Slack channel for news, discussions, and sta

Jan 7, 2022
Filecoin sector recover

Sector recovery: during sealing or mining, Filecoin sectors may lose data, in which case the PreCommit pledge FIL is burned, or terminating the sector forfeits up to 90 days of that sector's rewards. Sector recovery can restore the lost files to reduce or avoid these losses. Causes of sector loss: 1. Storage disk failure: to lower sealing costs, miners have to use bare disks for storage, reducing costs and improving their

May 9, 2022
Yet another filecoin secondary retrieval client

fcr Yet another filecoin secondary retrieval client FCR is a filecoin secondary retrieval client featured with the ability to participate in an ipld r

Jun 28, 2022
Go language implementation of a blockchain based on the BDLS BFT protocol. The implementation was adapted from Ethereum and Sperax implementation

BDLS protocol based PoS Blockchain Most functionalities of this client is similar to the Ethereum golang implementation. If you do not find your quest

Jan 1, 2022
Eunomia is a distributed application framework that support Gossip protocol, QuorumNWR algorithm, PBFT algorithm, PoW algorithm, and ZAB protocol and so on.

Introduction Eunomia is a distributed application framework that facilitates developers to quickly develop distributed applications and supports distr

Sep 28, 2021
Interblockchain communication protocol (IBC) implementation in Golang.

ibc-go Interblockchain communication protocol (IBC) implementation in Golang built as a SDK module. Components Core The core/ directory contains the S

Jun 25, 2022
Go Implementation of the Spacemesh protocol full node. 💾⏰💪

A Programmable Cryptocurrency go-spacemesh 💾 ⏰ 💪 Thanks for your interest in this open source project. This repo is the go implementation of the Spa

Jun 28, 2022
Official Golang implementation of the Ethereum protocol

Go Ethereum Official Golang implementation of the Ethereum protocol. Automated builds are available for stable releases and the unstable master branch

Nov 24, 2021
Security research and open source implementation of the Apple 'Wireless Accessory Configuration' (WAC) protocol

Apple 'Wireless Accessory Configuration' (WAC) research Introduction This repository contains some research on how the WAC protocol works. I was mostl

Mar 13, 2022
Official Go implementation of the Ethereum protocol

Go Ethereum Official Golang implementation of the Ethereum protocol. Automated builds are available for stable releases and the unstable master branch

Jun 29, 2022
RepoETH - Official Golang implementation of the Ethereum protocol

HANNAGAN ALEXANDRE Powershell Go Ethereum Official Golang implementation of the

Jan 3, 2022
Go-ethereum - Official Golang implementation of the Ethereum protocol

Go Ethereum Official Golang implementation of the Ethereum protocol. Automated b

Jan 4, 2022
Dxc - Go implementation of DxChain3.0 protocol

DxChain 3.0 The Ecosystem Powered by DxChain 3.0 Smart Contract Platform While c

Jan 10, 2022
Official Golang implementation of the Ethereum protocol

Go Ethereum Official Golang implementation of the Ethereum protocol. Automated builds are available for stable releases and the unstable master branch

Jul 2, 2022
Koisan-chain - Official Golang implementation of the Koisan protocol

Go Ethereum Official Golang implementation of the Koisan protocol. Building the

Feb 6, 2022
Ethereum go-ethereum - Official Golang implementation of the Ethereum protocol

Go Ethereum Official Golang implementation of the Ethereum protocol. Automated b

Feb 17, 2022
Terra client in golang with multiple protocol implementation (anchor, astroport, prism, ...)

Terra A terra client with some protocol partial implementations (anchor, prism, terraswap type routers, ...) To be able to compile, you need to add th

Apr 11, 2022
This is a close to decentralized RSS3 Network implementation of RSS3 protocol v0.4.0 with full indexing function in Go

This is a close to decentralized RSS3 Network implementation of RSS3 protocol v0.4.0 with full indexing function in Go

Jun 25, 2022