Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON

What is Miller?

Miller is like awk, sed, cut, join, and sort for data formats such as CSV, TSV, JSON, JSON Lines, and positionally-indexed text.

What can Miller do for me?

With Miller, you get to use named fields without needing to count positional indices, using familiar formats such as CSV, TSV, JSON, JSON Lines, and positionally-indexed text. Then, on the fly, you can add new fields which are functions of existing fields, drop fields, sort, aggregate statistically, pretty-print, and more.
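
For instance, a minimal sketch (assuming a CSV file data.csv with price and quantity columns; the names here are illustrative):

```sh
# Derive a new field from existing ones, then sort descending by it,
# reading CSV and pretty-printing the result.
mlr --icsv --opprint put '$total = $price * $quantity' then sort -nr total data.csv
```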

[cover-art image]

  • Miller operates on key-value-pair data while the familiar Unix tools operate on integer-indexed fields: if the natural data structure for the latter is the array, then Miller's natural data structure is the insertion-ordered hash map.

  • Miller handles a variety of data formats, including but not limited to the familiar CSV, TSV, and JSON/JSON Lines. (Miller can handle positionally-indexed data too!)

In the above image you can see how Miller embraces the common themes of key-value-pair data in a variety of data formats.

Getting started

More documentation links

Installing

There's a good chance you can get Miller pre-built for your system:

Ubuntu, Ubuntu 16.04 LTS, Fedora, Debian, Gentoo, Pro-Linux, Arch Linux, NetBSD, FreeBSD, Anaconda, Homebrew/MacOSX, MacPorts/MacOSX, and Chocolatey.

OS        Installation command
Linux     yum install miller  or  apt-get install miller
Mac       brew install miller  or  port install miller
Windows   choco install miller
See also README-versions.md for a full list of package versions. Note that long-term-support (LTS) releases will likely be on older versions.

See also building from source.


Building from source

  • With make:
    • To build: make. This takes just a few seconds and produces the Miller executable, which is ./mlr (or .\mlr.exe on Windows).
    • To run tests: make check.
    • To install: make install. This installs the executable /usr/local/bin/mlr and manual page /usr/local/share/man/man1/mlr.1 (so you can do man mlr).
    • You can do ./configure --prefix=/some/install/path before make install if you want to install somewhere other than /usr/local.
  • Without make:
    • To build: go build github.com/johnkerl/miller/cmd/mlr.
    • To run tests: go test github.com/johnkerl/miller/internal/pkg/... and mlr regtest.
    • To install: go install github.com/johnkerl/miller/cmd/mlr will install to GOPATH/bin/mlr.
  • See also the doc page on building from source.
  • For more developer information please see README-go-port.md.
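
A typical from-source session, as a sketch of the make-based route above (assumes Go and make are installed):

```sh
git clone https://github.com/johnkerl/miller
cd miller
make               # builds ./mlr
make check         # runs tests
sudo make install  # installs /usr/local/bin/mlr and the man page
```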

License

License: BSD2

Features

  • Miller is multi-purpose: it's useful for data cleaning, data reduction, statistical reporting, devops, system administration, log-file processing, format conversion, and database-query post-processing.

  • You can use Miller to snarf and munge log-file data, including selecting out relevant substreams, then produce CSV format and load that into all-in-memory/data-frame utilities for further statistical and/or graphical processing.

  • Miller complements data-analysis tools such as R, pandas, etc.: you can use Miller to clean and prepare your data. While you can do basic statistics entirely in Miller, its streaming-data feature and single-pass algorithms enable you to reduce very large data sets.

  • Miller complements SQL databases: you can slice, dice, and reformat data on the client side on its way into or out of a database. You can also reap some of the benefits of databases for quick, setup-free one-off tasks when you just need to query some data in disk files in a hurry.

  • Miller also goes beyond the classic Unix tools by stepping fully into our modern, no-SQL world: its essential record-heterogeneity property allows Miller to operate on data where records with different schemas (field names) are interleaved.

  • Miller is streaming: most operations need only a single record in memory at a time, rather than ingesting all input before producing any output. For those operations which require deeper retention (sort, tac, stats1), Miller retains only as much data as needed. This means that whenever functionally possible, you can operate on files which are larger than your system’s available RAM, and you can use Miller in tail -f contexts.

  • Miller is pipe-friendly and interoperates with the Unix toolkit.

  • Miller's I/O formats include tabular pretty-printing, positionally indexed (Unix-toolkit style), CSV, TSV, JSON, JSON Lines, and others.

  • Miller does conversion between formats (see the sketch after this list).

  • Miller's processing is format-aware: e.g. CSV sort and tac keep header lines first.

  • Miller has high-throughput performance on par with the Unix toolkit.

  • Miller is written in portable, modern Go, with zero runtime dependencies. You can download or compile a single binary, scp it to a faraway machine, and expect it to work.
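
To make the format-conversion and record-heterogeneity points above concrete, a small sketch (file names illustrative; the second command uses Miller's default key=value input format):

```sh
# CSV in, JSON out:
mlr --icsv --ojson cat example.csv

# Records with different field sets, interleaved; pretty-print groups them by schema:
printf 'a=1,b=2\nx=3\n' | mlr --opprint cat
```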

What people are saying about Miller

Today I discovered Miller—it's like jq but for CSV: https://t.co/pn5Ni241KM

Also, "Miller complements data-analysis tools such as R, pandas, etc.: you can use Miller to clean and prepare your data." @GreatBlueC @nfmcclure

— Adrien Trouillaud (@adrienjt) September 24, 2020

Underappreciated swiss-army command-line chainsaw.

"Miller is like awk, sed, cut, join, and sort for [...] CSV, TSV, and [...] JSON." https://t.co/TrQqSUK3KK

— Dirk Eddelbuettel (@eddelbuettel) February 28, 2017

Miller looks like a great command line tool for working with CSV data. Sed, awk, cut, join all rolled into one: http://t.co/9BBb6VCZ6Y

— Mike Loukides (@mikeloukides) August 16, 2015

Miller is like sed, awk, cut, join, and sort for name-indexed data such as CSV: http://t.co/1zPbfg6B2W - handy tool!

— Ilya Grigorik (@igrigorik) August 22, 2015

Btw, I think Miller is the best CLI tool to deal with CSV. I used to use this when I need to preprocess too big CSVs to load into R (now we have vroom, so such cases might be rare, though...) https://t.co/kUjrSSGJoT

— Hiroaki Yutani (@yutannihilat_en) April 21, 2020

Miller: a *format-aware* data munging tool By @__jo_ker__ to overcome limitations with *line-aware* workshorses like awk, sed et al https://t.co/LCyPkhYvt9

The project website is a fantastic example of good software documentation!!

— Donny Daniel (@dnnydnl) September 9, 2018

Holy holly data swiss army knife batman! How did no one suggest Miller https://t.co/JGQpmRAZLv for solving database cleaning / ETL issues to me before

Congrats to @__jo_ker__ for amazingly intuitive tool for critical data management tasks! #DataScienceandLaw #ComputationalLaw

— James Miller (@japanlawprof) June 12, 2018

🤯 @__jo_ker__'s Miller easily reads, transforms, + writes all sorts of tabular data. It's standalone, fast, and built for streaming data (operating on one line at a time, so you can work on files larger than memory).

And the docs are dream. I've been reading them all morning! https://t.co/Be2pGPZK6t

— Benjamin Wolfe (he/him) (@BenjaminWolfe) September 9, 2021

Contributors

Thanks to all the fine people who help make Miller better (emoji key):


  • Andrea Borruso: 🤔 🎨
  • Shaun Jackman: 🤔
  • Fred Trotter: 🤔 🎨
  • komosa: 🤔
  • jungle-boogie: 🤔
  • Thomas Klausner: 🚇
  • Stephen Kitt: 📦
  • Leah Neukirchen: 🤔
  • Luigi Baldoni: 📦
  • Hiroaki Yutani: 🤔
  • Daniel M. Drucker: 🤔
  • Nikos Alexandris: 🤔
  • kundeng: 📦
  • Victor Sergienko: 📦
  • Adrian Ho: 🎨
  • zachp: 📦
  • David Selassie: 🤔
  • Joel Parker Henderson: 🤔
  • Michel Ace: 🤔
  • Matus Goljer: 🤔
  • Richard Patel: 📦
  • Jakub Podlaha: 🎨
  • Miodrag Milić: 📦
  • Derek Mahar: 🤔
  • spmundi: 🤔
  • Peter Körner: 🛡️
  • rubyFeedback: 🤔
  • rbolsius: 📦
  • awildturtok: 🤔
  • agguser: 🤔
  • jganong: 🤔
  • Fulvio Scapin: 🤔
  • Jordan Torbiak: 🤔
  • Andreas Weber: 🤔
  • vapniks: 📦
  • Zombo: 📦
  • Brian Fulton-Howard: 📦
  • ChCyrill: 🤔
  • Jauder Ho: 💻
  • Paweł Sacawa: 🐛
  • schragge: 📖
  • Jordi: 📖 🤔

This project follows the all-contributors specification. Contributions of any kind are welcome!

Owner

John Kerl (Who: Nerd/dad What: () => {this})
Comments
  • Golang port / Miller 6 tracking issue

    Split out from https://github.com/johnkerl/miller/issues/369. See also https://github.com/johnkerl/miller/blob/master/go/README.md.

    Pre-release/rough-draft docs are at http://johnkerl.org/miller6.

    Things which may change:

    As noted in go/README.md, I want to preserve as much user experience as possible. That said:

    • --jvstack and --jsonx will still be supported as command-line flags, but JSON output will be pretty-printed (like --jvstack) by default.
    • --csvlite will still be different from --csv, as detailed below.
    • emitf and emitp were invented before I had for-loops in the DSL. If people really want to keep these and are using these, I can keep them; but maybe we're better off leaving them behind. Please let me know.
    • LF vs CR/LF line endings will be platform-appropriate using Go's own portability -- Windows files will be written correctly on Windows, and likewise for Linux and MacOS. That said, I don't know if we still need to preserve CR/LF-to-CR/LF even on Linux (line endings which are non-standard for the platform) -- again, please let me know.
    • mlr put -S and mlr put -F will become unnecessary since string-conversion will be done just-in-time as suggested by @gromgit on https://github.com/johnkerl/miller/issues/151.

    Please include here any thoughts you have on the Go port.

  • make failing

    Hello,

    On a GNU/Linux system, a plain make fails:

    make: *** No targets specified and no makefile found.  Stop.
    
    cc --version
    gcc-4.8.real (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4
    Copyright (C) 2013 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    

    Is there a new method to make mlr now?

  • Addition of a build system generator

  • filter then put (regex)

    I'd like to do some kind of regex based parsing, like that:

    mlr filter '$FIELD =~ "([A-Z]+)([0-9]+)" ' then put '$F1  = "\1"; $F2 = "\2" '
    

    How can I do that? I've succeeded with sub() like this, but it's not optimal:

    mlr filter '$FIELD =~ "([A-Z]+)([0-9]+)" ' then put '$F1 = sub($FIELD, "([A-Z]+)([0-9]+)", "\1") '
    

    Is there a shorter way?
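
    One possibility (a sketch, not from the thread): in Miller's DSL, capture groups from a successful =~ match remain available as "\1", "\2", ... in subsequent statements, so the filter-then-put can collapse into a single put:

    ```sh
    # Assign the captures only when the match succeeds.
    mlr put 'if ($FIELD =~ "([A-Z]+)([0-9]+)") { $F1 = "\1"; $F2 = "\2" }'
    ```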

  • supporting double quotes

    I love the idea of Miller. It is clearly a needed tool that is missing from the standard unix toolbox.

    However, you really cannot say you have a tool that is designed to support csv, without supporting csv.

    CSV is a standard file format, and has an RFC: https://tools.ietf.org/html/rfc4180

    Not supporting double quotes is the same thing as saying that you do not support csv, since double quotes are central to the way that the standard handles other characters... comma being just one example. Your tool is young enough that supporting the standard now will make later development much simpler. This will prevent the situation years from now where you have a 'normal mode' and a 'standards mode'. If you make the change now you can just have the one correct mode.

    You have an ambitious work-list, but I would suggest taking a pause and thinking about how you will support the RFC version of the file format.

    People like me (open data advocates) spend a lot of time trying to ensure that organizations that release csv do so under the standard format, rather than releasing unparsable garbage. Having a library like yours that supported the standard too would be a huge boon. I could say things like:

    "See by using the RFC for your data output, all kinds of open tools will work out of the box on your data... like Miller (link)"

    Thank you for working on such a clever tool...

    Regards, -FT

  • Documentation of flatten and split*/join*

    More documentation details: the flatten method does not have a complete description. It explains what it does, but it does not explain what the arguments are; you have to read the examples to grasp the meaning. The usage should be clear from the description itself.

    Similarly, for join* and split*, the different arguments are not clear without looking at the examples. Some assumptions can be made, such as the parameters for joink being the array/map keys and the string to use to make the join. But what about joinkv? Which parameter is the separator of key and value, and which is the separator of records? In fact, the current implementation is counterintuitive: the joink and joinv methods use the second argument for the record separator, while joinkv uses the third argument for that and the second for the key-value separator.
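
    For reference, a quick sketch of the argument order being described (runnable with mlr -n, which takes no input):

    ```sh
    mlr -n put 'end {
      m = {"a": 1, "b": 2};
      print joink(m, ",");       # a,b     -- 2nd arg separates keys
      print joinv(m, ",");       # 1,2     -- 2nd arg separates values
      print joinkv(m, "=", ","); # a=1,b=2 -- 2nd arg separates key from value, 3rd separates pairs
    }'
    ```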

  • Conda build fails with "undefined reference to `mlr_dsl_ParseTrace'"

    When I compile the latest version (5.10.2) without using anaconda, it successfully compiles and runs the unit tests, but when I try and build it as an anaconda package, I get the following error while it runs make:

    /bin/sh ../libtool  --tag=CC   --mode=link $BUILD_PREFIX/bin/x86_64-conda-linux-gnu-cc -Wall -std=gnu99 -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem $PREFIX/include -fdebug-prefix-map=$SRC_DIR=/usr/local/src/conda/miller-5.10.2 -fdebug-prefix-map=$PREFIX=/usr/local/src/conda-prefix -static -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,-rpath,$PREFIX/lib -Wl,-rpath-link,$PREFIX/lib -L$PREFIX/lib -o mlr mlrmain.o cli/libcli.la containers/libcontainers.la stream/libstream.la input/libinput.la dsl/libdsl.la mapping/libmapping.la output/liboutput.la lib/libmlr.la parsing/libdsl.la auxents/libauxents.la -lm
    libtool: link: $BUILD_PREFIX/bin/x86_64-conda-linux-gnu-cc -Wall -std=gnu99 -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem $PREFIX/include -fdebug-prefix-map=$SRC_DIR=/usr/local/src/conda/miller-5.10.2 -fdebug-prefix-map=$PREFIX=/usr/local/src/conda-prefix -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z -Wl,relro -Wl,-z -Wl,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,-rpath -Wl,$PREFIX/lib -Wl,-rpath-link -Wl,$PREFIX/lib -o mlr mlrmain.o  -L$PREFIX/lib cli/.libs/libcli.a containers/.libs/libcontainers.a stream/.libs/libstream.a input/.libs/libinput.a dsl/.libs/libdsl.a mapping/.libs/libmapping.a output/.libs/liboutput.a lib/.libs/libmlr.a parsing/.libs/libdsl.a auxents/.libs/libauxents.a -lm
    /sc/arion/work/fultob01/conda/envs/py3.9/conda-bld/miller_1636495473849/_build_env/bin/../lib/gcc/x86_64-conda-linux-gnu/9.3.0/../../../../x86_64-conda-linux-gnu/bin/ld: parsing/.libs/libdsl.a(mlr_dsl_wrapper.o): in function `mlr_dsl_parse':
    mlr_dsl_wrapper.c:(.text.mlr_dsl_parse+0x103): undefined reference to `mlr_dsl_ParseTrace'
    

    My build script is like so:

    #!/bin/sh
    
    ./configure --prefix=$PREFIX
    make
    make check
    make install
    

    I have made the gcc toolchain, make and flex available. It also fails when I install gcc, make and flex using Anaconda then make manually.

    Do you have any idea what's going on?

  • W32/X64 release please

    Wow, Miller seems like a great command-line tool!

    I would love to use it, but there doesn't seem to be a Windows version yet... Could you make/compile one?

  • Alpine Linux package

    I found miller to be very useful with SRE tasks and often use it in Docker containers.

    Sadly, there doesn't seem to be a package for Alpine available, a distribution very popular with Docker because of its small footprint. So for now I'm stuck with the large debian and ubuntu images.

    Let's fix this: https://wiki.alpinelinux.org/wiki/Creating_an_Alpine_package

  • clang support? / freebsd support?

    Hello,

    How would I go about compiling your program with clang?

    clang -v
    FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
    Target: i386-unknown-freebsd10.2
    Thread model: posix
    Selected GCC installation:
    

    Thanks, Sean

  • Discussion forum

  • Question - different count between "wc -l" and "mlr count"

    Hi, I have 2 files, both in the same format.

    mlr count us_leads_woocommerce_2.csv
    count=423732
    
    cat us_leads_woocommerce_2.csv | wc -l
    423732
    

    As you can see above, the numbers align. But for a different file, same format:

    mlr count us_leads_shopify.csv
    count=305141
    
    cat us_leads_shopify.csv | wc -l
    971200
    

    As you can see, mlr returned 305,141 and wc -l returned 971,200 (which is the correct result). What can cause this discrepancy?
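
    A plausible cause, not confirmed in this thread: RFC-4180 CSV permits embedded newlines inside double-quoted fields, so wc -l counts physical lines while mlr count counts logical records. A self-contained illustration:

    ```sh
    printf 'a,b\n"line one\nline two",x\n' > embedded.csv
    wc -l embedded.csv             # 3 physical lines
    mlr --icsv count embedded.csv  # count=1: one logical record after the header
    ```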

  • Function strftime prints imprecise fractional seconds.

    In Miller 6.5.0, function strftime prints imprecise fractional seconds:

    $ # Milliseconds precision 
    $ echo unix_timestamp=1454176342.303 | mlr put '$date_and_time = strftime($unix_timestamp, "%FT%H:%M:%3S");'
    unix_timestamp=1454176342.303,date_and_time=2016-01-30T17:52:22.302
    

    Expected result: date_and_time=2016-01-30T17:52:22.303

    $ # Microseconds precision
    $ echo unix_timestamp=1454176342.303 | mlr put '$date_and_time = strftime($unix_timestamp, "%FT%H:%M:%6S");'
    unix_timestamp=1454176342.303,date_and_time=2016-01-30T17:52:22.302999
    

    Expected result: date_and_time=2016-01-30T17:52:22.303000

    $ # Nanoseconds precision 
    $ echo unix_timestamp=1454176342.303 | mlr put '$date_and_time = strftime($unix_timestamp, "%FT%H:%M:%9S");'
    unix_timestamp=1454176342.303,date_and_time=2016-01-30T17:52:22.302999973
    

    Expected result: date_and_time=2016-01-30T17:52:22.303000000

    $ mlr --version
    mlr 6.5.0
    

    I might have reported this behaviour in a comment on an earlier issue or discussion related to strftime or strptime, but I can't find the comment. Anyway, I think this behaviour deserves its own separate issue.
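
    A likely explanation, offered as an inference rather than a confirmed diagnosis: the timestamp is held as an IEEE-754 double, and 1454176342.303 has no exact binary representation; the nearest double is about 1454176342.302999973, which the nanosecond formatting exposes, and truncating rather than rounding at lower precisions yields ...302. Any double-based formatter shows the same value:

    ```sh
    printf '%.9f\n' 1454176342.303
    # 1454176342.302999973 on typical IEEE-754 systems
    ```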

  • tail -f and miller

    @johnkerl I'm opening an issue because in your answer you talk about a bug and a regression.

    Thank you

    Discussed in https://github.com/johnkerl/miller/discussions/1117

    Originally posted by aborruso, October 26, 2022: Hi, I have the below input file.

    Using miller 5 I can monitor on screen file change using

    tail -f input.csv | mlr --icsvlite --otsv --ifs "," cat
    

    If I use Miller 6 I have no output on screen.

    If in Miller 5 I change my command to

    mlr --icsvlite --opprint --ifs "," cat input.csv
    

    I have no output.

    Am I doing something wrong? Is there a more correct way to do it? I would like to use Miller to monitor file changes, because I need to print better tail output on screen.

    Thank you

    data,stato,codice_regione
    5,ITA,08
    2,ITA,08
    3,ITA,10
  • Consider autogen of zsh-completion/bash-completion/etc configs

    A next step within Miller would be to have a command-line alternative like mlr --generate-zsh-completions which would create this kind of information automatically.

    Originally posted by @johnkerl in https://github.com/johnkerl/miller/discussions/1124#discussioncomment-4241025

  • CSV to JSON and strings which can be interpreted as numbers

    I've searched the manual in case I've missed an option, and the problem is so trivial that I would be surprised if no one has raised it before:

    echo -e 'a,b\n"hello","004.56"' | mlr --icsv --ojson cat

    miller 6 gives: [{ "a": "hello", "b": 004.56}]

    This removes the quotes which should mark the b-column value as a string, so that when the JSON is read downstream the value is parsed as the floating-point number 4.56.

    Or is there another option to force a column to be a string when outputting as JSON?
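
    Two things worth trying (hedged, since type-inference behavior has varied across Miller 6 releases): cast the one column to a string in the DSL, or, if your build has the infer-none flag, keep every input value as a string:

    ```sh
    # Cast a single column in the DSL:
    echo -e 'a,b\n"hello","004.56"' | mlr --icsv --ojson put '$b = string($b)'

    # If supported by your version, disable numeric inference entirely:
    echo -e 'a,b\n"hello","004.56"' | mlr -S --icsv --ojson cat
    ```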
