Disk Usage/Free Utility - a better 'df' alternative

duf

Disk Usage/Free Utility (Linux, BSD, macOS & Windows)

Features

  • User-friendly, colorful output
  • Adjusts to your terminal's width
  • Sorts the results according to your needs
  • Groups & filters devices
  • Can conveniently output JSON

Installation

Packages

Linux

  • Arch Linux: duf
  • Nix: nix-env -iA nixpkgs.duf
  • Snap: sudo snap install duf-utility (snapcraft.io)
  • Packages in Alpine, Debian & RPM formats

BSD

  • FreeBSD: pkg install duf

macOS

  • with Homebrew: brew install duf
  • with MacPorts: sudo port selfupdate && sudo port install duf

Windows

  • with scoop: scoop install duf

Android

  • Android (via termux): pkg install duf

Binaries

  • Binaries for Linux, FreeBSD, OpenBSD, macOS, Windows

From source

Make sure you have a working Go environment (Go 1.12 or higher is required). See the install instructions.

Compiling duf is easy; simply run:

git clone https://github.com/muesli/duf.git
cd duf
go build

Usage

You can simply start duf without any command-line arguments:

duf

If you supply arguments, duf will only list specific devices & mount points:

duf /home /some/file

If you want to list everything (including pseudo, duplicate, inaccessible file systems):

duf --all

You can show and hide specific tables:

duf --only local,network,fuse,special,loops,binds
duf --hide local,network,fuse,special,loops,binds

You can also show and hide specific filesystems:

duf --only-fs tmpfs,vfat
duf --hide-fs tmpfs,vfat

Sort the output:

duf --sort size

Valid keys are: mountpoint, size, used, avail, usage, inodes, inodes_used, inodes_avail, inodes_usage, type, filesystem.

Show or hide specific columns:

duf --output mountpoint,size,usage

Valid keys are: mountpoint, size, used, avail, usage, inodes, inodes_used, inodes_avail, inodes_usage, type, filesystem.
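
These flags can be combined; for instance, sorting by usage while showing only a few of the columns listed above:

duf --sort usage --output mountpoint,size,usage,type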

List inode information instead of block usage:

duf --inodes

If duf doesn't detect your terminal's colors correctly, you can set a theme:

duf --theme light

If you prefer your output as JSON:

duf --json
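
The JSON output is handy for scripting. For example, it can be piped into jq (the top-level array and the mount_point field name below are assumptions about duf's JSON schema; check the actual output first):

duf --json | jq '.[].mount_point'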

Troubleshooting

Users of oh-my-zsh should be aware that it already defines an alias called duf, which you will have to remove in order to use duf:

unalias duf
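
To make this permanent, you can add the unalias to your shell configuration after oh-my-zsh has been loaded (a sketch, assuming the default ~/.zshrc layout where oh-my-zsh is sourced near the top):

# at the end of ~/.zshrc, after oh-my-zsh is sourced
unalias duf 2>/dev/null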

Owner

Christian Muehlhaeuser
Geek, Gopher, Software Developer, Maker, Opensource Advocate, Tech Enthusiast, Photographer, Board and Card Gamer

Comments
  • Windows Support Roadmap

    Windows Support Roadmap

    This issue is a tracker for Windows support development.


    Todo

    Done

    • [x] Support Windows local devices #58
    • [x] Support network attached devices, such as SMB #58
    • [x] Support block attributes in Mount struct (which should be equivalent to "cluster" for Windows) #58
    • [x] Recognize Windows Sandbox device as a special device #58
    • [x] Support for code pages other than 437(United States) or 65001(UTF-8) #63
  • Weird input to next command line

    Weird input to next command line

    I just updated duf from 0.4 to 0.5 and noticed that on some computers, after duf has printed its output and the shell prompt is shown, the sequence "11;rgb:0000/0000/0000" is "output" on the terminal as part of the next command line (so the shell takes it as input, making a mess of the next command).

    $ duf ... $ 11;rgb:0000/0000/0000?

    (output cut because even if using the code tags the output is messed up in github)

    Googling for "11;rgb:0000/0000/0000" seems to indicate that some color detection causes the terminal emulator to spit that out. I think it may have to do with the detection of the background color.

    Note that not all terminals do this. I only use lxterminal, but depending on which server I run duf on, it works fine in some and outputs that string in others. It could be a timing issue.

  • ImDisk RAM disk not shown on Windows even with -all

    ImDisk RAM disk not shown on Windows even with -all

    As in the title: the OS SSD, internal HDD and external USB 3.0 HDD are all shown correctly, but this ram disk (formatted to NTFS of course) is not, not even with -all.

    This RAM disk is shown by the df command (I tried the one that came with git bash, the one that is in msys directory of Haskell Platform and the one in a Windows build of busybox, all three show it).

  • "permission denied" errors in output

    Just installed duf-r17.3818846-1 from AUR and ran it as regular user:

    screenshot of duf output

    Running with sudo, the "permission denied" errors are gone, but it should work with regular users as well.

  • Wildcard for excluding filesystems (TimeMachine)

    Wildcard for excluding filesystems (TimeMachine)

    duf 0.6.0 from Homebrew

    duf produces voluminous output on my system, due to a lot of filesystems mapped by TimeMachine → SMB share on my NAS:

    ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
    │ 17 local devices                                                                                                                                                                  │
    ├───────────────────────────────────────────────────────┬────────┬─────────┬────────┬───────────────────────────────┬───────┬───────────────────────────────────────────────────────┤
    │ MOUNTED ON                                            │   SIZE │    USED │  AVAIL │              USE%             │ TYPE  │ FILESYSTEM                                            │
    ├───────────────────────────────────────────────────────┼────────┼─────────┼────────┼───────────────────────────────┼───────┼───────────────────────────────────────────────────────┤
    │ /                                                     │ 931.5G │   14.0G │ 232.3G │ [....................]   1.5% │ apfs  │ /dev/disk1s5s1                                        │
    │ /System/Volumes/Data                                  │ 931.5G │  683.1G │ 232.3G │ [##############......]  73.3% │ apfs  │ /dev/disk1s1                                          │
    │ /System/Volumes/Preboot                               │ 931.5G │  348.2M │ 232.3G │ [....................]   0.0% │ apfs  │ /dev/disk1s2                                          │
    │ /System/Volumes/Update                                │ 931.5G │ 1004.0K │ 232.3G │ [....................]   0.0% │ apfs  │ /dev/disk1s6                                          │
    │ /System/Volumes/VM                                    │ 931.5G │    1.0G │ 232.3G │ [....................]   0.1% │ apfs  │ /dev/disk1s4                                          │
    │ /Volumes/.timemachine/95A20C74-B2D8-4BF2-B753-966BA63 │   3.8T │  259.2G │   3.5T │ [#...................]   6.7% │ apfs  │ com.apple.TimeMachine.2021-02-02-070031.backup@/dev/d │
    │ 23011/2021-02-02-070031.backup                        │        │         │        │                               │       │ isk4s1                                                │
    │ /Volumes/.timemachine/95A20C74-B2D8-4BF2-B753-966BA63 │   3.8T │  266.9G │   3.5T │ [#...................]   6.9% │ apfs  │ com.apple.TimeMachine.2021-02-03-060550.backup@/dev/d │
    │ 23011/2021-02-03-060550.backup                        │        │         │        │                               │       │ isk4s1                                                │
    │ /Volumes/.timemachine/95A20C74-B2D8-4BF2-B753-966BA63 │   3.8T │  270.9G │   3.5T │ [#...................]   7.0% │ apfs  │ com.apple.TimeMachine.2021-02-04-042746.backup@/dev/d │
    │ 23011/2021-02-04-042746.backup                        │        │         │        │                               │       │ isk4s1                                                │
    │ /Volumes/.timemachine/95A20C74-B2D8-4BF2-B753-966BA63 │   3.8T │  270.6G │   3.5T │ [#...................]   7.0% │ apfs  │ com.apple.TimeMachine.2021-02-04-063743.backup@/dev/d │
    │ 23011/2021-02-04-063743.backup                        │        │         │        │                               │       │ isk4s1                                                │
    │ /Volumes/.timemachine/95A20C74-B2D8-4BF2-B753-966BA63 │   3.8T │  272.8G │   3.5T │ [#...................]   7.0% │ apfs  │ com.apple.TimeMachine.2021-02-04-211925.backup@/dev/d │
    │ 23011/2021-02-04-211925.backup                        │        │         │        │                               │       │ isk4s1                                                │
    │ /Volumes/.timemachine/95A20C74-B2D8-4BF2-B753-966BA63 │   3.8T │  272.8G │   3.5T │ [#...................]   7.0% │ apfs  │ com.apple.TimeMachine.2021-02-04-232354.backup@/dev/d │
    │ 23011/2021-02-04-232354.backup                        │        │         │        │                               │       │ isk4s1                                                │
    │ /Volumes/.timemachine/95A20C74-B2D8-4BF2-B753-966BA63 │   3.8T │  273.3G │   3.5T │ [#...................]   7.0% │ apfs  │ com.apple.TimeMachine.2021-02-05-012747.backup@/dev/d │
    │ 23011/2021-02-05-012747.backup                        │        │         │        │                               │       │ isk4s1                                                │
    │ /Volumes/.timemachine/95A20C74-B2D8-4BF2-B753-966BA63 │   3.8T │  273.5G │   3.5T │ [#...................]   7.0% │ apfs  │ com.apple.TimeMachine.2021-02-05-054028.backup@/dev/d │
    │ 23011/2021-02-05-054028.backup                        │        │         │        │                               │       │ isk4s1                                                │
    │ /Volumes/.timemachine/nas2._smb._tcp.local./D283BA05- │   4.0T │  274.2G │   3.7T │ [#...................]   6.7% │ smbfs │ //luke@nas2._smb._tcp.local./TimeMachine              │
    │ 2364-4C54-B0D4-D0133C60A859/TimeMachine               │        │         │        │                               │       │                                                       │
    │ /Volumes/Backups of mini                              │   3.8T │  273.4G │   3.5T │ [#...................]   7.0% │ apfs  │ /dev/disk4s1                                          │
    │ /Volumes/Recovery                                     │ 931.5G │  585.3M │ 232.3G │ [....................]   0.1% │ apfs  │ /dev/disk1s3                                          │
    │ /Volumes/storage                                      │  10.0T │    4.4T │   5.6T │ [########............]  43.9% │ smbfs │ //luke@nas2._smb._tcp.local/storage                   │
    ╰───────────────────────────────────────────────────────┴────────┴─────────┴────────┴───────────────────────────────┴───────┴───────────────────────────────────────────────────────╯
    

    I tried various incarnations of -hide, -hide-fs, -only-fs etc but nothing I did worked to exclude those *.timemachine mounts. Is there any way to do this? The fs names are dynamic due to the snapshot nature of the system, so specifying them explicitly isn't really an option.
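
    Later duf releases appear to document --hide-mp and --only-mp flags that accept quoted wildcards; whether the version you have installed supports them is an assumption worth verifying with duf --help:

    duf --hide-mp '/Volumes/.timemachine/*'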

  • Installation error on Ubuntu - Unable to locate package duf

    Installation error on Ubuntu - Unable to locate package duf

    I am getting the below error on Ubuntu:

    image

    I followed the install command here: https://github.com/muesli/duf#linux

    Do I need to add a repository for the PPA?

  • Update macOS installation instruction

    Update macOS installation instruction

    Hello! Current homebrew command brew install muesli/tap/duf is not working.

    Error: No available formula or cask with the name "muesli/tap/duf".
    ==> Searching for similarly named formulae...
    This similarly named formula was found:
    duf
    To install it, run:
      brew install duf
    

    I've fixed it.

  • ascii graphics for non-UTF terminals

    ascii graphics for non-UTF terminals

    I understand this is not common nowadays, but I still maintain a few servers whose consoles have no UTF-8 support. So, with a little hope, I'd like to ask for support for non-UTF terminals by using ASCII graphics instead of UTF symbols. Thanks in advance.

  • Make duf a snap

    Make duf a snap

    This is an attempt to make duf a snap. So far it kind of works - no errors are thrown - but the output of the regular binary and the snap are different, at least on my machine. Unfortunately, I don't have enough experience to solve that problem myself. Maybe one of you could help out? Here are the two outputs:

    Regular binary

    philipp@laptop ~/P/S/duf ((v0.3.0))> ./duf 
    /run/user/125/gvfs: permission denied
    ╭───────────────────────────────────────────────────────────────────────────────────────╮
    │ 3 local devices                                                                       │
    ├──────────────────┬────────┬────────┬────────┬─────────────────────┬──────┬────────────┤
    │ MOUNTED ON       │   SIZE │   USED │  AVAIL │         USE%        │ TYPE │ FILESYSTEM │
    ├──────────────────┼────────┼────────┼────────┼─────────────────────┼──────┼────────────┤
    │ /                │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /home            │ 732.4G │ 546.1G │ 149.0G │ [#######...]  74.6% │ ext4 │ /dev/sda1  │
    │ /run/timeshift/b │ 182.5G │  17.0G │ 156.2G │ [..........]   9.3% │ ext4 │ /dev/sda2  │
    │ ackup            │        │        │        │                     │      │            │
    ╰──────────────────┴────────┴────────┴────────┴─────────────────────┴──────┴────────────╯
    ╭─────────────────────────────────────────────────────────────────────────────────────────╮
    │ 8 special devices                                                                       │
    ├────────────────┬────────┬────────┬────────┬─────────────────────┬──────────┬────────────┤
    │ MOUNTED ON     │   SIZE │   USED │  AVAIL │         USE%        │ TYPE     │ FILESYSTEM │
    ├────────────────┼────────┼────────┼────────┼─────────────────────┼──────────┼────────────┤
    │ /dev           │   3.8G │     0B │   3.8G │                     │ devtmpfs │ udev       │
    │ /dev/shm       │   3.8G │ 395.9M │   3.4G │ [#.........]  10.1% │ tmpfs    │ tmpfs      │
    │ /run           │ 783.8M │   2.3M │ 781.5M │ [..........]   0.3% │ tmpfs    │ tmpfs      │
    │ /run/lock      │   5.0M │   4.0K │   5.0M │ [..........]   0.1% │ tmpfs    │ tmpfs      │
    │ /run/snapd/ns  │ 783.8M │   2.3M │ 781.5M │ [..........]   0.3% │ tmpfs    │ tmpfs      │
    │ /run/user/1000 │ 783.8M │  80.0K │ 783.7M │ [..........]   0.0% │ tmpfs    │ tmpfs      │
    │ /run/user/125  │ 783.8M │  16.0K │ 783.8M │ [..........]   0.0% │ tmpfs    │ tmpfs      │
    │ /sys/fs/cgroup │   3.8G │     0B │   3.8G │                     │ tmpfs    │ tmpfs      │
    ╰────────────────┴────────┴────────┴────────┴─────────────────────┴──────────┴────────────╯
    
    

    Snap

    philipp@laptop ~/P/S/duf ((v0.3.0))> duf
    /tmp/.mount_Nextclw6d1QD: no such file or directory
    /var/lib/snapd/hostfs/run/user/125/gvfs: permission denied
    /run/user/125/gvfs: permission denied
    ╭───────────────────────────────────────────────────────────────────────────────────────╮
    │ 21 local devices                                                                      │
    ├──────────────────┬────────┬────────┬────────┬─────────────────────┬──────┬────────────┤
    │ MOUNTED ON       │   SIZE │   USED │  AVAIL │         USE%        │ TYPE │ FILESYSTEM │
    ├──────────────────┼────────┼────────┼────────┼─────────────────────┼──────┼────────────┤
    │ /etc             │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /home            │ 732.4G │ 546.1G │ 149.0G │ [#######...]  74.6% │ ext4 │ /dev/sda1  │
    │ /lib/firmware    │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /lib/modules     │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /media           │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /mnt             │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /root            │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /run/timeshift/b │ 182.5G │  17.0G │ 156.2G │ [..........]   9.3% │ ext4 │ /dev/sda2  │
    │ ackup            │        │        │        │                     │      │            │
    │ /snap            │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /tmp             │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /tmp             │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /usr/lib/snapd   │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /usr/src         │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /var/lib/snapd   │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /var/lib/snapd/h │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ ostfs            │        │        │        │                     │      │            │
    │ /var/lib/snapd/h │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ ostfs            │        │        │        │                     │      │            │
    │ /var/lib/snapd/h │ 732.4G │ 546.1G │ 149.0G │ [#######...]  74.6% │ ext4 │ /dev/sda1  │
    │ ostfs/home       │        │        │        │                     │      │            │
    │ /var/lib/snapd/h │ 182.5G │  17.0G │ 156.2G │ [..........]   9.3% │ ext4 │ /dev/sda2  │
    │ ostfs/run/timesh │        │        │        │                     │      │            │
    │ ift/backup       │        │        │        │                     │      │            │
    │ /var/log         │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /var/snap        │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    │ /var/tmp         │  63.7G │  26.1G │  34.2G │ [####......]  41.1% │ ext4 │ /dev/sdb1  │
    ╰──────────────────┴────────┴────────┴────────┴─────────────────────┴──────┴────────────╯
    ╭───────────────────────────────────────────────────────────────────────────────────────────╮
    │ 18 special devices                                                                        │
    ├──────────────────┬────────┬────────┬────────┬─────────────────────┬──────────┬────────────┤
    │ MOUNTED ON       │   SIZE │   USED │  AVAIL │         USE%        │ TYPE     │ FILESYSTEM │
    ├──────────────────┼────────┼────────┼────────┼─────────────────────┼──────────┼────────────┤
    │ /dev             │   3.8G │     0B │   3.8G │                     │ devtmpfs │ udev       │
    │ /dev/shm         │   3.8G │ 405.0M │   3.4G │ [#.........]  10.3% │ tmpfs    │ tmpfs      │
    │ /run             │ 783.8M │   2.3M │ 781.5M │ [..........]   0.3% │ tmpfs    │ tmpfs      │
    │ /run/lock        │   5.0M │   4.0K │   5.0M │ [..........]   0.1% │ tmpfs    │ tmpfs      │
    │ /run/netns       │ 783.8M │   2.3M │ 781.5M │ [..........]   0.3% │ tmpfs    │ tmpfs      │
    │ /run/snapd/ns    │ 783.8M │   2.3M │ 781.5M │ [..........]   0.3% │ tmpfs    │ tmpfs      │
    │ /run/user/1000   │ 783.8M │  80.0K │ 783.7M │ [..........]   0.0% │ tmpfs    │ tmpfs      │
    │ /run/user/125    │ 783.8M │  16.0K │ 783.8M │ [..........]   0.0% │ tmpfs    │ tmpfs      │
    │ /sys/fs/cgroup   │   3.8G │     0B │   3.8G │                     │ tmpfs    │ tmpfs      │
    │ /var/lib/snapd/h │ 783.8M │   2.3M │ 781.5M │ [..........]   0.3% │ tmpfs    │ tmpfs      │
    │ ostfs/run        │        │        │        │                     │          │            │
    │ /var/lib/snapd/h │   5.0M │   4.0K │   5.0M │ [..........]   0.1% │ tmpfs    │ tmpfs      │
    │ ostfs/run/lock   │        │        │        │                     │          │            │
    │ /var/lib/snapd/h │ 783.8M │   2.3M │ 781.5M │ [..........]   0.3% │ tmpfs    │ tmpfs      │
    │ ostfs/run/snapd/ │        │        │        │                     │          │            │
    │ ns               │        │        │        │                     │          │            │
    │ /var/lib/snapd/h │ 783.8M │  80.0K │ 783.7M │ [..........]   0.0% │ tmpfs    │ tmpfs      │
    │ ostfs/run/user/1 │        │        │        │                     │          │            │
    │ 000              │        │        │        │                     │          │            │
    │ /var/lib/snapd/h │ 783.8M │  16.0K │ 783.8M │ [..........]   0.0% │ tmpfs    │ tmpfs      │
    │ ostfs/run/user/1 │        │        │        │                     │          │            │
    │ 25               │        │        │        │                     │          │            │
    │ /var/lib/snapd/l │   3.8G │     0B │   3.8G │                     │ tmpfs    │ none       │
    │ ib/gl            │        │        │        │                     │          │            │
    │ /var/lib/snapd/l │   3.8G │     0B │   3.8G │                     │ tmpfs    │ none       │
    │ ib/gl32          │        │        │        │                     │          │            │
    │ /var/lib/snapd/l │   3.8G │     0B │   3.8G │                     │ tmpfs    │ none       │
    │ ib/glvnd         │        │        │        │                     │          │            │
    │ /var/lib/snapd/l │   3.8G │     0B │   3.8G │                     │ tmpfs    │ none       │
    │ ib/vulkan        │        │        │        │                     │          │            │
    ╰──────────────────┴────────┴────────┴────────┴─────────────────────┴──────────┴────────────╯
    
    

  • Cannot build duf

    Cannot build duf

    I get this error when I try to build/install the latest version of duf.

    $ go install github.com/muesli/duf@latest
    go install: github.com/muesli/duf@latest (in github.com/muesli/[email protected]):
    	The go.mod file for the module providing named packages contains one or
    	more replace directives. It must not contain directives that would cause
    	it to be interpreted differently than if it were the main module.
    
    $ go version
    go version go1.17.6 darwin/amd64
    
    $ sw_vers
    ProductName:	macOS
    ProductVersion:	12.1
    BuildVersion:	21C52
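
    A common workaround for this Go restriction (replace directives are only honored in the main module) is to build from a clone instead, roughly:

    git clone https://github.com/muesli/duf.git
    cd duf
    go install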
    
  • Fix mountinfo parsing #153

    Fix mountinfo parsing #153

    Quick patch to fix #153: mount -t tmpfs - /tmp resulted in a "found invalid line" error.

    The line is now split directly with strings.Fields instead of relying on field "(8) separator: marks the end of the optional fields".

    Todo:

    • ~~Spaces into "optional fields"~~ (The Linux kernel test does not support space on it https://github.com/torvalds/linux/blob/e55f0c439a2681a3c299bedd99ebe998049fa508/tools/testing/selftests/move_mount_set_group/move_mount_set_group_test.c#L124)
    • ~~Zero optional fields~~ (It contains at least the type of mount (ro or rw) https://man7.org/linux/man-pages/man5/fstab.5.html https://man7.org/linux/man-pages/man5/proc.5.html)
    • ~~Non zero super options~~
    • ~~Optional fields~~ ((7) optional fields: zero or more fields of the form "tag[:value]")

    Signed-off-by: Adrien Kara [email protected]
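
    An entry like the one that triggered this can be reproduced for testing with a mount whose source is a literal "-" (a sketch; the target directory is made up):

    mkdir -p /tmp/dash-src-test
    sudo mount -t tmpfs - /tmp/dash-src-test
    grep ' /tmp/dash-src-test ' /proc/self/mountinfo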

  • Print sizes with Binary prefixes

    Print sizes with Binary prefixes

    Duf lists sizes with SI prefixes (G, M, K), but the values are actually binary-prefix (Gi, Mi, Ki) sizes.

    4MB  -> 4000KB  -- As in SI prefix
    4MiB -> 4096KiB -- As in Binary prefix

    Duf prints a size of 4MiB (i.e. 4096KiB) as 4MB.

    Proposal: Duf should print values with Binary prefixes
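
    GNU coreutils' numfmt can illustrate the difference for 4 MiB (4194304 bytes); the expected output is noted in the comments:

    numfmt --to=si --suffix=B 4194304     # 4.2MB  (SI, powers of 1000)
    numfmt --to=iec-i --suffix=B 4194304  # 4.0MiB (binary, powers of 1024)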

  • DUF use percentage completely wrong for BTRFS RAID

    DUF use percentage completely wrong for BTRFS RAID

    Hi,

    When using DUF on BTRFS RAID filesystem, it does, as plain df does, display as “total size” the sum of all the RAID components (and not the resulting usable space on the RAID filesystem).

    It displays the “used” and “available” fields right, but the calculated percentage is plain wrong, as it is calculated against the “total size”, which is the sum of the RAID components, and not against the actually usable space.

    By some magic, plain df has it right, but duf has it wrong, and the difference makes the “USE %” value and graph both useless, and worse, misleading.

    Please see attached screenshots that better show the discrepancy.

    (screenshots attached: duf output and btrfs usage output)
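
    On a btrfs RAID volume the discrepancy can be checked by comparing df with btrfs's own accounting (requires btrfs-progs; /mnt/raid is a made-up mount point):

    df -h /mnt/raid
    sudo btrfs filesystem usage /mnt/raid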

  • Include swap in output

    Include swap in output

    Similar to Linux swapon --show --output-all

    NAME     TYPE      SIZE  USED PRIO UUID                                 LABEL
    /dev/zd0 partition   4G 34.8M   -2 facfbd2f-eb53-47ea-a90d-e312e875d200
    
  • Add option '--combine' to display all mount points in one table

    Add option '--combine' to display all mount points in one table

    I have added an additional option to display all mount points in one unified (combined) table.

    Screenshot: Screenshot--2022-12-15--10-55

    PS: This is my very first PR in Go. Hope you like it.

  • build(deps): bump goreleaser/goreleaser-action from 3 to 4

    build(deps): bump goreleaser/goreleaser-action from 3 to 4

    Bumps goreleaser/goreleaser-action from 3 to 4.

    Release notes

    Sourced from goreleaser/goreleaser-action's releases.

    v4.0.0

    What's Changed

    Full Changelog: https://github.com/goreleaser/goreleaser-action/compare/v3...v4.0.0

    v3.2.0

    What's Changed

    • chore: remove workaround for setOutput by @crazy-max (#374)
    • chore(deps): bump @actions/core from 1.9.1 to 1.10.0 (#372)
    • chore(deps): bump yargs from 17.5.1 to 17.6.0 (#373)

    Full Changelog: https://github.com/goreleaser/goreleaser-action/compare/v3.1.0...v3.2.0

    v3.1.0

    What's Changed

    • fix: dist resolution from config file by @crazy-max (#369)
    • ci: fix workflow by @crazy-max (#357)
    • docs: bump actions to latest major by @crazy-max (#356)
    • chore(deps): bump crazy-max/ghaction-import-gpg from 4 to 5 (#360)
    • chore(deps): bump ghaction-import-gpg to v5 (#359)
    • chore(deps): bump @actions/core from 1.6.0 to 1.8.2 (#358)
    • chore(deps): bump @actions/core from 1.8.2 to 1.9.1 (#367)

    Full Changelog: https://github.com/goreleaser/goreleaser-action/compare/v3.0.0...v3.1.0

    Commits
    • 8f67e59 chore: regenerate
    • 78df308 chore(deps): bump minimatch from 3.0.4 to 3.1.2 (#383)
    • 66134d9 Merge remote-tracking branch 'origin/master' into flarco/master
    • 3c08cfd chore(deps): bump yargs from 17.6.0 to 17.6.2
    • 5dc579b docs: add example when using workdir along with upload-artifact (#366)
    • 3b7d1ba feat!: remove auto-snapshot on dirty tag (#382)
    • 23e0ed5 fix: do not override GORELEASER_CURRENT_TAG (#370)
    • 1315dab update build
    • b60ea88 improve install
    • 4d25ab4 Update goreleaser.ts
    • See full diff in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • build(deps): bump actions/setup-go from 3.4.0 to 3.5.0

    build(deps): bump actions/setup-go from 3.4.0 to 3.5.0

    Bumps actions/setup-go from 3.4.0 to 3.5.0.

    Release notes

    Sourced from actions/setup-go's releases.

    Add support for stable and oldstable aliases

    In the scope of this release, we introduce aliases for the go-version input. The stable alias installs the latest stable version of Go. The oldstable alias installs the previous latest minor release (if stable is 1.19.x, then oldstable is 1.18.x).

    Stable

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: 'stable'
      - run: go run hello.go
    

    OldStable

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: 'oldstable'
      - run: go run hello.go
    