aptly - Debian repository management tool


Aptly is a Swiss army knife for Debian repository management.

Documentation is available at http://www.aptly.info/. For support, please use the aptly-discuss mailing list.

Aptly features: ("+" means planned features)

  • make mirrors of remote Debian/Ubuntu repositories, limiting by components/architectures
  • take snapshots of mirrors at any point in time, fixing the state of the repository at that moment
  • publish snapshot as Debian repository, ready to be consumed by apt
  • controlled update of one or more packages in snapshot from upstream mirror, tracking dependencies
  • merge two or more snapshots into one
  • filter repository by search query, pulling dependencies when required
  • publish self-made packages as Debian repositories
  • REST API for remote access
  • mirror repositories "as-is" (without resigning with user's key) (+)
  • support for yum repositories (+)

Current limitations:

  • translations are not supported yet

Download

To install aptly on Debian/Ubuntu, add a new repository to /etc/apt/sources.list:

deb http://repo.aptly.info/ squeeze main

Then import the key that is used to sign the release:

$ apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ED75B5A4483DA07C

After that, you can install aptly like any other software package:

$ apt-get update
$ apt-get install aptly

Don't worry about the squeeze part of the repository name: the aptly package should work on Debian squeeze and newer and Ubuntu 10.04 and newer. The package contains the aptly binary, a man page, and bash completion.

If you would like to use nightly builds (unstable), use the following repository:

deb http://repo.aptly.info/ nightly main

Binary executables (which depend almost only on libc) are available for download from GitHub Releases.

If you have a Go environment set up, you can build aptly from source (Go 1.11+ required):

git clone https://github.com/aptly-dev/aptly
cd aptly
make modules install

The binary will be installed to $GOPATH/bin/aptly.

Contributing

Please follow detailed documentation in CONTRIBUTING.md.

Integrations

Vagrant:

  • Vagrant configuration by Zane Williamson, which brings up two virtual servers: one with aptly installed and another set up to install packages from the repository published by aptly

Docker:

With configuration management systems:

CLI for aptly API:

GUI for aptly API:

Scala sbt:

Comments
  • S3 Publishing Large Repos

    When publishing large mirrors to S3 I quickly hit the S3 API throttling limits.

    Aptly doesn't seem to gracefully scale down (boto-based tools do) and simply fails the publish instead.

    My working theory is the many HEAD requests to check file existence happen quickly enough to cause issues.

    To re-create try to upload, say, trusty-updates to s3.

    I've tried rate limiting with iptables and the user-space throttler trickle, with not much success. I'd love to know if others have seen this issue.

  • enhancement: daemonized API service

    aptly should install and start its API service automatically in the background on install and boot, like a standard Linux service (nginx, etc.).

    Configuration should be stored in /etc/ and working files (DB etc) in its own directory.

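
    As a sketch, such a service could ship a systemd unit along these lines (the unit name, user, binary path, and listen address are assumptions, not existing packaging); configuration would then live in /etc/aptly.conf and the database under a dedicated rootDir:

```ini
[Unit]
Description=aptly REST API
After=network.target

[Service]
User=aptly
ExecStart=/usr/bin/aptly api serve -listen=:8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
```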

  • Size check mismatch errors on certain mirror updates

    In the past few days, two of the Amazon EC2 mirrors in my repo set are being stubborn when I try to update them. I get the following messages:

    ERROR: unable to update: http://us-east-1.clouds.archive.ubuntu.com/ubuntu/dists/precise-updates/main/binary-i386/Packages.bz2: size check mismatch 1045491 != 1045248
    ERROR: unable to update: http://us-east-1.clouds.archive.ubuntu.com/ubuntu/dists/trusty-updates/main/binary-i386/Packages.bz2: size check mismatch 720276 != 712806

    If I retry the updates for these two many times, eventually they will complete successfully. I'm not seeing any problems with the remote repos themselves when I try and update an Ubuntu instance directly from them, so I don't think the problem is with Amazon. However, I'm mirroring twenty-two other remote repos and Aptly has no trouble with any of them.

  • Missing xz-utils in package deps

    aptly publish dies with:

    panic: unable to unxz data.tar.xz from /var/lib/aptly/pool/8b/67/0ad_0.0.17-1_amd64.deb: exec: "xz": executable file not found in $PATH [recovered]
        panic: unable to unxz data.tar.xz from /var/lib/aptly/pool/8b/67/0ad_0.0.17-1_amd64.deb: exec: "xz": executable file not found in $PATH
    

    There is no xz-utils in the deps of the aptly (0.9.7) package (after installing xz-utils it works fine).

  • repo update fails with timeout awaiting response headers

    aptly mirror update usually fails with the following errors, observed after downloading the repo:

    Downloading http://in.archive.ubuntu.com/ubuntu/pool/restricted/n/nvidia-graphics-drivers-340/nvidia-340_340.96-0ubuntu3_amd64.deb...
    Downloading http://in.archive.ubuntu.com/ubuntu/pool/main/libh/libhybris/libandroid-properties1_0.1.0+git20151016+6d424c9-0ubuntu7_i386.deb...
    Downloading http://in.archive.ubuntu.com/ubuntu/pool/restricted/n/nvidia-graphics-drivers-361/nvidia-361-dev_361.42-0ubuntu2_amd64.deb...
    Downloading http://in.archive.ubuntu.com/ubuntu/pool/main/libh/libhybris/libandroid-properties-dev_0.1.0+git20151016+6d424c9-0ubuntu7_amd64.deb...
    ERROR: unable to update: download errors:
      http://in.archive.ubuntu.com/ubuntu/pool/main/m/mobile-broadband-provider-info/mobile-broadband-provider-info_20140317-1_all.deb: Get http:/ubuntu/pool/main/m/mobile-broadband-provider-info/mobile-broadband-provider-info_20140317-1_all.deb: net/http: timeout awaiting response headers
      http://in.archive.ubuntu.com/ubuntu/pool/main/c/corosync/libcorosync-common4_2.3.5-3ubuntu1_amd64.deb: Get http:/ubuntu/pool/main/c/corosync/libcorosync-common4_2.3.5-3ubuntu1_amd64.deb: net/http: timeout awaiting response headers

    Detailed Description

    Updated to the latest nightly version of aptly. aptly version: 1.0.0+107+gfcd4531

    In the previous version (0.9.0) I was getting an EOF error.

    Context

    Because of this error I am unable to create a snapshot:

    root@repo01:/repo/lnxrepo# aptly snapshot create u16041 from mirror ubuntu1604-main
    ERROR: unable to create snapshot: mirror not updated

    Basically, if you could create a multi-version repo for Ubuntu, it would be helpful.

    Possible Implementation

    Your Environment

    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 16.04.1 LTS
    Release:        16.04
    Codename:       xenial

    root@bng-lnx-repo01:/repo/lnxrepo# aptly version
    aptly version: 1.0.0+107+gfcd4531

    aptly config file:

    {
      "rootDir": "/repo/lnxrepo",
      "downloadConcurrency": 4,
      "downloadSpeedLimit": 0,
      "architectures": [],
      "dependencyFollowSuggests": false,
      "dependencyFollowRecommends": false,
      "dependencyFollowAllVariants": false,
      "dependencyFollowSource": false,
      "gpgDisableSign": false,
      "gpgDisableVerify": false,
      "downloadSourcePackages": false,
      "ppaDistributorID": "ubuntu",
      "ppaCodename": "",
      "FileSystemPublishEndpoints": {
        "test1": {
          "rootDir": "/repo/lnxrepo/branch1/aptly_public",
          "linkMethod": "symlink"
        },
        "test2": {
          "rootDir": "/repo/lnxrepo/branch2/aptly_public",
          "linkMethod": "copy",
          "verifyMethod": "md5"
        },
        "test3": {
          "rootDir": "/repo/lnxrepo/branch3/aptly_public",
          "linkMethod": "hardlink"
        }
      },
      "S3PublishEndpoints": {},
      "SwiftPublishEndpoints": {}
    }

  • Add a flag to unlock database after each API request

    After the first API request, the database stays locked as long as the API server is running. This prevents the user from also using the command-line client. This commit adds a new flag -no-lock that closes the database after each API request.

    Closes #234

  • aptly repo include reports an error when used for a backports distribution

    The changes file for a backported Debian package (e.g. suffixed with ~bpo8+1) does not include a reference to the orig.tar.gz tarball, nor is the file pushed by dput.

    However, aptly tries to import the file.

    Typical error output:

    [!] Unable to import file /tmp/aptly259445817/oar_2.5.5~rc1.orig.tar.gz into pool: open /tmp/aptly259445817/oar_2.5.5~rc1.orig.tar.gz: no such file or directory
    [!] Some files were skipped due to errors:
      oar_2.5.5~rc1-1~bpo8+1.dsc
    ERROR: some files failed to be added
    

    See the related changes files here: https://gist.github.com/npf/64a7432e6ae04e9fa0c6

  • aptly corrupt? (v 0.5.1 on ubuntu 12.04)

    Don't know how it happened, but we have lost all our internally built packages stored in aptly: repo list shows 0 packages.

    And although the packages are still held in the pool and can be installed, we are unable to add new packages or do anything else with the repo (even a drop fails).

    Every operation comes back with a 'key not found' error. We have tried:

    • repo publish update
    • repo drop
    • publish update
    • publish drop

    Probably the biggest challenge is that we have this repo set up in our CM for auto-publishing packages from our CI.

    strace suggests this has nothing to do with our gpg keys (which have not changed) but is some issue with aptly's internal db.

    HELP !!!!

  • Files from conflicting packages might override each other

    Reported by Yuriy Poltorak:

    aptly correctly handles files with the same name but different md5 coming from conflicting packages at all stages before publishing. When publishing, conflicting packages may not be part of the same list, but when publishing under the same prefix into different distributions, the two publishes share a common pool. Files with the same name (but different content) would silently override each other.

    aptly should check for linking files under the same name with different size (checking for md5 might be too expensive).

  • Debian packaging for debian stretch

    Detailed Description

    The aptly Debian package is out-of-date, which is problematic for Debian stretch.

    Possible Implementation

    Make sure the Debian package installs correctly on Debian stretch. It should depend on the gnupg1 and gpgv1 packages on Debian stretch (a breaking change in aptly version 1.3.0).

    Even after installing both packages, keys that worked before will no longer work after the upgrade to 1.3.0 (since keys would typically be generated with the default gpg version in stretch; it worked in 1.2.0, so this is a regression).

    Context

    ERROR: unable to initialize GPG signer: looks like there are no keys in gpg, please create one (official manual: http://www.gnupg.org/gph/en/manual.html)

    Your Environment

    Debian Stretch + Aptly 1.3.0 (upgrade from 1.2.0)

  • GPG 2.1 compatibility

    Hello, I'm on Debian Stretch (gpg 2.1). I generated my gpg key with gpg 2.1, so I have a .gnupg/pubring.kbx instead of .gnupg/pubring.gpg and .gnupg/secring.gpg. The gpg provider doesn't work, but I can't get the internal provider to work either:

    host:~$ aptly publish repo -batch -gpg-key=<blah> -passphrase-file="/data/aptly/.DoNotRemoveMandatoryForAptly" -gpg-provider=internal <name> <prefix>
    opengpg: failure opening keyring '/data/aptly/.gnupg/pubring.gpg': open /data/aptly/.gnupg/pubring.gpg: no such file or directory
    opengpg: failure opening keyring '/data/aptly/.gnupg/secring.gpg': open /data/aptly/.gnupg/secring.gpg: no such file or directory
    ERROR: unable to initialize GPG signer: couldn't find key for key reference <blah>

    I tried to specify -secret-keyring with the kbx file and it didn't work either:

    ERROR: unable to initialize GPG signer: error load secret keyring: openpgp: invalid data: tag byte does not have MSB set

    Am I doing something wrong or does it need a fix (I may submit a PR then)? Thanks!

  • Add flag -filter-with-build-deps

    This flag automatically adds the build dependencies of all source packages matched by the filter expression.

    (The sources of the build dependencies and the build dependencies of the build dependencies are not added, even if -dep-follow-source is specified.)

    Resolves #1131

    Description of the Change

    I added a parameter withBuildDependencies to FilterWithProgress in list.go.

    func (l *PackageList) FilterWithProgress(queries []PackageQuery, withDependencies bool, withBuildDependencies bool, source *PackageList, dependencyOptions int, architecturesList []string, progress aptly.Progress) (*PackageList, error) {
    	if !l.indexed {
    		panic("list not indexed, can't filter")
    	}
    
    	result := NewPackageList()
    
    	for _, query := range queries {
    		result.Append(query.Query(l))
    	}
    
    	if withDependencies {
    		err := addTransitiveDependencies(result, source, dependencyOptions, architecturesList, progress, l)
    		if err != nil {
    			return nil, err
    		}
    	}
    
    	if withBuildDependencies {
    		// disable DepFollowSource, enable DepFollowBuild
    		buildDependencyOptions := dependencyOptions&(^DepFollowSource) | DepFollowBuild
    		err := addTransitiveDependencies(result, source, buildDependencyOptions, architecturesList, progress, l)
    		if err != nil {
    			return nil, err
    		}
    	}
    
    	return result, nil
    }
    

    Then, I first evaluate the dependencies without the DepFollowBuild option as it was before. Afterwards, I add the build dependencies of all of the source packages but without adding their source packages (thus preventing the build dependencies of the build dependencies (and so on) from being added).

    Checklist

    • [x] unit-test added (if change is algorithm)
    • [x] functional test added/updated (if change is functional)
    • [x] man page updated (if applicable)
      • Happens automatically with cd man && make generate. Should I commit the generated aptly.1 file?
    • [x] bash completion updated (if applicable)
    • [x] documentation updated
    • [ ] author name in AUTHORS
  • Debian/Ubuntu: Aptly only compatible with GPG v1, installs v2 anyway

    Detailed Description

    Per the documentation here, Aptly is only compatible with GPG v1.

    However, the dependencies for the packages (debian, ubuntu) force installation of gnupg, which is gpg version 2.

    This can be tested by building and running this Dockerfile:

    FROM debian:latest
    
    RUN apt-get update && \
    apt-get install gnupg1 -y && \
    apt-get clean
    
    RUN apt-get install aptly ca-certificates -y && \
    apt-get clean
    
    ADD aptly.conf /etc/aptly.conf
    VOLUME ["/aptly"]
    VOLUME ["/public"]
    EXPOSE 8080
    
    ENTRYPOINT ["aptly", "api", "serve"]
    
    $ sudo docker build . -t aptly:0.0.1
    
    $ sudo docker run --entrypoint="" aptly:0.0.1 gpg --version
    gpg (GnuPG) 2.2.27
    
    $ sudo docker run --entrypoint="" aptly:0.0.1 gpgv1 --version
    docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "gpgv1": executable file not found in $PATH: unknown.
    ERRO[0000] error waiting for container: context canceled 
    

    Context

    Packages hosted on Debian and Ubuntu default repositories are not functional.

    Possible Implementation

    The cleanest way would be to make Aptly work with GPG v2, which has been the default for many years. Fixing the dependencies in the Debian packaging would be a good quick win, though.

    Your Environment

    See Dockerfile above to reproduce

    Other

    Possibly the same as:

    • https://github.com/aptly-dev/aptly/issues/822
    • https://github.com/aptly-dev/aptly/issues/931
    • https://github.com/aptly-dev/aptly/issues/1111
    • https://github.com/aptly-dev/aptly/issues/1114
    • https://github.com/aptly-dev/aptly/issues/1121
  • make api logging configurable

    Fixes #1132

    Requirements

    All new code should be covered with tests, documentation should be updated. CI should pass.

    Description of the Change

    Checklist

    • [x] unit-test added (if change is algorithm)
    • [x] functional test added/updated (if change is functional)
    • [ ] man page updated (if applicable)
    • [ ] bash completion updated (if applicable)
    • [x] documentation updated
    • [x] author name in AUTHORS
  • No content-type when Aborting with Error

    Detailed Description

    When AbortWithError is called to return an error message to the API caller in api/api.go, no content-type of application/json; charset=utf-8 is set. Instead, the response has content-type text/plain; charset=utf-8.

    Context

    Possible Implementation

    Your Environment

  • Structured Logging

    Detailed Description

    The current logging isn't structured and is therefore hard to parse. Also, request logging cannot be disabled. I'm suggesting adding a leveled JSON logger which can be enabled via the config file and which allows disabling request logs via the log level.

    Log messages could look like this: {"lvl":"debug","ts":"2022-12-09T14:50:19+01:00","msg":"some message"}

    {"lvl":"warn","remote":"::1","method":"POST","path":"/api/files/hello-world-0.0.1","protocol":"HTTP/1.1","code":"400","latency":"208.625µs","agent":"PostmanRuntime/7.29.2","ts":"2022-12-09T14:50:31+01:00","msg":"Error #01: request Content-Type isn't multipart/form-data"}

    Context

    Possible Implementation

    Your Environment

  • [Question/Possible PR] Automatically Download Build Dependencies for Source Packages

    Hello!

    It would be cool if aptly could download the build dependencies of all downloaded source packages automatically. I tried implementing two approaches to accomplish this and would like to ask for feedback and whether you would be willing to accept a PR for this.

    Detailed Description

    We have a script that builds a selection of Debian packages from source for different architectures. We cache the source packages and their build dependencies using aptly. Currently, the script computes the build dependencies and then generates a filter query for aptly to download the correct packages. However, the script could be simplified a lot if aptly had a switch like -filter-with-build-deps that works similarly to the existing -filter-with-deps but for build dependencies.

    Context

    As mentioned above, this change would enable us to simplify a build script.

    I think this could be helpful to anyone who is using aptly for caching source packages because when you have a source package, you often also want to have its build dependencies.

    Possible Implementation

    I implemented two approaches for achieving this. The first one is very simple but has a drawback (IMO); the second one requires more code changes.

    Approach 1. Add an Option -dep-follow-build Similar to -dep-follow-source

    This approach exposes the already-existing flag deb.DepFollowBuild to the user. It would basically work like this (note: this is not the complete patch, just the gist of it):

    diff --git a/cmd/cmd.go b/cmd/cmd.go
    index 14a0efd1..5cda7ddc 100644
    --- a/cmd/cmd.go
    +++ b/cmd/cmd.go
    @@ -114,6 +114,7 @@ package environment to new version.`,
            cmd.Flag.Int("db-open-attempts", 10, "number of attempts to open DB if it's locked by other instance")
            cmd.Flag.Bool("dep-follow-suggests", false, "when processing dependencies, follow Suggests")
            cmd.Flag.Bool("dep-follow-source", false, "when processing dependencies, follow from binary to Source packages")
    +       cmd.Flag.Bool("dep-follow-build", false, "when processing dependencies, follow build dependencies")
            cmd.Flag.Bool("dep-follow-recommends", false, "when processing dependencies, follow Recommends")
            cmd.Flag.Bool("dep-follow-all-variants", false, "when processing dependencies, follow a & b if dependency is 'a|b'")
            cmd.Flag.Bool("dep-verbose-resolve", false, "when processing dependencies, print detailed logs")
    diff --git a/context/context.go b/context/context.go
    index d80528a5..f87e73be 100644
    --- a/context/context.go
    +++ b/context/context.go
    @@ -161,6 +161,9 @@ func (context *AptlyContext) DependencyOptions() int {
                    if context.lookupOption(context.config().DepFollowSource, "dep-follow-source") {
                            context.dependencyOptions |= deb.DepFollowSource
                    }
    +               if context.lookupOption(context.config().DepFollowBuild, "dep-follow-build") {
    +                       context.dependencyOptions |= deb.DepFollowBuild
    +               }
                    if context.lookupOption(context.config().DepVerboseResolve, "dep-verbose-resolve") {
                            context.dependencyOptions |= deb.DepVerboseResolve
                    }
    

    This is a very simple change. However, it has a disadvantage: When you use -dep-follow-build together with -dep-follow-source, aptly will now download the sources and build dependencies of your build dependencies (transitively!) instead of just the build dependencies of the original results of the filter.

    So if you want the sources + build dependencies of e.g. bash you'd have to do

    ... -filter='bash {source}' -dep-follow-build -with-sources -filter-with-deps
    

    instead of

    ... -filter='bash' -dep-follow-build -dep-follow-source -with-sources -filter-with-deps
    

    Otherwise you get a huge list of packages.

    This is why I thought of another approach.

    Approach 2. Add an Option -filter-with-build-deps similar to -filter-with-deps

    In this approach, I added a parameter withBuildDependencies to FilterWithProgress in list.go.

    func (l *PackageList) FilterWithProgress(queries []PackageQuery, withDependencies bool, withBuildDependencies bool, source *PackageList, dependencyOptions int, architecturesList []string, progress aptly.Progress) (*PackageList, error) {
    	if !l.indexed {
    		panic("list not indexed, can't filter")
    	}
    
    	result := NewPackageList()
    
    	for _, query := range queries {
    		result.Append(query.Query(l))
    	}
    
    	if withDependencies {
    		err := addTransitiveDependencies(result, source, dependencyOptions, architecturesList, progress, l)
    		if err != nil {
    			return nil, err
    		}
    	}
    
    	if withBuildDependencies {
    		// disable DepFollowSource, enable DepFollowBuild
    		buildDependencyOptions := dependencyOptions&(^DepFollowSource) | DepFollowBuild
    		err := addTransitiveDependencies(result, source, buildDependencyOptions, architecturesList, progress, l)
    		if err != nil {
    			return nil, err
    		}
    	}
    
    	return result, nil
    }
    

    Then I first evaluate the dependencies without the DepFollowBuild option as it was before. Afterwards, I add the build dependencies of all of the source packages but without adding their source packages (thus preventing the build dependencies of the build dependencies (and so on) from being added).

    Conclusion

    What do you think? Should I open a PR for either of these approaches? Is there maybe a better/smarter way of doing this that I have overlooked?
