Git-like capabilities for your object storage


What is lakeFS

lakeFS is an open source layer that delivers resilience and manageability to object-storage based data lakes.

With lakeFS you can build repeatable, atomic and versioned data lake operations - from complex ETL jobs to data science and analytics.

lakeFS supports AWS S3, Azure Blob Storage, and Google Cloud Storage as its underlying storage service. It is API-compatible with S3 and works seamlessly with modern data frameworks such as Spark, Hive, AWS Athena, and Presto.
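
Because lakeFS exposes an S3-compatible endpoint, standard S3 tooling can talk to it by overriding the endpoint URL. A minimal sketch with the AWS CLI, assuming a lakeFS instance running locally on port 8000 (the repository and branch names here are hypothetical):

```shell
# List objects on a branch through lakeFS's S3 gateway
aws s3 ls s3://example-repo/main/ --endpoint-url http://127.0.0.1:8000
```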

For more information see the Official Documentation.

Capabilities

Development Environment for Data

  • Experimentation - try tools, upgrade versions and evaluate code changes in isolation.
  • Reproducibility - go back to any point of time to a consistent version of your data lake.

Continuous Data Integration

  • Ingest new data safely by enforcing best practices - make sure new data sources adhere to your lake’s best practices such as format and schema enforcement, naming convention, etc.
  • Metadata validation - prevent breaking changes from entering the production data environment.

Continuous Data Deployment

  • Instantly revert changes to data - if low quality data is exposed to your consumers, you can revert instantly to a former, consistent and correct snapshot of your data lake.
  • Enforce cross-collection consistency - expose to consumers several collections of data that must be synchronized, in one atomic, revertible action.
  • Prevent data quality issues by enabling
    • Testing of production data before exposing it to users / consumers.
    • Testing of intermediate results in your DAG to avoid cascading quality issues.

Getting Started

Docker (macOS, Linux)

  1. Ensure you have Docker & Docker Compose installed on your computer.

  2. Run the following command:

    curl https://compose.lakefs.io | docker-compose -f - up
  3. Open http://127.0.0.1:8000/setup in your web browser to set up an initial admin user, used to log in and send API requests.

Docker (Windows)

  1. Ensure you have Docker installed.

  2. Run the following command in PowerShell:

    Invoke-WebRequest https://compose.lakefs.io | Select-Object -ExpandProperty Content | docker-compose -f - up
  3. Open http://127.0.0.1:8000/setup in your web browser to set up an initial admin user, used to log in and send API requests.

Download the Binary

Alternatively, you can download the lakeFS binaries and run them directly.

Binaries are available at https://github.com/treeverse/lakeFS/releases.

Setting up a repository

Please follow the Guide to Get Started to set up your local lakeFS installation.

For more detailed information on how to set up lakeFS, please visit the documentation.
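
As a sketch of the first steps from the CLI - assuming a running lakeFS instance with credentials configured for lakectl, and hypothetical bucket and repository names:

```shell
# Create a repository backed by an object-store namespace, then branch off main
lakectl repo create lakefs://example-repo s3://example-bucket/example-repo
lakectl branch create lakefs://example-repo/experiment --source lakefs://example-repo/main
```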

Community

Stay up to date and get lakeFS support via:

  • Slack (to get help from our team and other users)
  • Twitter (follow for updates and news)
  • YouTube (learn from video tutorials)
  • Contact us (for anything)

More information

Licensing

lakeFS is completely free and open source and licensed under the Apache 2.0 License.

Comments
  • API with an unknown path should return an error

    Currently lakeFS registers the OpenAPI handlers and handles all specific routes. For a call to /api/v1/test, an unknown path under the API prefix, the mux serves the request with the UI handler and returns a valid HTML (UI) page.

    The expected behaviour is to return a non-2xx status code with a JSON error - preferably in the internal error format - so the developer handles an error instead of failing to parse the response on a bad API call.
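
    The desired routing behaviour can be sketched with a tiny mux: anything under the API prefix that is not a registered route gets a JSON 404 instead of falling through to the UI catch-all. This is an illustrative stand-in (the route set and handler below are made up, not lakeFS's actual Go mux):

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

API_PREFIX = "/api/v1/"
KNOWN_API_ROUTES = {"/api/v1/repositories"}  # hypothetical registered routes

class Mux(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(API_PREFIX) and self.path not in KNOWN_API_ROUTES:
            # Unknown API path: return a JSON error, not the UI page.
            body = json.dumps({"message": "not found"}).encode()
            code, ctype = 404, "application/json"
        else:
            # Everything else falls through to the UI catch-all.
            body, code, ctype = b"<html>UI</html>", 200, "text/html"
        self.send_response(code)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

server = HTTPServer(("127.0.0.1", 0), Mux)
threading.Thread(target=server.serve_forever, daemon=True).start()

try:
    urllib.request.urlopen("http://127.0.0.1:%d/api/v1/test" % server.server_address[1])
    status, ctype = 200, "text/html"
except urllib.error.HTTPError as err:
    status, ctype = err.code, err.headers["Content-Type"]
server.shutdown()
print(status, ctype)  # 404 application/json
```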

  • RClone and LakeFS integration breaks without v2_auth

    This might be intended and just need an update on your website, but I ran into a hard-to-debug issue trying to sync data into lakeFS with RClone. Note: I followed the instructions on the site previously and it worked, so I'm not sure which lakeFS version it broke in. The behavior seems to point towards the client side; however, I haven't updated RClone at all since it previously worked.

    lakeFS Version: 0.48.0 (also reproduced on 0.47.0). RClone Version:

    rclone v1.56.0
    - os/version: darwin 11.5.2 (64 bit)
    - os/kernel: 20.6.0 (x86_64)
    - os/type: darwin
    - os/arch: amd64
    - go/version: go1.16.6
    - go/linking: dynamic
    - go/tags: none
    

    Error trying to copy local data into LakeFS using the suggested configuration:

    
    .... removed ...
    
    2021/09/02 21:07:25 INFO  :
    Transferred:   	   14.481Ki / 14.481 KiByte, 100%, 0 Byte/s, ETA -
    Errors:                 2 (retrying may help)
    Elapsed time:         1.2s
    
    2021/09/02 21:07:25 Failed to sync with 2 errors: last error was: s3 upload: 403 Forbidden: <?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><Resource></Resource><Region>us-gov-west-1</Region><RequestId>5d580752-d488-4f4f-976b-358729075279</RequestId><HostId>D071307137586F18</HostId></Error>
    

    To test that the credentials were correct, I went the other way and used lakeFS as a source after adding a file via the UI:

    ❯ rclone ls lakefs:aif-xxxx/main/ -vv
    
    2021/09/02 21:10:46 DEBUG : Setting --ca-cert "/Users/e379822/certs/lm_ca.pem" from environment variable RCLONE_CA_CERT="/Users/e379822/certs/lm_ca.pem"
    2021/09/02 21:10:46 DEBUG : rclone: Version "v1.56.0" starting with parameters ["rclone" "ls" "lakefs:aif-xxxx/main/" "-vv"]
    2021/09/02 21:10:46 DEBUG : Creating backend with remote "lakefs:aif-xxxx/main/"
    2021/09/02 21:10:46 DEBUG : Using config file from "/Users/e379822/.config/rclone/rclone.conf"
    2021/09/02 21:10:46 DEBUG : fs cache: renaming cache item "lakefs:aif-xxxx/main/" to be canonical "lakefs:aif-xxxx/main"
            4 test.txt
    2021/09/02 21:10:47 DEBUG : 6 go routines active
    

    What put me onto the signature signing was this log line from lakeFS stating it was using SigV4: time="2021-09-03T01:51:31Z" level=warning msg="error verifying credentials for key" func=pkg/gateway.AuthenticationHandler.func1 file="build/pkg/gateway/middleware.go:54" authenticator=sigv4 error=SignatureDoesNotMatch key=AKIAJ6UDLXIPOISF7LKQ

    I also verified that RClone was using SigV4 authentication by dumping its headers.

    2021/09/02 20:56:06 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    2021/09/02 20:56:06 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    2021/09/02 20:56:06 DEBUG : HTTP REQUEST (req 0xc000ad0100)
    2021/09/02 20:56:06 DEBUG : PUT /aif-xxxx/main/test/test.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJ6UDLXIPOISF7LKQ%2F20210903%2Fus-gov-west-1%2Fs3%2Faws4_request&X-Amz-Date=20210903T025606Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=c3b12b3bb96069f6102df42eb22b4a64d7bc728800e53fee1b6372547710fdeb HTTP/1.1
    Host: s3.lakefs.ai.us.lmco.com:443
    User-Agent: rclone/v1.56.0
    Content-Length: 4
    content-md5: uh8lEfwwQjvbsYP+M/PdDw==
    content-type: text/plain; charset=utf-8
    x-amz-acl: private
    x-amz-meta-mtime: 1630636753.967220656
    Accept-Encoding: gzip
    

    Setting the V2 auth in rclone does fix this issue:

    ❯ rclone sync -v test lakefs:aif-xxxx/main/test/ --s3-v2-auth --dump headers
    
    ....
    
    2021/09/02 21:16:33 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    2021/09/02 21:16:33 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    2021/09/02 21:16:33 DEBUG : HTTP REQUEST (req 0xc0005a4200)
    2021/09/02 21:16:33 DEBUG : PUT /aif-xxxxx/main/test/test.txt HTTP/1.1
    Host: s3.lakefs.ai.us.lmco.com:443
    User-Agent: rclone/v1.56.0
    Content-Length: 4
    Authorization: XXXX
    Content-Md5: uh8lEfwwQjvbsYP+M/PdDw==
    Content-Type: text/plain; charset=utf-8
    Date: Fri, 03 Sep 2021 03:16:33 UTC
    X-Amz-Acl: private
    X-Amz-Meta-Mtime: 1630636753.967220656
    Accept-Encoding: gzip
    
    .... 
     
    2021/09/02 21:16:34 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    2021/09/02 21:16:34 DEBUG : test.txt: md5 = ba1f2511fc30423bdbb183fe33f3dd0f OK
    2021/09/02 21:16:34 INFO  : test.txt: Copied (new)
    2021/09/02 21:16:34 DEBUG : Waiting for deletions to finish
    2021/09/02 21:16:34 INFO  :
    Transferred:   	    4.827Ki / 4.827 KiByte, 100%, 0 Byte/s, ETA -
    Transferred:            2 / 2, 100%
    Elapsed time:         1.5s
    
    2021/09/02 21:16:34 DEBUG : 9 go routines active
    

    It's an easy enough workaround to add v2_auth = true; however, I wanted to report it in case it is a bug, since you have SigV4 listed as supported here.
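
    For reference, the workaround looks roughly like this in rclone.conf (the remote name, endpoint, and keys below are placeholders; v2_auth is a documented rclone S3 option):

```ini
[lakefs]
type = s3
provider = Other
endpoint = https://s3.example.com
access_key_id = AKIAEXAMPLE
secret_access_key = example-secret-key
v2_auth = true
```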

  • Allow ingesting from a non-default S3 endpoint

    Fixes #2886

    Command used to ingest:

     LAKECTL_S3_ENDPOINT_URL="http://127.0.0.1:9000" \
       ./lakectl ingest --from s3://src-buck --to lakefs://dest-buck/main/ --intg minio
    
    
  • Links to the documentation on the webui

    Refer users to the documentation from the webui. For example, the Create-Repo dialog can link to the quickstart-create-repo doc page. Inspiration taken from Airbyte, while choosing the Postgres source.

  • Replace "i.e." with "e.g." where needed

    In some places (docs, UI and other text files) we use the abbreviation "i.e." where "e.g." actually makes more sense. "e.g." means "For example", while "i.e." means "That is".

    Find all places where we use "i.e." and replace it (only where applicable).

    Clarification edit: the issue here is to find the places where "i.e." appears by mistake, not all of its appearances.

  • lakectl autocomplete nouns

    Have the completions code complete our "nouns" when relevant. Example:

    lakectl fs upload --source <tab> // should suggest files
    lakectl repo delete <tab> // should suggest repositories names
    
  • Can't diff with another branch including uncommitted changes

    Currently there is no way for the user to diff a branch - including its uncommitted changes - against another branch.

    In git, the command git diff <branch name> will diff the current HEAD including uncommitted changes with the given branch. (git diff HEAD <branch name> will show the diff between the HEAD, without the uncommitted changes, and the given branch).

    Decide which syntax will provide the desired result; related to #1305.

  • Expand Documentation on "self-hosted" Buckets

    Hello, I'd like to expand the documentation on "self-hosted" instances. But for this, it would probably be beneficial if I got it to work first.

    My issue is that I cannot access it through rclone. I'm pretty sure that the issue lies with my nginx config, but I can't really spot it.

    LakeFS is running on docker as suggested in the example.

    Here is my nginx config:

    	server {
    		listen 443 ssl http2;
    		listen [::]:443 ssl http2;
    		server_name s3.my-domain.de
    		server_name ~^(?<bucket>[a-zA-Z0-9\-_]+)\.s3\.my-domain\.de$;
    
    		# … SSL Stuff
    
    		if ($bucket = false) {
    			return 301 https://my-domain.de/s3-note;
    		}
    		
    		location / {
    			proxy_http_version     1.1;
    			proxy_set_header       Host $bucket.s3.my-domain.de;
    			proxy_set_header       Authorization '';
    			proxy_hide_header      x-amz-id-2;
    			proxy_hide_header      x-amz-request-id;
    			proxy_hide_header      Set-Cookie;
    			proxy_ignore_headers   "Set-Cookie";
    			proxy_buffering        off;
    			proxy_intercept_errors on;
    			proxy_pass             http://127.0.0.1:8000/$uri;
    		}
    	}
    

    The rclone error Message is:

    $ rclone copy . storage://testbucket/master -P
    2021-03-31 10:24:42 ERROR : : error reading destination directory: RequestError: send request failed
    caused by: Get "/": stopped after 10 redirects
    
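
    One thing worth checking is the proxy_pass with a URI part ($uri), which replaces the request path and drops the query string. A minimal pass-through sketch, assuming lakeFS listens on 127.0.0.1:8000 (untested against this exact setup):

```nginx
location / {
    proxy_http_version 1.1;
    proxy_set_header   Host $http_host;  # preserve the bucket-style hostname for the S3 gateway
    proxy_buffering    off;
    # No URI part after the upstream: nginx forwards the original path and query string unchanged.
    proxy_pass         http://127.0.0.1:8000;
}
```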
  • docker-compose fails setup

    Hi,

    I followed the quickstart: downloaded the docker-compose.yml, ran docker-compose up, then went to http://127.0.0.1:8000/setup, entered the username 'maarten', and got:

    lakefs_1    |      ██╗      █████╗ ██╗  ██╗███████╗███████╗███████╗
    lakefs_1    |      ██║     ██╔══██╗██║ ██╔╝██╔════╝██╔════╝██╔════╝
    lakefs_1    |      ██║     ███████║█████╔╝ █████╗  █████╗  ███████╗
    lakefs_1    |      ██║     ██╔══██║██╔═██╗ ██╔══╝  ██╔══╝  ╚════██║
    lakefs_1    |      ███████╗██║  ██║██║  ██╗███████╗██║     ███████║
    lakefs_1    |      ╚══════╝╚═╝  ╚═╝╚═╝  ╚═╝╚══════╝╚═╝     ╚══════╝
    lakefs_1    |
    lakefs_1    | │
    lakefs_1    | │ If you're running lakeFS locally for the first time,
    lakefs_1    | │     complete the setup process at http://127.0.0.1:8000/setup
    lakefs_1    | │
    lakefs_1    |
    lakefs_1    | │
    lakefs_1    | │ For more information on how to use lakeFS,
    lakefs_1    | │     check out the docs at https://docs.lakefs.io/quickstart/repository
    lakefs_1    | │
    lakefs_1    |
    lakefs_1    | time="2020-11-12T13:41:32Z" level=info msg="schema migrated" func="db.(*DatabaseMigrator).Migrate" file="build/db/migration.go:49" direction=up host="127.0.0.1:8000" method=POST path=/api/v1/setup_lakefs request_id=e4eacba2-ea5a-491b-a77b-28821f2e72df service_name=rest_api took=3.740318ms
    lakefs_1    | time="2020-11-12T13:41:32Z" level=error msg="SQL query failed with error" func="db.(*dbTx).Get" file="build/db/tx.go:83" args="[Admins 2020-11-12 13:41:32.291128212 +0000 UTC m=+8.923394987]" error="scany: rows final error: ERROR: duplicate key value violates unique constraint \"auth_groups_unique_display_name\" (SQLSTATE 23505)" query="INSERT INTO auth_groups (display_name, created_at) VALUES ($1, $2) RETURNING id" took="4.118µs" type=get
    

    Am I doing something wrong, or is the docker image broken?

    cheers,

    Maarten

  • Misleading error when trying to list commits for nonexistent repository

    When running lakectl log on a nonexistent repository, a misleading message appears.

    Example: lakectl log lakefs://notarepo@master

    Output: Error executing command: branch 'master' not found

    Expected: Error executing command: repo 'notarepo' not found
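
    The fix is a matter of lookup order: resolve the repository before the branch. A hypothetical sketch of that precedence (the data and messages below are illustrative, not lakectl's actual code):

```python
def log_error(repositories, repo, branch):
    """Resolve the repository before the branch, so a nonexistent repo
    produces the repo-level message (the data below is made up)."""
    if repo not in repositories:
        return "Error executing command: repo '%s' not found" % repo
    if branch not in repositories[repo]:
        return "Error executing command: branch '%s' not found" % branch
    return None

repos = {"example-repo": {"main"}}
print(log_error(repos, "notarepo", "master"))   # repo-level error, as expected
print(log_error(repos, "example-repo", "dev"))  # branch-level error
```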

  • Large merges take long time to complete (>60 seconds)

    Reported on Slack.

    A merge of a few hundred thousand objects fails after 30 seconds with (from the UI):

    time="2021-11-10T15:33:02Z" level=trace msg="HTTP call ended" func=pkg/httputil.TracingMiddleware.func1.1 file="build/pkg/httputil/tracing.go:149" host=xxxx<http:///> method=POST path=/api/v1/repositories/aif-xxxx-xxxx/refs/xxxx-xxxx-processing-a6c0f/merge/main request_body="[123 125]" request_id=da585880-e027-41bb-b3cc-b64bfa08a818 response_body="{\"message\":\"merge in CommitManager: apply ns=s3://lakefs-data/aif-xxxx-xxxx<s3://lakefs-data/xxx> id=3fe239d80f74e" response_headers="map[Content-Type:[application/json] X-Request-Id:[da585880-e027-41bb-b3cc-b64bfa08a818]]" sent_bytes=0 service_name=rest_api status_code=500 took=30.008900206s
    

    lakectl:

    Command
    
    /home/lakefs $ lakectl merge lakefs://aif-xxx-xxx/xxx-xxx-processing-a6c0f lakefs://aif-xxx-xxx/main -c /tmp/.lakectl.yaml --log-level trace
    DEBU[0000]/build/cmd/lakectl/cmd/root.go:67 github.com/treeverse/lakefs/cmd/lakectl/cmd.glob..func66() loaded configuration from file                fields.file=/tmp/.lakectl.yaml file=/tmp/.lakectl.yaml
    Source: lakefs://aif-xxx-xxx/xxx-xxx-processing-a6c0f
    Destination: lakefs://aif-xxx-xxx/main
    504 Gateway Timeout
    

    Logs from Pod:

    time="2021-11-10T18:19:39Z" level=trace msg="HTTP call ended" func=pkg/httputil.TracingMiddleware.func1.1 file="build/pkg/httputil/tracing.go:149" host=lakefs.ai.us.lmco.com method=POST path=/api/v1/repositories/aif-xxx-xxx/refs/xxx-xxx-processing-a6c0f/merge/main request_body="[123 125]" request_id=fc0348ba-0c85-4674-921f-253dd6839b66 response_body="{\"message\":\"merge in CommitManager: apply ns=s3://lakefs-data/aif-xxx-xxx id=3fe239d80f74e" response_headers="map[Content-Type:[application/json] X-Request-Id:[fc0348ba-0c85-4674-921f-253dd6839b66]]" sent_bytes=0 service_name=rest_api status_code=500 took=29.992150768s
    
  • Terraform Provider

    Feature Request

    Currently it is possible to create lakefs resources (such as repository, branches, users, credentials, branch protection rules, garbage policies, ...) using:

    • Web UI
    • CLI (lakectl)
    • Java/Python clients

    It would be really nice to have a Terraform provider, like the GitHub Terraform provider, to do infrastructure as code with lakeFS.

    Example

    We could do stuff like:

    # Init provider
    provider "lakefs" {
      host = "https://lakefs.example.com/api/v1"
      access_key = "AKIAlakefs12345EXAMPLE"
      secret_key  = "abc/lakefs/1234567bPxRfiCYEXAMPLEKEY"
    }
    
    # Use for_each to create users and attach them to the Admins group
    resource "lakefs_user" "data_engineers" {
      for_each = local.data_engineers
      username = each.value
    }
    
    resource "lakefs_group_membership" "data_engineers" {
      for_each = local.data_engineers
      group = "Admins"
      username = lakefs_user.data_engineers[each.value].id
    } 
    
    # Create repo and add garbage policy
    resource "lakefs_repository" "my_repo" {
      name = "my-repo"
      storage_namespace = "s3://${aws_s3_bucket.my_bucket.name}/my-repo/"
    }
    
    resource "lakefs_repository_garbage_policy" "my_repo" {
      repository = lakefs_repository.my_repo.id
      policy = { ... }
    }
    ... 
    
  • Refactor lakefs_export script output

    • Avoid temporary files
    • Don't read entire "rclone check" output into memory
    • Detect "rclone check" success/failure directly rather than via error output

    This is part of #4841 but obviously not a fix for it.
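
    The first and third bullets can be sketched together: stream the subprocess output instead of buffering it or writing a temp file, and decide success from the exit code alone. This is an illustrative sketch in Python (simple commands stand in for "rclone check", which is not assumed here):

```python
import subprocess
import sys

def check_ok(cmd):
    """Stream the command's stdout line by line - no temp file and no
    reading the whole output into memory - and decide success from the
    exit code alone rather than by scanning the output for error text."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        sys.stderr.write(line)  # a real script would log each line as it arrives
    return proc.wait() == 0

# Simple stand-ins for "rclone check":
ok = check_ok([sys.executable, "-c", "print('0 differences found')"])
bad = check_ok([sys.executable, "-c", "import sys; sys.exit(1)"])
print(ok, bad)  # True False
```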

  • Bump json5 from 2.2.1 to 2.2.2 in /webui

    Bumps json5 from 2.2.1 to 2.2.2.

    Release notes

    Sourced from json5's releases.

    v2.2.2

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).
    Changelog

    Sourced from json5's changelog.

    v2.2.2 [code, diff]

    • Fix: Properties with the name __proto__ are added to objects and arrays. (#199) This also fixes a prototype pollution vulnerability reported by Jonathan Gregson! (#295).
    Commits
    • 14f8cb1 2.2.2
    • 10cc7ca docs: update CHANGELOG for v2.2.2
    • 7774c10 fix: add proto to objects and arrays
    • edde30a Readme: slight tweak to intro
    • 97286f8 Improve example in readme
    • d720b4f Improve readme (e.g. explain JSON5 better!) (#291)
    • 910ce25 docs: fix spelling of Aseem
    • 2aab4dd test: require tap as t in cli tests
    • 6d42686 test: remove mocha syntax from tests
    • 4798b9d docs: update installation and usage for modules
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


