
monstache

a go daemon that syncs MongoDB to Elasticsearch in realtime. you know, for search.


Version 6

This version of monstache is designed for MongoDB 3.6+ and Elasticsearch 7.0+. It uses the official MongoDB golang driver and the community supported Elasticsearch driver from olivere.

Some of the monstache settings related to MongoDB have been removed in this version, as they are now supported in the connection string.
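
For example, options that previously had their own TOML settings can now be passed as standard MongoDB URI parameters on mongo-url. A minimal sketch; the hosts and credentials are placeholders, and the query parameters are standard MongoDB connection string options:

    mongo-url = "mongodb://user:password@host1:27017,host2:27017/?replicaSet=rs0&tls=true&connectTimeoutMS=30000&socketTimeoutMS=300000"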

Changes from previous versions

Monstache now defaults to use change streams instead of tailing the oplog for changes. Without any configuration monstache watches the entire MongoDB deployment. You can specify specific namespaces to watch by setting the option change-stream-namespaces to an array of strings.
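
For example, a minimal TOML sketch (the namespaces are placeholders):

    change-stream-namespaces = ["mydb.orders", "mydb.customers"]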

The interface for golang plugins has changed due to the switch to the new driver. Previously the API exposed a Session field typed as a *mgo.Session. Now that has been replaced with a MongoClient field which has the type *mongo.Client.

See the MongoDB go driver docs for details on how to use this client.
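
A minimal sketch of what a version 6 plugin might look like with the new field; the database, collection, and field names here are hypothetical, and the Process signature and input.Document follow the plugin example shown in the comments below:

    package main

    import (
    	"context"

    	"github.com/rwynn/monstache/v6/monstachemap"
    	"go.mongodb.org/mongo-driver/bson"
    )

    // Process enriches each synced document using the official driver client
    // exposed on the plugin input (input.MongoClient is a *mongo.Client).
    func Process(input *monstachemap.ProcessPluginInput) error {
    	doc := input.Document
    	// Hypothetical lookup: resolve an author name from another collection.
    	col := input.MongoClient.Database("mydb").Collection("authors")
    	var author bson.M
    	if err := col.FindOne(context.Background(), bson.M{"_id": doc["authorId"]}).Decode(&author); err == nil {
    		doc["authorName"] = author["name"]
    	}
    	input.Document = doc
    	return nil
    }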

Comments
  • Direct Read won't pull entire collection

    Direct Read won't pull entire collection

    I have a collection that has 17 million documents in it. I have enabled direct read for that specific collection and it has gotten up to about 4.5 million documents synced to Elastic, but it seems to stop there and just do the change stream, with no forward motion on the remaining 13 million documents. Is there some type of limit that would prevent sending this entire collection? I've attached my configuration for review.

    config-prod.txt

  • Bug: Resuming on relate uses wrong index name

    Bug: Resuming on relate uses wrong index name

    Hi @rwynn, thanks for your work, it's really amazing.

    When resuming using the following config, events in the change stream get indexed to the wrong index name: {"index":{"_index":"5c976492cf09a2266a829a76_db.students","_id":"5c9e4143f37e264e131c84be","

    version: 4.16.0

    resume = true

    [[relate]]
    namespace = "db.students"
    with-namespace = "db.studentsView"
    src-field = "refId"
    match-field = "_id"

    [[mapping]]
    namespace = "db.studentsView"
    index = "studentsView"
    type = "studentsView"
    

    This only happens on resuming; it works fine when listening to new events in the change stream.

    Thanks!

  • Error performing direct read of collection: Cursor already in use

    Error performing direct read of collection: Cursor already in use

    Hi,

    I am using direct-read-ns to copy a view of documents from mongo to initialize an index in es. The view itself is a lookup between a collection and another view (itself made of a lookup between 2 other collections).

    I get the above error after a few documents have been successfully replicated (300 out of 9000).

    ERROR 2019/05/23 17:01:14 Error performing direct read of collection saq.v_products_quantities: (CursorInUse) cursor id 5339801235002722837 is already in use
    

    Before the cursor error I also get some documents rejected by ES due to a field formatting issue. I doubt it has anything to do with the cursor issue (I need to fix some documents, obviously), but I mention it just in case.

    ERROR 2019/05/23 17:01:18 Bulk response item: {"_index":"saq.v_products_quantities","_type":"_doc","_id":"13945062","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse field [sugar_content] of type [float] in document with id '13945062'","caused_by":{"reason":"For input string: \"\u003c1.2 g/l\"","type":"number_format_exception"}}}
    

    For now I have disabled change events to focus on the initial direct read sync. I am using monstache 5.0.5, mongo 4.0.9 in replica set mode with a single node, and elastic 6.7.2.

    Am I missing anything in my TOML settings to get the direct read right?

    Thanks


    My TOML looks like this:

    direct-read-namespaces = ["saq.v_products_quantities"] # read direct from the view of the collection to seed index
    
    disable-change-events = true
    
    gzip = true
    
    verbose = false
    
    change-stream-namespaces = ["saq.products","saq.quantities"] 
    
    [[mapping]]
    namespace = "saq.products"
    index = "saq.v_products_quantities"
    
    [[mapping]]
    namespace = "saq.v_products_quantities" 
    index = "saq.v_products_quantities"
    
    [[relate]]
    namespace = "saq.products"  
    with-namespace = "saq.v_products_quantities"
    keep-src = false 
    
    [[relate]]
    namespace = "saq.quantities" 
    with-namespace = "saq.products"
    src-field = "product_code_SAQ" 
    match-field = "saq_code" 
    keep-src = false
    
  • Frequently seeing "connection timed out" errors

    Frequently seeing "connection timed out" errors

    Hey Ryan,

    Seeing these error messages (below) every few minutes in the monstache logs.
    ERROR 2019/01/17 18:35:57 Unable to dial MongoDB: dial tcp <ip>:27017: connect: connection timed out
    ERROR 2019/01/17 18:35:57 Unable to dial MongoDB: dial tcp <ip>:27017: connect: connection timed out

    My connection settings are:

    [mongo-dial-settings]
    ssl = true
    read-timeout = 300
    write-timeout = 300

    [mongo-session-settings]
    socket-timeout = 300
    sync-timeout = 300

    Ingestion in production is also noticeably slower compared to our dev tests. In about ~14 hrs, only about 1.2 million documents have been transferred from mongo to ES, whereas in dev it only took a few hours to transfer 2.5+ million documents. The only difference between the tests was adding a namespace-regex in prod. Is regex filtering expensive?

    Thanks!

  • MongoDB View Replication

    MongoDB View Replication

    I have an interesting idea that I might try and work on... we are looking to make use of MongoDB views more in our applications, with the concept that our aggregate view constructs the document we wish to be present in Elasticsearch, thereby offloading a lot of work to the database and reducing the complexity of queries to ES.

    The issue I see is that the oplog will only detail changes to the MongoDB collections that comprise the view. The idea I am toying with is that, using a Monstache config setting, we could define an array of collections that make up the view, and then when one of those collections is detected as having an update in the oplog we go to the named MongoDB view. This would in effect allow for the replication of complex views built up out of aggregates, and could be quite a powerful concept for searching data in ES.

    I'll refresh myself with the Monstache code to see if this is viable, and then fork, but if you have any pointers/suggestions/ideas would love to hear them.
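
    For reference, the [[relate]] configuration used elsewhere on this page already comes close to this idea: a change detected in a base collection can be related to a view namespace, and monstache then re-reads the matching document from that view and indexes it. A hedged sketch with placeholder namespaces:

    [[relate]]
    namespace = "mydb.orders"          # base collection watched for changes
    with-namespace = "mydb.orderView"  # aggregate view that is re-read and indexed
    keep-src = false

    [[mapping]]
    namespace = "mydb.orderView"
    index = "orders"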

  • Partial update only

    Partial update only

    I have done another test: Mongo and ES with default values.

    I have inserted 79,913 documents in Mongo; ES shows 15,703.

    Monstache's log doesn't show any error; it is full of lines like the ones I posted below. It ran fine, then it just stopped and never resumed.

    I just restarted the Monstache container, not touching anything else, and it is now continuing the process.

    What kind of event could make it believe it is up to date where a restart would behave differently?

    TRACE 2018/11/15 04:33:15 POST /_bulk HTTP/1.1
    Host: elasticsearch:9200
    User-Agent: elastic/6.2.11 (linux-amd64)
    Content-Length: 353
    Accept: application/json
    Content-Type: application/x-ndjson
    Accept-Encoding: gzip

    {"index":{"_index":"test.pictures","_id":"6t8Y22YJibT2581XdMqHHeEtQ7rB-sdAY1eZyWNSpJk","_type":"pictureobject","version":6623940778571857932,"version_type":"external"}}
    {"AlbumId":"ba1219cf-a4ac-48ef-ba44-97ea92acd215","AlbumOffset":1,"CreatedOn":"2018-11-15T04:33:15.779Z","Height":1056,:1280}

    TRACE 2018/11/15 04:33:15 HTTP/1.1 200 OK
    Content-Type: application/json; charset=UTF-8

    {"took":160,"errors":false,"items":[{"index":{"_index":"test.pictures","_type":"pictureobject","_id":"UV5Va08voGCcigueWS33CAAOlVaPzccnGtQ16O748g0","_version":6623940778571857929,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2773,"_primary_term":1,"status":201}}]}

    TRACE 2018/11/15 04:33:15 HTTP/1.1 200 OK
    Content-Type: application/json; charset=UTF-8

    {"took":134,"errors":false,"items":[{"index":{"_index":"test.pictures","_type":"pictureobject","_id":"6t8Y22YJibT2581XdMqHHeEtQ7rB-sdAY1eZyWNSpJk","_version":6623940778571857932,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2886,"_primary_term":1,"status":201}}]}

    TRACE 2018/11/15 04:33:16 POST /_bulk HTTP/1.1
    Host: elasticsearch:9200
    User-Agent: elastic/6.2.11 (linux-amd64)
    Content-Length: 353
    Accept: application/json
    Content-Type: application/x-ndjson
    Accept-Encoding: gzip

    {"index":{"_index":"test.pictures","_id":"-6flUZrvARXzfwkpyZGuSZ3Jcjk+s7lIJUG86HucvaI","_type":"pictureobject","version":6623940778571857943,"version_type":"external"}}
    {"AlbumId":"bdabb811-e0a4-4683-be32-c5f9b0c1c550","AlbumOffset":1,"CreatedOn":"2018-11-15T04:33:15.951Z","Height":1707,:1280}

    TRACE 2018/11/15 04:33:16 HTTP/1.1 200 OK
    Content-Type: application/json; charset=UTF-8

    {"took":155,"errors":false,"items":[{"index":{"_index":"test.pictures","_type":"pictureobject","_id":"-6flUZrvARXzfwkpyZGuSZ3Jcjk+s7lIJUG86HucvaI","_version":6623940778571857943,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2873,"_primary_term":1,"status":201}}]}

    TRACE 2018/11/15 04:33:16 POST /_bulk HTTP/1.1
    Host: elasticsearch:9200
    User-Agent: elastic/6.2.11 (linux-amd64)
    Content-Length: 352
    Accept: application/json
    Content-Type: application/x-ndjson
    Accept-Encoding: gzip

    {"index":{"_index":"test.pictures","_id":"io0YMW+63RHRth+lpSS9ZuFnCyEZBbqhs6RS7o9emoA","_type":"pictureobject","version":6623940782866825221,"version_type":"external"}}
    {"AlbumId":"bdabb811-e0a4-4683-be32-c5f9b0c1c550","AlbumOffset":2,"CreatedOn":"2018-11-15T04:33:16.842Z","Height":936,:1280}

    TRACE 2018/11/15 04:33:17 HTTP/1.1 200 OK
    Content-Type: application/json; charset=UTF-8

    {"took":180,"errors":false,"items":[{"index":{"_index":"test.pictures","_type":"pictureobject","_id":"io0YMW+63RHRth+lpSS9ZuFnCyEZBbqhs6RS7o9emoA","_version":6623940782866825221,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2941,"_primary_term":1,"status":201}}]}

    TRACE 2018/11/15 04:33:17 POST /_bulk HTTP/1.1
    Host: elasticsearch:9200
    User-Agent: elastic/6.2.11 (linux-amd64)
    Content-Length: 353
    Accept: application/json
    Content-Type: application/x-ndjson
    Accept-Encoding: gzip

    {"index":{"_index":"test.pictures","_id":"rYE331Fy23wLP9PiwkYf86UZ1mQphIBC4wWdg-VqdsY","_type":"pictureobject","version":6623940787161792515,"version_type":"external"}}
    {"AlbumId":"b9a0522f-4a3e-4cbe-8c18-9a7259097ad5","AlbumOffset":9,"CreatedOn":"2018-11-15T04:33:17.355Z","Height":1690,:1280},

  • data transformation guidelines

    data transformation guidelines

    hello again,

    I have a unique situation with my transformation: each document has an array field, and since I am interested in visualisations across each object in the array, I would have to split the array into n objects.

    For example, using JavaScript notation:

    module.exports = function(doc) {
        doc.foo.forEach(function(s) {
            return s.info;
        });
    }

    So I have, say, 1 document with 10 objects in the foo array, and I need to split that single document into 10 documents. I know you can leverage Logstash filters to do that, but I am wondering if there is a way to do a 1-to-many transformation instead of only 1-to-1 transformations?

    Thanks, chakri

  • Sync stops working

    Sync stops working

    Hello, the sync stops working after a random time. Everything can work properly for a few days and then the sync of any changes stops working. Sometimes it happens several times a day. Sync can stop working for only a few collections, or for all of them at once. And Monstache begins to heavily load the CPU. After restarting Monstache everything works properly again. I found nothing in the logs, and I can't trace a pattern. Once it happened after removing a large number of documents at once (~10000), but maybe I'm wrong because I could not reproduce it. What could be the problem?

  • Filter oplog to only apply selected updates

    Filter oplog to only apply selected updates

    Not sure if this functionality already exists, but my use case is this: since writing to Elasticsearch isn't exactly fast, some data that is needed in real time lags behind. So I was thinking of running an instance of monstache that only reacts to the fields that are needed in real time and ignores the rest. Is that possible?
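
    One possible sketch, not an authoritative answer, using the change stream pipeline middleware shown in the Go plugin report further down this page: return a $match stage so only inserts and updates touching the fields you care about flow through. The price/stock field names are hypothetical, and deletes/replaces would need their own handling:

    module.exports = function(ns, changeStream) {
      if (changeStream) {
        return [
          {
            $match: {
              $or: [
                { operationType: "insert" },
                { "updateDescription.updatedFields.price": { $exists: true } },
                { "updateDescription.updatedFields.stock": { $exists: true } }
              ]
            }
          }
        ]
      }
      return []
    }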

  • exit-after-direct-read trigger

    exit-after-direct-read trigger

    Hi @rwynn ,

    More of a question than an issue: How does monstache trigger the exit for the exit-after-direct-read option when direct reading a collection? To my understanding, when it is direct reading a collection it will also sync any live changes. Does this mean that if inserts are frequently occurring for the mongo collection, then the exit will not trigger?

    Thanks!

  • direct-read-namespaces for 1.1 billion entries

    direct-read-namespaces for 1.1 billion entries

    Hi, I have a MongoDB database with 53 collections for a total of 1.1 billion documents.

    I need to insert these into elasticsearch, and I was hoping for some guidance on how to do this as quickly as possible.

    Both run, dockerized, on a bare-metal i7 with 24 GB of RAM. I set the mongodb wiredTigerCacheSizeGB to 10. I limited the ELK stack to 8 GB of RAM ("ES_JAVA_OPTS=-Xms8g -Xmx8g"). Additionally, queue_size is set to 200. CPU load averages are at 18.90 20.79 22.88. Memory is at 20/23 GB used. Unfortunately both the mongodb and ELK stack are on the same two ZVOL'ed SSDs.

    The logfile of monstache is still spammed with:

    ERROR 2019/03/22 14:56:44 Bulk response item: {"_index":"importpw.qm","_type":"_doc","_id":"5c8d630213ab3a58ec2d056d","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [1386143][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[importpw.qm][0]] containing [660] requests, target allocation id: MBIIfJ5aSOaCh1u4RiiTkQ, primary term: 1 on EsThreadPoolExecutor[name = fAhQBq0/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@4f2fe7b6[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 486895]]"}}
    

    The index refresh interval is still set to 1s, so I know that doesn't help, but are there more things I can do?

    Monstache settings are simply:

        environment:
          - MONSTACHE_MONGO_URL=127.0.0.1:27017
          - MONSTACHE_ES_URL=127.0.0.1:9200
          - MONSTACHE_DIRECT_READ_NS=fiftythree,collections,comma,separated,are,here
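
    For reference, the same setup expressed in TOML might look roughly like the sketch below. The namespaces are placeholders for the 53 collections, and elasticsearch-max-conns is a setting I recall from the monstache docs for limiting concurrent bulk connections (worth verifying against your version); fewer concurrent bulk connections can reduce 429 rejections from ES:

    mongo-url = "mongodb://127.0.0.1:27017"
    elasticsearch-urls = ["http://127.0.0.1:9200"]
    direct-read-namespaces = ["db.collection1", "db.collection2"]
    gzip = true
    elasticsearch-max-conns = 2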
    
  • Obfuscate the mongodb password in toml file

    Obfuscate the mongodb password in toml file

    Hello, can you please let me know if it's possible to obfuscate the mongodb password and elasticsearch-password?

    For instance, if I want to use a simple base64 encoding on a password and assign it to the URL, will it work? I tried the below and it didn't work.

    pass="VFEFWGF(#!"
    dec_pass=$(echo $pass | base64 --decode)

    connect to MongoDB using the following URL

    mongo-url = "mongodb://someuser:$dec_pass@localhost:40001"

    So please let me know if there is some way to use an obfuscated mongodb password and elasticsearch-password.

    Thanks, Subbu

  • Elasticsearch 8 support

    Elasticsearch 8 support

    Does Monstache support ES 8? The latest version promises a big bump in performance along with other cool new features. I'm just not exactly sure whether Monstache will keep working correctly with version 8.

    I also noticed that you are using the non-official Elasticsearch Go client library (https://github.com/olivere/elastic), which is already deprecated. Is there any chance you will migrate to the official Elasticsearch Go library?

  • Delete from specified index

    Delete from specified index

    In some cases docs in Elastic are not deleted and I get the following error: Failed to find unique document [theDocId] for deletion using index pattern *

    After enabling and looking at the trace logs, I can see that a doc with the same ID appears in two collections. The delete operation is ignored in this case since hits.total.value == 2; this is how the code is implemented.

    I would like to delete the doc from the index that matches the source collection namespace. For example, if a doc has been deleted from collection DB1.Collection1, I would like it to be searched for and deleted only in index db1.collection1.

    Is there a way to make the delete operation behave like that?

  • When a document is updated by monstache, does it pass through the ingest pipeline?

    When a document is updated by monstache, does it pass through the ingest pipeline?

    Hi, I'm adding some fields with some script processors in an ingest pipeline in Elasticsearch, and I was wondering if updated documents also pass through the pipeline; in other words, whether these fields will still exist in the document after it is updated.

    Thanks!

  • Go plugin

    Go plugin

    So I implemented a Go plugin to add an insertedTime field when a document is inserted; here is the plugin:

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/olivere/elastic/v7"
    	"github.com/rwynn/monstache/v6/monstachemap"
    )

    // Process queues a separate bulk index request that writes an
    // insertedTime field whenever monstache sees an insert operation.
    func Process(input *monstachemap.ProcessPluginInput) (err error) {
    	doc := input.Document
    	bulk := input.ElasticBulkProcessor
    	operation := input.Operation
    	if operation == "i" {
    		var newData newPostData
    		newData.InsertedTime = time.Now().UnixNano()
    		req := elastic.NewBulkIndexRequest().
    			Index("allposts").
    			Id(fmt.Sprintf("%v", doc["unique"])).
    			Doc(newData)
    		if _, err = req.Source(); err == nil {
    			bulk.Add(req)
    		} else {
    			return err
    		}
    	}
    	return
    }

    type newPostData struct {
    	InsertedTime interface{}
    }
    

    I added this script as well in the config.toml:

    module.exports = function (doc,ns, updateDesc) {
        var meta = { id: doc.unique };
        doc._meta_monstache = meta;
        
        
        return doc;
    }
    

    I also have this pipeline:

    module.exports = function(ns, changeStream) {
      if (changeStream) {
        return [
          {
            $project: {
              _id: 1,
              "fullDocument.unique": 1,
              "operationType": 1,
              "fullDocument.userTags": 1,
              "fullDocument.profile.photo": 1,
              "ns": 1
            }
          }
        ]
      } else {
        return []
      }
    }
    

    But I have a problem: in some docs it adds insertedTime to the doc, which is correct, but in some docs it removes everything and only inserts the insertedTime.

    Does someone know why?
