Zinc Search engine. A lightweight alternative to Elasticsearch that requires minimal resources, written in Go.

Zinc Search Engine

Zinc is a search engine that does full text indexing. It is a lightweight alternative to Elasticsearch and runs using a fraction of the resources. It uses bluge as the underlying indexing library.

It is very simple and easy to operate, as opposed to Elasticsearch, which requires a couple dozen knobs to understand and tune. You can get Zinc up and running in under two minutes.

It is a drop-in replacement for Elasticsearch if you are just ingesting data using APIs and searching using Kibana (Kibana is not supported with Zinc; Zinc provides its own UI).

Check the below video for a quick demo of Zinc.

Zinc YouTube

Playground Server

You can try ZincSearch without installing it, using the details below:

Server https://playground.dev.zincsearch.com
User ID admin
Password Complexpass#123

Note: Do not store sensitive data on this server, as it is available to everyone on the internet. Data on this server is also cleaned regularly.

Join the Slack channel

Slack

Why Zinc

While Elasticsearch is a very good product, it is complex, requires lots of resources, and is more than a decade old. I built Zinc to make it easier for folks to use full text search indexing without doing a lot of work.

Features:

  1. Provides full text indexing capability
  2. Single binary for installation and running. Binaries available under releases for multiple platforms.
  3. Web UI for querying data, written in Vue
  4. Compatibility with Elasticsearch APIs for ingestion of data (single record and bulk API); see the sketch after this list
  5. Out of the box authentication
  6. Schema-less: no need to define a schema upfront, and different documents in the same index can have different fields.
  7. Index storage in S3 and MinIO (experimental)
  8. Aggregation support
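
As a taste of the ES-compatible ingestion API, here is a minimal Go sketch of indexing a single record (the index name myindex and the local endpoint/credentials are assumptions; adjust them to your setup):

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    func main() {
        // Index one document into "myindex" via the ES-compatible _doc endpoint.
        doc := []byte(`{"title": "hello", "content": "zinc world"}`)
        req, err := http.NewRequest("POST", "http://localhost:4080/es/myindex/_doc", bytes.NewReader(doc))
        if err != nil {
            panic(err)
        }
        req.SetBasicAuth("admin", "Complexpass#123") // first-admin credentials: an assumption
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }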

Roadmap items:

  1. High Availability
  2. Distributed reads and writes
  3. Geospatial search
  4. Raise an issue if you are looking for something.

Screenshots

Search screen

Search screen 1. Search screen for games.

User management screen

Users screen

Getting started

Download / Installation / Run

Check the installation docs

Data ingestion

Single record

Check single record ingestion docs

Bulk ingestion

Check bulk ingestion docs
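
For a quick reference alongside the docs, here is a minimal bulk-ingestion sketch in Go (the /api/_bulk path, index name, and credentials are assumptions; verify them against the bulk ingestion docs). The bulk body is NDJSON: an action line followed by a document line, repeated.

    package main

    import (
        "fmt"
        "net/http"
        "strings"
    )

    func main() {
        // NDJSON bulk body: action line, then document line, repeated.
        payload := strings.Join([]string{
            `{"index": {"_index": "myindex"}}`,
            `{"title": "first doc"}`,
            `{"index": {"_index": "myindex"}}`,
            `{"title": "second doc"}`,
            "", // bulk bodies end with a trailing newline
        }, "\n")

        req, err := http.NewRequest("POST", "http://localhost:4080/api/_bulk", strings.NewReader(payload))
        if err != nil {
            panic(err)
        }
        req.SetBasicAuth("admin", "Complexpass#123") // credentials: an assumption
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }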

Fluent bit

Check fluent-bit ingestion docs

Syslog-ng

Check syslog-ng ingestion docs

API Reference

Check Zinc API docs

How to develop and contribute to Zinc

Check the contributing guide

Who uses Zinc (Known users)?

  1. Quadrantsec
  2. Accodeing to you

Please do raise a PR adding your details if you are using Zinc.

Comments
  • Zinc Search Console > Results table pane > only 20 records can be displayed as a maximum


    Hello, and thank you for the constantly evolving product; it is very useful for my projects. One question though: from version 0.1.3 onwards, I don't understand how to change the limit of 20 "records" in the ZincSearch web application. I only ever see 20 records. Thank you.
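
    If the limit only matters for retrieval (not the UI), a hedged workaround is to query the search API directly and ask for more rows. The endpoint, search_type, and max_results parameter below are assumptions based on Zinc's search API docs, so verify locally:

        package main

        import (
            "bytes"
            "fmt"
            "io"
            "net/http"
        )

        func main() {
            // Ask the search API for more than the UI's default 20 rows.
            // "max_results" and "matchall" are assumptions; check the API docs.
            q := []byte(`{"search_type": "matchall", "from": 0, "max_results": 100}`)
            req, err := http.NewRequest("POST", "http://localhost:4080/api/myindex/_search", bytes.NewReader(q))
            if err != nil {
                panic(err)
            }
            req.SetBasicAuth("admin", "Complexpass#123")
            req.Header.Set("Content-Type", "application/json")
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()
            body, _ := io.ReadAll(resp.Body)
            fmt.Println(resp.Status, len(body))
        }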

  • Log Ingestion via Filebeat Fails


    Community Note

    • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
    • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
    • If you are interested in working on this issue or have submitted a pull request, please leave a comment

    Tell us about your request What do you want to see in Zinc? Log ingestion via Filebeat (see #90)

    Which service(s) is this related to? This could be GUI or API. Log ingestion/index creation

    Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard or not doable? I am unable to ingest data from Filebeat into Zinc. Specifically, I am trying to ingest syslog and auth.log. Using a modified version of the Filebeat config example, I am getting the below error:

    Mar 27 18:15:17 snail filebeat[19654]: {"log.level":"error","@timestamp":"2022-03-27T18:15:17.202Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(elasticsearch(http://127.0.0.1:4080/es/)): Connection marked as failed because the onConnect callback failed: error loading template: failed to load template: couldn't load template: 400 Bad Request: {\"error\":\"error_type: parsing_exception, reason: [mappings] properties [type] should be exists\"}. Response body: {\"error\":\"error_type: parsing_exception, reason: [mappings] properties [type] should be exists\"}","service.name":"filebeat","ecs.version":"1.6.0"}
    

    I have attempted to reconcile this by creating an index named "nginx-log" (note, I've tried a custom name and nothing changed) and setting the following in /etc/filebeat/filebeat.yml:

    setup.template.name: "nginx-log"
    setup.template.pattern: "nginx-log"
    

    The same error persisted. According to Elasticsearch's documentation, I should be able to manually map input fields. I've added the following to my filebeat.yml:

    setup.template.append_fields:
    - name: content
      type: text
    

    Here are all my indices:

    ~$ curl "http://localhost:4080/api/index" -u "admin:Complexpass123#"
    
    {"article":{"name":"article","mappings":{"content":"text","publish_date":"time","status":"keyword","title":"text","user":"text"}},"nginx-log":{"name":"nginx-log","mappings":{"content":"text"}},"system-log":{"name":"system-log","mappings":{"content":"text"}}}
    

    After adding the above append_fields setting, the following error occurs:

    Mar 27 19:40:19 snail filebeat[20382]: {"log.level":"error","@timestamp":"2022-03-27T19:40:19.467Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(elasticsearch(http://127.0.0.1:4080/es/)): Connection marked as failed because the onConnect callback failed: error loading template: failed to load template: couldn't load template: 400 Bad Request: {\"error\":\"error_type: parsing_exception, reason: [template] unknown option [data_stream]\"}. Response body: {\"error\":\"error_type: parsing_exception, reason: [template] unknown option [data_stream]\"}","service.name":"filebeat","ecs.version":"1.6.0"}
    

    Are you currently working around this issue? How are you currently solving this problem? No known workaround.

    Additional context Anything else we should know? I have not posted to the Filebeat forum as this seems like an error produced by Zinc. If this is user error related to Filebeat, please say so and I will inquire at the ES/Filebeat forum.

    I would also like to note I needed to add the following extra settings in my filebeat.yml in order to avoid errors:

    output.elasticsearch.allow_older_versions: true
    
    Mar 27 15:03:36 snail filebeat[17143]: {"log.level":"error","@timestamp":"2022-03-27T15:03:36.289Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(elasticsearch(http://localhost:4080/es/)): Connection marked as failed because the onConnect callback failed: Elasticsearch is too old. Please upgrade the instance. If you would like to connect to older instances set output.elasticsearch.allow_older_versions to true. ES=0.1.8, Beat=8.1.1.","service.name":"filebeat","ecs.version":"1.6.0"}
    

    Attachments If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)

  • [question] Is this CPU usage normal?


    Hi,

    I'm comparing CPU/RAM usage against ES, and set up fluentd logging with 2 destinations: 1. ES, 2. Zinc (same VM, same CPU, same memory, same storage). Is it 'normal' if Zinc eats 50-70% CPU, while ES is under 5% on a 10-minute average?

    I'm just collecting logs with fluentd from 15 nodes; it's not big traffic (~1GB/day) and I don't have complex queries, so it's weird to see such high CPU usage.

    I tried Zinc via Docker and via the binary too, but got the same results on CPU usage.

  • OOM Killed using bulk insertion


    I tried to insert data (from a small 37M file) but Zinc got killed by the OOM killer. It ate more than 8GB before being killed by the kernel.

    oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/[email protected],task=zinc,pid=129244,uid=1000
    [13291.887950] Out of memory: Killed process 129244 (zinc) total-vm:12128240kB, anon-rss:11309292kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:22328kB oom_score_adj:0
    [13292.196257] oom_reaper: reaped process 129244 (zinc), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
    
    [GIN-debug] Listening and serving HTTP on :4080
    {"level":"debug","time":"2021-12-15T18:49:58+01:00","message":"auth: success"}
    Killed
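
    A generic mitigation (not a confirmed fix for this issue) is to stream the file and send it in smaller bulk batches rather than one request. The /api/_bulk path and credentials below are assumptions:

        package main

        import (
            "bufio"
            "bytes"
            "net/http"
            "os"
        )

        // sendBatch posts one NDJSON chunk to the bulk endpoint
        // (path and credentials are assumptions).
        func sendBatch(buf *bytes.Buffer) error {
            req, err := http.NewRequest("POST", "http://localhost:4080/api/_bulk", bytes.NewReader(buf.Bytes()))
            if err != nil {
                return err
            }
            req.SetBasicAuth("admin", "Complexpass#123")
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                return err
            }
            resp.Body.Close()
            buf.Reset()
            return nil
        }

        func main() {
            f, err := os.Open("data.ndjson")
            if err != nil {
                panic(err)
            }
            defer f.Close()

            var buf bytes.Buffer
            lines := 0
            sc := bufio.NewScanner(f)
            for sc.Scan() {
                buf.Write(sc.Bytes())
                buf.WriteByte('\n')
                lines++
                if lines%2000 == 0 { // flush every 1000 action+document pairs
                    if err := sendBatch(&buf); err != nil {
                        panic(err)
                    }
                }
            }
            if err := sc.Err(); err != nil {
                panic(err)
            }
            if buf.Len() > 0 { // send the final partial batch
                if err := sendBatch(&buf); err != nil {
                    panic(err)
                }
            }
        }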
    
  • Set @timestamp on bulk import


    I may be stupid, but I cannot get this to work. So please help me out with how I can actually set the @timestamp upon bulk import via the API.

    I found the code where you actually do t, err := time.Parse(time.RFC3339, v.(string)), but even if I provide a properly formatted timestamp in a field named @timestamp at any level, it is always ignored.

    So I must do something wrong. I found no forum, thus I post this Issue …
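
    For reference, a minimal sketch of the parse the comment cites, assuming the field arrives as a JSON-decoded string (the timestamp value is made up):

        package main

        import (
            "fmt"
            "time"
        )

        func main() {
            // The cited code parses @timestamp with time.RFC3339.
            var v interface{} = "2021-12-25T15:04:05Z" // example value, made up
            t, err := time.Parse(time.RFC3339, v.(string))
            if err != nil {
                panic(err)
            }
            fmt.Println(t.Unix())
        }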

  •  Observation - Receive 500 internal error on inserting key with null value


    Maybe this is more of an observation, but still: when a particular value is null, I receive a 500. I do not know what the correct behaviour would be; ESS survives exactly this scenario.

    This happens when a null value is assigned to a particular key on indexing, as in:

    185	3.377123217	127.0.0.1	127.0.0.1	HTTP	505	POST /es/product_v2/_doc HTTP/1.1  (application/json)
    
    {"product_id":"eabf69fbe53f57a4ef6fabd6af8430ba","product_type":"car","car_manufacturer":null,"car_model":"\"VOLUNTEER 4X2\"","car_year":"2011","car_gear_type":null,"weight":0}
    

    I received back: 500 Internal error with following output from zinc:

    2021/12/28 18:00:38 [Recovery] 2021/12/28 - 18:00:38 panic recovered:
    POST /es/product_v2/_doc HTTP/1.1
    Host: localhost:4080
    Accept: application/json
    Authorization: *
    Content-Length: 176
    Content-Type: application/json
    User-Agent: elasticsearch-php/6.7.1 (Linux 5.11.0-43-generic, PHP 7.4.3)
    
    interface conversion: interface {} is string, not float64
    /usr/local/go/src/runtime/iface.go:261 (0x40a374)
    /Users/prabhat/projects/zinc/zinc/pkg/core/Index.go:77 (0x957df4)
    /Users/prabhat/projects/zinc/zinc/pkg/core/UpdateDocument.go:5 (0x95a1a6)
    /Users/prabhat/projects/zinc/zinc/pkg/handlers/UpdateDocument.go:47 (0x96de96)
    /Users/prabhat/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x967c27)
    /Users/prabhat/projects/zinc/zinc/pkg/auth/AuthMiddleware.go:21 (0x967a6e)
    /Users/prabhat/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x891fa1)
    /Users/prabhat/go/pkg/mod/github.com/gin-gonic/[email protected]/recovery.go:99 (0x891f8c)
    /Users/prabhat/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x890e5d)
    /Users/prabhat/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:489 (0x890ae5)
    /Users/prabhat/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:445 (0x890644)
    /usr/local/go/src/net/http/server.go:2879 (0x668eda)
    /usr/local/go/src/net/http/server.go:1930 (0x664587)
    /usr/local/go/src/runtime/asm_amd64.s:1581 (0x463580)
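
    The panic is an unchecked interface{} type assertion. A defensive type switch along these lines (a generic sketch, not Zinc's actual fix) avoids it:

        package main

        import "fmt"

        // asFloat defensively converts a decoded JSON value; an unchecked
        // v.(float64) assertion panics when the value is a string or nil.
        func asFloat(v interface{}) (float64, bool) {
            switch x := v.(type) {
            case float64:
                return x, true
            default: // covers string, nil, and anything else
                return 0, false
            }
        }

        func main() {
            fmt.Println(asFloat(3.14)) // 3.14 true
            fmt.Println(asFloat(nil))  // 0 false, instead of a panic
        }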
    
  • Better error messaging - when no write access to data folder


    Community Note

    • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
    • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
    • If you are interested in working on this issue or have submitted a pull request, please leave a comment

    Attempting to run ZincSearch through Docker on my M1 laptop results in:

    $ docker run --rm -e ZINC_DATA_PATH="/data" -v $PWD/data:/data -p 4080:4080 -e ZINC_FIRST_ADMIN_USER=admin -e ZINC_FIRST_ADMIN_PASSWORD=admin --name zinc public.ecr.aws/zinclabs/zinc:latest
    {"level":"fatal","error":"while opening memtables error: while opening fid: 1 error: while updating skiplist error: mremap size mismatch: requested: 20 got: 134217728","time":"2022-06-04T21:35:59Z","message":"open badger db for metadata failed"}
    

    Which appears to be a bug in Badger.

  • Web UI return 500 error


    Tell us about your request: search and return correct data.

    Which service(s) is this related to? API

    Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard or not doable? When I use the web UI and search, sometimes the API returns a 500 error.

    Attachments: you can use this temp account to log in.

  • 0.1.9 & 0.2.0 login page is empty


    Tell us about your request login ZINC

    Which service(s) is this related to? http://localhost:4080/ui/

    Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard or not doable? Version: zinc_0.1.9_Windows_x86_64.tar.gz and zinc_0.2.0_Windows_x86_64.tar.gz. The login page is empty, and there are 2 JS errors.

    Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of "text/plain". Strict MIME type checking is enforced for module scripts per HTML spec.
    /ui/assets/index.df1980fd.js
    :4080/ui/assets/vendor.cf9ece94.js
    
  • Starting this on windows not working


    Tried several options, but can't get it up and running from a command shell.

        $ FIRST_ADMIN_USER=admin FIRST_ADMIN_PASSWORD=Complexpass 123 zinc.exe
        'FIRST_ADMIN_USER' is not recognized as an internal or external command, operable program or batch file.

    other try:

        zinc.exe FIRST_ADMIN_USER=admin FIRST_ADMIN_PASSWORD=QNHadmin
        {"level":"debug","time":"2021-12-06T16:19:12+01:00","message":"Loading indexes..."}
        {"level":"debug","time":"2021-12-06T16:19:12+01:00","message":"Loading system indexes..."}
        {"level":"debug","time":"2021-12-06T16:19:12+01:00","message":"Index loaded: _users"}
        {"level":"debug","time":"2021-12-06T16:19:12+01:00","message":"Index loaded: _index_mapping"}
        2021/12/06 16:19:12 FIRST_ADMIN_USER and FIRST_ADMIN_PASSWORD must be set on first start. You should also change the credentials after first login.

  • filebeat configuration error


    I followed the Filebeat configuration in the documentation (image attached).

    Then I ran filebeat test output (image attached).

    Then I ran filebeat -e -c /etc/filebeat/filebeat.yml (image attached).

    What can I do?

  • WIP Write Ahead Log


    I've built a seemingly working and effective WAL implementation (I'm getting up to 10x speed), but there are still issues left to resolve/discuss that I'll take up in the coming days.

    #165
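
    For context, the core of any write-ahead log is an append-then-fsync path. The following is a generic illustrative sketch, not the code in this PR:

        package main

        import (
            "encoding/binary"
            "os"
        )

        // walAppend writes one length-prefixed record and fsyncs so the
        // write is durable before it is acknowledged.
        func walAppend(f *os.File, payload []byte) error {
            var hdr [4]byte
            binary.LittleEndian.PutUint32(hdr[:], uint32(len(payload)))
            if _, err := f.Write(hdr[:]); err != nil {
                return err
            }
            if _, err := f.Write(payload); err != nil {
                return err
            }
            return f.Sync()
        }

        func main() {
            f, err := os.OpenFile("example.wal", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
            if err != nil {
                panic(err)
            }
            defer f.Close()
            if err := walAppend(f, []byte(`{"index":"myindex","doc":{"a":1}}`)); err != nil {
                panic(err)
            }
        }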

  • error opening index: error getting exclusive access to diretory: unable to obtain exclusive access: resource temporarily unavailable


    Tell us about your request: I am testing document ingestion.

    Which service(s) is this related to? This could be GUI or API

    Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard or not doable? I get this error: error opening index: error getting exclusive access to diretory: unable to obtain exclusive access: resource temporarily unavailable

    I don't know how to solve it. Also, is there a good way to create a lot of test data?
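
    On the test-data question, one hedged approach is to generate a bulk NDJSON file and feed it to the bulk API (the index and field names here are made up):

        package main

        import (
            "bufio"
            "fmt"
            "os"
        )

        func main() {
            f, err := os.Create("testdata.ndjson")
            if err != nil {
                panic(err)
            }
            defer f.Close()

            w := bufio.NewWriter(f)
            defer w.Flush()
            // Each record is an action line plus a document line.
            for i := 0; i < 100000; i++ {
                fmt.Fprintln(w, `{"index": {"_index": "myindex"}}`)
                fmt.Fprintf(w, `{"title": "doc %d", "body": "synthetic test record %d"}`+"\n", i, i)
            }
        }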

  • Added Test Documentation


    Added test workflows and a unit test taxonomy to the documentation. README.md contains the overall index, and the test_docs folder contains the detailed categories that the main index refers to.

  • Extend ES compatibility


    This pull request further extends the "compatibility" with Elasticsearch.

    The changes were developed to use Zinc instead of ElasticSearch with parsedmarc.

  • New Query language


    A better query language that supports more English-like syntax, e.g. sport=Hockey and year>25.

    Possible technologies for building this could be yacc, ANTLR, and participle.
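
    To give a feel for the target syntax, here is a tiny hand-rolled sketch (deliberately not using yacc/ANTLR/participle, purely illustrative) that splits a query like sport=Hockey and year>25 into field/op/value triples:

        package main

        import (
            "fmt"
            "strings"
        )

        // Cond is one comparison in the illustrative query language.
        type Cond struct{ Field, Op, Value string }

        // parse splits "sport=Hockey and year>25" on "and", then on the operator.
        func parse(q string) []Cond {
            var conds []Cond
            for _, part := range strings.Split(q, " and ") {
                // Check two-character operators before their one-character prefixes.
                for _, op := range []string{">=", "<=", "=", ">", "<"} {
                    if i := strings.Index(part, op); i >= 0 {
                        conds = append(conds, Cond{
                            Field: strings.TrimSpace(part[:i]),
                            Op:    op,
                            Value: strings.TrimSpace(part[i+len(op):]),
                        })
                        break
                    }
                }
            }
            return conds
        }

        func main() {
            fmt.Printf("%+v\n", parse("sport=Hockey and year>25"))
        }

    A real implementation would also handle quoting, boolean precedence, and "or", which is where yacc/ANTLR/participle come in.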

community search engine

Lieu: an alternative search engine. Created in response to the environs of apathy concerning the use of hypertext search and discovery.

Jul 1, 2022
Weaviate is a cloud-native, modular, real-time vector search engine

Weaviate is a cloud-native, real-time vector search engine (aka neural search engine or deep search engine). There are modules for specific use cases such as semantic search, plugins to integrate Weaviate in any application of your choice, and a console to visualize your data.

Jul 4, 2022
Self hosted search engine for data leaks and password dumps

Self hosted search engine for data leaks and password dumps. Upload and parse multiple files, then quickly search through all stored items with the power of Elasticsearch.

Aug 2, 2021
A search engine for XKCD

xkcd_searchtool: a search engine for XKCD. What is it? This tool can crawl the comic transcripts from XKCD.com; users can search for a comic using keywords.

Sep 29, 2021
Polarite is a Pastebin alternative made for simplicity written in Go.

Polarite is a Pastebin alternative made for simplicity, written in Go. Usage: Web Interface: visit https://polarite.teknologiumum.com. API: send a POST request…

Feb 14, 2022
A parser for the technological log, based on a technology stack of Golang goroutines + Redis + Elasticsearch.

go-techLog1C: a parser for the technological log, based on a technology stack of Golang goroutines + Redis + Elasticsearch. The stack is cross-platform…

Mar 24, 2022
Vuls Beater for Elasticsearch - connecting vuls

vulsbeat: Welcome to vulsbeat, please give it a star. This software allows vulnerability scan results from vuls to be imported into the Elastic Stack.

Jan 25, 2022
Elastic is an Elasticsearch client for the Go programming language.


Jun 26, 2022
An Elasticsearch Migration Tool.

An Elasticsearch migration tool for cross-version data migration. Dec 3rd, 2020: [EN] Cross-version Elasticsearch data migration with ESM…

Jun 24, 2022
Quickly collect data from thousands of exposed Elasticsearch or Kibana instances and generate a report to be reviewed.

elasticpwn quickly collects data from exposed Elasticsearch or Kibana instances and generates a report to be reviewed. It mainly aims for sensitive data…

Jun 27, 2022
Discobeat is an elastic beat that publishes messages from Discord to elasticsearch

Discobeat is an elastic beat that publishes messages from Discord to Elasticsearch. Ensure that this folder is at the following location: ${G…

Apr 30, 2022
jacobin - A more than minimal JVM written in Go and capable of running Java 11 bytecode.

This overview gives the background on this project, including its aspirations and the features that it supports. The remaining pages discuss the basics of JVM operation and, where applicable, how Jacobin implements the various steps, noting any items that would be of particular interest to JVM cognoscenti.

Jun 15, 2022
Phalanx is a cloud-native full-text search and indexing server written in Go built on top of Bluge that provides endpoints through gRPC and traditional RESTful API.


Jun 30, 2022
Optimistic rollup tech, minimal and generic.

Opti: optimistic rollup tech, minimal and generic. VERY experimental, just exploratory code; the question is: 1:1 EVM rollup with interactive fraud proof p…

Oct 29, 2021
Minimal example app of hexagonal architecture in go

Hexagonal Architecture: minimal example of hexagonal architecture (ports & adapters) in Go. Resources…

Nov 5, 2021
gosignal is expected to be used in minimal window manager configurations

gosignal is expected to be used in minimal window manager configurations. It provides a simple battery monitor, which notifies of battery events. It has a config file where you can configure the notification messages given.

Mar 21, 2022
Gec is a minimal stack-based programming language


May 22, 2022
create a provider to get atlassian resources

Terraform Provider Scaffolding: this repository is a template for a Terraform provider. It is intended as a starting point for creating Terraform providers…

Dec 31, 2021
The gofinder program is an acme user interface to search through Go projects.


Jun 14, 2021