Zinc Search engine. A lightweight alternative to Elasticsearch that requires minimal resources, written in Go.

Zinc Search Engine

Zinc is a search engine that does full text indexing. It is a lightweight alternative to Elasticsearch and runs using a fraction of the resources. It uses bluge as the underlying indexing library.

It is very simple and easy to operate, as opposed to Elasticsearch, which requires understanding and tuning a couple dozen knobs. You can get Zinc up and running in under 2 minutes.

It is a drop-in replacement for Elasticsearch if you are just ingesting data using APIs and searching using Kibana (Kibana is not supported with Zinc; Zinc provides its own UI).

Check the below video for a quick demo of Zinc.

Zinc Youtube

Playground Server

You can try ZincSearch without installing it, using the details below:

Server https://playground.dev.zincsearch.com
User ID admin
Password Complexpass#123

Note: Do not store sensitive data on this server, as it is available to everyone on the internet. Data on this server is also cleaned regularly.
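
For a quick check, you can list the indexes on the playground server with a single curl call (the same /api/index endpoint used in the issue examples further below):

    curl -u admin:Complexpass#123 https://playground.dev.zincsearch.com/api/index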

Join the Slack channel

Slack

Why Zinc

While Elasticsearch is a very good product, it is complex, requires lots of resources, and is more than a decade old. I built Zinc to make it easier for folks to use full text search indexing without doing a lot of work.

Features:

  1. Provides full text indexing capability
  2. Single binary for installation and running. Binaries available under releases for multiple platforms.
  3. Web UI, written in Vue, for querying data
  4. Compatibility with Elasticsearch APIs for ingestion of data (single record and bulk API)
  5. Out of the box authentication
  6. Schema less - No need to define schema upfront and different documents in the same index can have different fields.
  7. Index storage in S3 and MinIO (experimental)
  8. Aggregation support

Roadmap items:

  1. High Availability
  2. Distributed reads and writes
  3. Geospatial search
  4. Raise an issue if you are looking for something.

Screenshots

Search screen

Search screen 1 Search screen for games

User management screen

Users screen

Getting started

Download / Installation / Run

Check installation docs

Data ingestion

Single record

Check single record ingestion docs
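
For illustration, a single document can be pushed with curl against a local instance (a hedged sketch assuming the default port and credentials; the exact route may differ between versions, so treat the ingestion docs as authoritative). The index, games3 here, is just an example name and is created on the fly since Zinc is schema-less:

    curl -u admin:Complexpass#123 -X PUT http://localhost:4080/api/games3/_doc \
      -H 'Content-Type: application/json' \
      -d '{"title": "Prince of Persia", "year": 1989}'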

Bulk ingestion

Check bulk ingestion docs
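
A hedged sketch of bulk ingestion through the Elasticsearch-compatible _bulk endpoint (newline-delimited JSON, one action line followed by one document line; see the bulk docs for the authoritative format):

    # bulk.ndjson (example file):
    #   { "index": { "_index": "games3" } }
    #   { "title": "Prince of Persia", "year": 1989 }
    curl -u admin:Complexpass#123 -X POST http://localhost:4080/es/_bulk \
      -H 'Content-Type: application/x-ndjson' \
      --data-binary @bulk.ndjson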

Fluent bit

Check fluent-bit ingestion docs
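
A minimal fluent-bit output sketch, assuming the standard es output plugin pointed at Zinc's Elasticsearch-compatible endpoint (host, credentials and index name are placeholders; adjust to your setup):

    [OUTPUT]
        Name        es
        Match       *
        Host        localhost
        Port        4080
        Path        /es
        Index       fluentbit
        HTTP_User   admin
        HTTP_Passwd Complexpass#123
        tls         Off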

Syslog-ng

Check syslog-ng ingestion docs
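
A hedged syslog-ng sketch, assuming the elasticsearch-http() destination available in recent syslog-ng releases, pointed at Zinc's ES-compatible bulk endpoint (the source name is whatever source you already have configured):

    destination d_zinc {
        elasticsearch-http(
            index("syslog-ng")
            type("")
            url("http://localhost:4080/es/_bulk")
            user("admin")
            password("Complexpass#123")
        );
    };
    # s_local is a placeholder for your existing log source
    log { source(s_local); destination(d_zinc); };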

API Reference

Check Zinc API docs

How to develop and contribute to Zinc

Check the contributing guide

Who uses Zinc (Known users)?

  1. Quadrantsec
  2. Accodeing to you

Please do raise a PR adding your details if you are using Zinc.

Comments
  • Zinc Search Console > Results table pane > only 20 records can be displayed as a maximum

    Hello, and thank you in the meantime for a product that keeps evolving; it is very useful for my projects. One question though: from version 0.1.3 onwards, I don't understand how the limit of 20 "records" in the "ZincSearch" web application is controlled. I only ever see 20 records. Thank you.
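
    For reference, on the API side the page size can be requested explicitly (a hedged sketch assuming the search API accepts from/max_results; the web console may still apply its own cap, and "stdout" is just a placeholder index name):

    curl -u admin:Complexpass#123 -X POST http://localhost:4080/api/stdout/_search \
      -H 'Content-Type: application/json' \
      -d '{"search_type": "matchall", "from": 0, "max_results": 100}'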

  • Log Ingestion via Filebeat Fails

    Community Note

    • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
    • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
    • If you are interested in working on this issue or have submitted a pull request, please leave a comment

    Tell us about your request What do you want to see in Zinc? Log ingestion via Filebeat (see #90)

    Which service(s) is this related to? This could be GUI or API. Log ingestion/index creation

    Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard or not doable? I am unable to ingest data from Filebeat into Zinc. Specifically, I am trying to ingest syslog and auth.log. Using a modified version of the Filebeat config example, I am getting the below error:

    Mar 27 18:15:17 snail filebeat[19654]: {"log.level":"error","@timestamp":"2022-03-27T18:15:17.202Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(elasticsearch(http://127.0.0.1:4080/es/)): Connection marked as failed because the onConnect callback failed: error loading template: failed to load template: couldn't load template: 400 Bad Request: {\"error\":\"error_type: parsing_exception, reason: [mappings] properties [type] should be exists\"}. Response body: {\"error\":\"error_type: parsing_exception, reason: [mappings] properties [type] should be exists\"}","service.name":"filebeat","ecs.version":"1.6.0"}
    

    I have attempted to reconcile this by creating an index named "nginx-log" (note, I've tried a custom name and nothing changed) and setting the following in /etc/filebeat/filebeat.yml:

    setup.template.name: "nginx-log"
    setup.template.pattern: "nginx-log"
    

    The same error persisted. According to Elasticsearch's documentation, I should be able to manually map input fields. I've added the following to my filebeat.yml:

    setup.template.append_fields:
    - name: content
      type: text
    

    Here are all my indices:

    ~$ curl "http://localhost:4080/api/index" -u "admin:Complexpass123#"
    
    {"article":{"name":"article","mappings":{"content":"text","publish_date":"time","status":"keyword","title":"text","user":"text"}},"nginx-log":{"name":"nginx-log","mappings":{"content":"text"}},"system-log":{"name":"system-log","mappings":{"content":"text"}}}
    

    After adding the above append_fields setting, the following error occurs:

    Mar 27 19:40:19 snail filebeat[20382]: {"log.level":"error","@timestamp":"2022-03-27T19:40:19.467Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(elasticsearch(http://127.0.0.1:4080/es/)): Connection marked as failed because the onConnect callback failed: error loading template: failed to load template: couldn't load template: 400 Bad Request: {\"error\":\"error_type: parsing_exception, reason: [template] unknown option [data_stream]\"}. Response body: {\"error\":\"error_type: parsing_exception, reason: [template] unknown option [data_stream]\"}","service.name":"filebeat","ecs.version":"1.6.0"}
    

    Are you currently working around this issue? How are you currently solving this problem? No known workaround.

    Additional context Anything else we should know? I have not posted to the Filebeat forum as this seems like an error produced by Zinc. If this is user error related to Filebeat, please say so and I will inquire at the ES/Filebeat forum.

    I would also like to note I needed to add the following extra settings in my filebeat.yml in order to avoid errors:

    output.elasticsearch.allow_older_versions: true
    
    Mar 27 15:03:36 snail filebeat[17143]: {"log.level":"error","@timestamp":"2022-03-27T15:03:36.289Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(elasticsearch(http://localhost:4080/es/)): Connection marked as failed because the onConnect callback failed: Elasticsearch is too old. Please upgrade the instance. If you would like to connect to older instances set output.elasticsearch.allow_older_versions to true. ES=0.1.8, Beat=8.1.1.","service.name":"filebeat","ecs.version":"1.6.0"}
    

    Attachments If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)
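
    A commonly suggested direction for this class of error (hedged, not verified here) is to stop Filebeat from loading its index template and ILM policy into Zinc, since the errors above come from Zinc rejecting those template options, e.g. in filebeat.yml:

    setup.template.enabled: false
    setup.ilm.enabled: false
    output.elasticsearch.allow_older_versions: true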

  • [question] Is this CPU usage normal?

    Hi,

    I'm comparing CPU/RAM usage against ES, and set up fluentd logging with 2 destinations: 1. ES 2. Zinc (same VM, same CPU, same memory, same storage). Is it 'normal' if Zinc eats 50-70% CPU, while ES is under 5% on a 10-minute average?

    I'm just collecting logs with fluentd from 15 nodes, not big traffic (~1GB/day), and I don't have complex queries, so it's weird to see this high CPU usage.

    I tried Zinc via Docker and via the binary too, but got the same results on CPU usage.

  • Template endpoints won't work

    What were you trying to achieve? I am trying to call http://localhost:4080/es/_index_template to create an index template in Zinc, but all I get is: {"error":"template.name should be not empty"}

    P.S.: I copied the example on this page
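
    For context, the Elasticsearch-style index template API puts the template name in the URL path, which may be what the "template.name should be not empty" error is complaining about; a hedged sketch of the standard call shape ("my-template" and the mapping are placeholders):

    curl -u admin:Complexpass#123 -X PUT http://localhost:4080/es/_index_template/my-template \
      -H 'Content-Type: application/json' \
      -d '{"index_patterns": ["log-*"], "template": {"mappings": {"properties": {"content": {"type": "text"}}}}}'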

  • Swagger API Docs - Implement the rest of the Endpoints

    Is your feature request related to a problem? Please describe. This is a continuation ticket for https://github.com/zinclabs/zinc/pull/178. Now that we have Swagger support in zinc, we would like to provide the rest of the API Docs.

    Describe the solution you'd like Provide the rest of the Swagger API Documentation by annotating the API Endpoints using https://github.com/swaggo/gin-swagger Comments.

    Additional context Use following checklist to keep track of the Current and Missing Docs:

    Auth

    • [x] List
    • [x] Delete
    • [x] CreateUpdate
    • [x] Login

    Index

    • [x] Index
    • [x] List
    • [x] Create
    • [x] Delete

    Index settings

    • [x] Get Mapping
    • [x] Update Mapping
    • [x] Get Settings
    • [x] Update Settings

    Search

    • [x] Search
    • [x] Bulk Insert
    • [x] Update Document
    • [x] Delete Document
    • [x] Create Document

    ES

    • [ ] License
    • [ ] _xpack
    • [x] _search
    • [x] _msearch
    • [x] :target/_search
    • [x] :target/_msearch
    • [x] _index_template
    • [x] _index_template/:target
    • [x] :target/_mapping
    • [x] :target/_settings
    • [x] _analyze
    • [x] :target/_analyze
    • [x] _bulk
    • [x] :target/_bulk
    • [x] :target/_doc
    • [x] :target/_doc/:id
  • OOM Killed using bulk insertion

    I tried to insert data (from a small file of 37M) but Zinc got killed by the OOM killer. It ate more than 8GB before being killed by the kernel.

    oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/[email protected],task=zinc,pid=129244,uid=1000
    [13291.887950] Out of memory: Killed process 129244 (zinc) total-vm:12128240kB, anon-rss:11309292kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:22328kB oom_score_adj:0
    [13292.196257] oom_reaper: reaped process 129244 (zinc), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
    
    [GIN-debug] Listening and serving HTTP on :4080
    {"level":"debug","time":"2021-12-15T18:49:58+01:00","message":"auth: success"}
    Killed
    
  • ZINC_DATA_PATH is not writable

    What were you trying to achieve? Running the docker image

    What action did you take? Running the stock example with a mapped volume path returns an error. The same is true if I don't specify any volume mount.

    docker run -v /root/docker/zincsearch:/data -e DATA_PATH="/data" -p 4080:4080 \
        -e ZINC_FIRST_ADMIN_USER=admin -e ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123 \
        --name zinc public.ecr.aws/h9e2j3o7/zinc:latest
    

    What action/response/output did you expect? Running app

    What actually happened?

    {"level":"debug","time":"2022-09-06T14:44:50Z","message":"open .env: no such file or directory"}
    {"level":"fatal","error":"mkdir data/_test_: permission denied","time":"2022-09-06T14:44:50Z","message":"ZINC_DATA_PATH is not writable"}
    
    

    How to reproduce the issue? Just start it

    What version of ZincSearch are you using? I tried with latest and 0.3.1

    Anything else that you can tell that will help us diagnose and resolve the issue efficiently? I am running an Ubuntu server.
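
    (A hedged variant of the same command that pre-creates a writable host directory and uses the ZINC_DATA_PATH variable named in the error; whether DATA_PATH alone is honored may depend on the version:)

    mkdir -p /root/docker/zincsearch && chmod 777 /root/docker/zincsearch   # or chown to the UID the container runs as
    docker run -v /root/docker/zincsearch:/data -e ZINC_DATA_PATH="/data" -p 4080:4080 \
        -e ZINC_FIRST_ADMIN_USER=admin -e ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123 \
        --name zinc public.ecr.aws/h9e2j3o7/zinc:latest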

  • Unescaped format for nginx json log

    I forward the stdout of docker-compose to fluent-bit and Zinc. How can I remove the unnecessary escaping for the key "log"? I expected "log" to be a JSON field.

    Actual result:

    { "_index": "stdout", "_type": "_doc", "_id": "6dc987ae-0530-82c6-06f3-f6115153b088", "_score": 2, "@timestamp": "2022-07-12T13:32:20Z", "_source": { "container_id": "a78ceeec09408ca28afcbb025ca4c51c3437c3cab8b9ca2cbff72f7848febbd8", "container_name": "/nginx", "log": "{ \"timestamp2\": \"1657632740.811\", \"remote_addr\": \"185.71.52.6\", \"body_bytes_sent\": 32, \"request_time\": 0.002, \"response_status\": 200, \"request\": \"GET /?qwe=asd HTTP/1.1\", \"request_method\": \"GET\", \"http_user_agent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36\",\"http_x_identifier\": \"-\",\"upstream_addr\": \"127.0.0.1:9000\",\"http_cf_ipcountry\": \"-\",\"http_cf_connecting_ip\": \"-\", \"http_x_token\": \"-\" }", "source": "stdout" } }

    Expected result:

    { "_index": "stdout", "_type": "_doc", "_id": "6dc987ae-0530-82c6-06f3-f6115153b088", "_score": 2, "@timestamp": "2022-07-12T13:32:20Z", "_source": { "container_id": "a78ceeec09408ca28afcbb025ca4c51c3437c3cab8b9ca2cbff72f7848febbd8", "container_name": "/nginx", "log": "{ "timestamp2": "1657632740.811", "remote_addr": "185.70.52.12", "body_bytes_sent": 32, "request_time": 0.002, "response_status": 200, "request": "GET /?qwe=asd HTTP/1.1", "request_method": "GET", "http_user_agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","http_x_identifier": "-","upstream_addr": "127.0.0.1:9000","http_cf_ipcountry": "-","http_cf_connecting_ip": "-", "http_x_token": "-" }", "source": "stdout" } }

    Current docker-compose.yml: https://gist.github.com/devig/81fd80df35afc89e7930dd323e874245
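
    On the fluent-bit side, the usual way to get the nested JSON decoded before it reaches Zinc is a parser filter on the log key (a hedged sketch, not verified against this setup; the docker parser ships with fluent-bit's default parsers.conf):

    [FILTER]
        Name         parser
        Match        *
        Key_Name     log
        Parser       docker
        Reserve_Data On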

  • Searching for a single Slash (`/`) fails

    Community Note

    • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
    • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
    • If you are interested in working on this issue or have submitted a pull request, please leave a comment

    What were you trying to achieve?

    I want to find records that have a field (path) containing a single slash (/) character.

    What action did you take?

    I tried path:/, path:\/, path:"/" and many more things I could think of that I would expect to be valid ways to escape a single slash, but all of these methods failed.

    What action/response/output did you expect?

    I get a filtered result set containing the searched-for records.

    What actually happened?

    I get a brief error popup at the bottom of the screen and the slash character gets highlighted in all previously displayed (not newly fetched) documents. See screenshot below.

    How to reproduce the issue?

    Create a document containing a field with a single slash and search for it.

    What version of ZincSearch are you using?

    Latest commit as of two days ago (197486c3a48c9906e0fe69aae3118594ac5ff7d9).

    Anything else that you can tell that will help us diagnose and resolve the issue efficiently?

    Not that I can think of.

    Any attachments/screenshots? ZINC path search error

    Which service(s) is this related to? GUI and API
    (I also tried a python script going through all query scenarios I can think of)

  • Sort values on Elasticsearch API

    Is your feature request related to a problem? Please describe. One of the most important needs on search clusters is for results to be sorted by the user's choice. Since Zinc supports sorting by a value, I assumed it would be accepted on the ES API as well, but looking at the documentation there is no support for it. Most of Zinc's users probably come from an Elasticsearch context and use clients like olivere. Will this feature be released for ease of use and compatibility?
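
    For reference, the request body an Elasticsearch client such as olivere sends for a sorted search looks like the standard ES query DSL below (index and field names are placeholders); the ask is for Zinc's /es/.../_search handler to honor the sort clause:

    POST /es/myindex/_search
    {
      "query": { "match_all": {} },
      "sort": [ { "created_at": { "order": "desc" } } ],
      "size": 20
    }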

  • can't collect the log from filebeat 8.2.4

    Is your feature request related to a problem? Please describe. I am trying to use Zinc as the log server, with Filebeat 8.2.4 as the agent. This is my config from the ELK ecosystem (it works there). I saw the example config you provide and adjusted my config to match it, but it doesn't work. The config content is below; the segment starts and ends with ####, and the adjusted part is highlighted.

    ####
    
     ###################### Filebeat Configuration Example #########################
     
     # This file is an example configuration file highlighting only the most common
     # options. The filebeat.reference.yml file from the same directory contains all the
     # supported options with more comments. You can use it as a reference.
     #
     # You can find the full configuration reference here:
     # https://www.elastic.co/guide/en/beats/filebeat/index.html
     
     # For more available modules and options, please see the filebeat.reference.yml sample
     # configuration file.
     
     #=========================== Filebeat inputs =============================
    
    **setup.ilm.enabled: false
    setup.template.name: "zstack"
    setup.template.pattern: "zstack-*"**
    
     
     filebeat.inputs:            # this is the filebeat inputs configuration
     
     # Each - is an input. Most options can be set at the input level, so
     # you can use different inputs for various configurations.
     # Below are the input specific configurations.
     
     - type: log                  # input of type log; other types include: container, docker, file, kafka
     
       # Change to true to enable this input configuration.
       enabled: true
     
       # Paths that should be crawled and fetched. Glob based paths.
       paths:
         - /usr/local/zstack/apache-tomcat/logs/management-server.log
         - /var/log/zstack/zstack-kvmagent.log
         #- /var/log/*.log
         #- c:\programdata\elasticsearch\logs\*
     
       # Exclude lines. A list of regular expressions to match. It drops the lines that are
       # matching any regular expression from the list.
       #exclude_lines: ['^DBG']
     
       # Include lines. A list of regular expressions to match. It exports the lines that are
       # matching any regular expression from the list.
       #include_lines: ['^ERR', '^WARN']
     
       # Exclude files. A list of regular expressions to match. Filebeat drops the files that
       # are matching any regular expression from the list. By default, no files are dropped.
       #exclude_files: ['.gz$']
     
       # Optional additional fields. These fields can be freely picked
       # to add additional information to the crawled log files for filtering
       #fields:
       #  level: debug
       #  review: 1
     
       ### Multiline options
     
       # Multiline can be used for log messages spanning multiple lines. This is common
       # for Java Stack Traces or C-Line Continuation
     
       # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
       #multiline.pattern: ^\[
     
       # Defines if the pattern set under pattern should be negated or not. Default is false.
       #multiline.negate: false
     
       # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
       # that was (not) matched before or after or as long as a pattern is not matched based on negate.
       # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
       #multiline.match: after
     
     
     #============================= Filebeat modules ===============================
     
     filebeat.config.modules:
       # Glob pattern for configuration loading
       path: ${path.config}/modules.d/*.yml
     
       # Set to true to enable config reloading
       reload.enabled: false
     
       # Period on which files under path should be checked for changes
       #reload.period: 10s
     
     #==================== Elasticsearch template setting ==========================
     
     setup.template.settings:
       index.number_of_shards: 1
       #index.codec: best_compression
       #_source.enabled: false
     
     #================================ General =====================================
     
     # The name of the shipper that publishes the network data. It can be used to group
     # all the transactions sent by a single shipper in the web interface.
     #name:
     
     # The tags of the shipper are included in their own field with each
     # transaction published.
     #tags: ["service-X", "web-tier"]
     
     # Optional fields that you can specify to add additional information to the
     # output.
     #fields:
     #  env: staging
     
     
     #============================== Dashboards =====================================
     # These settings control loading the sample dashboards to the Kibana index. Loading
     # the dashboards is disabled by default and can be enabled either by setting the
     # options here or by using the `setup` command.
     #setup.dashboards.enabled: false
     
     # The URL from where to download the dashboards archive. By default this URL
     # has a value which is computed based on the Beat name and version. For released
     # versions, this URL points to the dashboard archive on the artifacts.elastic.co
     # website.
     #setup.dashboards.url:
     
     #============================== Kibana =====================================
     
     # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
     # This requires a Kibana endpoint configuration.
     setup.kibana:
     
       # Kibana Host
       # Scheme and port can be left out and will be set to the default (http and 5601)
       # In case you specify and additional path, the scheme is required: http://localhost:5601/path
       # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
       #host: "localhost:5601"
     
       # Kibana Space ID
       # ID of the Kibana Space into which the dashboards should be loaded. By default,
       # the Default Space will be used.
       #space.id:
     
     #================================ Outputs =====================================
     
     # Configure what output to use when sending the data collected by the beat.
     
     #-------------------------- Elasticsearch output ------------------------------
     output.elasticsearch:
       # Array of hosts to connect to.
       hosts: ["http://172.20.14.127:4080"]  # the IP of the ES (Zinc) server you set up
       path: "/es/"
       index: "zstack-%{+yyyy.MM.dd}"
       username: "admin"
       password: "Complexpass#123"
     
       # Protocol - either `http` (default) or `https`.
       #protocol: "https"
     
       # Authentication credentials - either API key or username/password.
       #api_key: "id:api_key"
       #username: "elastic"
       #password: "changeme"
     
     #----------------------------- Logstash output --------------------------------
     #output.logstash:
       # The Logstash hosts
       #hosts: ["localhost:5044"]
     
       # Optional SSL. By default is off.
       # List of root certificates for HTTPS server verifications
       #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
     
       # Certificate for SSL client authentication
       #ssl.certificate: "/etc/pki/client/cert.pem"
     
       # Client Certificate Key
       #ssl.key: "/etc/pki/client/cert.key"
     
     #================================ Processors =====================================
     
     # Configure processors to enhance or manipulate events generated by the beat.
     
     processors:
       - add_host_metadata: ~
       - add_cloud_metadata: ~
       - add_docker_metadata: ~
       - add_kubernetes_metadata: ~
     
     #================================ Logging =====================================
     
     # Sets log level. The default log level is info.
     # Available log levels are: error, warning, info, debug
     #logging.level: debug
     
     # At debug level, you can selectively enable logging only for some components.
     # To enable all selectors use ["*"]. Examples of other selectors are "beat",
     # "publish", "service".
     #logging.selectors: ["*"]
     
     #============================== X-Pack Monitoring ===============================
     # filebeat can export internal metrics to a central Elasticsearch monitoring
     # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
     # reporting is disabled by default.
     
     # Set to true to enable the monitoring reporter.
     #monitoring.enabled: false
     
     # Sets the UUID of the Elasticsearch cluster under which monitoring data for this
     # Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
     # is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
     #monitoring.cluster_uuid:
     
     # Uncomment to send the metrics to Elasticsearch. Most settings from the
     # Elasticsearch output are accepted here as well.
     # Note that the settings should point to your Elasticsearch *monitoring* cluster.
     # Any setting that is not set is automatically inherited from the Elasticsearch
     # output configuration, so if you have the Elasticsearch output configured such
     # that it is pointing to your Elasticsearch monitoring cluster, you can simply
     # uncomment the following line.
     #monitoring.elasticsearch:
     
     #================================= Migration ==================================
     
     # This allows to enable 6.7 migration aliases
     #migration.6_to_7.enabled: true
    
     
    
    ####
    

    I am confused by Zinc's index template; I guess it is used to aggregate the various log indexes, but I can't create one in the web UI, and I don't know why this concept is required in the config. I have not found any docs about this.

    If you can show me the correct way to use it, I will be glad and grateful. I want to integrate this simple, light search engine into my product to replace the heavy Elasticsearch, keeping the recent log files in Zinc so that developers can quickly find the error log info.

    The following content is Filebeat's log output. I am sure this node can connect to the Zinc server; inserting a single data record with the script you provide succeeds.

    [[email protected] bjwtest]# tailf logs/filebeat-20220726-6.ndjson 
    {"log.level":"info","@timestamp":"2022-07-26T16:20:12.084+0800","log.logger":"crawler","log.origin":{"file.name":"beater/crawler.go","file.line":119},"message":"starting input, keys present on the config: [filebeat.inputs.0.enabled filebeat.inputs.0.paths.0 filebeat.inputs.0.paths.1 filebeat.inputs.0.type]","service.name":"filebeat","ecs.version":"1.6.0"}
    {"log.level":"warn","@timestamp":"2022-07-26T16:20:12.084+0800","log.logger":"cfgwarn","log.origin":{"file.name":"log/input.go","file.line":89},"message":"DEPRECATED: Log input. Use Filestream input instead.","service.name":"filebeat","ecs.version":"1.6.0"}
    {"log.level":"info","@timestamp":"2022-07-26T16:20:12.084+0800","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":171},"message":"Configured paths: [**/usr/local/zstack/apache-tomcat/logs/management-server.log /var/log/zstack/zstack-kvmagent.log**]","service.name":"filebeat","input_id":"04fc3f9f-107c-4fc9-8201-5a5119de9d23","ecs.version":"1.6.0"}
    {"log.level":"info","@timestamp":"2022-07-26T16:20:12.084+0800","log.logger":"crawler","log.origin":{"file.name":"beater/crawler.go","file.line":150},"message":"Starting input (ID: 118385907603013962)","service.name":"filebeat","ecs.version":"1.6.0"}
    {"log.level":"info","@timestamp":"2022-07-26T16:20:12.084+0800","log.origin":{"file.name":"beater/crawler.go","file.line":76},"message":"input file config bjw: &{{{0xc00015df20} 0} 0xc0001cc050 0xc000342e20}","service.name":"filebeat","ecs.version":"1.6.0"}
    {"log.level":"info","@timestamp":"2022-07-26T16:20:12.084+0800","log.origin":{"file.name":"beater/crawler.go","file.line":77},"message":"input file config bjw: %!v(MISSING)","service.name":"filebeat","ecs.version":"1.6.0"}
    {"log.level":"info","@timestamp":"2022-07-26T16:20:12.084+0800","log.logger":"crawler","log.origin":{"file.name":"beater/crawler.go","file.line":108},"message":"Loading and starting Inputs completed. Enabled inputs: 1","service.name":"filebeat","ecs.version":"1.6.0"}
    {"log.level":"info","@timestamp":"2022-07-26T16:20:12.084+0800","log.origin":{"file.name":"cfgfile/reload.go","file.line":164},"message":"Config reloader started","service.name":"filebeat","ecs.version":"1.6.0"}
    {"log.level":"info","@timestamp":"2022-07-26T16:20:12.085+0800","log.origin":{"file.name":"cfgfile/reload.go","file.line":224},"message":"Loading of config files completed.","service.name":"filebeat","ecs.version":"1.6.0"}
    {"log.level":"info","@timestamp":"2022-07-26T16:20:42.086+0800","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":184},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000}},"id":"user.slice"},"cpuacct":{"id":"user.slice","total":{"ns":575468794804}},"memory":{"id":"user.slice","mem":{"limit":{"bytes":9223372036854771712},"usage":{"bytes":6900948992}}}},"cpu":{"system":{"ticks":20,"time":{"ms":20}},"total":{"ticks":60,"time":{"ms":60},"value":0},"user":{"ticks":40,"time":{"ms":40}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":11},"info":{"ephemeral_id":"49f4e6c5-1025-4af5-b701-289ae0720713","uptime":{"ms":30046},"version":"8.2.4"},"memstats":{"gc_next":8275856,"memory_alloc":6093920,"memory_sys":20530184,"memory_total":14882032,"rss":28778496},"runtime":{"goroutines":30}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1,"scans":1},"output":{"events":{"active":0},"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":4},"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.0125,"5":0.0025}}}},"ecs.version":"1.6.0"}}
    
  • build(deps-dev): bump @cypress/vue from 4.2.0 to 5.0.2 in /web

    Bumps @cypress/vue from 4.2.0 to 5.0.2.

    Release notes

    Sourced from @cypress/vue's releases.

    5.0.0

    Released 8/19/2020

    Summary:

    Cypress now includes support for test retries! Similar to how Cypress will retry assertions when they fail, test retries will allow you to automatically retry a failed test prior to marking it as failed. Read our new guide on Test Retries for more details.

    Breaking Changes:

    Please read our Migration Guide which explains the changes in more detail and how to change your code to migrate to Cypress 5.0.

    • The cypress-plugin-retries plugin has been deprecated in favor of test retries built into Cypress. Addresses #1313.
    • The Cypress.Cookies.defaults() whitelist option has been renamed to preserve to more closely reflect its behavior. Addressed in #7782.
    • The blacklistHosts configuration has been renamed to blockHosts to more closely reflect its behavior. Addressed in #7622.
    • The cy.server() whitelist option has been renamed to ignore to more closely reflect its behavior. Addresses #6642.
    • libgbm-dev is now a requirement to run Cypress on Linux. Addressed in #7791.
    • Values yielded by cy.setCookie(), cy.getCookie(), and cy.getCookies() will now contain the sameSite property if specified. Addresses #6892.
    • The experimentalGetCookiesSameSite configuration flag has been removed, since this behavior is now the default. Addresses #6892.
    • The return type of the Cypress.Blob methods arrayBufferToBlob, base64StringToBlob, binaryStringToBlob, and dataURLToBlob have changed from Promise<Blob> to Blob. Addresses #6001.
    • Cypress no longer supports file paths with a question mark ? in them. We now use the webpack preprocessor by default and it does not support files with question marks. Addressed in #7982.
    • For TypeScript compilation of spec, support, and plugins files, the esModuleInterop option is no longer coerced to true. If you need to utilize esModuleInterop, set it in your tsconfig.json. Addresses #7575.
    • Cypress now requires TypeScript 3.4+. Addressed in #7856.
    • Installing Cypress on your system now requires Node.js 10+. Addresses #6574.
    • In spec files, the values for the globals __dirname and __filename no longer include leading slashes. Addressed in #7982.

    Features:

    • There's a new retries configuration option to configure the number of times to retry a failing test. Addresses #1313.
    • .click(), .dblclick(), and .rightclick() now accept options altKey, ctrlKey, metaKey, and shiftKey to hold down key combinations while clicking. Addresses #486.
    • You can now chain .snapshot() off of cy.stub() and cy.spy() to disabled snapshots during those commands. For example: cy.stub().snapshot(false). Addresses #3849.

    Bugfixes:

    • The error Cannot set property 'err' of undefined will no longer incorrectly throw when rerunning tests in the Test Runner. Fixes #7874 and #8193.
    • Cypress will no longer throw a Cannot read property 'isAttached' of undefined error during cypress run on Firefox versions >= 75. Fixes #6813.
    • The error Maximum call stack size exceeded will no longer throw when calling scrollIntoView on an element in the shadow dom. Fixes #7986.
    • Cypress environment variables that accept arrays as their value will now properly evaluate as arrays. Fixes #6810.
    • Elements having display: inline will no longer be considered hidden if it has child elements within it that are visible. Fixes #6183.
    • When experimentalShadowDomSupport is enabled, .parent() and .parentsUntil() commands now work correctly in shadow dom as well as passing a selector to .parents() when the subject is in the shadow dom. Fixed in #8202.
    • Screenshots will now be correctly taken when a test fails in an afterEach or beforeEach hook after the hook has already passed. Fixes #3744.
    • Cypress will no longer report screenshots overwritten in a cy.screenshot() onAfterScreenshot option as a unique screenshot. Fixes #8079.
    • Taking screenshots will no longer fail when the screenshot names are too long for the filesystem to accept. Fixes #2403.
    • The "last used browser" will now be correctly remembered during cypress open if a non-default-channel browser was selected. Fixes #8281.
    • For TypeScript projects, tsconfig.json will now be loaded and used to configure TypeScript compilation of spec and support files. Fixes #7006 and #7503.
    • reporterStats now correctly show the number of passed and failed tests when a test passes but the afterEach fails. Fixes #7730.
    • The Developer Tools menu will now always display in Electron when switching focus from Specs to the Test Runner. Fixes #3559.

    Documentation Changes:

    • We have a new guide on Test Retries.

    ... (truncated)

    Commits
    • 8c8e628 chore: release @cypress/vue-v5.0.2
    • 5928369 fix: upgrade electron/fuses to resolve code signing issue (#24785)
    • b2ee525 chore: limit CI runs (#24755)
    • 2166ba0 fix: fix windows-lint CI job (#24758)
    • a4e9642 chore: update package.json to 11.2.0 (#24780)
    • ec01774 fix: A docblock pointing to a non-existent online tool (#24771)
    • 4bbd78e feat: Re-introduce Run All specs for End to End under experimentalRunAllSpecs...
    • b9d053e docs: Updates schematic docs for new config file type (#24313)
    • bf6a52a feat: add cloud recommendation message to CI output (#24680)
    • e3435b6 chore: re-name dashboard references to Cypress Cloud (#24699)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • build(deps): bump loader-utils from 1.4.0 to 1.4.2 in /web

    Bumps loader-utils from 1.4.0 to 1.4.2.

    Release notes

    Sourced from loader-utils's releases.

    v1.4.2

    1.4.2 (2022-11-11)

    Bug Fixes

    v1.4.1

    1.4.1 (2022-11-07)

    Bug Fixes

    Changelog

    Sourced from loader-utils's changelog.

    1.4.2 (2022-11-11)

    Bug Fixes

    1.4.1 (2022-11-07)

    Bug Fixes

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the Security Alerts page.
  • build(deps): bump github.com/aws/aws-sdk-go-v2/service/s3 from 1.29.1 to 1.29.4

    Bumps github.com/aws/aws-sdk-go-v2/service/s3 from 1.29.1 to 1.29.4.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • build(deps): bump github.com/aws/aws-sdk-go-v2/config from 1.17.8 to 1.18.3

    Bumps github.com/aws/aws-sdk-go-v2/config from 1.17.8 to 1.18.3.

    Changelog

    Sourced from github.com/aws/aws-sdk-go-v2/config's changelog.

    Release (2022-11-22)

    General Highlights

    • Dependency Update: Updated to the latest SDK module versions

    Module Highlights

    • github.com/aws/aws-sdk-go-v2/service/appflow: v1.21.0
      • Feature: Adding support for Amazon AppFlow to transfer the data to Amazon Redshift databases through Amazon Redshift Data API service. This feature will support the Redshift destination connector on both public and private accessible Amazon Redshift Clusters and Amazon Redshift Serverless.
    • github.com/aws/aws-sdk-go-v2/service/kinesisanalyticsv2: v1.15.0
      • Feature: Support for Apache Flink 1.15 in Kinesis Data Analytics.

    Release (2022-11-21)

    Module Highlights

    • github.com/aws/aws-sdk-go-v2/service/route53: v1.25.0
      • Feature: Amazon Route 53 now supports the Asia Pacific (Hyderabad) Region (ap-south-2) for latency records, geoproximity records, and private DNS for Amazon VPCs in that region.

    Release (2022-11-18.2)

    Module Highlights

    • github.com/aws/aws-sdk-go-v2/service/ssmsap: v1.0.1
      • Bug Fix: Removes old model file for ssm sap and uses the new model file to regenerate client

    Release (2022-11-18)

    General Highlights

    • Dependency Update: Updated to the latest SDK module versions

    Module Highlights

    • github.com/aws/aws-sdk-go-v2/service/appflow: v1.20.0
      • Feature: AppFlow provides a new API called UpdateConnectorRegistration to update a custom connector that customers have previously registered. With this API, customers no longer need to unregister and then register a connector to make an update.
    • github.com/aws/aws-sdk-go-v2/service/auditmanager: v1.21.0
      • Feature: This release introduces a new feature for Audit Manager: Evidence finder. You can now use evidence finder to quickly query your evidence, and add the matching evidence results to an assessment report.
    • github.com/aws/aws-sdk-go-v2/service/chimesdkvoice: v1.0.0
    • github.com/aws/aws-sdk-go-v2/service/cloudfront: v1.21.0
      • Feature: CloudFront API support for staging distributions and associated traffic management policies.
    • github.com/aws/aws-sdk-go-v2/service/connect: v1.38.0
      • Feature: Added AllowedAccessControlTags and TagRestrictedResource for Tag Based Access Control on Amazon Connect Webpage
    • github.com/aws/aws-sdk-go-v2/service/dynamodb: v1.17.6
      • Documentation: Updated minor fixes for DynamoDB documentation.
    • github.com/aws/aws-sdk-go-v2/service/dynamodbstreams: v1.13.25
      • Documentation: Updated minor fixes for DynamoDB documentation.
    • github.com/aws/aws-sdk-go-v2/service/ec2: v1.72.0
      • Feature: This release adds support for copying an Amazon Machine Image's tags when copying an AMI.
    • github.com/aws/aws-sdk-go-v2/service/glue: v1.35.0
      • Feature: AWSGlue Crawler - Adding support for Table and Column level Comments with database level datatypes for JDBC based crawler.
    • github.com/aws/aws-sdk-go-v2/service/iotroborunner: v1.0.0
      • Release: New AWS service client module

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • Pull Request Preview Environments for increasing maintainer productivity

    I would like to make life easier for Zinc maintainers by implementing Uffizzi preview environments. Disclaimer: I work on Uffizzi.

    Uffizzi is an Open Source full-stack previews engine, and our platform is available completely free for Zinc (and all open source projects). This will provide maintainers with preview environments of their PRs in the cloud, allowing them to iterate faster and reduce time to merge.

    Uffizzi is purpose-built for the task of previewing PRs and it integrates with your workflow to deploy preview environments in the background without any manual steps for maintainers or contributors.

    TODO:

    • [ ] Initial PoC
community search engine

Lieu an alternative search engine Created in response to the environs of apathy concerning the use of hypertext search and discovery.

Dec 3, 2022
Weaviate is a cloud-native, modular, real-time vector search engine

Weaviate is a cloud-native, real-time vector search engine (aka neural search engine or deep search engine). There are modules for specific use cases such as semantic search, plugins to integrate Weaviate in any application of your choice, and a console to visualize your data.

Dec 2, 2022
Self hosted search engine for data leaks and password dumps

Self hosted search engine for data leaks and password dumps. Upload and parse multiple files, then quickly search through all stored items with the power of Elasticsearch.

Aug 2, 2021
A search engine for XKCD

xkcd_searchtool a search engine for XKCD What is it? This tool can crawling the comic transcripts from XKCD.com Users can search a comic using key wor

Sep 29, 2021
Polarite is a Pastebin alternative made for simplicity written in Go.

Polarite is a Pastebin alternative made for simplicity written in Go. Usage Web Interface Visit https://polarite.teknologiumum.com API Send a POST req

Nov 12, 2022
A parser for the technological log, based on a technology stack of Golang goroutines + Redis + Elasticsearch.

go-techLog1C A parser for the technological log, based on a technology stack of Golang goroutines + Redis + Elasticsearch. The stack is cross-platform

Nov 30, 2022
Vuls Beater for Elasticsearch - connecting vuls

vulsbeat Welcome to vulsbeat.Please push Star. This software allows you Vulnerability scan results of vuls can be imported to Elastic Stack.

Jan 25, 2022
Elastic is an Elasticsearch client for the Go programming language.

Elastic is an Elasticsearch client for the Go programming language.

Dec 2, 2022
An Elasticsearch Migration Tool.

An Elasticsearch Migration Tool Elasticsearch cross version data migration. Dec 3rd, 2020: [EN] Cross version Elasticsearch data migration with ESM Fe

Nov 24, 2022
Quickly collect data from thousands of exposed Elasticsearch or Kibana instances and generate a report to be reviewed.

elasticpwn Quickly collects data from exposed Elasticsearch or Kibana instances and generates a report to be reviewed. It mainly aims for sensitive da

Nov 9, 2022
Discobeat is an elastic beat that publishes messages from Discord to elasticsearch

Discobeat Discobeat is an elastic beat that publishes messages from Discord to elasticsearch Ensure that this folder is at the following location: ${G

Apr 30, 2022
jacobin - A more than minimal JVM written in Go and capable of running Java 11 bytecode.

This overview gives the background on this project, including its aspirations and the features that it supports. The remaining pages discuss the basics of JVM operation and, where applicable, how Jacobin implements the various steps, noting any items that would be of particular interest to JVM cognoscenti.

Nov 22, 2022
Phalanx is a cloud-native full-text search and indexing server written in Go built on top of Bluge that provides endpoints through gRPC and traditional RESTful API.

Phalanx Phalanx is a cloud-native full-text search and indexing server written in Go built on top of Bluge that provides endpoints through gRPC and tr

Nov 18, 2022
Optimistic rollup tech, minimal and generic.

Opti Optimistic rollup tech, minimal and generic. VERY experimental, just exploratory code, question is: 1:1 EVM rollup with interactive fraud proof p

Aug 30, 2022
Minimal example app of hexagonal architecture in go

Hexagonal Architecture Minimal example of hexagonal architecture (ports & adapters) in go. Resources T

Nov 5, 2021
gosignal is expected to be used in minimal window manager configurations

gosignal is expected to be used in minimal window manager configurations. It provides a simple battery monitor , which notifies of battery events. It has a config file where you can configure the notification messages given

Mar 21, 2022
Gec is a minimal stack-based programming language

Gec is a minimal stack-based programming language

Sep 18, 2022
create a provider to get atlassian resources

Terraform Provider Scaffolding This repository is a template for a Terraform provider. It is intended as a starting point for creating Terraform provi

Dec 31, 2021
The gofinder program is an acme user interface to search through Go projects.

The gofinder program is an acme user interface to search through Go projects.

Jun 14, 2021