Use SQL to query data from CSV files. Open source CLI. No DB required.

CSV Plugin for Steampipe

Use SQL to query data from CSV files.

Quick start

Install the plugin with Steampipe:

steampipe plugin install csv

Configure the paths to your CSV files in ~/.steampipe/config/csv.spc:

connection "csv" {
  plugin = "csv"
  paths  = [ "/path/to/your/files/*.csv" ]
}

Run a query against the my_users.csv file:

select
  first_name,
  last_name
from
  my_users

Developing

Prerequisites:

• Steampipe
• Golang

Clone:

git clone https://github.com/turbot/steampipe-plugin-csv.git
cd steampipe-plugin-csv

Build, which automatically installs the new version to your ~/.steampipe/plugins directory:

make

Configure the plugin:

cp config/* ~/.steampipe/config
vi ~/.steampipe/config/csv.spc

Try it!

steampipe query
> .inspect csv

Contributing

Please see the contribution guidelines and our code of conduct. All contributions are subject to the Apache 2.0 open source license.

Owner

Turbot — Get cloud work done with Turbot. Creators of https://turbot.com/v5 and https://steampipe.io
Comments
  • set the default column names for some cases

    If there is no header row, you may want to fall back to default column names. It is not easy to detect that in general, but at least two cases are clear:

    1. the header row has an empty value
    2. the header row has a duplicated value

    More cases could be added, but I am only handling these two. I have run golangci-lint locally.

    Example query results

    Results
    1. When the header row has an empty value,
    > cat test.csv
    a,,b,b,c
    1,1,1,1,1
    2,2,2,2,2
    3,3,3,3,3
    4,4,4,4,4
    

    the query result is given as follows:

    > select * from test
    +-----+-----+-----+-----+-----+---------------------------+
    | _c0 | _c1 | _c2 | _c3 | _c4 | _ctx                      |
    +-----+-----+-----+-----+-----+---------------------------+
    | 2   | 2   | 2   | 2   | 2   | {"connection_name":"csv"} |
    | 4   | 4   | 4   | 4   | 4   | {"connection_name":"csv"} |
    | a   |     | b   | b   | c   | {"connection_name":"csv"} |
    | 1   | 1   | 1   | 1   | 1   | {"connection_name":"csv"} |
    | 3   | 3   | 3   | 3   | 3   | {"connection_name":"csv"} |
    +-----+-----+-----+-----+-----+---------------------------+
    
    2. When the header row has a duplicated value,
    > cat test.csv
    a,a,b,b,c
    1,1,1,1,1
    2,2,2,2,2
    3,3,3,3,3
    4,4,4,4,4
    

    the query result is given as follows:

    > select * from test
    +-----+-----+-----+-----+-----+---------------------------+
    | _c0 | _c1 | _c2 | _c3 | _c4 | _ctx                      |
    +-----+-----+-----+-----+-----+---------------------------+
    | 3   | 3   | 3   | 3   | 3   | {"connection_name":"csv"} |
    | a   | a   | b   | b   | c   | {"connection_name":"csv"} |
    | 1   | 1   | 1   | 1   | 1   | {"connection_name":"csv"} |
    | 2   | 2   | 2   | 2   | 2   | {"connection_name":"csv"} |
    | 4   | 4   | 4   | 4   | 4   | {"connection_name":"csv"} |
    +-----+-----+-----+-----+-----+---------------------------+
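The heuristic this PR describes could be sketched as follows (illustrative Go, not the actual patch; the _cN names follow the results above):

```go
package main

import "fmt"

// defaultColumnNames falls back to _c0, _c1, ... when the header row
// contains an empty or duplicated value; otherwise the header is kept.
// A sketch of the heuristic described above, not the plugin's real code.
func defaultColumnNames(header []string) []string {
	seen := map[string]bool{}
	fallback := false
	for _, h := range header {
		if h == "" || seen[h] {
			fallback = true
			break
		}
		seen[h] = true
	}
	if !fallback {
		return header
	}
	names := make([]string, len(header))
	for i := range names {
		names[i] = fmt.Sprintf("_c%d", i)
	}
	return names
}

func main() {
	fmt.Println(defaultColumnNames([]string{"a", "", "b", "b", "c"})) // [_c0 _c1 _c2 _c3 _c4]
	fmt.Println(defaultColumnNames([]string{"a", "b", "c"}))          // [a b c]
}
```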
    
  • Use of DoubleQuotes not working for some table names

    Describe the bug Can't query the column "Billable Hours" in a CSV file

    Steampipe version (steampipe -v) Example: v0.16.3

    Plugin version (steampipe plugin list) Example: v0.3.2

    To reproduce create this csv file:

    Billable Hours,Bundle Type
    10,xxx
    

    query the file:

    > select "Billable Hours" from test
    Error: column "Billable Hours" does not exist (SQLSTATE 42703)
    > select "Bundle Type" from test
    +-------------+
    | Bundle Type |
    +-------------+
    | xxx         |
    +-------------+
    

    Expected behavior Both queries should work

    Additional context Was having trouble reproducing with other column names, e.g. "Foo Bar" works fine...

  • CSV Plugin Crashes if any files in path are in invalid format


    Describe the bug CSV Plugin Crashes if any files in path are in invalid format

    Steampipe version (steampipe -v) failed to start plugin 'hub.steampipe.io/plugins/turbot/csv@latest': myfile.xlsx - Detailed Findings (1).csv header row has empty value in field 1

    Plugin version (steampipe plugin list)

    $ steampipe --version
    steampipe version 0.16.0
    OPL-M-PSOLOMON4:Downloads psolomon$ steampipe plugin list
    failed to start plugin 'hub.steampipe.io/plugins/turbot/csv@latest': myfile.xlsx - Detailed Findings (1).csv header row has empty value in field 1
    +--------------------------------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Name                                             | Version | Connections                                                                                                                                                           |
    +--------------------------------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | hub.steampipe.io/plugins/turbot/aws@latest       | 0.74.1  | <LIST_REDACTED>                                                                                                                                       |
    | hub.steampipe.io/plugins/turbot/csv@latest       | 0.3.2   |                                                                                                                                                                       |
    | hub.steampipe.io/plugins/turbot/datadog@latest   | 0.1.0   | datadog                                                                                                                                                               |
    | hub.steampipe.io/plugins/turbot/finance@latest   | 0.2.1   | finance                                                                                                                                                               |
    | hub.steampipe.io/plugins/turbot/github@latest    | 0.19.0  | github                                                                                                                                                                |
    | hub.steampipe.io/plugins/turbot/jira@latest      | 0.5.0   | jira                                                                                                                                                                  |
    | hub.steampipe.io/plugins/turbot/net@latest       | 0.7.0   | net                                                                                                                                                                   |
    | hub.steampipe.io/plugins/turbot/pagerduty@latest | 0.1.0   | pagerduty                                                                                                                                                             |
    | hub.steampipe.io/plugins/turbot/slack@latest     | 0.8.0   | slack                                                                                                                                                                 |
    | hub.steampipe.io/plugins/turbot/terraform@latest | 0.1.0   | terraform                                                                                                                                                             |
    | hub.steampipe.io/plugins/turbot/whois@latest     | 0.5.0   | whois                                                                                                                                                                 |
    | hub.steampipe.io/plugins/turbot/zoom@latest      | 0.4.0   | zoom                                                                                                                                                                  |
    +--------------------------------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    

    To reproduce Create a CSV with an invalid header column (i.e. blank)

    Expected behavior Instead of completely crashing the plugin, just print warnings of the bad files.

    Additional context None at this time.

  • Partial Case-Sensitivity makes CAMELCase.csv fail


    Describe the bug I have a CAMELCase.csv file on a case-sensitive file system

    Steampipe version (steampipe -v) Example: v0.14.3

    Plugin version (steampipe plugin list) csv@latest | 0.3.0 | csv

    To reproduce

    create a simple .csv file with some UPPERlower.csv name

    .inspect does work; however, select always fails with "relation does not exist"

    > .inspect csv.FIxme
    +--------+-------+-------------------------------------------------------+
    | column | type  | description                                           |
    +--------+-------+-------------------------------------------------------+
    | _ctx   | jsonb | Steampipe context in JSON form, e.g. connection_name. |
    | one    | text  | Field 0.                                              |
    | two    | text  | Field 1.                                              |
    +--------+-------+-------------------------------------------------------+
    > select * from FIxme
    Error: relation "fixme" does not exist (SQLSTATE 42P01)
    

    Expected behavior The query should work

    Additional context

    renaming the file to alllowercase.csv works
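What's likely going on: Postgres folds unquoted identifiers to lowercase, so select * from FIxme looks up relation fixme, which doesn't match a table registered under its mixed-case file name (quoting it, select * from "FIxme", should then work). One plugin-side fix is to lowercase table names at registration time — a sketch, not the plugin's actual code:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// tableNameFor lowercases the file's base name so the registered table
// matches what Postgres resolves for an unquoted identifier.
func tableNameFor(path string) string {
	base := filepath.Base(path)
	base = strings.TrimSuffix(base, filepath.Ext(base))
	return strings.ToLower(base)
}

func main() {
	fmt.Println(tableNameFor("/data/FIxme.csv")) // fixme
}
```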

  • Poor error message when the folder contains bad CSVs


    Describe the bug When the paths in the connection config point to a folder that contains one/more invalid/bad CSV files, the plugin returns the following error:

    failed to plugin initialise plugin 'steampipe-plugin-csv': TableMapFunc 'PluginTables' had unhandled error: parse error on line 1, column 25: bare " in non-quoted-field
    

    It is hard to understand what's actually wrong from this message IMO.

    Steampipe version (steampipe -v) Example: v0.9.0-rc.0

    Plugin version (steampipe plugin list) Example: v0.3.0

    To reproduce Add a bad CSV file (it may contain a bare " somewhere) to a folder, add the folder's path to the connection config, and query a good CSV.

    Expected behavior A clear and concise description of what you expected to happen.

    Additional context Add any other context about the problem here.

  • support for gzipped csv


    I have added gzip support in #39. Let's say the file path is .../test.csv.gz.

    1. Include a match string like "*.csv.gz" in the paths config.
    2. Since the table name would otherwise be wrong, strip the base name's .gz suffix; the base then becomes test.csv and the table name test, same as before.
    3. When opening the file, check whether the path ends with .gz and add one more step to decompress it.
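Steps 2 and 3 might look roughly like this (a sketch following the description above; helper names are illustrative, and a real implementation would also need to close the underlying file when the gzip reader is closed):

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
)

// tableName strips a trailing ".gz", then the ".csv" extension, so both
// ".../test.csv.gz" and ".../test.csv" map to table "test".
func tableName(path string) string {
	base := filepath.Base(path)
	base = strings.TrimSuffix(base, ".gz")
	return strings.TrimSuffix(base, filepath.Ext(base))
}

// openCSV opens path, transparently decompressing when it ends in ".gz".
func openCSV(path string) (io.ReadCloser, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	if !strings.HasSuffix(path, ".gz") {
		return f, nil
	}
	zr, err := gzip.NewReader(f)
	if err != nil {
		f.Close()
		return nil, err
	}
	return zr, nil // caveat: closing zr does not close f in this sketch
}

func main() {
	fmt.Println(tableName("/data/test.csv.gz")) // test
	fmt.Println(tableName("/data/test.csv"))    // test
}
```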

    Example query results

    Results Here is the sample csv file.
    > cat test.csv
    a,b,c,d,e
    1,2,3,4,3
    1,2,1,1,2
    5,4,3,2,1
    a,a,a,b,c
    

    Let us gzip the file,

    > gzip test.csv
    

    and then you can find the file is now gzipped.

    > ls
    test.csv.gz
    

    Now run steampipe query, and the result is:

    Welcome to Steampipe v0.16.4
    For more information, type .help
    > select * from test
    +---+---+---+---+---+---------------------------+
    | a | b | c | d | e | _ctx                      |
    +---+---+---+---+---+---------------------------+
    | a | a | a | b | c | {"connection_name":"csv"} |
    | 5 | 4 | 3 | 2 | 1 | {"connection_name":"csv"} |
    | 1 | 2 | 1 | 1 | 2 | {"connection_name":"csv"} |
    | 1 | 2 | 3 | 4 | 3 | {"connection_name":"csv"} |
    +---+---+---+---+---+---------------------------+
    
  • Default configuration breaks steampipe


    Describe the bug The default configuration file installed by the plugin is incomplete, and breaks steampipe.

    The error failed to start plugin 'csv': paths must be configured occurs even if the CSV plugin isn't called.

    Steampipe version (steampipe -v) 0.11.0

    Plugin version (steampipe plugin list) 0.1.0

    To reproduce

    $ steampipe --version
    steampipe version 0.11.0
    
    $ steampipe plugin list
    +------+---------+-------------+
    | Name | Version | Connections |
    +------+---------+-------------+
    +------+---------+-------------+
    
    $ steampipe plugin install csv
    
    Installed plugin: csv v0.1.0
    Documentation:    https://hub.steampipe.io/plugins/turbot/csv
    
    $ steampipe plugin list
    Error: Plugin Listing failed - failed to start plugin 'csv': paths must be configured
    

    Expected behavior The plugin should not break steampipe after installation. For example:

    • the plugin should only fail if it's queried
    • or the configuration should contain a placeholder path so that it parses correctly

    Additional context Add any other context about the problem here.

  • byte order mark becomes space before first field in header when exporting from excel


    I think that's what's happening, anyway. When saving a sheet as CSV from Excel, my default is UTF-8, and if the first column header is "name" it becomes " name" in the schema.

    My workaround: Save as CSV (MS-DOS). 😱
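The likely culprit is the UTF-8 byte order mark (bytes EF BB BF) that Excel writes at the start of a UTF-8 CSV; if the reader doesn't strip it, it sticks to the first header field. A minimal sketch of stripping it (not the plugin's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// stripBOM removes a leading UTF-8 byte order mark, which Excel prepends
// when exporting CSV as UTF-8 and which otherwise ends up glued to the
// first header field (e.g. "\ufeffname" instead of "name").
func stripBOM(field string) string {
	return strings.TrimPrefix(field, "\ufeff")
}

func main() {
	header := []string{"\ufeffname", "age"}
	for i, h := range header {
		header[i] = stripBOM(h)
	}
	fmt.Println(header) // [name age]
}
```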

  • Outputting results from query as CSV into current working directory results in error


    Describe the bug Using Steampipe to run a query with the csv plugin and redirecting the output, as CSV, into the current working directory results in an error

    Steampipe version (steampipe -v) v0.16.0

    Plugin version (steampipe plugin list) csv: v0.3.0

    To reproduce

    Welcome to Steampipe v0.16.0
    For more information, type .help
    > .inspect test
    +---------+-------+-------------------------------------------------------+
    | column  | type  | description                                           |
    +---------+-------+-------------------------------------------------------+
    | _ctx    | jsonb | Steampipe context in JSON form, e.g. connection_name. |
    | column1 | text  | Field 0.                                              |
    | column2 | text  | Field 1.                                              |
    +---------+-------+-------------------------------------------------------+
    > .exit
    ➜  mod_list steampipe query "select * from test;" --output csv >> results.csv
    Warning: executeQueries: query 1 of 1 failed: ERROR: failed to start plugin 'hub.steampipe.io/plugins/turbot/csv@latest': runtime error: invalid memory address or nil pointer dereference (SQLSTATE HV000)
    ➜  mod_list steampipe query "select * from test;" --output csv >> results.test
    Warning: executeQueries: query 1 of 1 failed: ERROR: failed to start plugin 'hub.steampipe.io/plugins/turbot/csv@latest': runtime error: invalid memory address or nil pointer dereference (SQLSTATE HV000)
    ➜  mod_list rm results.csv
    ➜  mod_list steampipe query "select * from test;" --output csv >> results.test
    ➜  mod_list cat results.test
    column1,column2,_ctx
    value1,value2,"{""connection_name"":""csv""}"
    value3,value4,"{""connection_name"":""csv""}"
    ➜  mod_list
    

    Expected behavior No error, or a warning not to do it.

    Additional context The error persists until the CSV file is deleted from the directory.

  • if a column name in the header row contains a period, the values in the column will be null


    Given this in seitz.csv

    "subscriptions.id","subscriptions_plan_id","subscriptions_plan_quantity"
    a,b,1
    d,e,2
    

    The result for select * from csv.seitz:

    +------------------+-----------------------+-----------------------------+
    | subscriptions.id | subscriptions_plan_id | subscriptions_plan_quantity |
    +------------------+-----------------------+-----------------------------+
    | <null>           | b                     | 1                           |
    | <null>           | e                     | 2                           |
    +------------------+-----------------------+-----------------------------+
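A guess at the cause: plugin SDKs often treat "." in a column reference as a nested-field path separator, so a literal period in a header name breaks the value lookup. A hypothetical plugin-side workaround (illustrative only, not the plugin's behavior) is to sanitize header names before registering columns:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeColumn replaces characters that commonly collide with
// column-path syntax (here just '.') with underscores. Hypothetical
// illustration, not the plugin's actual code.
func sanitizeColumn(name string) string {
	return strings.ReplaceAll(name, ".", "_")
}

func main() {
	fmt.Println(sanitizeColumn("subscriptions.id")) // subscriptions_id
}
```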
    
  • error if csv filename begins with a number


    To reproduce, convert a file that works properly, with a name like foo.csv, to instead be 1foo.csv.

    Error: syntax error at or near "1" (SQLSTATE 42601)
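This matches Postgres identifier rules: an unquoted identifier can't start with a digit, so select * from 1foo is a syntax error (quoting it, select * from "1foo", should work). One hypothetical plugin-side fix is to prefix such table names — a sketch, not the plugin's actual behavior:

```go
package main

import (
	"fmt"
	"unicode"
)

// safeTableName prefixes an underscore when the name would start with a
// digit, producing a valid unquoted Postgres identifier.
func safeTableName(name string) string {
	r := []rune(name)
	if len(r) > 0 && unicode.IsDigit(r[0]) {
		return "_" + name
	}
	return name
}

func main() {
	fmt.Println(safeTableName("1foo")) // _1foo
	fmt.Println(safeTableName("foo"))  // foo
}
```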

  • Empty CSV files cause plugin initialization failure


    Describe the bug If the plugin's paths config matches one or more .csv files that contain no data, the plugin fails to initialize.

    Steampipe version (steampipe -v) v0.17.4

    Plugin version (steampipe plugin list) v0.5.0

    To reproduce

    • Create an empty.csv file, containing no data, in one of the directories in the paths config arg
    • Run steampipe query
    • View Steampipe logs in ~/.steampipe/logs/plugin-<date>.log

    Expected behavior Should the plugin skip over empty or malformed CSV files? This was previously discussed in https://github.com/turbot/steampipe-plugin-csv/issues/40 and https://github.com/turbot/steampipe-plugin-csv/issues/31, as these issues were more common previously when the header was expected to be valid all of the time.

    Additional context Add any other context about the problem here.

  • CSV plugin loads csv tables from prior working directory when issues encountered


    Describe the bug When steampipe with the csv plugin has been run successfully in one directory, quit, and then launched from a different directory containing a csv file with a "_ctx" column, it loads the csv tables from the prior directory with no warning in the terminal. This happens even if csv.spc is set to load only from the new directory. No steampipe processes were found running between runs.

    Steampipe version (steampipe -v) steampipe version 0.17.4

    To reproduce

    $ cd /users/me/turbot
    $ steampipe query --output csv "select * from googledirectory_user" > google_users
    $ mv google_users google_users.csv

    oh, look, its quittin' time!

    eat, sleep, get up

    Do some early morning AdventOfCode work

    directory contains day4.csv see https://github.com/Eric-Hacker/AOC22/tree/main/Day4 for example

    $ cd /users/me/aoc/day4
    $ steampipe query "select * from day4.csv"

    whoops, how time flies, better get some real work done

    $ cd /users/me/turbot
    $ steampipe query

    > .inspect csv
    +-------+------------------------------------------+
    | table | description                              |
    +-------+------------------------------------------+
    | day4  | CSV file at /Users/me/aoc/day4/day4.csv  |
    +-------+------------------------------------------+

    see csv.day4 table

    think, hmm, I guess I wasn't meant to get work done today, maybe I should go back to working on AOC

    Expected behavior Should be loading tables from the current directory or give a warning if there are issues.

  • Use file paths to qualify table names for csv files


    Problem: When CSV files in the search path have the same name, they are inaccessible. I work with local CSVs a lot; they arrive as the result of spark/trino/pandas transforms, and I land them like a hive warehouse to stay organized:

    tree data/
    data/
    └── extracts
        └── csv
            ├── dataset=Page_Traffic
            │   └── rundate=2022-10-23
            │       └── data.csv
            ├── dataset=hdp_sessions
            │   ├── rundate=2022-10-17
            │   │   └── data.csv
            │   └── rundate=2022-10-18
            │       └── data.csv
            ├── dataset=funnel_metrics
            │   ├── rundate=2022-10-21
            │   │   └── data.csv
             ... etc....
    
    # paths = [ "**/*.csv"]
    steampipe query
    > .inspect csv
    +-------+--------------------------------------------------------------------------------------------+
    | table | description                                                                                |
    +-------+--------------------------------------------------------------------------------------------+
    | data  | CSV file at /Users/../data/extracts/csv/dataset=funnel_metrics/rundate=2022-10-22/data.csv |
    +-------+--------------------------------------------------------------------------------------------+
    

    Only one of these files will become part of the csv schema. It is very common for CSV files to share the same filename, organized by path.

    I'd like easy access to all of the tables.

    Solution: Qualify the table name by prepending the path from the current working directory to the file, transforming characters like /, =, and . to underscore (_). You could limit the path to 3 parent directories above the .csv file. Long table names are not a hindrance, and the current user experience would be unchanged. All these locations of data.csv would then be imported with distinct names depending on their path from the current working directory.
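The proposed transform could be sketched as follows (illustrative Go; not part of the plugin):

```go
package main

import (
	"fmt"
	"strings"
)

// pathTableName turns a relative CSV path into a table name by dropping
// the .csv extension and replacing path separators, hive-style "=", and
// "." with underscores, as the proposal above suggests.
func pathTableName(rel string) string {
	rel = strings.TrimSuffix(rel, ".csv")
	r := strings.NewReplacer("/", "_", "=", "_", ".", "_")
	return r.Replace(rel)
}

func main() {
	fmt.Println(pathTableName("dataset=Page_Traffic/rundate=2022-10-23/data.csv"))
	// dataset_Page_Traffic_rundate_2022-10-23_data
}
```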

    cd data/extracts/csv/; steampipe query

    > .inspect csv
    +------------------------------------------------+------------------------------------------------------------------+
    |                  table                         | description                                                      |
    +------------------------------------------------+------------------------------------------------------------------+
    | dataset_Page_Traffic_rundate_2022-10-23_data   | CSV file at ./dataset=Page_Traffic/rundate=2022-10-23/data.csv   |
    +------------------------------------------------+------------------------------------------------------------------+
    | dataset_funnel_metrics_rundate_2022-10-21_data | CSV file at ./dataset=funnel_metrics/rundate=2022-10-21/data.csv |
    +------------------------------------------------+------------------------------------------------------------------+
    | dataset_hdp_sessions_rundate_2022-10-17_data   | CSV file at ./dataset=hdp_sessions/rundate=2022-10-17/data.csv   |
    +------------------------------------------------+------------------------------------------------------------------+
    | dataset_hdp_sessions_rundate_2022-10-18_data   | CSV file at ./dataset=hdp_sessions/rundate=2022-10-18/data.csv   |
    +------------------------------------------------+------------------------------------------------------------------+
    
    

    Describe alternatives you've considered

    1. Right now I have to cd data/extracts/csv/dataset=funnel_metrics/Page_Traffic/rundate=2022-10-22 to determine the table definition. Unfortunately, I can't join or combine results from multiple tables unless I rename them or move a set to a directory.
    2. Rename and move tables around before using Steampipe to interact with them.

    Additional context I love the steampipe tool. It's a universal interface with so many uses; it is quickly becoming a daily tool for me. Great decisions by the designers.

  • Gather the errors and print logs before terminating the plugin


    As discussed in issue #31, this is how I skip the malformed csv files. It may confuse users that some tables are not created; they can find the reason in the log file.
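The collect-then-log approach could be sketched like this (illustrative Go with hypothetical helper names, not the actual change): bad files produce logged errors while the good tables still load.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseHeader validates the first record of one CSV document.
func parseHeader(content string) error {
	r := csv.NewReader(strings.NewReader(content))
	header, err := r.Read()
	if err != nil {
		return err
	}
	for i, h := range header {
		if h == "" {
			return fmt.Errorf("header row has empty value in field %d", i)
		}
	}
	return nil
}

// loadAll collects per-file errors instead of aborting on the first bad
// file, so good tables still load and failures can be logged later.
func loadAll(files map[string]string) (ok []string, errs []error) {
	for name, content := range files {
		if err := parseHeader(content); err != nil {
			errs = append(errs, fmt.Errorf("%s: %w", name, err))
			continue
		}
		ok = append(ok, name)
	}
	return ok, errs
}

func main() {
	ok, errs := loadAll(map[string]string{
		"good.csv": "a,b\n1,2\n",
		"bad.csv":  "a,\n1,2\n",
	})
	fmt.Println(len(ok), len(errs)) // 1 1
}
```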

  • Enable CSV plugin to access files stored in an S3 bucket


    Describe the solution you'd like I absolutely love the power of the CSV plugin, but it would be great if you could use it to access files stored in an S3 bucket. I currently get various reports delivered to S3 buckets, and I'd love to be able to join the data in these CSV files with live data from the AWS plugin. This would a) eliminate the extra step of downloading files to my local machine for access via the CSV plugin, and b) enable interesting use cases where CSV files could be leveraged to build dashboards on a server running in AWS and/or in Steampipe Cloud.

    The way I'd envision this working: you could configure your csv.spc to point to an S3 URL or ARN, and the credentials steampipe is using would need to be granted access to the bucket.
