pREST (PostgreSQL REST), simplify and accelerate development, ⚡ instant, realtime, high-performance on any Postgres application, existing or new

RESTful API pREST

Postgres version

  • 9.4 or higher

Problem

There is PostgREST, written in Haskell, but keeping Haskell software in production is not an easy job. Out of that need, pREST was born. Read more.

1-Click Deploy

Heroku

Deploy to Heroku and instantly get a realtime RESTful API backed by Heroku Postgres:

Deploy to Heroku

Documentation

https://docs.prestd.com/ (content source and template source)

Comments
  • Transform json_agg to a pREST native function

    Hey guys,

    I'm using pREST in my development environment and I was getting an out-of-memory error caused by our SELECT json_agg(s) FROM (<query here>) s wrapper. This transformation is very costly for the database (at least at the size of the tables I'm working with), so I'm migrating the implementation of QueryCtx (postgres.go:605) to a neutral approach.

    Here is the code, which is partially working:

    func (adapter *Postgres) QueryCtx(ctx context.Context, SQL string, params ...interface{}) (sc adapters.Scanner) {
    	// use the db_name that was set on the request to avoid runtime collisions
    	db, err := getDBFromCtx(ctx)
    	if err != nil {
    		log.Errorln(err)
    		return &scanner.PrestScanner{Error: err}
    	}
    	log.Debugln("generated SQL:", SQL, " parameters:", params)
    	p, err := Prepare(db, SQL)
    	if err != nil {
    		log.Errorln(err)
    		return &scanner.PrestScanner{Error: err}
    	}
    	// return errors through the scanner instead of log.Fatal,
    	// which would bring down the whole server on a bad query
    	rows, err := p.QueryContext(ctx, params...)
    	if err != nil {
    		log.Errorln(err)
    		return &scanner.PrestScanner{Error: err}
    	}
    	defer rows.Close()
    	var data []map[string]interface{}
    	for rows.Next() {
    		cols, err := rows.Columns()
    		if err != nil {
    			log.Errorln(err)
    			return &scanner.PrestScanner{Error: err}
    		}
    		columns := make([]interface{}, len(cols))
    		columnPointers := make([]interface{}, len(cols))
    		for i := range columns {
    			columnPointers[i] = &columns[i]
    		}
    		if err := rows.Scan(columnPointers...); err != nil {
    			log.Errorln(err)
    			return &scanner.PrestScanner{Error: err}
    		}
    		m := make(map[string]interface{})
    		for i, colName := range cols {
    			val := columnPointers[i].(*interface{})
    			switch v := (*val).(type) {
    			case []byte:
    				// json/jsonb columns arrive as raw bytes: decode them;
    				// fall back to a plain string for non-JSON byte columns
    				var f interface{}
    				if err := json.Unmarshal(v, &f); err != nil {
    					m[colName] = string(v)
    				} else {
    					m[colName] = f
    				}
    			default:
    				m[colName] = *val
    			}
    		}
    		data = append(data, m)
    	}
    	if err := rows.Err(); err != nil {
    		log.Errorln(err)
    		return &scanner.PrestScanner{Error: err}
    	}
    	jsonData, err := json.Marshal(data)
    	if err != nil {
    		log.Errorln(err)
    		return &scanner.PrestScanner{Error: err}
    	}
    	// json.Marshal renders a nil slice as "null", not "[]"
    	if len(jsonData) == 0 || string(jsonData) == "null" {
    		jsonData = []byte("[]")
    	}
    	return &scanner.PrestScanner{
    		Buff:    bytes.NewBuffer(jsonData),
    		IsQuery: true,
    	}
    }
    
    Error shown in DataGrip:

    ![DataGrip error](https://user-images.githubusercontent.com/4984147/199071959-2a64a979-923e-4ed0-a24a-c13105be295d.png)
    
  • router - add db name in context queries to avoid collision

    features

    CI

    1. skips CI on release phases that were not being skipped on PRs

    postgres adapter

    1. extends the current interface, allowing a context to be passed on queries
    2. falls back to currentDB if no db name is found on the context
    3. adds mocks for the new extended interface
    4. fixes linter errors
    5. adds a GetFromPool func
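    The fallback described in item 2 might look roughly like this; `ctxKey` and `defaultDB` are illustrative names, not pREST's actual identifiers:

    ```go
    package main

    import (
    	"context"
    	"fmt"
    )

    // ctxKey is an unexported key type so context values set here
    // cannot collide with values set by other packages.
    type ctxKey struct{}

    // defaultDB stands in for the configured current database.
    const defaultDB = "prest"

    // dbNameFromCtx returns the database name stored on the request
    // context, falling back to defaultDB when none was set.
    func dbNameFromCtx(ctx context.Context) string {
    	if name, ok := ctx.Value(ctxKey{}).(string); ok && name != "" {
    		return name
    	}
    	return defaultDB
    }

    func main() {
    	withDB := context.WithValue(context.Background(), ctxKey{}, "tenant_a")
    	fmt.Println(dbNameFromCtx(withDB))             // request that set a db name
    	fmt.Println(dbNameFromCtx(context.Background())) // request without one
    }
    ```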

    API

    1. moves most implementations to use the Ctx query variants from the postgres adapter
    2. adds a default request timeout of 60 seconds (configurable)
    3. sets maxIdleConn to 0, which prevents long-lasting queries cut off by the API from leaking memory into Postgres
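    The request-timeout behavior can be illustrated with plain `context.WithTimeout`; the durations below are shortened stand-ins for the 60-second default, and `runQuery` is a simulated query, not pREST code:

    ```go
    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // runQuery simulates a slow database query that honors its context.
    func runQuery(ctx context.Context, took time.Duration) error {
    	select {
    	case <-time.After(took):
    		return nil
    	case <-ctx.Done():
    		return ctx.Err() // aborted by the request timeout
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Millisecond)
    	defer cancel()
    	if err := runQuery(ctx, 100*time.Millisecond); err != nil {
    		fmt.Println("query aborted:", err)
    	}
    }
    ```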
  • restructuring for use of more than one database adapter

    Hello everyone

    I use the MySQL database. I would like to restructure the pREST project to support other database adapters in addition to Postgres.

    I implemented the restructuring on a fork and will run all the tests as described in CONTRIBUTING.md.

    After that, can I send the PRs for analysis? Basically they change the direct calls to Postgres into calls to the new Adapter interface.

    This is my first contribution to open source, so if I'm doing something wrong, please excuse me and correct me.
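    One plausible shape for such an interface, as a hedged sketch (the names are illustrative, not the actual pREST Adapter API):

    ```go
    package main

    import "fmt"

    // Adapter is a hypothetical database-neutral interface; each
    // backend (Postgres, MySQL, ...) would provide its own implementation.
    type Adapter interface {
    	// Query runs a read statement and returns the JSON-encoded rows.
    	Query(sql string, params ...interface{}) ([]byte, error)
    }

    // mysqlAdapter is a stub standing in for a real MySQL implementation.
    type mysqlAdapter struct{}

    func (mysqlAdapter) Query(sql string, params ...interface{}) ([]byte, error) {
    	return []byte(`[{"ok":true}]`), nil
    }

    func main() {
    	// callers depend only on the interface, not on a concrete backend
    	var a Adapter = mysqlAdapter{}
    	out, err := a.Query("SELECT 1")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }
    ```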

  • Error with pREST (Docs is not very clear about it)

    • pREST version (or commit ref): 4.0
    • pREST endpoint:
    • PostgreSQL version: 10.3
    • Operating system: Windows 10 (client machine where the pREST binary runs)
    • Go version: none
    • Log gist:

    Description

    I'm confused about the installation part and subsequently about how it works. I downloaded the binary and set it up with the prest.toml file as shown in the attached screenshot.

    Now when I execute the installed binary through cmd.exe, I get the following (screenshot attached).

    Hoping that pREST was running, I tried http://127.0.0.1:8000/databases and http://127.0.0.1:8000/postgres/public/Mytable, but I get the error below (screenshot attached).

    Kindly help, as I'm new to this.

    Thanks and regards, AJ

  • wip: pass database name into runQuery context

    This PR is just to demonstrate the problem and the hotfix we applied. I suppose the patch needs to be reworked before it can be accepted into master: probably not passing the SQL as a plain string, but creating a structure that carries both the database name and the SQL.

    Also: the patch fixes only SELECT race conditions, not INSERT and some other cases.

    ref: #497

  • JOIN query only works for some table combinations

    Describe the bug: When I try to make a JOIN GET request, it works for some combinations of database tables, and for others I get this error:

    "could not perform JoinByRequest: Invalid identifier"

    I am using Postman to send the query. Credentials and URL are all okay.

    (1) Here is a query that does work: <db_url>/public/bodyfat?_join=inner:model_fitting_results:bodyfat.fitting_id:$eq:bodyfat.fitting_id

    (2) And this one does not: <db_url>/public/reconstruction_pipeline_eval_results?_join=inner:reconstruction_pipeline_results:reconstruction_pipeline_eval_results.reconstruction_pipeline_results_id:$eq:reconstruction_pipeline_results.reconstruction_pipeline_results_id

    In case (1) I link the tables bodyfat and model_fitting_results via the fitting_id field (the primary key in model_fitting_results and a foreign key in bodyfat). In case (2) I do basically the same: reconstruction_pipeline_eval_results linked with reconstruction_pipeline_results via reconstruction_pipeline_results_id.

    All data in the database tables is correct, and both queries work when executed directly as SQL on the database rather than via the API. Could it be that the second query is too long?

    Thanks for the help in advance!

    Desktop (please complete the following information):

    • pREST version (or commit ref): v1.0.14
    • pREST endpoint: /{DATABASE}/{SCHEMA}/{TABLE}
    • PostgreSQL version: 13.5
    • deployed on a Kubernetes cluster as per https://github.com/prest/prest/tree/main/install-manifests
  • allow count() as group function

    Several aggregate (here called "group") functions are currently implemented: https://docs.prestd.com/query-statements/#groupselect-functions-support https://www.postgresql.org/docs/13/functions-aggregate.html

    However, that is not the complete list of aggregate functions natively supported by Postgres.

    I would like to add count(columnName|*) here.

    Maybe one can explore whether some of the other functions are also worth implementing, but I would like to keep the focus of this issue on count(), because I actively need it to do something like:

    select column, count(column) from table group by column;

    to get a list of all unique values of "column" with their associated frequencies.
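    A minimal sketch of the SQL such a count group function could generate (identifier validation and quoting are omitted; `countQuery` is a hypothetical helper, not pREST code):

    ```go
    package main

    import "fmt"

    // countQuery renders the aggregate query described above; real
    // code would need to validate and quote the identifiers.
    func countQuery(table, column string) string {
    	return fmt.Sprintf("SELECT %s, count(%s) FROM %s GROUP BY %s",
    		column, column, table, column)
    }

    func main() {
    	fmt.Println(countQuery("scans", "gender"))
    }
    ```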

  • EXECUTING SQL SCRIPTS,error

    Deployed using Docker. When I test the 'EXECUTING SQL SCRIPTS' feature:

    curl -X POST "http://localhost:3000/_QUERIES/data/update" -d '{"name":"test"}'
    { "error": "could not get script data/update, could not load /home/fanwu/queries/data/update.write.sql" }

    The SQL file I wrote is here:

    ll /home/fanwu/queries/data/update.write.sql
    -rw-r--r--. 1 root root 131 Sep 15 14:50 /home/fanwu/queries/data/update.write.sql

    The configuration file:

    grep "queries" prest.toml
    [queries]
    location = "/home/fanwu/queries/"

    I want to know what went wrong and why the file could not be loaded. Please help me, thanks.

  • could not execute template SQL

    Description

    While using pREST with an existing SQL script (under a foo folder), it reports the error below. I double-checked the documentation and elsewhere to clarify, but I'm not sure what's wrong. I hope you can take a look and suggest ideas to solve it.

    URL to run http://127.0.0.1:6000/_QUERIES/foo/some_get?field1=bar

    Config file: location = "./queries/"

    SQL file: SELECT * FROM table WHERE name = "{{.field1}}";

    Error(using Postman tool to test):

    {
        "error": "could not parse script foo/some_get, could not execute template template: queries\\foo\\some_get.read.sql: \"queries\\\\foo\\\\some_get.read.sql\" is an incomplete or empty template"
    }
    

    Windows 7 folder structure:

    --prest
    --------queries
    ----------------foo
    
    
    • pREST version (or commit ref): 0.3.4
    • PostgreSQL version: 11.1
    • Operating system: Windows 7, Windows 8.1
    • API testing tool: Postman
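    A guess at the cause: on Windows the script path is built with backslashes, which then leak into the template name. Normalizing with `filepath.ToSlash` is one way to keep template names consistent across platforms; this sketch is not pREST's actual loader:

    ```go
    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    func main() {
    	// filepath.Join uses the OS separator (backslash on Windows);
    	// ToSlash normalizes it so the template name is stable everywhere.
    	p := filepath.Join("queries", "foo", "some_get.read.sql")
    	fmt.Println(filepath.ToSlash(p))
    }
    ```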
  • Adds Exposure Setting for sensitive endpoints on TOML

    In this PR we add settings that let users disable certain endpoints for security reasons, reducing the need for dedicated infrastructure-team/firewall work.

    The settings will be put on prest.toml as shown below:

    [sensitive]
    disableall = false
    databases = false
    schemas = false
    

    fixed: #741

  • Change JSON_AGG to JSONB_AGG, enabling better performance on the DBMS

    This change improves our aggregation performance and API response time without changing the whole serialization code.

    If this PR gets merged, we can work on a new serializer in the future.

    fixed: #730

  • test: use `T.Setenv` to set env vars in tests

    This PR replaces os.Setenv with t.Setenv. Starting with Go 1.17, we can use t.Setenv to set an environment variable in a test. The variable is automatically restored to its original value when the test and all its subtests complete. This ensures that no test starts with leftover environment variables from previously completed tests.

    This saves us at least two lines (the error check and the cleanup) in every instance, and in some cases adds cleanup that we had forgotten.

    Reference: https://pkg.go.dev/testing#T.Setenv

    func TestFoo(t *testing.T) {
    	// before
    	os.Setenv(key, "new value")
    	defer os.Unsetenv(key)
    	
    	// after
    	t.Setenv(key, "new value")
    }
    
  • Error in Template Functions

    Hi guys, I'm using the inFormat template function for an IN clause in a SQL file, as shown in the docs example. I've discovered a detail: it can't build the IN list because of a formatting error.

    When the values arrive as a single query parameter, it turns out that inFormat only joins them without splitting:

    http://localhost:8081/_QUERIES/dir/customquery?field1=test1,test2

    SELECT * FROM table WHERE name IN {{inFormat "field1"}};
    

    What I got was basically the two terms fused together:

    name in ('test1,test2')
    

    The workaround I found was to add quotes to complete the values (i.e.: ?field1=test1','test2); however, this is a terrible solution and looks like SQL injection (LOL). Going deeper into the code and doing some tests on this function, I noticed that it tries to type-assert the value to a string slice.

    https://github.com/prest/prest/blob/b7107b18d277e85f3155d5eb015dfd9c8a8a729c/template/funcregistry.go#L42-L50

    Following this idea, what I did was: the function receives the string and then splits it into a []string.

    str := fr.TemplateData[key].(string)
    split := strings.Split(str, ",")
    query = fmt.Sprintf("('%s')", strings.Join(split, "', '"))
    

    I saw that there is a split template function, but I couldn't use it inside inFormat.

    • pREST: latest version
    • pREST endpoint: custom query
    • PostgreSQL version: 12
    • OS: Manjaro Linux x86_64
    • Go version: go1.19.4 Linux/amd64
    • Log gist: https://gist.github.com/caiolul/14e908f4f845b3b2cf31c84ec9268b11
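    The split-then-quote fix described above can be written as a self-contained sketch (escaping of embedded quotes, needed to avoid SQL injection, is omitted):

    ```go
    package main

    import (
    	"fmt"
    	"strings"
    )

    // inFormat splits the raw comma-separated parameter before quoting,
    // so "test1,test2" becomes ('test1', 'test2') rather than ('test1,test2').
    func inFormat(raw string) string {
    	parts := strings.Split(raw, ",")
    	return fmt.Sprintf("('%s')", strings.Join(parts, "', '"))
    }

    func main() {
    	fmt.Println(inFormat("test1,test2"))
    }
    ```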
  • Different JOIN behaviour after prest upgrade

    Describe the bug: Performing the same JOIN query on different versions of prestd (with the same database backend) produces different outputs.

    • prestd 1.0.5 -> OK
    • prestd 1.0.14 -> NOK

    To Reproduce Steps to reproduce the behavior:

    1. Request to endpoint scans?_join=inner:bodyfat:scans.scancode:$eq:bodyfat.scancode&scans.scancode=$in.000005256e0dc77adbe8f6b25b75eda6748f0818&bodyfat.method=tina
    2. Attached are the CSV files of both outputs as well as an XLS file for comparison. In the XLS I marked the differences in red. As you can see, we get different results for the same column scans.gender and also a different number of returned columns.

    Expected behavior: The same, correct query results in both cases.

    Desktop (please complete the following information):

    • pREST version (or commit ref): as written above
    • pREST endpoint: as written above
    • PostgreSQL version: 14.x
    • OS: both APIs running in Kubernetes Pods
    • Go version: NA
    • Log gist: NA

    Additional context Database table structure:

                                         Table "public.scans"
        Column     |          Type          | Collation | Nullable |           Default            
    ---------------+------------------------+-----------+----------+------------------------------
     scancode      | character varying(255) |           | not null | 
     creation_date | bigint                 |           | not null | 
     synthetic     | boolean                |           | not null | 
     creator       | character varying(50)  |           | not null | 
     gender        | character varying(50)  |           | not null | 'neutral'::character varying
     clothes       | boolean                |           |          | 
     fists         | boolean                |           |          | 
     pose          | character varying(50)  |           |          | 
     motion        | character varying(50)  |           |          | 
     scene         | character varying(50)  |           |          | 
     raw_data      | character varying(50)  |           |          | 
     user_id       | character varying(100) |           |          | 
     device_id     | character varying(100) |           |          | 
     age           | bigint                 |           |          | 
     weight        | double precision       |           |          | 
     height        | double precision       |           |          | 
     region        | character varying(50)  |           |          | 
     country       | character varying(50)  |           |          | 
     mocap         | boolean                |           | not null | false
     avatar        | character varying(50)  |           |          | 
     campaign      | character varying(255) |           |          | 
     ref_scan      | character varying(255) |           |          | 
     artefacts     | boolean                |           |          | 
     bad_hair      | boolean                |           |          | 
     bad_feet      | boolean                |           |          | 
     verified      | boolean                |           | not null | false
     ref_motion    | character varying(255) |           |          | 
    Indexes:
        "scans_pkey" PRIMARY KEY, btree (scancode) 
    
                                         Table "public.bodyfat"
         Column      |          Type          | Collation | Nullable |           Default            
    -----------------+------------------------+-----------+----------+------------------------------
     bodyfat_id      | integer                |           | not null | generated always as identity
     scancode        | character varying(255) |           | not null | 
     creation_date   | bigint                 |           | not null | 
     percentage      | numeric(7,2)           |           | not null | 
     method          | character varying(50)  |           | not null | 
     device          | character varying(50)  |           |          | 
     meta_cv_release | character varying(50)  |           |          | 
     fitting_id      | integer                |           |          | 
     gender          | character varying(50)  |           |          | 
    Indexes:
        "bodyfat_pkey" PRIMARY KEY, btree (bodyfat_id)
        "bf_creation_date_idx" btree (creation_date)
        "bf_meta_cv_release_idx" btree (meta_cv_release)
        "bf_method_idx" btree (method)
        "bf_scancode_idx" btree (scancode)
    Foreign-key constraints:
        "fk_fitting_id" FOREIGN KEY (fitting_id) REFERENCES model_fitting_results(fitting_id) ON DELETE CASCADE
        "fk_scancode" FOREIGN KEY (scancode) REFERENCES scans(scancode) ON DELETE CASCADE
    

    Cc: @manhofer. Attachments: query_result_comparison.xlsx, WORKING_db_query_result.csv, NOT_WORKING_db_query_result.csv

  • import custom queries from DB

    Problem

    When running multiple instances of prestd, it is currently only possible to load custom queries from a folder, so you have to share that folder and provision it for every new container.

    Desired behavior

    Be able to create, load and manage scripts directly from the installed DB.

    Possible solution

    Add a new flag that defaults to folder loading; when disabled, create a table named prestd_custom_queries that allows creating custom queries and loading them at app start directly from the DB.

  • Docker Image Build Failing

    Successfully tagged ghcr.io/prest/prest:latest
    The push refers to repository [ghcr.io/prest/prest]
    803fc9354a89: Preparing
    9176014dec8d: Preparing
    3ffd6b7b9188: Preparing
    6f0ca1bcc562: Preparing
    1ed5d83a1818: Preparing
    fbbb8386c9be: Preparing
    c77c007a46fa: Preparing
    86bbc29e949d: Preparing
    3a8d602e51a3: Preparing
    d1dec9917839: Preparing
    d38adf39e1dd: Preparing
    4ed121b04368: Preparing
    d9d07d703dd5: Preparing
    86bbc29e949d: Waiting
    fbbb8386c9be: Waiting
    c77c007a46fa: Waiting
    3a8d602e51a3: Waiting
    d1dec9917839: Waiting
    d38adf39e1dd: Waiting
    d9d07d703dd5: Waiting
    unauthorized: unauthenticated: User cannot be authenticated with the token provided.
    4ed121b04368: Waiting
    Error: Process completed with exit code 1.
    

    Docker image build is failing on the master branch due to an invalid token.

Go-Postgresql-Query-Builder - A query builder for Postgresql in Go

Postgresql Query Builder for Go This query builder aims to make complex queries

Nov 17, 2022
Thin clones of PostgreSQL to build powerful development, test, QA, staging environments

Database Lab Engine (DLE) ⚡ Blazing-fast cloning of PostgreSQL databases. Thin clones of PostgreSQL to build powerful development, test, QA, staging

Jan 3, 2023
PolarDB Cluster Manager is the cluster management component of PolarDB for PostgreSQL, responsible for topology management, high availability, configuration management, and plugin extensions.

What is PolarDB Cluster Manager PolarDB Cluster Manager is the cluster management component of PolarDB for PostgreSQL, responsible for topology manage

Nov 9, 2022
A simple Golang-based application that queries a PostgreSQL database

Qwik-E-Mart Demo App A simple Golang-based application that queries a PostgreSQL database named qwikemart to read and return customer data stored in t

Nov 6, 2021
BQB is a lightweight and easy to use query builder that works with sqlite, mysql, mariadb, postgres, and others.

Basic Query Builder Why Simple, lightweight, and fast Supports any and all syntax by the nature of how it works Doesn't require learning special synta

Dec 7, 2022
A high-performance MySQL proxy

kingshard (Chinese homepage) Overview kingshard is a high-performance proxy for MySQL powered by Go. Just like other mysql proxies, you can use it to split the read

Dec 30, 2022
Goproxy4mysql - Kingshard- a high-performance proxy for MySQL powered by Go

kingshard (Chinese homepage) Fork from github.com/flike/kingshard Overview kingshard is a high

Jan 7, 2022
REST based Redis client built on top of Upstash REST API

An HTTP/REST based Redis client built on top of Upstash REST API.

Jul 31, 2022
pggen - generate type safe Go methods from Postgres SQL queries

pggen - generate type safe Go methods from Postgres SQL queries pggen is a tool that generates Go code to provide a typesafe wrapper around Postgres q

Jan 3, 2023
A reverse proxy for postgres which rewrites queries.

pg-rewrite-proxy A reverse proxy for postgres which rewrites queries. Arbitrary rewriting is supported by supplying an LUA script to the proxy applica

Dec 12, 2022
API Go et db Postgres

API Go and Postgres DB. To get the backend working, follow these few steps. - You must have Go and Postgres installed on your computer. -

Dec 14, 2021
Golang Redis Postgres to-do Project

Golang Backend Project Problem Statement Build a to-do application with Golang a

Oct 17, 2022
Re-usable component for subscribing to row-level changes on a postgres database

pgreplicate Re-usable component for subscribing to row-level changes on a postgres database Development In order to run tests the postgres database ne

Dec 21, 2021
Multitenancy in Postgres with Go using Row Level Security (RLS)

tenancy A Go library for multitenancy in Postgres using Row Level Security (RLS). Usage Tenancy as a connection pool. By default, tenancy.Open() begin

Oct 14, 2022
Go-miprimercrud - Mi Primer Crud - GO - Postgres

Mi Primer Crud - GO - Postgres Instalar driver Postgres go get -u github.com/lib

Jan 24, 2022
WAL-G is an archival restoration tool for PostgreSQL, MySQL/MariaDB, and MS SQL Server (beta for MongoDB and Redis).

WAL-G is an archival restoration tool for PostgreSQL, MySQL/MariaDB, and MS SQL Server (beta for MongoDB and Redis).

Jan 1, 2023
Interactive client for PostgreSQL and MySQL

dblab Interactive client for PostgreSQL and MySQL. Overview dblab is a fast and lightweight interactive terminal based UI application for PostgreSQL a

Jan 8, 2023
Interactive terminal user interface and CLI for database connections. MySQL, PostgreSQL. More to come.

dbui is the terminal user interface and CLI for database connections. It provides features like: connect to multiple data sources and instance

Jan 5, 2023