MySQL replication topology management and HA


orchestrator [Documentation]


orchestrator is a MySQL high availability and replication management tool. It runs as a service and provides command-line access, an HTTP API and a web interface. orchestrator supports:

Discovery

orchestrator actively crawls through your topologies and maps them. It reads basic MySQL info such as replication status and configuration.

It provides you with slick visualization of your topologies, including replication problems, even in the face of failures.
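
As an illustrative sketch (hostname and port are placeholders), discovery is typically seeded by pointing orchestrator at a single instance; it then crawls and maps the rest of the topology, which you can also print from the command line:

    orchestrator -c discover -i some.instance.in.topology:3306
    orchestrator -c topology -i some.instance.in.topology:3306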

Refactoring

orchestrator understands replication rules. It knows about binlog file:position, GTID, Pseudo GTID, Binlog Servers.

Refactoring replication topologies can be a matter of dragging and dropping a replica under another master. Moving replicas around is safe: orchestrator will reject an illegal refactoring attempt.

Fine-grained control is achieved by various command line options.
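
For illustration, here is a rough sketch of command-line refactoring (hostnames are placeholders); the same operations are available through the web interface and the API:

    # relocate a replica under another instance; orchestrator picks the best
    # available method (GTID, Pseudo-GTID, binlog servers or plain file:position)
    orchestrator -c relocate -i replica.host:3306 -d new.master.host:3306

    # or be explicit about the mechanism
    orchestrator -c move-below -i replica.host:3306 -d sibling.host:3306
    orchestrator -c move-up -i replica.host:3306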

Recovery

orchestrator uses a holistic approach to detect master and intermediate master failures. Based on information gained from the topology itself, it recognizes a variety of failure scenarios.

It is configurable: it may choose to perform automated recovery (or allow the user to choose the type of manual recovery). Intermediate master recovery is handled internally by orchestrator; master failover is supported by pre/post failure hooks.

The recovery process uses orchestrator's understanding of the topology and its ability to perform refactoring. It is based on state as opposed to configuration: orchestrator picks the best recovery method by investigating and evaluating the topology at the time of recovery itself.
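
As a hedged sketch (cluster alias and hostname are placeholders), recoveries can also be requested explicitly from the command line, in addition to running automatically:

    # planned, graceful promotion of a new master
    orchestrator -c graceful-master-takeover -alias mycluster

    # recover an instance orchestrator has analyzed as failed
    orchestrator -c recover -i failed.instance.host:3306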

The interface

orchestrator supports:

  • Command line interface (love your debug messages, take control of automated scripting)
  • Web API (HTTP GET access; see the example below)
  • Web interface, a slick one.
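
A minimal sketch of HTTP API access (host, port and cluster alias are placeholders):

    curl -s http://orchestrator.host:3000/api/status
    curl -s http://orchestrator.host:3000/api/clusters
    curl -s http://orchestrator.host:3000/api/cluster/alias/mycluster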


Additional perks

  • Highly available
  • Controlled master takeovers
  • Manual failovers
  • Failover auditing
  • Audited operations
  • Pseudo-GTID
  • Datacenter/physical location awareness
  • MySQL-Pool association
  • HTTP security/authentication methods
  • There is also an orchestrator-mysql Google groups forum to discuss topics related to orchestrator
  • More...

Read the Orchestrator documentation

Authored by Shlomi Noach.

Related projects

Developers

Get started developing Orchestrator by reading the developer docs. Thanks for your interest!

License

orchestrator is free and open source, released under the Apache 2.0 license.

Comments
  • Orchestrator promotes a replica with lag on Mariadb


    We have a 3-node MariaDB 10.5.10 setup on CentOS: one primary and two replicas, with semi-sync enabled. Our current orchestrator version is 3.2.4.

    We had a scenario where the replicas were lagging by a few hours and the master was not reachable, so one of the replicas was promoted to primary in spite of the huge lag. This resulted in data loss. Ideally orchestrator should wait for the replica's relay logs to be applied before promoting it as master. Based on my testing this seems to be the behavior on MySQL, but not on MariaDB.

    Test case: tests against MySQL and MariaDB are done with these orchestrator parameters in /etc/orchestrator.conf.json:

    "DelayMasterPromotionIfSQLThreadNotUpToDate": true, "debug": true

    Restart orchestrator on all 3 nodes.

    I) Test on MariaDB: start a 3-node MariaDB cluster (semi-sync enabled).

    1. Create and add data to a test table:

    create table test (colA int, colB int, colC datetime, colD int);
    insert into test values (rand()*100,rand()*1000,now(),rand()*10000);  -- executed 7 times

    2. Stop slave SQL_THREAD on the replicas (nodes 2 and 3).

    3. Wait a few seconds and add some more data on node 1 (the master):

    insert into test values (rand()*100,rand()*1000,now(),rand()*10000);  -- executed 7 times

    4. Stop mysqld on the master (node 1).

    5. You will see orchestrator promoting a replica without the data added in step 3.

    II) Test on MySQL 5.7.32: repeat the same test on a 3-node MySQL setup. You will notice orchestrator promoting one of the replicas without any data loss, i.e. all 14 rows are present.

    Thank You Mohan

  • EnforceSemiSyncReplicas & RecoverLockedSemiSyncMaster - actively enable/disable semi-sync replicas to match master's wait count


    This is a WIP PR that attempts to address https://github.com/openark/orchestrator/issues/1360.

    There are tons of open questions and things missing, but this is the idea.

    Open questions (all answered in https://github.com/openark/orchestrator/pull/1373#pullrequestreview-693389059):

    1. ~EnableSemiSync also manages the master flag. do we really want that? should we not have an EnableSemiSyncReplica?~

    2. ~Should there be two modes: EnforceSemiSyncReplicas: exact|enough (exact would handle MasterWithTooManySemiSyncReplicas and LockedSemiSyncMaster, and enough would only handle LockedSemiSyncMaster)?~

    3. ~LockedSemiSyncMasterHypothesis waits ReasonableReplicationLagSeconds. I'd like there to be another variable to control the wait time. This seems like it's overloaded.~

    TODO:

    • [x] properly succeed failover; currently it kinda retries even though it succeeded, not sure why
    • [x] discuss downtime behavior with shlomi
    • [x] possibly implement MasterWithIncorrectSemiSyncReplicas, see PoC: https://github.com/binwiederhier/orchestrator/pull/1
    • [x] when a replica is downtimed but replication is enabled, MasterWithTooManySemiSyncReplicas does not behave correctly
    • [x] MaybeEnableSemiSyncReplica does not manage the master flag though it previously did (in the new logic only)
    • [x] excludeNotReplicatingReplicas should be a specific instance, not all non-replicating instances!
    • [x] re-test old logic
    • [x] handle master failover semi-sync enable/disable
    • [x] semi-sync replica priority (or come up with better concept)
    • [x] enabled RecoverLockedSemiSyncMaster without exact mode
    • [x] perform sanity checks in checkAndRecover* functions BEFORE enabling/disabling replicas
    • [x] add ReasonableLockedSemiSyncSeconds with fallback to ReasonableReplicationLagSeconds
  • GTID not found properly (5.7) and some graceful-master-takeover issues


    Hi,

    I am testing orchestrator with MySQL 5.7.17: a master and two slaves. I moved one of the slaves to change the topology to A-B-C, and then executed orchestrator -c graceful-master-takeover -alias myclusteralias.

    The issues found are:

    1. GTID appears as disabled on the master: the web interface shows the button to enable it, even though it is obviously enabled across the whole replication chain (GTID_MODE=ON). The slaves are shown with GTID enabled.
    2. This issue means the takeover doesn't use GTID (I guess).
    3. Instance B was read-only before the takeover; after the takeover, read-only is not disabled. Is this a feature, or something I should add via hooks? It would be nice to have a parameter to end the process in whatever state you prefer, depending on the takeover reasons/conditions.
    4. Also, for some reason the role change old master -> new slave doesn't work. It executes a CHANGE MASTER, but apparently the replication username on the old master is empty, so the change master operation fails (the orchestrator user has SELECT ON mysql.slave_master_info in the cluster).
    5. Finally, it would be nice to add a feature to force refactoring of the topology when you have one master and several slaves below it. It requires moving the slaves below the newly elected master just before the master takeover. The process will take a bit longer, moving the slaves and waiting until they are ready.

    Thanks for this amazing tool! Regards, Eduardo

  • hook for graceful master switch


    I have been running some graceful master takeover testing using ProxySQL and Orchestrator together, and I believe it would be a good idea to have a hook that is triggered even earlier than PreFailoverProcesses. The issue with PreFailoverProcesses is that it is triggered after the demoted master has already been placed by Orchestrator in read_only mode, as shown by this extract from the log:

    Mar 03 14:25:10 mysql3 orchestrator[25032]: [martini] Started GET /api/graceful-master-takeover/mysql1/3306 for 192.168.56.1
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Will demote mysql1:3306 and promote mysql2:3306 instead
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Stopped slave on mysql2:3306, Self:mysql-bin.000009:3034573, Exec:mysql-bin.000010:18546301
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Will set mysql1:3306 as read_only
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO instance mysql1:3306 read_only: true
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO auditType:read-only instance:mysql1:3306 cluster:mysql1:3306 message:set as true
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Will advance mysql2:3306 to master coordinates mysql-bin.000010:18546301
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Will start slave on mysql2:3306 until coordinates: mysql-bin.000010:18546301
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Stopped slave on mysql2:3306, Self:mysql-bin.000009:3034573, Exec:mysql-bin.000010:18546301
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO executeCheckAndRecoverFunction: proceeding with DeadMaster detection on mysql1:3306; isActionable?: true; skipProcesses: false
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: detected DeadMaster failure on mysql1:3306
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: Running 1 OnFailureDetectionProcesses hooks
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2055: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: Running OnFailureDetectionProcesses hook 1 of 1: echo 'Detected DeadMaster on mysql1:3306. Affected replicas: 1' >> /tmp/recovery.log
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2056: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO CommandRun(echo 'Detected DeadMaster on mysql1:3306. Affected replicas: 1' >> /tmp/recovery.log,[])
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO CommandRun/running: bash /tmp/orchestrator-process-cmd-358000144
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO CommandRun successful. exit status 0
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: Completed OnFailureDetectionProcesses hook 1 of 1 in 4.556463ms
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2057: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO Completed OnFailureDetectionProcesses hook 1 of 1 in 4.556463ms
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: done running OnFailureDetectionProcesses hooks
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2058: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2059: register-failure-detection
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO executeCheckAndRecoverFunction: proceeding with DeadMaster recovery on mysql1:3306; isRecoverable?: true; skipProcesses: false
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2060: write-recovery
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: will handle DeadMaster event on mysql1:3306
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 DEBUG orchestrator/raft: applying command 2061: write-recovery-step
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO auditType:recover-dead-master instance:mysql1:3306 cluster:mysql1:3306 message:problem found; will recover
    Mar 03 14:25:10 mysql3 orchestrator[25032]: 2018-03-03 14:25:10 INFO topology_recovery: Running 1 PreFailoverProcesses hooks
    

    For the ProxySQL use case, this returns errors to the application as soon as the host is set to read_only mode. I would like to use the proposed hook to have ProxySQL set the old master to offline_soft and give active connections a chance to finish their work, to minimize the errors returned to the application.
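
    A rough sketch of what such a hook script could run against the ProxySQL admin interface (host, port, credentials and hostname are placeholders):

    # drain the to-be-demoted master in ProxySQL before it is set read_only
    mysql -h 127.0.0.1 -P 6032 -u admin -padmin -e "
      UPDATE mysql_servers SET status='OFFLINE_SOFT' WHERE hostname='mysql1';
      LOAD MYSQL SERVERS TO RUNTIME;"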

  • MySQL can't recover with GTID


    @shlomi-noach: I'm trying to test orchestrator's recovery and I'm having some problems. When I stop the MySQL master, which has GTID enabled, the slave can't be promoted to master. This content appears in the recovery log: "All errors: PseudoGTIDPattern not configured; cannot use Pseudo-GTID". I found this in the documentation: "At this time recovery requires either GTID (Oracle or MariaDB), Pseudo GTID or Binlog Servers." Does it mean that GTID is only useful for Oracle or MariaDB, not MySQL?

    By the way, this is my first time using raft; should I do anything extra to start it? I have created and edited /etc/profile.d/orchestrator-client.sh according to the documentation, but I don't know how to start raft. A command like "orchestrator-client -c which-api" works, but "orchestrator-client -c raft-leader" fails with "raft-state: not running with raft setup". What can I do about this?

    Thanks!
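
    A minimal sketch of what the raft setup usually amounts to (hostnames are placeholders): raft is started by the orchestrator service itself once RaftEnabled, RaftBind and RaftNodes are set in its configuration, and orchestrator-client just needs to know about all nodes so it can locate the leader:

    # e.g. in /etc/profile.d/orchestrator-client.sh
    export ORCHESTRATOR_API="https://orc-1:3000/api https://orc-2:3000/api https://orc-3:3000/api"

    orchestrator-client -c which-api
    orchestrator-client -c raft-leader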

  • Credentials not set after master takeover


    This issue was introduced between 3.2.3 (working) and 3.2.5 (broken).

    Setup: The replication credentials are stored in a metadata table on the database server, Orchestrator knows about these with ReplicationCredentialsQuery.

    In 3.2.3 - issue graceful-master-takeover and orchestrator automatically connects the old master to the new one.

    In 3.2.5 - after graceful-master-takeover the old master is set as slave but username is missing (password possibly too).

    Last IO error | "Fatal error: Invalid (empty) username when attempting to connect to the master server. Connection attempt terminated."
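
    One way to narrow this down (the table and column names are only an example of what ReplicationCredentialsQuery might point at) is to confirm that the configured query still returns credentials when run against the demoted master around the time of the takeover:

    mysql -h old-master -e "SELECT replication_user, replication_password FROM meta.replication_credentials"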
    
  • the replication not detach to the new master


  • 3.0.2 fails to run on RHEL 6.7


    Hi, I was happily running 2.1.5 on RHEL 6.7, but when I install the new 3.0.2 version using rpm I get the following error:

    [root@d010108059224 orchestrator]# cat /etc/system-release
    Red Hat Enterprise Linux Server release 6.7 (Santiago)
    
    [root@d010108059224 orchestrator]# /usr/local/orchestrator/orchestrator --debug http &
    [1] 1908
    [root@d010108059224 orchestrator]# /usr/local/orchestrator/orchestrator: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /usr/local/orchestrator/orchestrator)
    
    [1]+  Exit 1                  /usr/local/orchestrator/orchestrator --debug http
    

    Is this version meant to be run on RHEL 7?
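
    For reference, a quick way to compare the glibc the binary was linked against with the glibc the OS ships (paths as in the output above):

    # glibc shipped by the OS
    ldd --version | head -1
    # glibc versions referenced by the binary
    strings /usr/local/orchestrator/orchestrator | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -5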

  • refactor(go mod): migrate to go modules


    During the migration process, some code had been updated directly in the vendor folder, making it inconsistent with the upstream code repository. I moved this part of the code to the migrate folder.

    fixes #1355

    Signed-off-by: cndoit18 [email protected]

  • Slaves lagging by couple of hours are elected as master by orchestrator


    We are seeing instances where slaves with a couple of hours of lag are elected as masters. Is there any configuration to prevent that from happening?

    @shlomi-noach

  • remote error: tls: bad certificate


    Question

    How can I debug the remote error: tls: bad certificate shown below? It's not clear to me which part of orchestrator has TLS problems.

    config

    cat /var/lib/orchestrator/orchestrator-sqlite.conf.json
    {
        "Debug": true,
        "EnableSyslog": false,
        "ListenAddress": ":3000",
        "AutoPseudoGTID": true,
        "RaftEnabled": true,
        "RaftDataDir": "/var/lib/orchestrator",
        "RaftBind": "104.248.131.78",
        "RaftNodes": ["mysql-001.livesystem.at", "mysql-002.livesystem.at", "mysql-003.livesystem.at"] ,
        "BackendDB": "sqlite",
        "SQLite3DataFile": "/var/lib/orchestrator/data/orchestrator.sqlite3",
        "MySQLTopologyCredentialsConfigFile": "/var/lib/orchestrator/orchestrator-topology.cnf",
        "InstancePollSeconds": 5,
        "DiscoverByShowSlaveHosts": false,
        "FailureDetectionPeriodBlockMinutes": 60,
        "UseSSL": true,
        "SSLPrivateKeyFile": "/var/lib/orchestrator/pki/mysql-001.livesystem.at_privatekey.pem",
        "SSLCertFile": "/var/lib/orchestrator/pki/mysql-001.livesystem.at_cert.pem",
        "SSLCAFile": "/var/lib/orchestrator/pki/ca_cert.pem",
        "SSLSkipVerify": false,
      }
    

    debug output

    root@mysql-001:~# cd /usr/local/orchestrator && orchestrator --debug --config=/var/lib/orchestrator/orchestrator-sqlite.conf.json --stack http
    2019-05-07 10:14:49 INFO starting orchestrator, version: 3.0.14, git commit: f4c69ad05010518da784ce61865e65f0d9e0081c
    2019-05-07 10:14:49 INFO Read config: /var/lib/orchestrator/orchestrator-sqlite.conf.json
    2019-05-07 10:14:49 DEBUG Parsed topology credentials from /var/lib/orchestrator/orchestrator-topology.cnf
    2019-05-07 10:14:49 DEBUG Connected to orchestrator backend: sqlite on /var/lib/orchestrator/data/orchestrator.sqlite3
    2019-05-07 10:14:49 DEBUG Initializing orchestrator
    2019-05-07 10:14:49 DEBUG Migrating database schema
    2019-05-07 10:14:49 DEBUG Migrated database schema to version [3.0.14]
    2019-05-07 10:14:49 INFO Connecting to backend :3306: maxConnections: 128, maxIdleConns: 32
    2019-05-07 10:14:49 INFO Starting Discovery
    2019-05-07 10:14:49 INFO Registering endpoints
    2019-05-07 10:14:49 INFO continuous discovery: setting up
    2019-05-07 10:14:49 DEBUG Setting up raft
    2019-05-07 10:14:49 DEBUG Queue.startMonitoring(DEFAULT)
    2019-05-07 10:14:49 INFO Starting HTTPS listener
    2019-05-07 10:14:49 INFO Read in CA file: /var/lib/orchestrator/pki/ca_cert.pem
    2019-05-07 10:14:49 DEBUG raft: advertise=104.248.131.78:10008
    2019-05-07 10:14:49 DEBUG raft: transport=&{connPool:map[] connPoolLock:{state:0 sema:0} consumeCh:0xc42008b500 heartbeatFn:<nil> heartbeatFnLock:{state:0 sema:0} logger:0xc420911400 maxPool:3 shutdown:false shutdownCh:0xc42008b560 shutdownLock:{state:0 sema:0} stream:0xc42026b9a0 timeout:10000000000 TimeoutScale:262144}
    2019-05-07 10:14:49 DEBUG raft: peers=[104.248.131.78:10008 142.93.100.13:10008 142.93.161.104:10008]
    2019-05-07 10:14:49 DEBUG raft: logStore=&{dataDir:/var/lib/orchestrator backend:<nil>}
    2019-05-07 10:14:50 INFO raft: store initialized at /var/lib/orchestrator/raft_store.db
    2019-05-07 10:14:50 INFO new raft created
    2019/05/07 10:14:50 [INFO] raft: Node at 104.248.131.78:10008 [Follower] entering Follower state (Leader: "")
    2019-05-07 10:14:50 INFO continuous discovery: starting
    2019-05-07 10:14:50 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019/05/07 10:14:51 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2019/05/07 10:14:51 [INFO] raft: Node at 104.248.131.78:10008 [Candidate] entering Candidate state
    2019/05/07 10:14:51 [ERR] raft: Failed to make RequestVote RPC to 142.93.100.13:10008: dial tcp 142.93.100.13:10008: connect: connection refused
    2019/05/07 10:14:51 [ERR] raft: Failed to make RequestVote RPC to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:51 [DEBUG] raft: Votes needed: 2
    2019/05/07 10:14:51 [DEBUG] raft: Vote granted from 104.248.131.78:10008. Tally: 1
    2019/05/07 10:14:53 [WARN] raft: Election timeout reached, restarting election
    2019/05/07 10:14:53 [INFO] raft: Node at 104.248.131.78:10008 [Candidate] entering Candidate state
    2019/05/07 10:14:53 [ERR] raft: Failed to make RequestVote RPC to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [DEBUG] raft: Votes needed: 2
    2019/05/07 10:14:53 [DEBUG] raft: Vote granted from 104.248.131.78:10008. Tally: 1
    2019/05/07 10:14:53 [DEBUG] raft: Vote granted from 142.93.100.13:10008. Tally: 2
    2019/05/07 10:14:53 [INFO] raft: Election won. Tally: 2
    2019/05/07 10:14:53 [INFO] raft: Node at 104.248.131.78:10008 [Leader] entering Leader state
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [INFO] raft: pipelining replication to peer 142.93.100.13:10008
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [DEBUG] raft: Node 104.248.131.78:10008 updated peer set (2): [104.248.131.78:10008 142.93.100.13:10008 142.93.161.104:10008]
    2019-05-07 10:14:53 DEBUG orchestrator/raft: applying command 2: leader-uri
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [WARN] raft: Failed to contact 142.93.161.104:10008 in 508.458369ms
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019-05-07 10:14:53 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:54 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:54 [WARN] raft: Failed to contact 142.93.161.104:10008 in 998.108974ms
    2019/05/07 10:14:54 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:54 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
    2019/05/07 10:14:54 [WARN] raft: Failed to contact 142.93.161.104:10008 in 1.450057377s
    2019-05-07 10:14:54 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019/05/07 10:14:54 [INFO] raft: pipelining replication to peer 142.93.161.104:10008
    2019-05-07 10:14:55 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
    2019-05-07 10:14:55 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:14:56 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:14:57 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:14:58 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:14:59 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:00 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
    2019-05-07 10:15:00 DEBUG orchestrator/raft: applying command 3: request-health-report
    2019/05/07 10:15:00 http: TLS handshake error from 104.248.131.78:47866: remote error: tls: bad certificate
    2019/05/07 10:15:00 http: TLS handshake error from 142.93.100.13:51332: remote error: tls: bad certificate
    2019/05/07 10:15:00 http: TLS handshake error from 142.93.161.104:47940: remote error: tls: bad certificate
    2019-05-07 10:15:00 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:01 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:02 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:03 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
    2019-05-07 10:15:05 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
    2019-05-07 10:15:10 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
    2019-05-07 10:15:10 DEBUG orchestrator/raft: applying command 4: request-health-report
    2019/05/07 10:15:10 http: TLS handshake error from 104.248.131.78:47870: remote error: tls: bad certificate
    2019/05/07 10:15:10 http: TLS handshake error from 142.93.100.13:51334: remote error: tls: bad certificate
    2019/05/07 10:15:10 http: TLS handshake error from 142.93.161.104:47942: remote error: tls: bad certificate
    2019-05-07 10:15:15 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
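
    One way to narrow this down (file paths taken from the configuration above) is to verify each node's certificate against the CA and inspect which hostnames or IPs it actually covers:

    openssl verify -CAfile /var/lib/orchestrator/pki/ca_cert.pem /var/lib/orchestrator/pki/mysql-001.livesystem.at_cert.pem
    openssl x509 -in /var/lib/orchestrator/pki/mysql-001.livesystem.at_cert.pem -noout -text | grep -A1 'Subject Alternative Name'
    echo | openssl s_client -connect mysql-001.livesystem.at:3000 -CAfile /var/lib/orchestrator/pki/ca_cert.pem 2>/dev/null | grep 'Verify return code'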
    
  • Orchestrator GUI incorrectly shows recovery option for intermediate database in chained replication


    https://github.com/openark/orchestrator/issues/1463

    Problem: If we've got replication chain A->B->C, and C is down, GUI shows 'Recover' dropdown for node B, but there is no possible recovery action available in such a case.

    Cause: The root cause of the problem is the analysis logic in Analysis_dao.go:GetReplicationAnalysis(). The condition for setting AllIntermediateMasterReplicasNotReplicating does not check whether any replicas are reachable. So the case where all replicas are dead (no recovery action possible) and the case where some replicas are still reachable but not replicating (recovery action possible) are indistinguishable.

    Solution: Improve the analysis logic. Report AllIntermediateMasterReplicasNotReplicating only if all replicas are not replicating, but there are still some reachable replicas.

    This commit also contains an improvement: do not try to query a node that is not reachable (ping the node before examining it).

  • Orchestrator analysis-locked-hypothesis test is unstable


    https://github.com/openark/orchestrator/issues/1464

    Problem: analysis-locked-hypothesis test is unstable

    Cause: By default an instance is polled every 5 seconds, but we wait only 2 seconds, so it may happen that the instance is not polled before we get the analysis result.

    Solution: Wait 7 seconds after disabling the instance. It is 7 because it has to be greater than 5, but LockedSemiSyncMasterHypothesis is reported only within a 6-second window (InstancePollSeconds + ReasonableInstanceCheckSeconds); after that it switches to LockedSemiSyncMaster status.


  • Orchestrator GUI incorrectly shows recovery option for intermediate database in chained replication


    In a chained replication environment such as A -> B -> C, the loss of the leaf node, C, makes the Orchestrator GUI show the Recover button on B when no recovery action is possible. This can be confusing for users unfamiliar with MySQL topology, who may think there is a possible action.

    To reproduce this issue, you can create a cluster using 3 nodes with anydbver:

    ./anydbver deploy hn:ps0 ps:5.7 node1 hn:ps1 ps:5.7 master:default node2 hn:ps2 ps:5.7 master:node1 node3 hn:orc orchestrator master:default

    Shut down the ps2 instance and access the GUI. It will offer instance ps1 for recovery.

  • Orchestrator switches incorrectly, causing database service failure, problem analysis


    Hi, I may have found a bug, please help. When I execute the change master command at 192.168.73.128:4307, the DistributePairs function writes the information of the downed MySQL instance (192.168.73.128:4308) to consul, causing the database service to fail.

    1. This is my database topology (screenshot).

    2. database_instance table information in the sqlite backend (192.168.73.128:4308, a MySQL instance that is down) (screenshot).

    3. This code filters out the real MySQL master (192.168.73.128:3307) and retains the information of the downed MySQL instance (192.168.73.128:4308), which is then written to consul by the following code (screenshot).

    4. The downed MySQL instance's information (192.168.73.128:4308) is written to consul in this code (screenshot).

    5. consul-template notices that the consul key has changed and updates haproxy.cfg with the information of the downed MySQL instance, causing a failure.

    If you need additional information from me, please contact me. If it is a bug, please help to fix it. Thank you.
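
    For debugging, one way to see exactly what orchestrator last wrote for the cluster (assuming the default mysql/master KV prefix) is to dump the relevant consul keys:

    consul kv get -recurse mysql/master/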

  • GTID auto position isn't set when using move-below or move-up (but is when using relocate)


    It seems certain commands are disabling GTID auto-positioning, and I'm having trouble determining why. This is with Orch v3.2.6 and MySQL 5.7.

    If I run either of:

     sudo orchestrator -c move-below -i host2:3306 -d host3:3306
    sudo orchestrator -c move-up -i host2:3306
    

    The auto_position setting from SHOW SLAVE STATUS flips to 0, even if it was previously set to 1. This happens whether I manually run CHANGE MASTER TO MASTER_AUTO_POSITION=1 or use the enable-gtid command prior to the move.

    However, if I use relocate, auto-position is preserved, e.g.:

    sudo orchestrator -c relocate -i host2:3306 -d host3:3306
    

    I've tried this with AutoPseudoGTID enabled and disabled (we prefer to use full GTIDs whenever possible and actually do not want Pseudo GTIDs as it adds a ton of data to our PMM dashboards).

    If the move-* commands are intended to be used without GTID, it might be nice to have them fail when GTID is enabled, or at least to update the docs to indicate that they break GTID auto-positioning. Looking at the code, this seems unintentional.
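
    As a possible workaround until this is clarified, auto-positioning can be re-enabled right after a move-* operation with the enable-gtid command mentioned above (hostname is a placeholder):

    sudo orchestrator -c enable-gtid -i host2:3306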

    Here's the debug info when running move-below and relocate. I've changed hostnames, IPs, etc. but otherwise the sequence is the same:

    move-below:

    ~$ sudo orchestrator -c move-below -i host10:3306 -d host2:3306
    2022-10-11 15:15:35 DEBUG Hostname unresolved yet: host10
    2022-10-11 15:15:35 DEBUG Cache hostname resolve host10 as host10
    2022-10-11 15:15:35 DEBUG Hostname unresolved yet: host2
    2022-10-11 15:15:35 DEBUG Cache hostname resolve host2 as host2
    2022-10-11 15:15:35 DEBUG Connected to orchestrator backend: sqlite on /var/lib/orchestrator/orchestrator.db
    2022-10-11 15:15:35 DEBUG Initializing orchestrator
    2022-10-11 15:15:35 INFO Connecting to backend :3306: maxConnections: 128, maxIdleConns: 32
    2022-10-11 15:15:35 DEBUG Hostname unresolved yet: host1
    2022-10-11 15:15:35 DEBUG Cache hostname resolve host1 as host1
    2022-10-11 15:15:35 DEBUG Hostname unresolved yet: host1
    2022-10-11 15:15:35 DEBUG Cache hostname resolve host1 as host1
    2022-10-11 15:15:35 INFO Will move host10:3306 below host2:3306
    2022-10-11 15:15:35 INFO auditType:begin-maintenance instance:host10:3306 cluster:host1:3306 message:maintenanceToken: 1, owner: root, reason: move below host2:3306
    2022-10-11 15:15:35 INFO auditType:begin-maintenance instance:host2:3306 cluster:host1:3306 message:maintenanceToken: 2, owner: root, reason: host10:3306 moves below this
    2022-10-11 15:15:35 INFO Stopped replication on host10:3306, Self:mysql-10-bin.000002:54824454, Exec:mysql-01-bin.000031:363768758
    2022-10-11 15:15:35 INFO Stopped replication on host2:3306, Self:mysql-02-bin.000002:54824454, Exec:mysql-01-bin.000031:363768758
    2022-10-11 15:15:35 DEBUG ChangeMasterTo: will attempt changing master on host10:3306 to host2:3306, mysql-02-bin.000002:54824454
    2022-10-11 15:15:35 INFO ChangeMasterTo: Changed master on host10:3306 to: host2:3306, mysql-02-bin.000002:54824454. GTID: false
    2022-10-11 15:15:35 INFO Started replication on host10:3306
    2022-10-11 15:15:35 INFO Started replication on host2:3306
    2022-10-11 15:15:35 INFO auditType:move-below instance:host10:3306 cluster:host1:3306 message:moved host10:3306 below host2:3306
    2022-10-11 15:15:35 INFO auditType:end-maintenance instance:host2:3306 cluster:host1:3306 message:maintenanceToken: 2
    2022-10-11 15:15:35 INFO auditType:end-maintenance instance:host10:3306 cluster:host1:3306 message:maintenanceToken: 1
    host10:3306<host2:3306
    

    relocate:

    ~$ sudo orchestrator -c relocate -i host10:3306 -d host2:3306
    2022-10-11 15:16:00 DEBUG Hostname unresolved yet: host10
    2022-10-11 15:16:00 DEBUG Cache hostname resolve host10 as host10
    2022-10-11 15:16:00 DEBUG Hostname unresolved yet: host2
    2022-10-11 15:16:00 DEBUG Cache hostname resolve host2 as host2
    2022-10-11 15:16:00 DEBUG Connected to orchestrator backend: sqlite on /var/lib/orchestrator/orchestrator.db
    2022-10-11 15:16:00 DEBUG Initializing orchestrator
    2022-10-11 15:16:00 INFO Connecting to backend :3306: maxConnections: 128, maxIdleConns: 32
    2022-10-11 15:16:00 INFO Will move host10:3306 below host2:3306 via GTID
    2022-10-11 15:16:00 INFO auditType:begin-maintenance instance:host10:3306 cluster:host2:3306 message:maintenanceToken: 4, owner: root, reason: move below host2:3306
    2022-10-11 15:16:00 DEBUG Hostname unresolved yet: host1
    2022-10-11 15:16:00 DEBUG Cache hostname resolve host1 as host1
    2022-10-11 15:16:00 DEBUG Hostname unresolved yet: host1
    2022-10-11 15:16:00 DEBUG Cache hostname resolve host1 as host1
    2022-10-11 15:16:00 INFO Stopped replication on host10:3306, Self:mysql-10-bin.000002:54993956, Exec:mysql-02-bin.000002:54993956
    2022-10-11 15:16:00 DEBUG ChangeMasterTo: will attempt changing master on host10:3306 to host2:3306, mysql-01-bin.000031:363922016
    2022-10-11 15:16:00 INFO ChangeMasterTo: Changed master on host10:3306 to: host2:3306, mysql-01-bin.000031:363922016. GTID: true
    2022-10-11 15:16:00 INFO Started replication on host10:3306
    2022-10-11 15:16:00 INFO auditType:move-below-gtid instance:host10:3306 cluster:host2:3306 message:moved host10:3306 below host2:3306
    2022-10-11 15:16:00 INFO auditType:end-maintenance instance:host10:3306 cluster:host2:3306 message:maintenanceToken: 4
    2022-10-11 15:16:00 INFO auditType:relocate-below instance:host10:3306 cluster:host2:3306 message:relocated host10:3306 below host2:3306
    host10:3306<host2:3306
    

    Config:

    {
      "AutoPseudoGTID": true,
      "UseSuperReadOnly" : false,
      "Debug": false,
      "EnableSyslog": false,
      "ListenAddress": ":3000",
      "BackendDB": "sqlite",
      "SQLite3DataFile": "/var/lib/orchestrator/orchestrator.db",
      "MySQLTopologyUser": "svc_orchestrator",
      "MySQLTopologyPassword": "346ASDF3456jdfowier2tas",
      "MySQLTopologyCredentialsConfigFile": "",
      "MySQLTopologySSLPrivateKeyFile": "",
      "MySQLTopologySSLCertFile": "",
      "MySQLTopologySSLSkipVerify": true,
      "MySQLTopologyUseMutualTLS": false,
      "MySQLConnectTimeoutSeconds": 1,
      "DefaultInstancePort": 3306,
      "RaftEnabled": false,
      "RaftBind": "3.232.63.146",
      "RaftDataDir": "/var/lib/raft",
      "DefaultRaftPort": 10008,
      "RaftNodes": [],
      "DiscoverByShowSlaveHosts": true,
      "DiscoveryIgnoreHostnameFilters": [],
      "InstancePollSeconds": 5,
      "UnseenInstanceForgetHours": 240,
      "SnapshotTopologiesIntervalHours": 0,
      "InstanceBulkOperationsWaitTimeoutSeconds": 10,
      "HostnameResolveMethod": "default",
      "MySQLHostnameResolveMethod": "@@report_host",
      "SkipBinlogServerUnresolveCheck": true,
      "ExpiryHostnameResolvesMinutes": 60,
      "RejectHostnameResolvePattern": "",
      "ReasonableReplicationLagSeconds": 10,
      "ProblemIgnoreHostnameFilters": [],
      "VerifyReplicationFilters": false,
      "ReasonableMaintenanceReplicationLagSeconds": 20,
      "CandidateInstanceExpireMinutes": 60,
      "AuditLogFile": "",
      "AuditToSyslog": false,
      "RemoveTextFromHostnameDisplay": ".hosts.secretcdn.net:3306",
      "ReadOnly": false,
      "AuthenticationMethod": "multi",
      "HTTPAuthUser": "svc_http_orchestrator",
      "HTTPAuthPassword": "SomeRandomPass",
      "AuthUserHeader": "",
      "PowerAuthUsers": [
        "*"
      ],
      "SlaveLagQuery": "",
      "DetectClusterAliasQuery": "SELECT cluster_name FROM meta.cluster WHERE anchor=1",
      "DetectClusterDomainQuery": "",
      "DetectInstanceAliasQuery": "SELECT @@hostname",
      "DetectPromotionRuleQuery": "",
      "DataCenterPattern": "[.]([^.]+)[.][^.]+[.]secretcdn[.]net",
      "PhysicalEnvironmentPattern": "[.]([^.]+[.][^.]+)[.]secretcdn[.]net",
      "PromotionIgnoreHostnameFilters": [],
      "DetectSemiSyncEnforcedQuery": "",
      "ServeAgentsHttp": false,
      "AgentsServerPort": ":3001",
      "AgentsUseSSL": false,
      "AgentsUseMutualTLS": false,
      "AgentSSLSkipVerify": false,
      "AgentSSLPrivateKeyFile": "",
      "AgentSSLCertFile": "",
      "AgentSSLCAFile": "",
      "AgentSSLValidOUs": [],
      "UseSSL": true,
      "UseMutualTLS": false,
      "SSLSkipVerify": true,
      "MySQLOrchestratorSSLSkipVerify": true,
      "SSLPrivateKeyFile": "/etc/vaultly/orchestrator/server-key.pem",
      "SSLCertFile": "/etc/vaultly/orchestrator/server-cert.pem",
      "SSLCAFile": "/etc/vaultly/orchestrator/ca-cert.pem",
      "SSLValidOUs": [],
      "URLPrefix": "",
      "StatusEndpoint": "/api/status",
      "StatusSimpleHealth": true,
      "StatusOUVerify": false,
      "AgentPollMinutes": 60,
      "UnseenAgentForgetHours": 6,
      "StaleSeedFailMinutes": 60,
      "SeedAcceptableBytesDiff": 8192,
      "PseudoGTIDPatternIsFixedSubstring": false,
      "PseudoGTIDMonotonicHint": "asc:",
      "PseudoGTIDPattern": "",
      "DetectPseudoGTIDQuery": "",
      "BinlogEventsChunkSize": 10000,
      "SkipBinlogEventsContaining": [],
      "ReduceReplicationAnalysisCount": true,
      "FailureDetectionPeriodBlockMinutes": 60,
      "RecoveryPollSeconds": 10,
      "RecoveryPeriodBlockSeconds": 3600,
      "RecoveryIgnoreHostnameFilters": [],
      "RecoverMasterClusterFilters": [
        "_master_pattern_"
      ],
      "RecoverIntermediateMasterClusterFilters": [
        "_intermediate_master_pattern_"
      ],
      "OnFailureDetectionProcesses": [
        "echo 'Detected {failureType} on {failureCluster}. Affected replicas: {countSlaves}' >> /tmp/recovery.log"
      ],
      "PreFailoverProcesses": [
        "echo 'Will recover from {failureType} on {failureCluster}' >> /tmp/recovery.log"
      ],
      "PostFailoverProcesses": [
        "echo '(for all types) Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
      ],
      "PostUnsuccessfulFailoverProcesses": [],
      "PostMasterFailoverProcesses": [
        "echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Promoted: {successorHost}:{successorPort}' >> /tmp/recovery.log"
      ],
      "PostIntermediateMasterFailoverProcesses": [
        "echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
      ],
      "CoMasterRecoveryMustPromoteOtherCoMaster": true,
      "DetachLostSlavesAfterMasterFailover": true,
      "ApplyMySQLPromotionAfterMasterFailover": true,
      "MasterFailoverDetachSlaveMasterHost": false,
      "MasterFailoverLostInstancesDowntimeMinutes": 0,
      "PostponeSlaveRecoveryOnLagMinutes": 0,
      "OSCIgnoreHostnameFilters": [],
      "GraphiteAddr": "",
      "GraphitePath": "",
      "GraphiteConvertHostnameDotsToUnderscores": true
    }
    