
Posts

Showing posts from 2021

PostgreSQL Daily | PostgreSQL administration

List databases:

postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(4 rows)

Connect to a database:

postgres=# \connect postgres
You are now connected to database "postgres" as user "postgres".

List schemas (S includes system schemas, + adds details):

postgres=# \dnS+
 information_schema | postgres | postgres=UC/postgres+|
                    |          | =U/postgres
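Aligned psql output like the \l listing above can also be consumed programmatically. A minimal sketch (not a psql feature, just generic text parsing; the sample table is a shortened copy of the listing above):

```python
# Sketch: parse psql's aligned table output into header + data rows.
# Lines of dashes, the "(N rows)" footer, and non-table lines are skipped.
sample = """\
   Name    |  Owner   | Encoding
-----------+----------+----------
 postgres  | postgres | UTF8
 template0 | postgres | UTF8
 template1 | postgres | UTF8
 test      | postgres | UTF8
(4 rows)"""

def parse_psql_table(text):
    rows = []
    for line in text.splitlines():
        if line.startswith("-") or line.startswith("(") or "|" not in line:
            continue
        cells = tuple(c.strip() for c in line.split("|"))
        rows.append(cells)
    header, *data = rows
    return header, data

header, data = parse_psql_table(sample)
```

In practice `psql -At` (unaligned, tuples-only output) avoids the need for this kind of parsing, but the sketch is handy when all you have is a captured session log.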

Install PostgreSQL 12.8 on RHEL 7 and CentOS

Install PostgreSQL 12.8 on RHEL 7:

Download the following rpm packages from https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-7-x86_64/

postgresql12-libs-12.8-1PGDG.rhel7.x86_64.rpm
postgresql12-12.8-1PGDG.rhel7.x86_64.rpm
postgresql12-server-12.8-1PGDG.rhel7.x86_64.rpm
postgresql12-contrib-12.8-1PGDG.rhel7.x86_64.rpm

Install the rpm packages for PostgreSQL:

# yum install postgresql12-libs-12.8-1PGDG.rhel7.x86_64.rpm
# yum install postgresql12-12.8-1PGDG.rhel7.x86_64.rpm
# yum install postgresql12-server-12.8-1PGDG.rhel7.x86_64.rpm
# yum install postgresql12-contrib-12.8-1PGDG.rhel7.x86_64.rpm

This installs the following utilities in /usr/pgsql-12/bin:

clusterdb, createdb, createuser, dropuser, initdb, oid2name, pg_basebackup, pg_checksums, pg_controldata, pg_ctl, pg_dump, pg_isready, pg_receivewal, pg_restore, pg_rewind, pg_test_timing, pg_upgrade, pgbench, postgresql-12-check-db-dir, postgresql-12-setup, vacuumdb, reindexdb
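Because the four packages must be installed in dependency order (libs first, then client, server, contrib), it can help to generate the download URLs and yum commands from one list. A small sketch using the base URL and package names from the post:

```python
# Sketch: build download URLs and yum install commands for the PGDG 12.8
# packages, preserving the install order (libs -> client -> server -> contrib).
BASE = "https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-7-x86_64"
PACKAGES = [
    "postgresql12-libs-12.8-1PGDG.rhel7.x86_64.rpm",
    "postgresql12-12.8-1PGDG.rhel7.x86_64.rpm",
    "postgresql12-server-12.8-1PGDG.rhel7.x86_64.rpm",
    "postgresql12-contrib-12.8-1PGDG.rhel7.x86_64.rpm",
]

urls = [f"{BASE}/{pkg}" for pkg in PACKAGES]
yum_cmds = [f"yum install -y {pkg}" for pkg in PACKAGES]
```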

SingleStore database | Pipeline on ActiveScale S3 bucket

SingleStore database on ActiveScale S3 bucket:

CREATE DATABASE <DB_NAME> ON S3 "<BucketName>"
    CONFIG '{"compatibility_mode": true, "endpoint_url": "http://<endpointName>"}'
    CREDENTIALS '{"aws_access_key_id":"<yourKey>","aws_secret_access_key":"<yourSecretKey>"}';

You may have faced the following error while creating a SingleStore database with files on an ActiveScale S3 bucket:

"The AWS Access Key Id you provided does not exist in our records."

Create a database pipeline:

CREATE PIPELINE library AS
    LOAD DATA S3 '<bucketName>'
    CONFIG '{"compatibility_mode": true, "endpoint_url": "http://<endpointName>"}'
    CREDENTIALS '{"aws_access_key_id": "<yourKey>", "aws_secret_access_key": "<yourSecretKey>"}'
    INTO TABLE `<tableName>`
    FIELDS TERMINATED BY
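The CONFIG and CREDENTIALS clauses are JSON embedded in single-quoted SQL strings, which is easy to get wrong by hand. A minimal sketch that builds them with json.dumps (endpoint, bucket, table, and key values here are placeholders, not real credentials):

```python
# Sketch: assemble CONFIG/CREDENTIALS JSON for a SingleStore CREATE PIPELINE
# statement. json.dumps guarantees valid JSON quoting.
import json

def pipeline_clauses(endpoint, access_key, secret_key):
    config = json.dumps({"compatibility_mode": True,
                         "endpoint_url": f"http://{endpoint}"})
    credentials = json.dumps({"aws_access_key_id": access_key,
                              "aws_secret_access_key": secret_key})
    return config, credentials

config, credentials = pipeline_clauses("s3.example.local", "KEY", "SECRET")
statement = (f"CREATE PIPELINE library AS LOAD DATA S3 'mybucket' "
             f"CONFIG '{config}' CREDENTIALS '{credentials}' "
             f"INTO TABLE `books`")
```

Note that json.dumps renders Python's True as the lowercase true that the CONFIG clause expects.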

Kubernetes - You must be logged in to the server (Unauthorized)

Kubernetes - You must be logged in to the server (Unauthorized):

Check expiration of the certificates:

# kubeadm alpha certs check-expiration

OR

# find /etc/kubernetes/pki/ -type f -name "*.crt" -print | egrep -v 'ca.crt$' | xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {} | grep After'

Renew all certificates:

# kubeadm alpha certs renew all

Check expiration of the certificates again:

# kubeadm alpha certs check-expiration

Back up the existing config file:

# cp $HOME/.kube/config $HOME/.kube/config.orig

Replace the config file:

# cp /etc/kubernetes/admin.conf $HOME/.kube/config

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-djhr7                         1/1     Running   1          450d
kube-system   coredns-66bff467f8-dn2t5                         1/1     Running   1          450d
kube-system
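The openssl command above prints one "Not After" line per certificate. A small sketch that turns such a line into days-until-expiry, handy for alerting before the Unauthorized error appears (the sample date below is made up):

```python
# Sketch: compute days until expiry from an openssl "Not After" line,
# e.g. "            Not After : Sep 15 12:00:00 2022 GMT".
from datetime import datetime, timezone

def days_until_expiry(not_after_line, now=None):
    # Split only on the first colon; the timestamp itself contains colons.
    value = not_after_line.split(":", 1)[1].strip()
    expires = datetime.strptime(value, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

line = "            Not After : Sep 15 12:00:00 2022 GMT"
remaining = days_until_expiry(
    line, now=datetime(2022, 9, 5, 12, 0, tzinfo=timezone.utc))
```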

MySQL InnoDB Cluster Limitations

MySQL InnoDB cluster limitations:

- A group can consist of at most 9 servers. Attempting to add another server to a group that already has 9 members causes the join request to be refused.
- Limitations and issues described for multi-primary mode groups can also apply to single-primary mode clusters during a failover event, while the newly elected primary flushes out its applier queue from the old primary.
- Individual transactions that result in GTID contents large enough that they cannot be copied between group members over the network within a 5-second window can cause failures in group communication. To avoid this issue, limit the size of your transactions as much as possible; for example, split up files used with LOAD DATA INFILE into smaller chunks.
- Group Replication cannot currently make use of replication event checksums.
- The certification process does not take into account gap locks, table locks, and named locks.
- Setting a transaction isolation level to
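The advice above to split LOAD DATA INFILE input into smaller chunks can be automated. A minimal sketch (the chunk size and file names are arbitrary examples; the demo writes a synthetic 10-line file into a temp directory):

```python
# Sketch: split a large data file into fixed-size chunks so that each
# LOAD DATA INFILE call stays within a small transaction.
import os
import tempfile

def split_file(path, chunk_lines, out_dir):
    """Write consecutive chunks of `chunk_lines` lines; return chunk paths."""
    paths, chunk, idx = [], [], 0
    def flush(chunk, idx):
        out = os.path.join(out_dir, f"chunk_{idx:04d}.csv")
        with open(out, "w") as dst:
            dst.writelines(chunk)
        paths.append(out)
    with open(path) as src:
        for line in src:
            chunk.append(line)
            if len(chunk) == chunk_lines:
                flush(chunk, idx)
                chunk, idx = [], idx + 1
        if chunk:  # trailing partial chunk
            flush(chunk, idx)
    return paths

# Demo: 10 rows split into chunks of 4 -> files of 4, 4, and 2 lines.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "big.csv")
with open(src, "w") as f:
    f.writelines(f"row{i}\n" for i in range(10))
chunks = split_file(src, 4, tmp)
```

Each resulting chunk can then be loaded with its own LOAD DATA INFILE statement, keeping every transaction well under the group communication window.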

SingleStore multi-node architecture components

SingleStore multi-node architecture components:

Cluster: The cluster is made up of all nodes on which SingleStore DB is installed, and contains aggregator nodes and leaf nodes.

Master Aggregator: The cluster contains one master aggregator, which is responsible for cluster monitoring and failover. The master aggregator processes DDL, DML, failover, reference tables, and CML.

Child Aggregator: The cluster may have zero or more child aggregators, depending on query volume. Child aggregators process DML and help reduce the load on the master aggregator.

Leaf: The cluster can have many leaves. A leaf node stores a subset of the data set, and can be part of an availability group when high availability is configured.

Availability Group: A group of leaf nodes that stores redundant data. Its purpose is to deliver high availability in case of a leaf node failure. Redundancy is configurable via the redundancy level.

Master Partition: A data partition stored primarily on a leaf node.
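The component relationships described above can be sketched as a toy data model. This is purely illustrative (the class and field names are my own, not SingleStore APIs), and it assumes redundancy equals the number of availability groups, which is how the text describes redundant copies:

```python
# Sketch: toy model of a SingleStore-style cluster topology.
from dataclasses import dataclass, field

@dataclass
class Leaf:
    name: str
    partitions: list = field(default_factory=list)  # master partitions held here

@dataclass
class Cluster:
    master_aggregator: str                      # exactly one per cluster
    child_aggregators: list = field(default_factory=list)  # zero or more
    availability_groups: dict = field(default_factory=dict)  # group id -> [Leaf]

    def redundancy_level(self):
        # Assumption: each availability group holds one redundant copy,
        # so redundancy equals the number of groups (minimum 1).
        return max(1, len(self.availability_groups))

cluster = Cluster("ma-01", child_aggregators=["ca-01"])
cluster.availability_groups[1] = [Leaf("leaf-01"), Leaf("leaf-02")]
cluster.availability_groups[2] = [Leaf("leaf-03"), Leaf("leaf-04")]
```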

MariaDB administration | MariaDB ColumnStore | MariaDB MaxScale | MariaDB Cluster Management | MariaDB Storage Manager

MariaDB:

systemctl status mariadb
systemctl start mariadb
systemctl restart mariadb
systemctl stop mariadb

MariaDB ColumnStore:

systemctl status mariadb-columnstore
systemctl start mariadb-columnstore
systemctl stop mariadb-columnstore
systemctl restart mariadb-columnstore

MaxScale:

systemctl status maxscale
systemctl start maxscale
systemctl stop maxscale
systemctl restart maxscale
systemctl enable maxscale

MariaDB Cluster Management API:

systemctl start mariadb-columnstore-cmapi
systemctl stop mariadb-columnstore-cmapi
systemctl restart mariadb-columnstore-cmapi
systemctl status mariadb-columnstore-cmapi

MariaDB ColumnStore StorageManager:

systemctl status mcs-storagemanager
systemctl start mcs-storagemanager
systemctl stop mcs-storagemanager
systemctl restart mcs-storagemanager
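Since every service above takes the same four systemctl actions, the full command list can be generated rather than typed. A trivial sketch using the service names from the post:

```python
# Sketch: generate the systemctl command lines for each MariaDB-related
# service (service names taken from the post).
SERVICES = [
    "mariadb",
    "mariadb-columnstore",
    "maxscale",
    "mariadb-columnstore-cmapi",
    "mcs-storagemanager",
]
ACTIONS = ["status", "start", "stop", "restart"]

def systemctl_commands(services=SERVICES, actions=ACTIONS):
    return [f"systemctl {action} {svc}" for svc in services for action in actions]

cmds = systemctl_commands()
```

Each string could then be passed to subprocess.run(cmd.split()) on the host, which is left out here so the sketch stays side-effect free.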

ERROR 1419 (HY000): You do not have the SUPER Privilege

While creating a trigger or stored procedure you may get ERROR 1419:

ERROR 1419 (HY000): You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

In this situation MySQL is trying to protect the database, because binary logging is enabled for replication.

The following options are available to fix it:

Option 1 - turn on log_bin_trust_function_creators: When binary logging is enabled for replication, MySQL requires stored functions to be declared deterministic and prevents function-based data modification. Setting log_bin_trust_function_creators as follows allows data modification using functions, triggers, and stored procedures.

mysql> SET GLOBAL log_bin_trust_function_creators = 1;

Ref: https://dev.mysql.com/doc/refman/8.0/en/stored-programs-logging.html

Option 2 - grant the SUPER privilege: Grant the SUPER privilege to
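SET GLOBAL only lasts until the server restarts, so the variable usually also needs to go into the option file. A minimal sketch that appends it under [mysqld] (the my.cnf path here is a stand-in; use your installation's option file):

```python
# Sketch: persist log_bin_trust_function_creators in a my.cnf-style option
# file so the SET GLOBAL change survives a server restart.
import os
import tempfile
from pathlib import Path

def persist_variable(cnf_path, name, value):
    """Append name = value under [mysqld], adding the section if missing."""
    p = Path(cnf_path)
    text = p.read_text() if p.exists() else ""
    if "[mysqld]" not in text:
        text += "[mysqld]\n"
    text += f"{name} = {value}\n"
    p.write_text(text)
    return text

# Demo against a temp file rather than a live config.
cnf = os.path.join(tempfile.mkdtemp(), "my.cnf")
result = persist_variable(cnf, "log_bin_trust_function_creators", "1")
```

This naive append does not de-duplicate an existing entry; a real deployment would edit the file idempotently (or use a configuration management tool).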

MySQL Enterprise Incremental Backup Python | MEB Incremental Backup Python | mysqlbackup

MySQL Enterprise Incremental Backup:

## Script Name   : MEB_IncreBkup.py
## Script Purpose: Execute an incremental backup of MySQL Enterprise, compress it, check status, send a notification if the script fails, and email the backup log

# Import python libraries
import os
import time
import datetime
import pipes
import socket

host = socket.gethostname()
date = time.strftime('%Y%b%d-%H%M%S')
date1 = time.strftime('%Y-%m-%d.%H-%M-%S')
incremental = "--incremental --incremental-base=history:last_backup"
backupdir = "--backup-dir=/tmp/backup" + date
backupimage = "--backup-image=/mysql/NFS" + host + ".
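The excerpt above is building a mysqlbackup invocation piece by piece. A self-contained sketch of the same idea as a list of arguments (safer to pass to subprocess.run than a shell string); the directory roots and the .mbi image suffix are assumptions, not taken from the truncated script:

```python
# Sketch: assemble a mysqlbackup incremental backup-to-image command line.
# --incremental-base=history:last_backup takes the base LSN from the
# backup history table, as in the script above.
import socket
import time

def build_incremental_cmd(backup_root="/tmp/backup", image_root="/mysql/NFS"):
    host = socket.gethostname()
    stamp = time.strftime("%Y%b%d-%H%M%S")
    return [
        "mysqlbackup",
        "--incremental",
        "--incremental-base=history:last_backup",
        f"--backup-dir={backup_root}{stamp}",
        f"--backup-image={image_root}{host}.{stamp}.mbi",  # suffix is an assumption
        "backup-to-image",
    ]

cmd = build_incremental_cmd()
```

On a host with MySQL Enterprise Backup installed, the list could be executed with subprocess.run(cmd, check=True); the sketch only builds it.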

MySQL InnoDB performance | MySQL InnoDB Stats

InnoDB stats parameters: The InnoDB stats parameters control whether an indexed column's index statistics are persisted to disk or not. They can be set at two scopes: globally, and locally at the table level. These parameters need to be tuned according to the nature of the application and the DML activity against the database and tables, either for a stable query execution plan or for better query performance. The following parameters are available for MySQL InnoDB stats starting from MySQL 8.0. In older versions such as 5.6 and 5.7, innodb_stats_sample_pages was used; starting from MySQL 8.0 it is deprecated and innodb_stats_persistent_sample_pages must be used instead.

Ref.: https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_stats_auto_recalc

•    innodb_stats_auto_recalc
•    innodb_stats_include_delete_marked
•    innodb_stats
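At the table level, the same behavior is controlled through CREATE/ALTER TABLE options (STATS_PERSISTENT, STATS_AUTO_RECALC, STATS_SAMPLE_PAGES), which override the global variables per table. A small sketch that generates such DDL; the table name and values are examples only:

```python
# Sketch: build per-table InnoDB statistics DDL. STATS_PERSISTENT,
# STATS_AUTO_RECALC, and STATS_SAMPLE_PAGES are real InnoDB table options;
# the table name "orders" is an example.
def stats_ddl(table, persistent=1, auto_recalc=1, sample_pages=20):
    return (f"ALTER TABLE {table} "
            f"STATS_PERSISTENT={persistent}, "
            f"STATS_AUTO_RECALC={auto_recalc}, "
            f"STATS_SAMPLE_PAGES={sample_pages};")

ddl = stats_ddl("orders")
```

Running the generated statement against a busy table pins its statistics behavior regardless of later changes to the global innodb_stats_* variables, which helps keep execution plans stable.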