Posts

Showing posts with the label PostgreSQL Tuning

Script to List PostgreSQL Tables with Dead Tuples

This shell script displays the schema name and table name of every table in a PostgreSQL server that has more than a given number of dead tuples. How to create and run the shell script file: create a shell script file (here tables_dead_tuples.sh) with the vim editor and change the environment variables as well as the database name to suit your environment; press i for insert mode, paste the following script, press Esc, then enter :wq to save it. Run the script with sh tables_dead_tuples.sh.

cat tables_dead_tuples.sh

echo " "
echo " "
HOSTNAME=`hostname`
PSQL="/opt/PostgreSQL/9.3/bin/psql"
PORT=5432
HOST="localhost"
DB="template1"
USER="postgres"
echo "Enter the dead tuple count; for example, if you enter 5000 the script will display the schema name and table name of every table having more than 5000 dead tuples in the database";
read count
echo "------***WHAT ARE TABLES HAVING MORE THAN $count DEAD TUP...
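The check above can be sketched as a small script that queries the standard pg_stat_user_tables view, whose n_dead_tup column tracks dead tuples per table. The connection settings are placeholders to adapt, and the live psql call is left commented out so the script can be read and tested without a running server.

```shell
#!/bin/sh
# Sketch of a dead-tuple report; connection settings are assumptions for illustration.
PSQL="psql"
HOST="localhost"
PORT=5432
DB="template1"
USER="postgres"

COUNT="${1:-5000}"   # dead-tuple threshold; defaults to 5000

# n_dead_tup in pg_stat_user_tables counts dead tuples per user table.
SQL="SELECT schemaname, relname, n_dead_tup
FROM pg_stat_user_tables
WHERE n_dead_tup > ${COUNT}
ORDER BY n_dead_tup DESC;"

echo "Tables with more than ${COUNT} dead tuples:"
echo "$SQL"
# Uncomment to run against a live server:
# "$PSQL" -h "$HOST" -p "$PORT" -U "$USER" -d "$DB" -c "$SQL"
```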

PostgreSQL Tuning

The following steps will help you improve PostgreSQL database server performance. First, issue the top command and see which process is taking the most CPU utilization, then note its PID. If that process is a postgres process, you can use this script to find out the query, its state, and its timing; otherwise you can use the following query to find the query:

SELECT now() - query_start AS "runtime", state, query
FROM pg_stat_activity
WHERE pid = 32696;

    runtime     | state  | query
----------------+--------+---------------------------------------------------------------------------------------------
 00:00:01.93486 | active | select timestatus::date from mhrornad_akl.edit_mut_new_audit WHERE ccode='270100010000760000' and report_status='Generated' and edit_mut_no=214

PostgreSQL query tuning type: For modification...
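The PID lookup above can be wrapped in a small script that takes the PID noted from top as an argument. Note that the column is named pid from PostgreSQL 9.2 onward (older releases used procpid). The PID value and connection settings are just illustrative; the live psql call is commented out.

```shell
#!/bin/sh
# Sketch: look up a backend's current query by PID (PostgreSQL 9.2+,
# where the pg_stat_activity column is "pid"; older releases used "procpid").
PID="${1:-32696}"   # PID noted from top; 32696 is just the example value

SQL="SELECT now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE pid = ${PID};"

echo "$SQL"
# Uncomment to run against a live server:
# psql -h localhost -p 5432 -U postgres -d template1 -c "$SQL"
```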

Daily Monitoring of PostgreSQL Master and Slave Servers, Scheduling Vacuum, and Changing PostgreSQL Parameters

Here I explain how to increase the performance of a PostgreSQL server, and I have written a script to vacuum the database and change PostgreSQL performance parameters.

cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
bash-4.2$

Check the master and slave IPs and issue the top command to see whether any process is taking high CPU utilization (master 192.168.1.1, slave 192.168.2.5):

MASTER TOP:
bash-4.2$ top -o %CPU
top - 15:37:26 up 8 days, 2:03, 1 user, load average: 11.43, 23.29, 19.44
Tasks: 449 total, 16 running, 433 sleeping, 0 stopped, 0 zombie
%Cpu(s): 21.6 us, 3.6 sy, 0.0 ni, 44.6 id, 27.9 wa, 0.0 hi, 0.1 si, 2.3 st
KiB Mem: 49458536 total, 49169124 used, 289412 free, 29352 buffers
KiB Swap: 16777212 total, 147776 used, 16629436 free. 39245096 cached Mem

SLAVE TOP:
bash-4.2$ top -o %CPU
top - 15:54:06 up 8 days, 2:20, 1 user, load average: 0.28, 0.30, 0.36
Tasks: 549 total, 2 running, 547 sleeping, 0 stopped, 0 zomb...
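Scheduling the vacuum mentioned above is typically done from cron. A hypothetical crontab entry for the postgres user might look like the following; the vacuumdb path, schedule, and log location are assumptions to adapt to your environment.

```
# Hypothetical crontab entry (edit with: crontab -e as the postgres user).
# Vacuum and analyze all databases every night at 02:00.
0 2 * * * /usr/bin/vacuumdb --host localhost --port 5432 --all --analyze >> /tmp/vacuumdb.log 2>&1
```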

PostgreSQL pg_stat_statements Extension

pg_stat_statements is a module that needs to be loaded and is not available in the default configuration. Loading it is quite easy. Create the extension as usual:

postgres@oel7:/home/postgres/ [PG6] sqh
Null display is "NULL".
Timing is on.
psql (9.5alpha2)
Type "help" for help.

(postgres@[local]:4448) [postgres] > create extension pg_stat_statements;
CREATE EXTENSION
Time: 281.765 ms
(postgres@[local]:4448) [postgres] > \dx
List of installed extensions
        Name        | Version |   Schema   | Description
--------------------+---------+------------+-----------------------------------------------------------
 btree_gist         | 1.1     | public     | support for indexing common datatypes in GiST
 pg_stat_statements | 1.3     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural languag...
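Since the module has to be loaded at server start before it can collect anything, a minimal postgresql.conf fragment looks like this (the two pg_stat_statements.* values shown are illustrative choices, not requirements), followed by a server restart:

```
# postgresql.conf -- pg_stat_statements must be preloaded at server start
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000   # number of distinct statements tracked
pg_stat_statements.track = all   # also track statements inside functions
```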

find TOP 10 Long Running Queries using pg_stat_statements

This is the second day with the PostgreSQL pg_stat_statements module. In the previous post, I shared the basic steps to configure and enable the pg_stat_statements module for tracking query statistics. In this post, I have prepared one small DBA script to present query statistics in such a way that we can easily see the longest-running queries. A PostgreSQL database administrator can use this script for investigating performance-related issues.

SELECT pd.datname
      ,pss.query AS SQLQuery
      ,pss.rows AS TotalRowCount
      ,(pss.total_time / 1000 / 60) AS TotalMinute
      ,((pss.total_time / 1000 / 60) / calls) AS TotalAverageTime
FROM pg_stat_statements AS pss
INNER JOIN pg_database AS pd ON pss.dbid = pd.oid
ORDER BY 4 DESC
LIMIT 10;

Monitor ALL SQL Query Execution Statistics using pg_stat_statements Extension

Are you facing performance-related issues in PostgreSQL? Do you want to know all of your SQL query execution statistics, such as how many times the same query was executed and what the total and average times for your queries are? If so, this post is very helpful for you. PostgreSQL provides the pg_stat_statements module or extension, which automatically records different types of statistics for all running queries. During performance optimization, we are always keen to find the long-running queries on our database server. We should configure the pg_stat_statements module so that we can easily use the required statistics for performance tuning.

Steps to configure and enable the pg_stat_statements module:

First, install the pg_stat_statements extension:

CREATE EXTENSION pg_stat_statements;

Once we install the extension, it starts to log the required query execution information in the pg_stat_statements view.

Select the data of pg_stat_statements:

SELECT * FROM pg_stat_statements;

If ...
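As a sketch of the kind of statistics the view exposes, the script below prints a query listing the most frequently executed statements (column names as in pg_stat_statements 1.x, where total_time is in milliseconds). The live psql calls, including the pg_stat_statements_reset() call that starts a fresh measurement window, are commented out so the sketch runs without a server.

```shell
#!/bin/sh
# Sketch: most frequently executed statements from pg_stat_statements.
SQL="SELECT calls,
       total_time / calls AS avg_time_ms,
       rows,
       query
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;"

echo "$SQL"
# Uncomment to run against a live server:
# psql -U postgres -d postgres -c "$SQL"
# To start a fresh measurement window:
# psql -U postgres -d postgres -c "SELECT pg_stat_statements_reset();"
```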

How to Monitor with PostgreSQL Indexes

BTree: My issue here is that the BTree index will be huge, since as far as I can tell it will store duplicate values (it has to, since it can't assume the table is physically sorted). If the BTree is huge, I end up having to read both the index and the parts of the table that the index points to... Not necessarily: having a BTree index that is 'covering' will give the fastest read time, and if that is all you want (i.e., if you can afford the extra storage), then it is your best bet. BRIN: My understanding is that I can have a small index here at the expense of reading useless pages. Using a small pages_per_range means that the index is bigger (which is a problem with BRIN, since I need to read the whole index), while a big pages_per_range means that I'll read a lot of useless pages. If you can't afford the storage overhead of a covering BTree index, BRIN is ideal for you, because you have clustering already in place (this is crucial for BRIN ...
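The trade-off above can be made concrete with two index definitions on a hypothetical events(ts, payload) table. The INCLUDE form of a covering index needs PostgreSQL 11+; on older versions you would simply append the extra column to the index key. The script only prints the DDL, so it can be read and tested without a server.

```shell
#!/bin/sh
# Prints DDL contrasting a covering btree index with a BRIN index.
# Table and column names (events, ts, payload) are hypothetical.
DDL=$(cat <<'SQL'
-- Covering btree: fastest reads, largest storage (PostgreSQL 11+ INCLUDE syntax).
CREATE INDEX idx_events_ts_btree ON events (ts) INCLUDE (payload);

-- BRIN: tiny index, best when the table is physically clustered on ts.
-- A smaller pages_per_range gives a bigger but more precise index.
CREATE INDEX idx_events_ts_brin ON events USING brin (ts) WITH (pages_per_range = 32);
SQL
)
echo "$DDL"
```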