PostgreSQL pgBadger Installation on Linux
pgBadger is a tool that analyzes PostgreSQL log files and generates reports on SQL execution and server activity. The statistical reports, which examine the logs from a number of perspectives, are useful not only for understanding day-to-day database activity but also as hints for performance improvement.
The reports produced by pgBadger have the following features.
- Graph output in HTML format
- Many statistical objects
- Daily and weekly report creation
1. Graph output in HTML format
Many of the analysis results can be displayed as graphs by outputting the report in HTML format (text and JSON output are also possible). Graphed log information is far easier to read than raw log data, so unless you need to process the results with another tool, we recommend the HTML output.
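As a reference, the output file name and format can be chosen on the command line: the -o/--outfile option names the file, and -x/--extension forces a format when the extension does not imply one (the file names here are only examples):

$ pgbadger -o report.html postgresql.log # HTML report with graphs (the default format)
$ pgbadger -x json -o report.json postgresql.log # JSON output for processing with other tools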
2. Many statistical objects
A lot of data useful for database performance analysis is output in the report as shown below.
- Queries (total number of executions, most frequently executed queries, total execution time, queries with the longest execution time, proportion of each query type)
- Connections (total number of connections, peak date and time)
- Sessions (total number of sessions, peak date and time, total session time, average session time)
- Checkpoints (number of buffers written, execution time)
- Temporary files (total number of files, maximum size, average size)
- Vacuum (number of manual vacuum runs, number of autovacuum runs)
- Locks (total lock count, percentage by lock type, query with the most lock waits, query with the longest wait time)
- Log levels (count and percentage of each level)
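Incidentally, the number of entries kept for these query statistics can be tuned with the -t/--top option (20 by default):

$ pgbadger --top 30 postgresql.log # keep the 30 most significant queries per statistic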
3. Daily and weekly report creation
pgBadger can create reports on a daily or weekly basis using the date and time recorded with each log entry. Used continuously, this makes it possible to compare and verify database activity day by day and week by week.
Query statistics
pgBadger's query statistics cover both concrete queries and generalized queries. Generalizing a query means recognizing queries such as

UPDATE pgbench_tellers SET tbalance = tbalance + 3645 WHERE tid = 2;
UPDATE pgbench_tellers SET tbalance = tbalance + 1632 WHERE tid = 10;

as the same kind of query by replacing their constants:

UPDATE pgbench_tellers SET tbalance = tbalance + 0 WHERE tid = 0;

By including statistics on generalized queries, more practical statistics can be derived.
Installation of pgBadger
Environment:
- CentOS 6.8
- PostgreSQL 9.4.1
- pgbadger-6.4
STEP 1. Install from the RPM package. Open the repository page at https://yum.postgresql.org/repopackages.php and choose the repository RPM that matches your Linux distribution.
If you do not know which OS/arch you are using, please run the following:
Method 1: Find the distribution
cat /etc/redhat-release : This command will give you an output like:
Fedora release 24 (Twenty Four)
Scientific Linux release 6.8 (Carbon)
CentOS release 6.8 (Final)
CentOS Linux release 7.2.1511 (Core)
or similar; these mean you are using Fedora 24, Scientific Linux/CentOS 6.8, or CentOS 7.2 respectively.
Method 2: Find the architecture
uname -m : This command will give you an output like x86_64 or i686.
So, if the first output is Fedora 24 and the second one is x86_64, go to the Repository Packages page at https://yum.postgresql.org/repopackages.php and find the repository RPM for the PostgreSQL version you are looking for.
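As a quick check, both values can be printed in one step (this assumes a Red Hat-family system, where /etc/redhat-release exists):

$ cat /etc/redhat-release; uname -m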
STEP 2. Register the repository with the yum command and install pgBadger from that repository.
# yum install http://yum.postgresql.org/9.4/redhat/rhel-6-x86_64/pgdg-centos94-9.4-1.noarch.rpm
# yum install pgbadger
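Once the packages are installed, you can confirm that the command is available and check its version:

$ pgbadger --version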
STEP 3. Edit postgresql.conf as follows.
Since pgBadger creates reports from PostgreSQL logs, PostgreSQL must be configured to log the information required for reporting.
# Log the query text and execution time of any query that runs longer than the specified time (in ms; 0 logs everything)
log_min_duration_statement = 0
# Information prefixed to each log message
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d ' # when using standard error output
# log_line_prefix = 'user=%u,db=%d ' # when using syslog
# %t = timestamp (without milliseconds)
# %p = process ID
# %l = session line number
# %u = user name
# %d = database name
# The user/db specification is optional
# Log checkpoint execution
log_checkpoints = on
# Log client connections
log_connections = on
# Log client disconnections
log_disconnections = on
# Log lock waits longer than deadlock_timeout (default: 1 second)
log_lock_waits = on
# Log temporary file creation (0 = log files of all sizes)
log_temp_files = 0
# Restrict the log language to English
lc_messages = 'C'
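After editing postgresql.conf and applying the change (see the next step), the active values can be verified from psql. A minimal check of a few of the parameters above, assuming you can connect as the postgres superuser:

$ psql -U postgres -c "SELECT name, setting FROM pg_settings WHERE name IN ('log_min_duration_statement', 'log_line_prefix', 'log_checkpoints');"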
STEP 4. Restart the database server so the new settings take effect. For example:

service postgresql-9.4 restart # CentOS 6 (SysV init, as in the environment above)
systemctl restart postgresql-9.4 # CentOS 7 or later (systemd)
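Note that most of these logging parameters take effect on a configuration reload, so a full restart is not always required. A lighter-weight alternative, assuming the server is already running and you can connect as a superuser:

$ psql -U postgres -c 'SELECT pg_reload_conf();'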
STEP 5. Create a report from a single log file
In the example below, the log file that has been written since the database server started is $PGDATA/pg_log/postgresql-2015-05-12_170333.log.
We are going to run some queries with pgbench to gather sample data.
pgbench is a module intended for benchmarking; here it is used simply to execute a large number of queries.
$ pgbench -i
$ pgbench -c 10 -t 1000 # run with 10 clients and 1000 transactions per client
Once descriptions of the executed queries have accumulated in the log file, create the report. All you have to do is pass the PostgreSQL log file to the pgbadger command.
$ pgbadger $PGDATA/pg_log/postgresql-2015-05-12_170333.log
[========================>] Parsed 10485879 bytes of 10485879 (100.00%), queries: 6578786, events: 1
LOG: ok, generating html report ...
When analysis of the log is complete, a report file named out.html is created in the current directory (the file name, output format, output directory, and so on can be specified with options).
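For example, to choose the file name and destination yourself (the path below is only an illustration), you might run:

$ pgbadger -o /var/www/html/report.html $PGDATA/pg_log/postgresql-2015-05-12_170333.log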
Since the report was created in HTML format, let's look at its contents in a web browser.
If you would like to see an HTML report without creating one yourself, refer to the sample report published at http://dalibo.github.io/pgbadger/samplev7.html.
Opening the file in a browser shows the following Overview screen.
You can open the page of each analysis item from the tab at the top of the screen.
STEP 6. Perform continuous log analysis (incremental mode)
Incremental mode is used for continuous log analysis. Running pgBadger in incremental mode creates daily and weekly reports (incremental reports).
$ pgbadger -I $PGDATA/pg_log/postgresql.log -O /var/www/html/pg_reports # -I: incremental mode, -O: output directory (required in incremental mode)
[==========================>] Parsed 23520870 bytes of 23520870 (100.00%), queries: 2, events: 2
LOG: Ok, generating HTML daily report into /var/www/html/pg_reports/2015/05/13/...
LOG: Ok, generating HTML weekly report into /var/www/html/pg_reports/2015/week-20/...
LOG: Ok, generating global index to access incremental reports ...
$ ls /var/www/html/pg_reports # a directory per year, an index page linking all reports, and a file that saves the last parsed line
2015 index.html LAST_PARSED
$ ls /var/www/html/pg_reports/2015 # inside the year directory, reports are organized into per-month and per-week directories
05 week-20
When you create a report in incremental mode, an index page linking each report page is created. In addition, a log file that has already been analyzed can be analyzed again with the previously parsed parts skipped. This is accomplished by saving the last parsed line in a file named LAST_PARSED.
$ cat /var/www/html/pg_reports/LAST_PARSED # the last parsed line is saved
2015-05-13 13:58:15 23520870 2015-05-13 13:58:15 JST [7040]: [6-1] user = postgres, db = postgres STATEMENT: SELECT * FROM test;
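The location of this file can also be set explicitly with the -l/--last-parsed option, which may be convenient when several log sources are analyzed on the same machine (the path below is only an example):

$ pgbadger -I -l /var/tmp/pgbadger_last_parsed -O /var/www/html/pg_reports $PGDATA/pg_log/postgresql.log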
You can create incremental reports manually, but in practice you may want to use cron to automate their creation.
$ crontab -e
0 4 * * * /usr/bin/pgbadger -I -q -O /var/www/html/pg_reports /usr/local/pgsql/data/pg_log/postgresql.log # create a report every day at 04:00
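If the server rotates its log files, you can pass a glob pattern instead of a single file so that all matching files are considered; pgBadger skips the portions it has already parsed. A sketch using the same paths as above:

0 4 * * * /usr/bin/pgbadger -I -q -O /var/www/html/pg_reports /usr/local/pgsql/data/pg_log/postgresql*.log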