How to Configure Slony Replication
PostgreSQL has always been a good database for small and medium-sized companies. Nowadays some big companies have started using it too, thanks to its good performance and the absence of license costs; you only pay for maintenance.
With data becoming more important every day, daily backups become essential; but restoring a backup takes time, and sometimes you don’t have that time. So what can you do? Set up real-time replication, which I will explain here using Slony.
The Slony website says that “Slony-I is a ‘master to multiple slaves’ replication system for PostgreSQL” (more info here). That means you will have a “master” and “slaves”, which we will call “nodes”. I’m going to show how to set this up with one slave; the process for adding more is the same.
Slony replicates changes from the master to the slaves by means of triggers on the database. Therefore only the INSERT, UPDATE, and DELETE operations are replicated; all others (ALTER, DROP, CREATE…) are not. So always remember: if you have to create, alter, or drop a table, for example, you have to do it on the master and on all nodes.
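If you need to apply DDL to every node in one step, Slony ships an EXECUTE SCRIPT mechanism exposed through the slonik_execute_script helper. As a rough sketch only (the exact arguments vary between Slony versions, and /tmp/add_column.sql is just a placeholder file containing your DDL), it follows the same “pipe into slonik” pattern used for the other helpers later in this article:
[root@master /usr/pgsql-9.3/bin]# ./slonik_execute_script 1 /tmp/add_column.sql | ./slonik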
I’m going to explain everything using PostgreSQL 9.3 and Slony 2.2; the commands for PostgreSQL 8.x and older Slony versions are similar but not the same. Assuming that you already have a master database and a copy ready to be our slave, we will install Slony.
yum install slony1-93
After installation we need to create a user for Slony. Here I’m going to create a user called “slony” (but you can name it whatever you want) and give it a password if you want.
create user slony superuser password 'foo';
Then add to the master’s pg_hba.conf the IP of the slave (node) and the new user we just created.
# TYPE DATABASE USER ADDRESS METHOD
# Slony configuration
#master
host all slony 127.0.0.1/32 md5
#slave
host all slony 10.0.10.31/32 md5
I’m using md5 as the method because I created my user with a password; if you don’t set a password you will have to use trust, for example (more info here). Both my localhost and my slave can now access the database with the slony user. After that we need to reload PostgreSQL so it picks up the new configuration.
[root@master ~]# /etc/init.d/postgresql-9.3 reload
Now test that the connection works:
[root@slave ~]# psql -h 10.45.10.30 -U slony Tutorialdba_db
psql (9.3.5)
Type "help" for help.
Tutorialdba_db=#
Now that the connections are working, we are going to configure Slony. I’ll do it on the master, but I could also do it on the slave; the important thing is that both files end up identical.
[root@master ~]# vi /etc/slony1-93/slon_tools.conf
I’m only going to cover the most important settings we need to change. First we need to set a name for our replication cluster; I’m using the default.
[...] $CLUSTER_NAME = 'replication'; [...]
Second, we need to set the servers that are going to be part of our cluster. In this case we have only the master and one slave.
[...]
add_node(node => 1,
host => 'mastertutorialdba.com',
dbname => 'Tutorialdba_db',
port => 5432,
user => 'slony',
password => 'foo');
add_node(node => 2,
host => 'slavetutorialdba.com',
dbname => 'Tutorialdba_db',
port => 5432,
user => 'slony',
password => 'foo');
[...]
For now we have a name for our cluster and two servers (nodes) in it. Third, we set which tables and sequences will be replicated to the slave. To be eligible a table must have a primary key and go in pkeyedtables; otherwise you add it to keyedtables together with the name of a unique index (see the example after the snippet).
[...]
"pkeyedtables" => [
'users'
'posts',
],
[...] "sequences" => [
'users_id_seq',
'posts_id_seq',
],
[...]
"keyedtables" => {
'table_without_pk' => 'index_name',
},
[...]
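For a table without a primary key, Slony needs a unique index over NOT NULL columns to identify rows, and that is the index name you put in keyedtables. A minimal sketch, where table_without_pk, index_name, and some_not_null_column are only placeholders:
Tutorialdba_db=# create unique index index_name on table_without_pk (some_not_null_column);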
We are done for now; don’t forget to copy the file to the other server, both need the same file! Now, from the master, we are going to dump the schema into the new slave.
[root@master ~]# sudo su - postgres
-bash-4.1$ pg_dumpall -s -h ip_master | psql -h ip_slave Tutorialdba_db
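As a quick sanity check (not required by Slony), you can list the tables on the slave with psql’s \dt meta-command to confirm the schema arrived:
-bash-4.1$ psql -h ip_slave Tutorialdba_db -c '\dt'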
Now that Slony is configured and our slave is ready, on the master we create our cluster (the one we named in slon_tools.conf).
[root@master ~]# cd /usr/pgsql-9.3/bin/
[root@master /usr/pgsql-9.3/bin]# ./slonik_init_cluster | ./slonik
Don’t leave the folder! We have created our cluster; now we will start the Slony daemon, first on the master, which is node number 1.
[root@master /usr/pgsql-9.3/bin]# ./slon_start 1
[root@master /usr/pgsql-9.3/bin]# /etc/init.d/slony1-93 start
Now we do the same on the slave, which is node number 2:
[root@slave /usr/pgsql-9.3/bin]# ./slon_start 2
[root@slave /usr/pgsql-9.3/bin]# /etc/init.d/slony1-93 start
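To confirm that the daemons are actually running, a simple process check on each server is enough (the quoted [s]lon pattern just keeps grep from matching itself):
[root@master /usr/pgsql-9.3/bin]# ps aux | grep '[s]lon'
[root@slave /usr/pgsql-9.3/bin]# ps aux | grep '[s]lon'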
Good! The system is ready to start replicating; now we have to tell it what to replicate and where to replicate it to.
On the master we will create our set. By default we have only one set, but you can create others; set 1 is usually the public schema. If it is necessary to create other sets, they would look like this:
"set_hacknet_2010" => {
"set_id"=> 2, "table_id" => 100 , "sequence_id" => 100,
"pkeyedtables" => [
"hacknet_2010.users",
"hacknet_2010.posts"
],
"sequences" => [
"hacknet_2010.users_id_seq",
"hacknet_2010.posts_id_seq"
]
},
Now we create the set in the database so we can start the replication.
[root@master ~]# cd /usr/pgsql-9.3/bin
[root@master /usr/pgsql-9.3/bin]# ./slonik_create_set 1 | ./slonik
With the set created, we tell Slony where it should be sent. The command is slonik_subscribe_set <set number> <node number>:
[root@master ~]# cd /usr/pgsql-9.3/bin
[root@master /usr/pgsql-9.3/bin]# ./slonik_subscribe_set 1 2 | ./slonik
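If you defined additional sets (like the hacknet_2010 example above), repeat these two steps for each of them; a sketch assuming set 2 is also subscribed to node 2:
[root@master /usr/pgsql-9.3/bin]# ./slonik_create_set 2 | ./slonik
[root@master /usr/pgsql-9.3/bin]# ./slonik_subscribe_set 2 2 | ./slonik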
Done! Replication should now be working. You can check the log file for any errors, and to see how far behind the replication is you can use this query:
Tutorialdba_db=# select st_origin, st_received, st_last_event, st_last_received,
st_last_event_ts, st_last_received_ts, st_lag_time from _replication.sl_status order by 2;
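For a quick end-to-end test you can also insert a row into one of the replicated tables on the master and read it back on the slave a few seconds later. The id and title columns of the posts table below are purely illustrative, so adapt them to your own schema. On the master:
Tutorialdba_db=# insert into posts (id, title) values (1, 'replication test');
And on the slave, a few seconds later:
Tutorialdba_db=# select * from posts where id = 1;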