
PostgreSQL Slony-I Replication Setup

Basic Knowledge of Slony-I 2.1
What Slony-I can do:
  • Replicate data between different PostgreSQL major versions
  • Replicate data between different hardware or operating systems
  • Replicate only some of the tables to a slave
  • Replicate some tables to one slave and other tables to another slave
  • Allow different database servers to be the origin (master) for different tables
What Slony-I cannot replicate:
  • Changes to large objects (BLOBs)
  • Direct DDL changes
  • User and role changes
A few more things to know about Slony-I:
  • It is asynchronous replication
  • It cannot detect network failures
  • It replicates with the help of triggers
  • Replicated tables must have a primary key
Slony-I supports:
  • Adding/dropping nodes and tables
  • DDL changes (via slonik EXECUTE SCRIPT; see the sketch after this list)
  • Truncating tables
  • Swapping providers
  • Switchover/failover
  • Archive log shipping
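Since direct DDL is not replicated, schema changes must be pushed through slonik's EXECUTE SCRIPT command, which applies a SQL file on the event node and forwards it to the subscribers. Below is a minimal sketch against the demo cluster built later in this post (cluster "myrep", nodes 1 and 2); the DDL file /tmp/add_column.sql is hypothetical, and the exact EXECUTE SCRIPT options vary between Slony-I versions:

#!/bin/bash
# Apply a DDL file cluster-wide via slonik EXECUTE SCRIPT (sketch)
slonik <<_eof_
cluster name = myrep;
node 1 admin conninfo='host=127.0.0.1 dbname=master user=postgres port=5432';
node 2 admin conninfo='host=127.0.0.1 dbname=slave user=postgres port=5432';
execute script (filename = '/tmp/add_column.sql', event node = 1);
_eof_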
Slony-I Utilities:
  1. Slon - For each node in the cluster, there will be a slon(1) process to manage replication activity for that node. slon is the daemon application that “runs” Slony-I replication. A slon instance must be run for each node in a Slony-I cluster.
  2. Slonik - The slonik(1) command processor processes scripts in a “little language” that are used to submit events to update the configuration of a Slony-I cluster. This includes such things as adding and removing nodes, modifying communications paths, adding or removing subscriptions.
  3. Slony_logshipper - slony_logshipper is a tool designed to help apply logs. It runs as a daemon and scans the archive directory for new .SQL files which it then applies to the target database.
Slony-I Modes:
  1. Cascade replication (forward = yes/no) - some of your slaves (replicas) feed off another replica instead of the origin; this is called cascading (see the sketch after this list)
  2. OMIT COPY - subscribe a set without the initial data copy, for targets whose tables already hold the data
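For cascading, the intermediate subscriber must itself forward (forward = yes); a downstream node then subscribes with that replica as its provider. Below is a hypothetical sketch with a third node feeding off node 2; node 3, its database "slave2", and its conninfo are assumptions and not part of the demo, and node 3 would first need to be added with store node/store path:

#!/bin/bash
# Cascaded subscription: node 3 receives set 1 from node 2, not the origin (sketch)
slonik <<_eof_
cluster name = myrep;
node 1 admin conninfo='host=127.0.0.1 dbname=master user=postgres port=5432';
node 2 admin conninfo='host=127.0.0.1 dbname=slave user=postgres port=5432';
node 3 admin conninfo='host=127.0.0.1 dbname=slave2 user=postgres port=5432';
subscribe set (id = 1, provider = 2, receiver = 3, forward = no);
_eof_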
Slony-I Setup Methods:
  1. --with-perltools
  2. Preamble (standard slonik) scripts
Slony-I Installation Types:
  1. Binary method installation
  2. Source method installation
The Slony-I catalog consists of:
  • 132 functions
  • 19 tables/indexes
  • 2 views
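Once the cluster is created ("myrep" in the demo below), these objects live in a schema named _myrep and can be listed with psql; the exact counts vary by Slony-I version:

psql -p 5432 -d master -c "\dt _myrep.*"   # replication tables
psql -p 5432 -d master -c "\dv _myrep.*"   # views
psql -p 5432 -d master -c "\df _myrep.*"   # functions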
Let's start with the replication methods. In the perltools method, you need to configure Slony-I at source-installation time to enable the built-in Perl scripts. These scripts start with "slonik_" and are designed to carry out replication administration tasks.

My demo for the two methods, shell (slonik) and Perl, is on a localhost single instance (5432) with two databases, master and slave, replicating one table, "rep_table". For replication, the master and slave should hold the same table structure. If you have many tables, use pg_dump/pg_restore with the schema-only dump option (see the sketch below). Since I am replicating only one table, I simply created it on both master and slave.
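A minimal sketch of copying the structure for many tables, assuming the demo's master and slave databases on the same instance:

# schema-only (-s) dump of master, restored into slave
pg_dump -p 5432 -s master | psql -p 5432 -d slave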
Note: Set environment variables such as PGDATA, PGPORT, PGHOST, PGPASSWORD, and PGUSER.
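For example (the values below match the demo layout and are assumptions; adjust them to your environment):

export PGDATA=/opt/PostgreSQL/9.1/data
export PGPORT=5432
export PGHOST=localhost
export PGUSER=postgres
export PGPASSWORD=postgres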

Source Installation:
Download the Slony-I 2.1 source (http://slony.info/downloads/).

# bunzip2 slony1-2.1.0.tar.bz2
# tar -xvf slony1-2.1.0.tar
# cd slony1-2.1.0
# ./configure --prefix=/opt/PostgreSQL/9.1/bin \
              --with-pgconfigdir=/opt/PostgreSQL/9.1/bin \
              --with-perltools=/opt/PostgreSQL/9.1/bin
  (exclude --with-perltools if not needed)
# make
# make install
Basic setup on Master/Slave
createdb -p 5432 master
createdb -p 5432 slave

psql -p 5432 -d master -c "create table rep_table(id int primary key);"
psql -p 5432 -d slave -c "create table rep_table(id int primary key);"

Insert some data on the master to replicate to the slave:
psql -p 5432 -d master -c "insert into rep_table values(generate_series(1,10));"
Method 1: --with-perltools:

1. Create a standard .conf file with information such as the log location, number of nodes, set of tables, etc.
$CLUSTER_NAME = 'myrep';
$LOGDIR = '/opt/PostgreSQL/9.1/slonylogs';
$MASTERNODE = 1;
$DEBUGLEVEL = 2;

&add_node(node => 1, host => 'localhost', dbname => 'master', port => 5432, user => 'postgres', password => 'postgres');
&add_node(node => 2, host => 'localhost', dbname => 'slave', port => 5432, user => 'postgres', password => 'postgres');

$SLONY_SETS =
{
    "set1" =>
    {
        "set_id" => 1,
        "table_id" => 1,
        "pkeyedtables" =>
                       [rep_table,],
    },
};
Initialize, create set, and subscribe set: these are the three phases of Slony replication. For each phase, "slonik_" Perl scripts are created in the location given at source-installation time with the "--with-perltools" option; in my case, "/opt/PostgreSQL/9.1/bin". The above .conf file is used in all phases.

2. Initialize the cluster. Here slonik cross-checks the node connections.
cd /opt/PostgreSQL/9.1/bin
./slonik_init_cluster -c slon.conf             # preview the generated slonik script
./slonik_init_cluster -c slon.conf | ./slonik  # pipe it to slonik to execute
3. Create a set, i.e., define which set of tables to replicate from node 1 to node 2.
./slonik_create_set -c slon.conf 1             # preview
./slonik_create_set -c slon.conf 1 | ./slonik  # execute
4. Start the slon daemons. Each node has its own slon process to carry out the work, and each node's slon process must be started.
./slon_start -c slon.conf 1
./slon_start -c slon.conf 2
5. Subscribe the set. From here on, Slony maintains data consistency between the two nodes by allowing DML only on the master and denying it on the slave.
./slonik_subscribe_set -c slon.conf 1 2             # preview
./slonik_subscribe_set -c slon.conf 1 2 | ./slonik  # execute
After the above steps, your slave will have the replicated data.
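A quick sanity check, assuming the demo databases; the insert on the slave should be rejected by Slony's deny-access trigger:

psql -p 5432 -d slave -c "select count(*) from rep_table;"    # should return 10
psql -p 5432 -d slave -c "insert into rep_table values(11);"  # should fail: replicated tables are read-only on the slave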

Method 2: With standard scripts:

In the standard-script method there are many ways to implement replication, but to make it easy to follow I have split it into the same phases as the Perl method above: initialize, create set, and subscribe set. All scripts are driven by the slonik command.

1. Create two .conf files for Master & Slave Node. 
vi master_slon.conf
cluster_name=myrep
pid_file='/opt/PostgreSQL/9.1/data/master_slon.pid'
conn_info='host=localhost dbname=master user=postgres port=5432'

vi slave_slon.conf
cluster_name=myrep
pid_file='/opt/PostgreSQL/9.1/data/slave_slon.pid'
conn_info='host=localhost dbname=slave user=postgres port=5432'
2. Initialize the cluster.
#!/bin/bash
# Initialize Cluster (init_cluster.sh)

slonik <<_eof_
cluster name = myrep;
node 1 admin conninfo='host=127.0.0.1 dbname=master user=postgres port=5432';
node 2 admin conninfo='host=127.0.0.1 dbname=slave user=postgres port=5432';

#Add Node
init cluster (id = 1, comment = 'Primary Node For the Slave postgres');
store node (id = 2, event node = 1, comment = 'Slave Node For The Primary postgres');

#Setting Store Paths ...
echo  'Stored all nodes in the slony catalogs';
store path(server = 1, client = 2, conninfo='host=127.0.0.1 dbname=master user=postgres port=5432');
store path(server = 2, client = 1, conninfo='host=127.0.0.1 dbname=slave user=postgres port=5432');
_eof_

$./init_cluster.sh
3. Create a set. 
#!/bin/bash
# Create Set for set of tables (create-set.sh)

slonik <<_eof_
cluster name = myrep;
node 1 admin conninfo='host=127.0.0.1 dbname=master user=postgres port=5432';
node 2 admin conninfo='host=127.0.0.1 dbname=slave user=postgres port=5432';

try { create set (id = 1 ,origin = 1 , comment = 'Set for public'); } on error { echo  'Could not create set1'; exit 1;}

set add table (set id = 1, origin = 1, id = 1, full qualified name = 'public.rep_table', comment = 'Table action with primary key');
_eof_

$./create-set.sh
4. To start the slon daemons, use the custom script "start_slon.sh" that comes with the source tarball under the "tools" directory. Create master/slave startup scripts by changing the .conf file locations in the script. This gives you the flexibility to manage and track all slon processes with the help of the PIDs recorded in the .conf files.
Usage: ./master_start_slon.sh [start|stop|status]

-bash-4.1$ ./master_start_slon.sh  start
-bash-4.1$ ./slave_start_slon.sh  start

Sample STATUS output:

-bash-4.1$ ./master_start_slon.sh status
---------------------
Slony Config File    : /opt/PostgreSQL/9.1/slony_scripts/bash_slony/master_slon.conf
Slony Bin Path       : /opt/PostgreSQL/9.1/bin
Slony Running Status : Running...
Slony Running (M)PID : 28487
---------------------
5. Subscribe the set.
#!/bin/bash
# Subscribe Set (subscribe-set.sh)

slonik <<_eof_
cluster name = myrep;
node 1 admin conninfo='host=127.0.0.1 dbname=master user=postgres port=5432';
node 2 admin conninfo='host=127.0.0.1 dbname=slave user=postgres port=5432';

try { subscribe set (id = 1, provider = 1 , receiver = 2, forward = yes, omit copy = false); } on error { exit 1; } echo  'Subscribed nodes to set 1';
_eof_

$./subscribe-set.sh
Now your slave database will have the replicated data in the "rep_table" table.
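To confirm the subscription from the catalog side, you can list the tables registered for replication in the cluster schema (a sketch, assuming the cluster name "myrep"):

psql -p 5432 -d master -c "select tab_id, tab_relname, tab_set from _myrep.sl_table;"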
These two methods should help you understand the basic setup of Slony replication.

Drawbacks

  • Not a complete clustering solution
  • Complexity of setup
  • Limitations on schema changes
  • Performance of large bulk loads and large object replication
  • Write overhead and associated maintenance (vacuum etc.)
  • Multiple points of monitoring required
  • Master-slave only, with replication lag
The main drawback to Slony-I even as a replication system is the complexity of its setup and administration. The design of the system, with the database itself being used for queueing row updates, also significantly increases the amount of data writing and I/O done by the DBMS.

Also, since Slony-I is asynchronous master-slave, all writes have to be segregated to the master. Additionally, there is a noticeable lag (1-3 seconds) between the master and the slaves which may cause users to have an inconsistent view of the data.
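This lag can be monitored from the sl_status view in the cluster schema on the origin; a sketch assuming the demo cluster name "myrep":

psql -p 5432 -d master -c "select st_origin, st_received, st_lag_num_events, st_lag_time from _myrep.sl_status;"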
