wiki:Postgres_Slony_Replication

PostgreSQL does not provide a built-in replication solution, but plenty of third-party replication products are available for it. Slony is one of them. Slony is a trigger-based replication solution, that is, it uses triggers on the master to capture changes and push them to the slave. Slony is widely regarded as one of the most stable replication solutions for PostgreSQL.

SLONY 1 INSTALLATION:

Download the latest stable version matching your PostgreSQL version from www.slony.info.

INSTALL PREREQUISITES:

apt-get build-dep postgresql-8.4-slony1

INSTALL:

  1. tar -xvf slony_tar.bz2
  1. ./configure
  1. make
  1. checkinstall

SLONY 1 CONFIGURATION:

  1. Define environment variables on master & slave

#!/bin/sh

REPLICATIONUSER=repuser
CLUSTERNAME=replcluster
MASTERDBNAME=repltestdb
SLAVEDBNAME=repltestdb
MASTERHOST=192.168.1.1
SLAVEHOST=192.168.1.2
MASTERPORT=5432
SLAVEPORT=5432

export REPLICATIONUSER CLUSTERNAME MASTERDBNAME SLAVEDBNAME MASTERHOST SLAVEHOST MASTERPORT SLAVEPORT
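To make these variables available to the commands in the later steps, the script above can be saved to a file and sourced on both nodes. A minimal sketch (the file name slony_env.sh is just an illustration, not from the original article):

```shell
# Save the variable definitions to a file and source it into the current
# shell, so the later pg_dump/psql/slon commands can expand them.
cat > slony_env.sh <<'EOF'
REPLICATIONUSER=repuser
CLUSTERNAME=replcluster
MASTERDBNAME=repltestdb
SLAVEDBNAME=repltestdb
MASTERHOST=192.168.1.1
SLAVEHOST=192.168.1.2
MASTERPORT=5432
SLAVEPORT=5432
export REPLICATIONUSER CLUSTERNAME MASTERDBNAME SLAVEDBNAME MASTERHOST SLAVEHOST MASTERPORT SLAVEPORT
EOF

. ./slony_env.sh
echo "cluster $CLUSTERNAME: master $MASTERHOST:$MASTERPORT -> slave $SLAVEHOST:$SLAVEPORT"
# prints: cluster replcluster: master 192.168.1.1:5432 -> slave 192.168.1.2:5432
```

Note the `.` (dot) when loading the file: the variables must be set in the current shell, not in a subshell, or the later commands will not see them.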

  1. Create the plpgsql language (Slony's stored procedures are written in PL/pgSQL)

#createlang -h $MASTERHOST -p $MASTERPORT -U $REPLICATIONUSER plpgsql $MASTERDBNAME (Run on Master)

  1. Dump the database schema from the master and load it on the slave

Dump the table definitions (schema only, hence -s) on the master:
#pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST -p $MASTERPORT $MASTERDBNAME > replmaster.sql

Import the table definitions on the slave:
#psql -U $REPLICATIONUSER -h $SLAVEHOST -p $SLAVEPORT $SLAVEDBNAME < replmaster.sql
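If the machine you run the commands from can reach both hosts, the dump and the import can also be combined into a single pipeline, skipping the intermediate file. A sketch, assuming the environment variables defined earlier are set:

```
pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST -p $MASTERPORT $MASTERDBNAME \
    | psql -U $REPLICATIONUSER -h $SLAVEHOST -p $SLAVEPORT $SLAVEDBNAME
```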

  1. Now configure the databases for replication

vim replconfig.cnf

# Define the namespace the replication system uses; in our example it is
# replcluster
cluster name = replcluster;
# admin conninfo's are used by slonik to connect to the nodes one for each
# node on each side of the cluster, the syntax is that of PQconnectdb in
# the C-API
node 1 admin conninfo = 'dbname=repltestdb host=192.168.1.1 port=5432 user=repuser';
node 2 admin conninfo = 'dbname=repltestdb host=192.168.1.2 port=5432 user=repuser';
# init the first node. Its id MUST be 1. This creates the schema
# _$CLUSTERNAME containing all replication system specific database
# objects.
init cluster ( id=1, comment = 'Master Node');
# Add unique keys to tables that do not have one.
# This command adds a bigint column named "_Slony-I_$CLUSTERNAME_rowID" to the table which will have a default value of nextval('_$CLUSTERNAME.s1_rowid_seq') and have UNIQUE & NOT NULL constraints applied on it.
# table add key (node id = 1, fully qualified name = 'table_name');
# Slony-I organizes tables into sets. The smallest unit a node can
# subscribe is a set.
# you need to have a set add table() for each table you wish to replicate
create set (id=1, origin=1, comment='parent child table');
set add table (set id=1, origin=1, id=1, fully qualified name = 'public.parent', comment='parent table');
set add table (set id=1, origin=1, id=2, fully qualified name = 'public.child', comment='child table');
# Create the second node (the slave) tell the 2 nodes how to connect to
# each other and how they should listen for events.
store node (id=2, comment = 'Slave node');
store path (server = 1, client = 2, conninfo='dbname=repltestdb host=192.168.1.1 port=5432 user=repuser');
store path (server = 2, client = 1, conninfo='dbname=repltestdb host=192.168.1.2 port=5432 user=repuser');
store listen (origin=1, provider = 1, receiver =2);
store listen (origin=2, provider = 2, receiver =1);

  1. Pass the config file to slonik to create the required triggers & config tables

#slonik replconfig.cnf (Run on Master)
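To verify that slonik ran successfully, you can inspect the replication schema it created. A sketch to run against the master; the `_replcluster` schema name follows from the cluster name defined above:

```
psql -U $REPLICATIONUSER -h $MASTERHOST -p $MASTERPORT $MASTERDBNAME \
    -c "SELECT no_id, no_comment FROM _replcluster.sl_node;"
```

If everything worked, both nodes (1, 'Master Node' and 2, 'Slave node') should be listed.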

  1. Start the replication daemons on master & slave

#slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST port=$MASTERPORT" > slon.log & (Run on Master)

#slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST port=$SLAVEPORT" > slon.log & (Run on slave)

Check the output in the slon.log files.

  1. Now everything is set up, and the slon.log files on master and slave show that both servers are trying to sync with each other. But replication is not yet under way: to start it, the slave has to subscribe to the master. Here is the config file required for that

startrepl.cnf
# This defines which namespace the replication system uses
cluster name = replcluster;
# connection info for slonik to connect to master & slave
node 1 admin conninfo = 'dbname=repltestdb host=192.168.1.1 port=5432 user=repuser';
node 2 admin conninfo = 'dbname=repltestdb host=192.168.1.2 port=5432 user=repuser';
# Node 2 subscribes set 1
subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);

Passing this file to slonik completes the setup, and replication starts.

#slonik startrepl.cnf (Run on Master)
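Replication lag can then be monitored from the master through the sl_status view that Slony maintains in the cluster schema. A sketch; on a healthy cluster the lag counters stay close to zero:

```
psql -U $REPLICATIONUSER -h $MASTERHOST -p $MASTERPORT $MASTERDBNAME \
    -c "SELECT st_origin, st_received, st_lag_num_events, st_lag_time FROM _replcluster.sl_status;"
```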

Now simply make some inserts, updates and deletes on the master and check whether they appear on the slave as well. Since replication is now in full swing, all changes to the master tables should be replicated to the slave.
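A minimal smoke test along those lines might look as follows. The VALUES clause is purely illustrative, since the column layout of public.parent is not shown in this article; adapt it to your actual schema:

```
# Insert a row on the master (illustrative values, see note above).
psql -U $REPLICATIONUSER -h $MASTERHOST -p $MASTERPORT $MASTERDBNAME \
    -c "INSERT INTO public.parent VALUES (1001);"
# Give the slon daemons a few seconds to propagate the change,
# then look for the row on the slave.
sleep 10
psql -U $REPLICATIONUSER -h $SLAVEHOST -p $SLAVEPORT $SLAVEDBNAME \
    -c "SELECT count(*) FROM public.parent;"
```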

Please note that new tables & changes to table structures won't be replicated automatically. So whenever a new table is created or an existing table is altered, the change has to be propagated to the slave manually, and the slonik scripts need to be re-run to make the appropriate changes to the triggers and config tables.
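For table structure changes, Slony-I provides the slonik EXECUTE SCRIPT command, which applies a DDL file on the origin and replays it on the subscribers in the correct order. A sketch (the file name /tmp/ddl_change.sql is hypothetical):

```
cluster name = replcluster;
node 1 admin conninfo = 'dbname=repltestdb host=192.168.1.1 port=5432 user=repuser';
node 2 admin conninfo = 'dbname=repltestdb host=192.168.1.2 port=5432 user=repuser';
# apply the DDL file on the origin node and replay it on all subscribers
execute script (set id = 1, filename = '/tmp/ddl_change.sql', event node = 1);
```

Brand-new tables still have to be added to a replication set (via create set / set add table and a new subscription) before they start replicating.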

Another important thing to note is that PostgreSQL on both master and slave must accept connections from both IP addresses. For this, add the IP addresses to pgsql/data/pg_hba.conf.

For the above replication I added the lines

host all all 192.168.1.1/32 trust
host all all 192.168.1.2/32 trust

to the pg_hba.conf file on both master & slave.

Please Note:

  1. The replication user must be a PostgreSQL superuser.
  1. Remote access with trust must be enabled in pg_hba.conf.
  1. Preferably use the same versions of PostgreSQL & Slony-I on both master & slave.
  1. Schema changes made on the master after replication has started do not affect the slave, and no data for the changed tables will be replicated until the change is propagated manually.
Last modified on 01/10/14 22:51:23