Monday, 14 October 2013

MySQL Replication Filters: replicate-ignore-table & ON DELETE CASCADE

It's no secret that you shouldn't rely on replication filtering. Recently a customer asked whether an `ON DELETE CASCADE` would still affect the data on a slave if the table was being ignored through replication filtering. It was one of those occasions where I couldn't give a confident answer without a quick test, but the gut was right and the answer is yes: "ON DELETE|UPDATE CASCADE" will change data on the slave even if you are ignoring the table with replication filters.

I tested using both RBR and SBR. If anything, I was questioning the behaviour of RBR here, but it turns out to be consistent with SBR. I had a master-slave setup already deployed for some other testing, so it was easy to implement the FKs needed for this. The dataset is the trusty world database available from dev.mysql.com, and I only needed to change the constraint on the City table to match the conditions proposed.

Master

mysql> ALTER TABLE City DROP FOREIGN KEY City_ibfk_1;
Query OK, 5000 rows affected (0.06 sec)
Records: 5000 Duplicates: 0 Warnings: 0

mysql> ALTER TABLE City ADD CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `Country` (`Code`)
ON DELETE CASCADE;
Query OK, 5000 rows affected (0.06 sec)
Records: 5000 Duplicates: 0 Warnings: 0
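
With the constraint in place, the relevant part of the schema looks roughly like this (a trimmed sketch of the two world db tables, not the full definitions):

```sql
-- Trimmed sketch of the world db tables involved (not the full definitions).
CREATE TABLE Country (
  Code CHAR(3) NOT NULL,
  Name CHAR(52) NOT NULL,
  PRIMARY KEY (Code)
) ENGINE=InnoDB;

CREATE TABLE City (
  ID INT NOT NULL AUTO_INCREMENT,
  Name CHAR(35) NOT NULL,
  CountryCode CHAR(3) NOT NULL,
  District CHAR(20) NOT NULL,
  Population INT NOT NULL,
  PRIMARY KEY (ID),
  -- Deleting a Country row now removes its City rows too.
  CONSTRAINT city_ibfk_1 FOREIGN KEY (CountryCode)
    REFERENCES Country (Code) ON DELETE CASCADE
) ENGINE=InnoDB;
```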

mysql> SELECT COUNT(*) FROM City WHERE CountryCode = 'UKR';
+----------+
| COUNT(*) |
+----------+
|       57 |
+----------+
1 row in set (0.00 sec)

Slave

mysql> SELECT COUNT(*) FROM City WHERE CountryCode = 'UKR';
+----------+
| COUNT(*) |
+----------+
|       57 |
+----------+
1 row in set (0.00 sec)

So our row counts match. I'll update the my.cnf file to add the replication filter.

Slave

cat /etc/my.cnf
...
replicate-ignore-table = "world.City"
...
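
As an aside, on newer MySQL versions (5.7 and later, if my memory of the release notes is right) the same filter can be set dynamically on the slave without a restart:

```sql
-- MySQL 5.7+ only: apply the filter at runtime (slave threads must be
-- stopped first). On older versions, stick with my.cnf and a restart.
STOP SLAVE SQL_THREAD;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_TABLE = (world.City);
START SLAVE SQL_THREAD;
```

Note that a filter set this way does not survive a server restart, so it still belongs in my.cnf if you want it to be permanent.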

Restart MySQL on the slave and check the status:
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.101
Master_User: repl
Master_Port: 3306
...
Replicate_Ignore_Table: world.City
...
1 row in set (0.00 sec)

The filter is in place, so let's test it. Update a row on the master and compare the values between the instances:

Master

mysql> UPDATE City SET population=90000 WHERE ID = 3482;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> SELECT * FROM City WHERE ID = 3482;
+------+------------+-------------+---------------+------------+
| ID   | Name       | CountryCode | District      | Population |
+------+------------+-------------+---------------+------------+
| 3482 | Mukatševe  | UKR         | Taka-Karpatia |      90000 |
+------+------------+-------------+---------------+------------+
1 row in set (0.00 sec)

Checking the slave, we can see that it retains the old value:

Slave

mysql> SELECT * FROM City WHERE ID = 3482;
+------+------------+-------------+---------------+------------+
| ID   | Name       | CountryCode | District      | Population |
+------+------------+-------------+---------------+------------+
| 3482 | Mukatševe  | UKR         | Taka-Karpatia |      89000 |
+------+------------+-------------+---------------+------------+
1 row in set (0.00 sec)

Now, if we delete the parent record in the Country table, we can see that the CASCADE affects the 'ignored' table on the slave too.

Master

mysql> DELETE FROM Country WHERE Code = 'UKR';
Query OK, 1 row affected (0.00 sec)

mysql> SELECT COUNT(*) FROM City WHERE CountryCode = 'UKR';
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

Slave

mysql> SELECT COUNT(*) FROM City WHERE CountryCode = 'UKR';
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
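
The reason the filter never gets a chance to apply is that cascaded changes are carried out internally by the storage engine and are not written to the binary log; only the parent-row DELETE on world.Country is logged, and the slave's own InnoDB then performs the cascade against its local FK definition. You can see this on the master by listing the binlog events around the DELETE (the file name below is a placeholder; check SHOW BINARY LOGS for the real one):

```sql
-- On the master: inspect what was actually logged for the DELETE above.
-- 'mysql-bin.000003' is a hypothetical name; use SHOW BINARY LOGS to find
-- the current file. You should see the event for world.Country, but no
-- corresponding events for the cascaded deletes on world.City.
SHOW BINLOG EVENTS IN 'mysql-bin.000003';
```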

If you're running FK constraints with ON DELETE|UPDATE CASCADE alongside replication filtering on your slaves, you might not have the data you think you have.
