I am beginning the development of a new web application which needs to be designed to scale from the word go.
I am looking into database replication and the concept of master/slave replication (or publisher/subscriber — I think these are the same thing, but please correct me if I am wrong).
The topology I have been reading about involves the master handling all of the database writes and the slave handling all of the reads.
My question is:
In a scenario where the slave must have data that is as close to real-time as possible, will transactional replication put less load on the master database than if clients were reading from it directly?
Also, won't the slave's performance be affected by applying the replicated transactions, just as if it were being written to directly?
Generally, it feels like the number of operations is being multiplied by the introduction of replication,
e.g.
- data is written to master database (by client)
- data is read from master database (for replication)
- data is written to slave database (for replication)
- data is read from slave database (by client)
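To make the four steps above concrete, here is a minimal sketch of the read/write split, using two in-memory SQLite databases to stand in for the master and the slave. This is purely illustrative: in a real deployment these would be separate servers, and steps 2–3 (reading from the master and applying the change to the slave) would be performed by the database engine's replication machinery, not by application code as simulated here.

```python
import sqlite3

class ReplicatedStore:
    """Illustrative read/write split over a simulated master/slave pair."""

    def __init__(self):
        self.master = sqlite3.connect(":memory:")  # handles all writes
        self.slave = sqlite3.connect(":memory:")   # handles all reads
        for db in (self.master, self.slave):
            db.execute("CREATE TABLE users (id INTEGER, name TEXT)")

    def write(self, user_id, name):
        # Step 1: client writes to the master.
        self.master.execute(
            "INSERT INTO users VALUES (?, ?)", (user_id, name))
        # Steps 2-3: replication reads the change from the master and
        # applies it to the slave (simulated synchronously here; real
        # replication is usually asynchronous and can lag).
        self.slave.execute(
            "INSERT INTO users VALUES (?, ?)", (user_id, name))

    def read(self, user_id):
        # Step 4: client reads from the slave.
        row = self.slave.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

store = ReplicatedStore()
store.write(1, "alice")
print(store.read(1))  # alice
```

The point of the pattern is not to reduce the total number of operations (it does not) but to move the read load onto a different machine, so the master's capacity is spent almost entirely on writes.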
To me, it feels as though any benefit gained from adding an additional server will be lost in the extra read and write operations.
Am I looking at this too simplistically? Or am I missing something?
Thanks