14
  • MySQL Master version: 5.5.16-1
  • MySQL Slave version: 5.5.18-1

The master's snapshot is created by:

mysql> FLUSH TABLES WITH READ LOCK;
shell> mysqldump --all-databases --master-data > dbname_`date +%F`.sql
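
For reference, the read lock only protects the snapshot if the session that issued it stays open for the whole dump and releases it afterwards. A sketch of that sequence, with the dump run from a second shell:

mysql> FLUSH TABLES WITH READ LOCK;     -- session 1: keep this connection open
shell> mysqldump --all-databases --master-data > dbname_`date +%F`.sql
mysql> UNLOCK TABLES;                   -- session 1: release the lock after the dump completes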

This dump file is imported on the slave (which is started with the --skip-slave-start option) without error:

shell> pv dbname_`date +%F`.sql | mysql -u root -p

But I got the following error when executing start slave;:

    Last_SQL_Errno: 1062
    Last_SQL_Error: Error 'Duplicate entry '115846' for key
'PRIMARY'' on query. Default database: 'db'. Query: 'INSERT INTO
request_posted (id, user_id, channel, message, link, picture, name, ...

There is only one record with ID 115846 on the master:

mysql> select count(*) from request_posted where id=115846;
Current database: db

+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.01 sec)

Trying to skip some queries with:

mysql> STOP SLAVE; 
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; 
mysql> START SLAVE;

didn't help. I don't want to skip those errors by adding:

slave-skip-errors = 1062

to the my.cnf file because it may leave the slave inconsistent.
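
If skipping ever became unavoidable, one way to spot-check afterwards whether the table still matches on both sides (a sketch using a standard statement; run it on the master and the slave and compare the results, assuming identical table definitions) would be:

mysql> CHECKSUM TABLE db.request_posted;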

What may be the reason for this error?


UPDATE

This is not how I usually set up MySQL replication

Which steps in the documentation do you think I am not following?

I wonder if you will encounter the same problem if you were to set up the entire configuration rather than passing the mysqldump command.

No, it works normally if I also run CHANGE MASTER TO with the corresponding coordinates.

I would try dropping the database on the slave, make sure the binlogs are clear, and start again. Also check the table in question on the master to assure the indexes do not have errors.

Is deleting (moving) the whole datadir enough? I did that and got the same result.
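
For the index check suggested above, a minimal sketch on the master would be:

mysql> CHECK TABLE request_posted;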


Reply to @Dmytro Leonenko

'show slave status\G' on slave to ensure that it is properly configured, MASTER_LOG_POS is 0

Only 'show slave status\G' after import but before 'start slave;' can give us the answer

I backed up the datadir, deleted everything, ran mysql_install_db, imported the dump file, executed change master to, and here are the results:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: 
                  Master_Host: x.x.x.x
                  Master_User: xx
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: 
          Read_Master_Log_Pos: 4
               Relay_Log_File: mysqld-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: 
             Slave_IO_Running: No
            Slave_SQL_Running: No
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 106
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
1 row in set (0.00 sec)

I'm wondering why Read_Master_Log_Pos is 4?
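
Since --master-data writes the coordinates near the top of the dump, one way to see what the slave should be pointing at (a sketch, assuming the dump file name used above) is:

shell> head -n 30 dbname_`date +%F`.sql | grep 'CHANGE MASTER'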

quanta
  • There can only be one record with that id, hence the error; it will never get written. When you issue `SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1`, does the query causing the error change at all? Was the slave binlog position set up correctly? – thinice Dec 02 '11 at 14:51
  • Every time I skip the counter, it changes to another ID. The `--master-data` option already writes the binary log coordinates to the dump file. I only need to run CHANGE MASTER TO with master_host, master_user, and master_password. – quanta Dec 02 '11 at 15:04
  • This is not how I usually set up MySQL replication (I generally set up replication per the URL here: http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html). However, reading through the mysqldump options, there is no reason why it shouldn't work. I wonder if you will encounter the same problem if you were to set up the entire configuration rather than passing the mysqldump command. – Rilindo Dec 05 '11 at 04:12
  • Do you mean I shouldn't use the `--master-data` option when creating a data snapshot? If it still happens when I use the `--lock-all-tables` option and `change master to master_log_file='', master_log_pos='', ...`, what may be the cause? – quanta Dec 05 '11 at 04:25
  • No, that is not what I am saying. As I alluded to, what you have done should work as intended - as far as I can see, there is no existing bug with that option. However, that doesn't mean there isn't one, so as an isolation step, I would first follow the convention provided by MySQL through that URL. If that works, at least you have a direction to start troubleshooting. If it doesn't, well, we have a different problem. :) – Rilindo Dec 05 '11 at 13:47
  • This is an interesting thing to try, though. I'll have to try this on my setup when I get home and see if I can reproduce it. – Rilindo Dec 05 '11 at 13:49
  • Wait, I remembered something: http://dev.mysql.com/doc/refman/5.5/en/replication-compatibility.html - The first two paragraphs suggest that while replicating to a newer version is supported, you may have problems if the older version on the master uses statements that are not supported by the newer version on the slave. That said, you are using the same major version - the two appear to differ only in minor versions. I wonder if that is enough to break it? – Rilindo Dec 05 '11 at 13:57
  • Last thing before I leave for work. :) http://dev.mysql.com/doc/refman/5.5/en/news-5-5-16.html http://dev.mysql.com/doc/refman/5.5/en/news-5-5-17.html http://dev.mysql.com/doc/refman/5.5/en/news-5-5-18.html – Rilindo Dec 05 '11 at 13:58
  • I would try dropping the database on the slave, make sure the binlogs are clear, and start again. Also check the table in question on the master to assure the indexes do not have errors. – jeffatrackaid Dec 05 '11 at 20:09
  • Actually, jeffatrack may have a good point. In fact, if I were to replicate all the databases, I usually stop mysql on the slave, rename the db directory, copy the db from the master, start up the master and then reconfigure it for the slave. – Rilindo Dec 06 '11 at 21:37
  • As a follow up, I went and tested in my lab and I was able to reproduce the issue. Unfortunately, the solution is to enter the logfile and the position number as well as the other master info (e.g. CHANGE MASTER TO MASTER_HOST='XX.XX.XX.XX', MASTER_USER='repl', MASTER_PASSWORD='slavepass', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=535;), even though that info is already specified. Not a real answer, but at least it is reproducible. :) Perhaps the DBAs at dba.stackexchange.com have a better answer? – Rilindo Dec 07 '11 at 04:57
  • updated my question. – quanta Dec 12 '11 at 02:45

6 Answers

8

What to try to fix your problem:

  1. You should remove master.info on slave first and restart mysql
  2. issue CHANGE MASTER TO MASTER_HOST='XX.XX.XX.XX', MASTER_USER='repl', MASTER_PASSWORD='slavepass';
  3. do mysqldump with '--flush-logs' option on master
  4. 'mysql -u user -p < dump.sql' on slave
  5. 'show slave status\G' on slave to ensure that it is properly configured, MASTER_LOG_POS is 0
  6. 'start slave;' on slave.
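
As a sketch, the steps above would look roughly like this (host, user, password, file names, and the master.info path are placeholders or defaults; adjust them to your setup):

shell> # on the slave: remove the old replication state and restart mysqld
shell> rm /var/lib/mysql/master.info
shell> service mysql restart
mysql> CHANGE MASTER TO MASTER_HOST='XX.XX.XX.XX', MASTER_USER='repl', MASTER_PASSWORD='slavepass';

shell> # on the master: take the dump with fresh binary logs
shell> mysqldump -u root -p --all-databases --master-data --flush-logs > dump.sql

shell> # back on the slave: import, verify the status, then start replication
shell> mysql -u user -p < dump.sql
mysql> SHOW SLAVE STATUS\G
mysql> START SLAVE;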

What else to check:

  • Binlog format: MIXED
  • server_ids are different on master and slave
Dmytro Leonenko
  • Issuing the whole change master string (including the log name and position number) actually fixes it, but at this point, I think the core question is why Quanta has to re-enter the log-name and position number when that is already in the dump file. – Rilindo Dec 08 '11 at 14:48
  • Only 'show slave status\G' after import but before 'start slave;' can give us the answer – Dmytro Leonenko Dec 08 '11 at 15:26
  • appended the requested info into my original post. – quanta Dec 12 '11 at 02:46
  • How did you end up with "Master_Host: x.x.x.x Master_User: xx" without issuing "CHANGE MASTER ..."? Or did you just not mention it in your post? Have you checked the binlog format, and what was the command line for mysqldump? – Dmytro Leonenko Dec 12 '11 at 06:30
  • I already mentioned that in my post "_import the dump file, execute `change master to`_". I'm using MIXED-based logging. I've tested with MySQL 5.0.77 (statement-based), it also causes this error. The full mysqldump is `mysqldump -u root -p --all-databases --master-data --flush-logs > alldb_$(date +%F).sql` – quanta Dec 12 '11 at 07:48
  • The strangest thing is that you're getting "Read_Master_Log_Pos: 4" in the status. Can you check it in the dump file? With '--flush-logs' it should be 0 for sure. Could you also run "REPAIR NO_WRITE_TO_BINLOG TABLE ;" on the master (if it's not InnoDB)? – Dmytro Leonenko Dec 12 '11 at 09:13
  • Furthermore, I can't see Master_Log_File, which should be there along with Read_Master_Log_Pos. It's quite confusing. Something is definitely wrong. – Dmytro Leonenko Dec 12 '11 at 09:30
  • With `--flush-logs` and `--master-data` option, in the dump file, it shows `CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000025', MASTER_LOG_POS=98;` – quanta Dec 12 '11 at 14:44
  • OK. Can you then add --delete-master-logs to mysqldump and try to re-make all the steps above? – Dmytro Leonenko Dec 12 '11 at 16:05
5

The problem is caused by setting up the master on a running production server BEFORE doing the dump (as far as I can tell). So, there are queries written in the master_log that have already been executed on the data residing on the slave. I never actually saw a solution on the mysql website or mailing list. So, I came up with the following solution that solved my problem.

on slave:

mysql> STOP SLAVE;
mysql> FLUSH PRIVILEGES;  # the dump likely included users too

on master:

mysql> RESET MASTER;

on slave:

mysql> RESET SLAVE;
mysql> START SLAVE;

by the way, I ran my dump with the following on the slave:

mysqldump -uROOTUSER -pROOTPASSWORD -hMYSQLMASTER.EXAMPLE.COM --all-databases --delete-master-logs | mysql -uROOTUSER -pROOTPASSWORD

I hope this helps someone else.

http://dev.mysql.com/doc/refman/5.0/en/reset-master.html

http://dev.mysql.com/doc/refman/5.0/en/reset-slave.html

BroknDodge
  • edited to include FLUSH PRIVILEGES; I realized after another error that even though my users had been imported with the dump, their privileges weren't active yet. – BroknDodge Jan 09 '13 at 21:45
  • RESET MASTER should be run on the slave, not on the master, see https://www.percona.com/blog/2013/02/08/how-to-createrestore-a-slave-using-gtid-replication-in-mysql-5-6/ – Jon Dec 07 '17 at 07:25
3

If you don't want to redo the complete procedure, a good fix would be to use:

STOP SLAVE;    
SET GLOBAL sql_slave_skip_counter=1;
START SLAVE;

If there are too many such errors, a good idea would be to automate it using a bash script.

Ref: Fixing Duplicate Entry Error

Ut xD
1

I had the exact same problem and Ut xD's link helped, but the command in that link had a syntax error. Here is the version that worked for me:

while true; do
    # "Duplicate entry" shows up in both Last_Error and Last_SQL_Error, hence the count of 2
    if [ "$(mysql -uroot -ppassword -e 'show slave status\G' | grep -c 'Duplicate entry')" -eq 2 ]; then
        mysql -uroot -ppassword -e 'stop slave; set global sql_slave_skip_counter=1; start slave;'
    fi
    sleep 1
    mysql -uroot -ppassword -e 'show slave status\G'
done

It basically checks whether there is a duplicate entry error, skips that event from the master, and does this in a loop.

perlwle
  • This would be a much better answer if you'd explain what that code does and format the code to be more readable. – kasperd May 20 '17 at 03:50
0

To bypass the duplicate record on the MySQL slave, you can simply put an entry in the /etc/my.cnf file on the slave as:

slave-skip-errors = 1062
Michael Hampton
  • I tried this, restarted the mysqld on the slave and it's still stopping on error 1062. Any hints how to make it work? – Scott Apr 20 '22 at 20:20
0

In my case the issue was resolved by the following steps:

STOP SLAVE;
RESET SLAVE;
START SLAVE;
Karthik