
MySQL Performance Optimization and Operations

In real production environments, database performance and stability are critical. This chapter explores MySQL performance-tuning techniques and operations strategies to help you build an efficient, reliable database system.

Query Optimization

Query optimization is the key lever for database performance: done well, it significantly improves query speed and system responsiveness.

Slow Query Analysis

The slow query log is the primary tool for identifying performance bottlenecks: it records every SQL statement whose execution time exceeds a configured threshold.

Enabling the Slow Query Log

sql
-- Inspect the slow-query-log parameters
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';

-- Enable the slow query log
SET GLOBAL slow_query_log = ON;

-- Set the slow-query threshold (seconds)
SET GLOBAL long_query_time = 1;

-- Set the slow-query log file path
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

-- Also log queries that do not use an index
SET GLOBAL log_queries_not_using_indexes = ON;

To make the settings permanent, add them to the MySQL configuration file (my.cnf):

ini
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = 1

Analyzing the Slow Query Log

bash
# Analyze the slow query log with the mysqldumpslow tool
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log

# Common options:
# -s: sort order (t = time, c = count, r = rows, ...)
# -t: show only the top N entries
# -g: filter by pattern

# Sample output, decoded:
# Count: 5  Time=2.34s (11s)  Lock=0.00s (0s)  Rows=1000.0 (5000)
# Count: number of executions
# Time: average execution time (total in parentheses)
# Lock: average lock time
# Rows: average rows returned
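When summaries from many hosts need post-processing, the mysqldumpslow summary line above can be parsed mechanically. A minimal sketch; the regex assumes exactly the format shown in the sample output:

```python
import re

# Parse a mysqldumpslow summary line such as:
#   Count: 5  Time=2.34s (11s)  Lock=0.00s (0s)  Rows=1000.0 (5000)
# Values before parentheses are per-query averages; parenthesized values are totals.
LINE_RE = re.compile(
    r"Count:\s*(?P<count>\d+)\s+"
    r"Time=(?P<avg_time>[\d.]+)s\s+\((?P<total_time>\d+)s\)\s+"
    r"Lock=(?P<avg_lock>[\d.]+)s\s+\((?P<total_lock>\d+)s\)\s+"
    r"Rows=(?P<avg_rows>[\d.]+)\s+\((?P<total_rows>\d+)\)"
)

def parse_summary(line: str) -> dict:
    """Return the numbers from one mysqldumpslow summary line as a dict."""
    m = LINE_RE.search(line)
    if not m:
        raise ValueError(f"unrecognized summary line: {line!r}")
    d = m.groupdict()
    return {
        "count": int(d["count"]),
        "avg_time_s": float(d["avg_time"]),
        "total_time_s": int(d["total_time"]),
        "avg_rows": float(d["avg_rows"]),
        "total_rows": int(d["total_rows"]),
    }

stats = parse_summary("Count: 5  Time=2.34s (11s)  Lock=0.00s (0s)  Rows=1000.0 (5000)")
```

Sorting the parsed dicts by `total_time_s` reproduces `mysqldumpslow -s t` in code.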

Analysis with pt-query-digest

Percona Toolkit provides a more powerful query analyzer:

bash
# Install Percona Toolkit
# Ubuntu/Debian
sudo apt-get install percona-toolkit

# CentOS/RHEL
sudo yum install percona-toolkit

# Analyze the slow query log
pt-query-digest /var/log/mysql/slow.log

# Analyze live queries (polls SHOW PROCESSLIST)
pt-query-digest --processlist h=localhost,u=root,p=password

# Write the analysis report to a file
pt-query-digest /var/log/mysql/slow.log > query_analysis_report.txt

Execution Plan Analysis

The EXPLAIN statement is the essential tool for analyzing query performance: it shows how MySQL will execute a SQL statement.

Basic EXPLAIN Usage

sql
-- Analyze a simple query
EXPLAIN SELECT * FROM students WHERE name = '张三';

-- Analyze a more complex query
EXPLAIN 
SELECT s.name, c.course_name, e.grade
FROM students s
JOIN enrollments e ON s.id = e.student_id
JOIN courses c ON e.course_id = c.id
WHERE s.major = '计算机科学'
ORDER BY e.grade DESC
LIMIT 10;

-- Use EXPLAIN FORMAT=JSON for full detail
EXPLAIN FORMAT=JSON 
SELECT * FROM students WHERE major = '计算机科学';

EXPLAIN Output Fields in Detail

sql
-- Example table
CREATE TABLE employees (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50),
    department_id INT,
    salary DECIMAL(10,2),
    hire_date DATE,
    INDEX idx_department (department_id),
    INDEX idx_salary (salary)
);

-- Sample EXPLAIN to interpret
EXPLAIN SELECT * FROM employees WHERE department_id = 5 AND salary > 50000;

EXPLAIN output fields:

  • id: query sequence number; rows with the same id belong to the same query block
  • select_type: query type
    • SIMPLE: simple query
    • PRIMARY: outermost query
    • SUBQUERY: subquery
    • DERIVED: derived-table query
  • table: the table being accessed
  • partitions: matching partitions
  • type: access type (a key indicator)
    • system: table has a single row
    • const: lookup via primary key or unique index
    • eq_ref: join via primary key or unique index
    • ref: lookup via non-unique index
    • range: range scan
    • index: full index scan
    • ALL: full table scan
  • possible_keys: indexes that could be used
  • key: index actually used
  • key_len: length of the index used
  • ref: columns compared against the index
  • rows: estimated rows examined (a key indicator)
  • filtered: percentage of rows remaining after filtering
  • Extra: additional information
    • Using index: covering index
    • Using where: WHERE filtering after row retrieval
    • Using temporary: temporary table needed
    • Using filesort: extra sort pass needed

Optimization Tips

sql
-- 1. Avoid SELECT *
EXPLAIN SELECT id, name FROM students WHERE major = '计算机科学';
-- rather than
EXPLAIN SELECT * FROM students WHERE major = '计算机科学';

-- 2. Use indexes deliberately
-- Create a composite index to support the query
CREATE INDEX idx_major_name ON students(major, name);

-- 3. Avoid functions on indexed columns (they disable the index)
EXPLAIN SELECT * FROM students WHERE YEAR(birth_date) = 1990;  -- index not used
-- Rewrite as
EXPLAIN SELECT * FROM students WHERE birth_date BETWEEN '1990-01-01' AND '1990-12-31';

-- 4. Avoid implicit type conversion
EXPLAIN SELECT * FROM students WHERE student_id = 12345;  -- student_id is a string column
-- Rewrite as
EXPLAIN SELECT * FROM students WHERE student_id = '12345';

-- 5. Pagination on large tables
-- Inefficient pagination
EXPLAIN SELECT * FROM orders ORDER BY created_at DESC LIMIT 100000, 20;

-- Optimized pagination (seek by primary key)
EXPLAIN 
SELECT * FROM orders 
WHERE id < 1000000 
ORDER BY id DESC 
LIMIT 20;
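The seek-based pagination above works because the client remembers the last id of the previous page instead of asking the server to skip rows. The control flow can be sketched over an in-memory list; the function name and data are illustrative, not an API:

```python
# Keyset ("seek") pagination: instead of OFFSET, remember the last id seen on
# the previous page and filter with id < last_id, so an index can locate the
# start of the page directly. Simulated here over an in-memory list of ids.

def fetch_page(ids, last_id=None, page_size=20):
    """Return one page of ids in descending order, starting below last_id."""
    rows = sorted(ids, reverse=True)                 # stands in for ORDER BY id DESC
    if last_id is not None:
        rows = [i for i in rows if i < last_id]      # WHERE id < last_id
    return rows[:page_size]                          # LIMIT page_size

ids = list(range(1, 101))                    # pretend table with ids 1..100
page1 = fetch_page(ids)                      # ids 100..81
page2 = fetch_page(ids, last_id=page1[-1])   # ids 80..61
```

Each page costs the same regardless of how deep into the result set it is, which is exactly what the OFFSET form fails to achieve.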

Avoiding Common Performance Pitfalls

sql
-- 1. OR conditions can prevent index use
EXPLAIN SELECT * FROM students WHERE name = '张三' OR major = '计算机科学';
-- Rewrite as a UNION
EXPLAIN 
SELECT * FROM students WHERE name = '张三'
UNION
SELECT * FROM students WHERE major = '计算机科学';

-- 2. Be careful with NULL checks
EXPLAIN SELECT * FROM students WHERE email IS NULL;
-- Consider a NOT NULL column with a default value instead of NULL

-- 3. Use LIMIT sensibly
-- Deferred-join optimization for a large OFFSET
SELECT * FROM (
    SELECT id FROM orders 
    ORDER BY created_at DESC 
    LIMIT 100000, 20
) AS tmp
JOIN orders ON orders.id = tmp.id;

-- 4. Use EXISTS instead of IN (subquery optimization)
EXPLAIN SELECT * FROM students WHERE id IN (
    SELECT student_id FROM enrollments WHERE grade > 90
);
-- Rewrite as
EXPLAIN SELECT * FROM students WHERE EXISTS (
    SELECT 1 FROM enrollments WHERE student_id = students.id AND grade > 90
);
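The IN/EXISTS rewrite above is safe because both forms ask the same question: does at least one enrollment row with grade > 90 match this student? The equivalence can be checked in plain Python over toy data (the sample rows are made up for illustration):

```python
# Toy data standing in for the students and enrollments tables.
students = [{"id": 1, "name": "A"}, {"id": 2, "name": "B"}]
enrollments = [{"student_id": 1, "grade": 95}, {"student_id": 2, "grade": 70}]

# IN (subquery): materialize the id set first, then filter against it.
ids = {e["student_id"] for e in enrollments if e["grade"] > 90}
via_in = [s for s in students if s["id"] in ids]

# EXISTS: a correlated check per student that can stop at the first match,
# mirroring how the optimizer can short-circuit the EXISTS form.
via_exists = [s for s in students
              if any(e["student_id"] == s["id"] and e["grade"] > 90
                     for e in enrollments)]
```

Both lists contain only student 1, confirming the rewrite preserves semantics.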

Database Backup and Recovery

Backups are the foundation of database operations: they ensure data can be restored quickly after a failure.

Backup Types

Physical Backups

A physical backup copies the database files directly; it is fast but not very portable.

bash
# Cold backup (stop MySQL, then copy the files)
sudo systemctl stop mysql
sudo cp -r /var/lib/mysql /backup/mysql_backup_$(date +%Y%m%d)
sudo systemctl start mysql

# Hot backup (Percona XtraBackup)
# Install XtraBackup
sudo apt-get install percona-xtrabackup

# Full backup
xtrabackup --backup --target-dir=/backup/xtrabackup/full

# Incremental backup
xtrabackup --backup --target-dir=/backup/xtrabackup/inc1 --incremental-basedir=/backup/xtrabackup/full

Logical Backups

A logical backup exports SQL statements; it is portable but slower.

bash
# Logical backups with mysqldump
# Back up a single database
mysqldump -u root -p school > school_backup.sql

# Back up several databases
mysqldump -u root -p --databases school ecommerce > multiple_dbs_backup.sql

# Back up all databases
mysqldump -u root -p --all-databases > all_dbs_backup.sql

# Back up specific tables
mysqldump -u root -p school students courses > tables_backup.sql

# Compress the backup
mysqldump -u root -p school | gzip > school_backup.sql.gz

# Exclude certain tables from the backup
mysqldump -u root -p school --ignore-table=school.logs > school_backup_no_logs.sql
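When backup jobs like these are driven from code, it is safer to build the mysqldump argument list as a list than to concatenate a shell string. A small sketch; the helper name `dump_command` is ours, not a mysqldump API:

```python
# Build a mysqldump command line like the examples above, as an argument list
# (safe to hand to subprocess.run without shell-quoting pitfalls).

def dump_command(user, database, tables=(), ignore=()):
    """Return the mysqldump argv for one database, optional tables/exclusions."""
    cmd = ["mysqldump", "-u", user, "-p", database]
    cmd += list(tables)                                  # specific tables, if any
    for t in ignore:
        cmd.append(f"--ignore-table={database}.{t}")     # exclusions
    return cmd

cmd = dump_command("root", "school", ignore=("logs",))
# e.g. subprocess.run(cmd, stdout=open("school_backup_no_logs.sql", "wb"))
```

Passing a list avoids injection through table or database names that a naive f-string pipeline would allow.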

Backup Strategy

bash
#!/bin/bash
# Automated MySQL backup script

# Configuration
BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
MYSQL_USER="backup_user"
MYSQL_PASSWORD="backup_password"
MYSQL_HOST="localhost"

# Create the backup directory
mkdir -p $BACKUP_DIR

# Full backup (once a week, on Mondays)
if [ $(date +%u) -eq 1 ]; then
    mysqldump -h$MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASSWORD \
        --all-databases --single-transaction --routines --triggers \
        > $BACKUP_DIR/full_backup_$DATE.sql
fi

# Daily dump with --flush-logs (mysqldump is not truly incremental;
# point-in-time recovery relies on the rotated binary logs)
mysqldump -h$MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASSWORD \
    --all-databases --single-transaction --flush-logs --master-data=2 \
    > $BACKUP_DIR/incr_backup_$DATE.sql

# Compress the backup files
gzip $BACKUP_DIR/*_$DATE.sql

# Delete backups older than 7 days
find $BACKUP_DIR -name "*.sql.gz" -mtime +7 -delete

# Log the run
echo "Backup completed at $(date)" >> $BACKUP_DIR/backup.log
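The retention rule at the end of the script (`find ... -mtime +7 -delete`) is easy to get wrong; a Python sketch of the same policy makes it testable. The function name and file names are illustrative:

```python
import os
import pathlib
import tempfile
import time

# Delete *.sql.gz files whose mtime is older than `days` — the same policy as
# `find $BACKUP_DIR -name "*.sql.gz" -mtime +7 -delete` in the script above.

def purge_old_backups(backup_dir, days=7, now=None):
    """Remove expired backups; return the sorted names of deleted files."""
    now = time.time() if now is None else now
    cutoff = now - days * 86400
    removed = []
    for p in pathlib.Path(backup_dir).glob("*.sql.gz"):
        if p.stat().st_mtime < cutoff:
            p.unlink()
            removed.append(p.name)
    return sorted(removed)

# Demonstration in a temporary directory: one fresh file, one 10-day-old file.
with tempfile.TemporaryDirectory() as d:
    fresh = pathlib.Path(d, "full_backup_new.sql.gz")
    stale = pathlib.Path(d, "full_backup_old.sql.gz")
    fresh.touch()
    stale.touch()
    os.utime(stale, (time.time() - 10 * 86400,) * 2)  # age the file 10 days
    deleted = purge_old_backups(d)
```

Only the aged file is deleted; the fresh backup survives.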

Recovery

bash
# Restore a database
mysql -u root -p school < school_backup.sql

# Restore a compressed backup
gunzip < school_backup.sql.gz | mysql -u root -p school

# Point-in-time recovery (requires binary logs)
# 1. List the binary log files
mysql -u root -p -e "SHOW MASTER LOGS;"

# 2. Extract SQL statements with mysqlbinlog
mysqlbinlog --start-datetime="2023-01-01 10:00:00" \
           --stop-datetime="2023-01-01 11:00:00" \
           /var/lib/mysql/mysql-bin.000001 > recovery.sql

# 3. Apply the recovery SQL
mysql -u root -p school < recovery.sql

High Availability and Clustering

A high-availability architecture keeps the database service running through failures, improving the system's reliability and continuity.

Master-Slave Replication

Replication is the foundation of MySQL high availability: synchronizing the master's data to slaves enables read/write splitting and provides an extra copy of the data.

Configuring the Master

ini
# Master configuration (my.cnf)
[mysqld]
# Server ID (must be unique)
server-id = 1

# Enable the binary log
log-bin = mysql-bin

# Binary log format
binlog-format = ROW

# Databases to replicate (optional)
binlog-do-db = school

# Databases to exclude from replication (optional)
binlog-ignore-db = mysql

# Binary log retention in days (deprecated in MySQL 8.0 in
# favor of binlog_expire_logs_seconds)
expire_logs_days = 7

# Maximum connections
max_connections = 1000
sql
-- Create the replication user on the master
CREATE USER 'repl_user'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';
FLUSH PRIVILEGES;

-- Check the master's status
SHOW MASTER STATUS;

Configuring the Slave

ini
# Slave configuration (my.cnf)
[mysqld]
# Server ID (unique; must differ from the master)
server-id = 2

# Enable the relay log
relay-log = mysql-relay-bin

# Read-only mode (guards against accidental writes)
read_only = 1

# Maximum connections
max_connections = 1000
sql
-- Point the slave at the master
CHANGE MASTER TO
MASTER_HOST='master_host_ip',
MASTER_USER='repl_user',
MASTER_PASSWORD='repl_password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=154;

-- Start replication
START SLAVE;

-- Check the slave's status
SHOW SLAVE STATUS\G

-- Health checklist:
-- Seconds_Behind_Master = 0 means the slave is caught up
-- Slave_IO_Running and Slave_SQL_Running must both be Yes
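The health checklist above turns naturally into a monitoring check. A sketch that takes the SHOW SLAVE STATUS row as a dict (as a connector's dictionary cursor would return it); the function name is ours:

```python
# Evaluate the replication-health fields called out above, given the
# SHOW SLAVE STATUS row as a dict.

def replication_healthy(status, max_lag_seconds=0):
    """True when both replication threads run and lag is within the bound."""
    if status.get("Slave_IO_Running") != "Yes":
        return False                      # I/O thread stopped or errored
    if status.get("Slave_SQL_Running") != "Yes":
        return False                      # SQL thread stopped or errored
    lag = status.get("Seconds_Behind_Master")
    # NULL lag means replication is broken, not "no lag".
    return lag is not None and lag <= max_lag_seconds

ok = replication_healthy({"Slave_IO_Running": "Yes",
                          "Slave_SQL_Running": "Yes",
                          "Seconds_Behind_Master": 0})
```

Treating a NULL `Seconds_Behind_Master` as unhealthy matters: it is what MySQL reports when the SQL thread is not running.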

Read/Write Splitting

python
# Python example: read/write splitting
import pymysql

class MySQLCluster:
    def __init__(self):
        self.master_conn = pymysql.connect(
            host='master_host',
            user='app_user',
            password='app_password',
            database='school'
        )
        self.slave_conn = pymysql.connect(
            host='slave_host',
            user='app_user',
            password='app_password',
            database='school'
        )
    
    def execute_write(self, sql, params=None):
        """Writes go to the master."""
        with self.master_conn.cursor() as cursor:
            cursor.execute(sql, params)
            self.master_conn.commit()
            return cursor.lastrowid
    
    def execute_read(self, sql, params=None):
        """Reads go to the slave."""
        with self.slave_conn.cursor() as cursor:
            cursor.execute(sql, params)
            return cursor.fetchall()
    
    def close(self):
        self.master_conn.close()
        self.slave_conn.close()

# Usage
cluster = MySQLCluster()

# Write
cluster.execute_write(
    "INSERT INTO students (name, major) VALUES (%s, %s)",
    ('新学生', '计算机科学')
)

# Read
students = cluster.execute_read(
    "SELECT * FROM students WHERE major = %s",
    ('计算机科学',)
)

Sharding

When a single database or table holds too much data, splitting it across databases and tables (sharding) restores performance.

Horizontal Sharding

sql
-- Range-based sharding by user ID
-- Shard 1 (IDs 1-1000000)
CREATE TABLE users_0 (
    id INT PRIMARY KEY,
    name VARCHAR(50),
    email VARCHAR(100),
    created_at TIMESTAMP
);

-- Shard 2 (IDs 1000001-2000000)
CREATE TABLE users_1 (
    id INT PRIMARY KEY,
    name VARCHAR(50),
    email VARCHAR(100),
    created_at TIMESTAMP
);

-- Time-based sharding (one table per month)
CREATE TABLE orders_202301 (
    id INT PRIMARY KEY,
    user_id INT,
    amount DECIMAL(10,2),
    created_at DATE
);

CREATE TABLE orders_202302 (
    id INT PRIMARY KEY,
    user_id INT,
    amount DECIMAL(10,2),
    created_at DATE
);
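Both sharding schemes above imply a routing function the application must apply before every query. A minimal sketch of that routing, with the table-naming rules taken from the DDL above; the helper names are ours:

```python
import datetime

USERS_PER_TABLE = 1_000_000  # range size implied by users_0 / users_1 above

def user_table(user_id: int) -> str:
    """users_0 holds IDs 1..1000000, users_1 holds 1000001..2000000, ..."""
    return f"users_{(user_id - 1) // USERS_PER_TABLE}"

def order_table(created_at: datetime.date) -> str:
    """Month-based routing matching the orders_YYYYMM tables."""
    return f"orders_{created_at:%Y%m}"

t1 = user_table(1)                              # lands in users_0
t2 = user_table(1_000_001)                      # lands in users_1
t3 = order_table(datetime.date(2023, 2, 15))    # lands in orders_202302
```

Keeping this function in one place (or delegating it to middleware, as the next section shows) prevents different code paths from disagreeing about where a row lives.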

Vertical Splitting

sql
-- Split one wide table into several narrow ones
-- Original wide table
CREATE TABLE user_profiles (
    id INT PRIMARY KEY,
    name VARCHAR(50),
    email VARCHAR(100),
    address TEXT,
    biography TEXT,
    preferences JSON,
    created_at TIMESTAMP,
    updated_at TIMESTAMP
);

-- Basic-info table after the split
CREATE TABLE user_basic_info (
    id INT PRIMARY KEY,
    name VARCHAR(50),
    email VARCHAR(100),
    created_at TIMESTAMP,
    updated_at TIMESTAMP
);

-- Detail table after the split
CREATE TABLE user_details (
    id INT PRIMARY KEY,
    user_id INT,
    address TEXT,
    biography TEXT,
    preferences JSON,
    FOREIGN KEY (user_id) REFERENCES user_basic_info(id)
);

Sharding with Middleware

java
// Java example: sharding with ShardingSphere
@Configuration
public class ShardingConfig {
    
    @Bean
    public DataSource dataSource() throws SQLException {
        // Configure the physical data sources
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        dataSourceMap.put("ds0", createDataSource("jdbc:mysql://localhost:3306/db0"));
        dataSourceMap.put("ds1", createDataSource("jdbc:mysql://localhost:3306/db1"));
        
        // Configure the sharding rules
        ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
        
        // Table-level sharding rule
        TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration("t_order", "ds${0..1}.t_order_${0..1}");
        orderTableRuleConfig.setTableShardingStrategyConfig(
            new InlineShardingStrategyConfiguration("order_id", "t_order_${order_id % 2}")
        );
        orderTableRuleConfig.setDatabaseShardingStrategyConfig(
            new InlineShardingStrategyConfiguration("user_id", "ds${user_id % 2}")
        );
        
        shardingRuleConfig.getTableRuleConfigs().add(orderTableRuleConfig);
        
        return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, new Properties());
    }
    
    private DataSource createDataSource(String url) {
        // Build one HikariCP data source
        HikariDataSource dataSource = new HikariDataSource();
        dataSource.setJdbcUrl(url);
        dataSource.setUsername("username");
        dataSource.setPassword("password");
        return dataSource;
    }
}

Monitoring and Tuning

Effective monitoring and tuning strategies help you discover and resolve performance problems early.

Monitoring Tools

Built-in Monitoring

sql
-- Server status
SHOW STATUS;

-- Global status variables
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Slow_queries';

-- Process list
SHOW PROCESSLIST;

-- InnoDB status
SHOW ENGINE INNODB STATUS\G

-- Table status
SHOW TABLE STATUS LIKE 'students'\G

Third-Party Monitoring Tools

bash
# Monitoring MySQL with Prometheus + Grafana

# 1. Install mysqld_exporter
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.14.0/mysqld_exporter-0.14.0.linux-amd64.tar.gz
tar xvfz mysqld_exporter-0.14.0.linux-amd64.tar.gz
cd mysqld_exporter-0.14.0.linux-amd64

# 2. Create a MySQL user for monitoring (run in the mysql client):
#    CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'exporter_password';
#    GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
#    FLUSH PRIVILEGES;

# 3. Create a credentials file .my.cnf:
#    [client]
#    user=exporter
#    password=exporter_password

# 4. Start mysqld_exporter
./mysqld_exporter --config.my-cnf=".my.cnf"

# 5. Point Prometheus at the exporter (prometheus.yml):
#    scrape_configs:
#      - job_name: 'mysql'
#        static_configs:
#          - targets: ['localhost:9104']

Configuration Tuning

Memory Settings

ini
# MySQL memory tuning (my.cnf)
[mysqld]
# InnoDB buffer pool size (typically 50-70% of physical RAM)
innodb_buffer_pool_size = 2G

# InnoDB log file size
innodb_log_file_size = 256M

# InnoDB log buffer size
innodb_log_buffer_size = 16M

# Query cache size (removed in MySQL 8.0)
# query_cache_size = 64M

# Sort buffer size
sort_buffer_size = 2M

# Join buffer size
join_buffer_size = 2M

# Read buffer size
read_buffer_size = 1M

# Random read buffer size
read_rnd_buffer_size = 4M
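The buffer-pool comment above gives a 50-70% of physical RAM rule of thumb. A tiny helper makes the arithmetic explicit; the 0.6 default is simply an assumption inside that range, and the function name is ours:

```python
# Turn total physical RAM into a suggested innodb_buffer_pool_size, applying
# the 50-70% rule of thumb and rounding down to whole gigabytes.

def suggest_buffer_pool_bytes(total_ram_bytes: int, fraction: float = 0.6) -> int:
    """Suggested buffer pool size in bytes (at least 1 GiB, whole GiB)."""
    gib = 1024 ** 3
    return max(1, int(total_ram_bytes * fraction) // gib) * gib

# A 4 GiB host at the 0.6 default rounds down to a 2 GiB buffer pool,
# which matches the 2G example value in the config above.
suggestion = suggest_buffer_pool_bytes(4 * 1024**3)
```

On dedicated database hosts the upper end of the range is reasonable; on shared hosts, leave room for the OS page cache and other processes.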

Connection Settings

ini
# Connection-related settings
[mysqld]
# Maximum connections
max_connections = 1000

# Maximum connection errors per host
max_connect_errors = 100000

# Idle connection timeouts (seconds)
wait_timeout = 28800
interactive_timeout = 28800

# Thread cache size
thread_cache_size = 100

# Table open cache
table_open_cache = 4000

# Table definition cache
table_definition_cache = 4000

Log Settings

ini
# Log settings
[mysqld]
# Error log
log_error = /var/log/mysql/error.log

# Slow query log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1

# General query log (off by default; very heavy in production)
general_log = 0
general_log_file = /var/log/mysql/general.log

# Binary log
log-bin = mysql-bin
expire_logs_days = 7
binlog-format = ROW

Performance Tuning in Practice

sql
-- 1. Index optimization
-- Find indexes that are written to but never read
SELECT 
    object_schema,
    object_name,
    index_name,
    count_read,
    count_write
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NOT NULL
AND count_read = 0
AND count_write > 0;

-- 2. Query optimization
-- Find the statements with the longest average execution time
SELECT 
    DIGEST_TEXT,
    COUNT_STAR,
    AVG_TIMER_WAIT/1000000000 AS avg_time_ms,
    MAX_TIMER_WAIT/1000000000 AS max_time_ms
FROM performance_schema.events_statements_summary_by_digest
ORDER BY AVG_TIMER_WAIT DESC
LIMIT 10;

-- 3. Lock-wait analysis
-- Inspect lock waits (MySQL 5.7 tables; in 8.0 use
-- performance_schema.data_lock_waits or sys.innodb_lock_waits instead)
SELECT 
    r.trx_id waiting_trx_id,
    r.trx_mysql_thread_id waiting_thread,
    r.trx_query waiting_query,
    b.trx_id blocking_trx_id,
    b.trx_mysql_thread_id blocking_thread,
    b.trx_query blocking_query
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;

Capstone Projects

To consolidate what you have learned, the following hands-on projects are recommended:

Optimize Existing Database Queries

sql
-- 1. Performance analysis and optimization example

-- Original slow query
SELECT 
    s.name,
    s.student_id,
    c.course_name,
    e.grade,
    t.name AS teacher_name
FROM students s
JOIN enrollments e ON s.id = e.student_id
JOIN courses c ON e.course_id = c.id
JOIN teachers t ON c.teacher_id = t.id
WHERE s.major = '计算机科学'
AND e.grade > 80
AND c.semester = '2023-1'
ORDER BY e.grade DESC
LIMIT 100;

-- Inspect the execution plan
EXPLAIN FORMAT=JSON 
SELECT 
    s.name,
    s.student_id,
    c.course_name,
    e.grade,
    t.name AS teacher_name
FROM students s
JOIN enrollments e ON s.id = e.student_id
JOIN courses c ON e.course_id = c.id
JOIN teachers t ON c.teacher_id = t.id
WHERE s.major = '计算机科学'
AND e.grade > 80
AND c.semester = '2023-1'
ORDER BY e.grade DESC
LIMIT 100;

-- Optimization steps:

-- 1. Create supporting indexes
CREATE INDEX idx_students_major ON students(major);
CREATE INDEX idx_enrollments_grade_student ON enrollments(grade, student_id);
CREATE INDEX idx_courses_semester_teacher ON courses(semester, teacher_id);

-- 2. Add a covering index
CREATE INDEX idx_enrollments_cover ON enrollments(student_id, course_id, grade);

-- 3. Rewrite the query for better performance
SELECT 
    s.name,
    s.student_id,
    c.course_name,
    e.grade,
    t.name AS teacher_name
FROM students s
JOIN (
    SELECT student_id, course_id, grade
    FROM enrollments
    WHERE grade > 80
    ORDER BY grade DESC
    LIMIT 100
) e ON s.id = e.student_id
JOIN courses c ON e.course_id = c.id
JOIN teachers t ON c.teacher_id = t.id
WHERE s.major = '计算机科学'
AND c.semester = '2023-1';

-- 4. Query cache hint (MySQL 5.7 and earlier only; SQL_CACHE and the
--    query cache were removed in MySQL 8.0)
SELECT SQL_CACHE 
    s.name,
    s.student_id,
    c.course_name,
    e.grade,
    t.name AS teacher_name
FROM students s
JOIN enrollments e ON s.id = e.student_id
JOIN courses c ON e.course_id = c.id
JOIN teachers t ON c.teacher_id = t.id
WHERE s.major = '计算机科学'
AND e.grade > 80
AND c.semester = '2023-1'
ORDER BY e.grade DESC
LIMIT 100;

Build a Master-Slave Replication Setup

bash
#!/bin/bash
# MySQL master-slave replication deployment script

# Master settings
MASTER_IP="192.168.1.100"
MASTER_USER="repl_user"
MASTER_PASSWORD="repl_password"

# Slave settings
SLAVE_IP="192.168.1.101"

# 1. Run on the master
echo "=== Configuring the master ==="
mysql -u root -p << EOF
-- Create the replication user
CREATE USER '$MASTER_USER'@'%' IDENTIFIED BY '$MASTER_PASSWORD';
GRANT REPLICATION SLAVE ON *.* TO '$MASTER_USER'@'%';
FLUSH PRIVILEGES;

-- Lock tables and capture the binary log position
-- (note: the lock is released when this mysql session exits;
-- the consistent snapshot below relies on --single-transaction)
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
EOF

# 2. Back up the master's data
echo "=== Backing up the master ==="
mysqldump -u root -p --all-databases --master-data=2 --single-transaction > master_backup.sql

# 3. Restore the data on the slave
echo "=== Restoring data on the slave ==="
mysql -u root -p < master_backup.sql

# 4. Configure the slave
echo "=== Configuring the slave ==="
mysql -u root -p << EOF
-- Point the slave at the master
CHANGE MASTER TO
MASTER_HOST='$MASTER_IP',
MASTER_USER='$MASTER_USER',
MASTER_PASSWORD='$MASTER_PASSWORD',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=154;

-- Start replication
START SLAVE;

-- Check the slave's status
SHOW SLAVE STATUS\G
EOF

# 5. Unlock the master's tables
echo "=== Unlocking the master's tables ==="
mysql -u root -p -e "UNLOCK TABLES;"

echo "=== Replication setup complete ==="

Design and Implement a Backup and Recovery Strategy

bash
#!/bin/bash
# A complete MySQL backup and recovery strategy

# Configuration
BACKUP_BASE_DIR="/backup/mysql"
LOG_FILE="$BACKUP_BASE_DIR/backup.log"
RETENTION_DAYS=30

# Mail notification settings
SMTP_SERVER="smtp.example.com"
SMTP_PORT=587
EMAIL_FROM="dbadmin@example.com"
EMAIL_TO="admin@example.com"

# Logging helper
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOG_FILE
}

# Send a mail notification
send_notification() {
    local subject="$1"
    local message="$2"
    
    # Send via the mail command (requires a configured mail system)
    echo "$message" | mail -s "$subject" $EMAIL_TO
    
    # Or post to a webhook with curl
    # curl -X POST -H "Content-Type: application/json" \
    #      -d "{\"subject\":\"$subject\",\"message\":\"$message\"}" \
    #      https://your-webhook-url.com/notify
}

# Full backup
full_backup() {
    local backup_dir="$BACKUP_BASE_DIR/full"
    local date_str=$(date +%Y%m%d_%H%M%S)
    local backup_file="$backup_dir/full_backup_$date_str.sql"
    
    mkdir -p $backup_dir
    
    log_message "Starting full backup"
    
    # Run the full backup
    if mysqldump -u backup_user -pbackup_password \
        --all-databases \
        --single-transaction \
        --routines \
        --triggers \
        --events \
        --hex-blob \
        --master-data=2 \
        > $backup_file; then
        
        # Compress the backup file
        gzip $backup_file
        
        log_message "Full backup finished: $backup_file.gz"
        send_notification "MySQL full backup succeeded" "Full backup completed: $backup_file.gz"
        
        # Prune old backups
        find $backup_dir -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
    else
        log_message "Full backup failed"
        send_notification "MySQL full backup failed" "Full backup failed; check the log"
        return 1
    fi
}

# "Incremental" backup: a daily dump with --flush-logs; true incremental
# restore relies on the rotated binary logs
incremental_backup() {
    local backup_dir="$BACKUP_BASE_DIR/incr"
    local date_str=$(date +%Y%m%d_%H%M%S)
    local backup_file="$backup_dir/incr_backup_$date_str.sql"
    
    mkdir -p $backup_dir
    
    log_message "Starting incremental backup"
    
    # Run the incremental backup
    if mysqldump -u backup_user -pbackup_password \
        --all-databases \
        --single-transaction \
        --flush-logs \
        --master-data=2 \
        > $backup_file; then
        
        # Compress the backup file
        gzip $backup_file
        
        log_message "Incremental backup finished: $backup_file.gz"
        send_notification "MySQL incremental backup succeeded" "Incremental backup completed: $backup_file.gz"
        
        # Prune old backups (keep 7 days)
        find $backup_dir -name "*.sql.gz" -mtime +7 -delete
    else
        log_message "Incremental backup failed"
        send_notification "MySQL incremental backup failed" "Incremental backup failed; check the log"
        return 1
    fi
}

# Restore
restore_backup() {
    local backup_file="$1"
    
    if [ ! -f "$backup_file" ]; then
        log_message "Backup file not found: $backup_file"
        return 1
    fi
    
    log_message "Starting restore: $backup_file"
    
    # A logical (mysqldump) restore is applied to a running server
    if gunzip < $backup_file | mysql -u root -proot_password; then
        log_message "Restore finished: $backup_file"
        send_notification "MySQL restore succeeded" "Restore completed: $backup_file"
        return 0
    else
        log_message "Restore failed: $backup_file"
        send_notification "MySQL restore failed" "Restore failed: $backup_file"
        return 1
    fi
}

# Entry point
main() {
    case "$1" in
        full)
            full_backup
            ;;
        incr)
            incremental_backup
            ;;
        restore)
            if [ -z "$2" ]; then
                echo "Please specify the backup file to restore"
                exit 1
            fi
            restore_backup "$2"
            ;;
        *)
            echo "Usage: $0 {full|incr|restore <backup_file>}"
            exit 1
            ;;
    esac
}

# Run
main "$@"

Working through these projects will give you the core skills of MySQL performance optimization and operations: query tuning, backup and recovery, high-availability architecture, and monitoring and tuning. These skills are essential for building and maintaining high-performance, highly available database systems.