Adding NDB Cluster Data Nodes Online in MySQL

Adding NDB Cluster Data Nodes Online

This section describes the basic steps required to add new data nodes to an NDB Cluster. The procedure applies whether you use the ndbd or the ndbmtd binary for the data node processes.
Assuming that you already have a running NDB Cluster, adding data nodes online requires the following steps:
1. Edit the cluster configuration file config.ini, adding new [ndbd] sections corresponding to the nodes to be added. If the cluster uses multiple management servers, these changes must be made to all config.ini files used by the management servers.

Be careful that the node IDs of any new data nodes added to the config.ini file do not overlap node IDs used by existing nodes. If you have API nodes using dynamically allocated node IDs, and these IDs match node IDs that you want to use for the new data nodes, any such API nodes can be forced to "migrate", as described later in this procedure.

2. Perform a rolling restart of all NDB Cluster management servers.
All management servers must be restarted with the --reload or --initial option to force the new configuration to be read.

3. Perform a rolling restart of all existing NDB Cluster data nodes. It is not necessary (and usually not even desirable) to use --initial when restarting the existing data nodes.
If you are using API nodes with dynamically allocated IDs that match any node IDs you wish to assign to the new data nodes, you must restart all API nodes (including SQL nodes) before restarting any of the data node processes in this step. This causes any API nodes whose node IDs were not explicitly assigned previously to relinquish those node IDs and acquire new ones.

4. Perform a rolling restart of all SQL or API nodes connected to the NDB Cluster.

5. Start the new data nodes.
The new data nodes may be started in any order. They can also be started concurrently, as long as they are started after the rolling restart of all existing data nodes has been completed, and before proceeding to the next step.

6. Execute one or more CREATE NODEGROUP commands in the NDB Cluster management client to create the new node group or node groups to which the new data nodes belong.

7. Redistribute the cluster's data among all data nodes, including the new ones. Normally this is done by issuing an ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION statement in the mysql client for each NDBCLUSTER table.

Exception: For tables created with the MAX_ROWS option, this statement does not work; instead, use ALTER TABLE ... ALGORITHM=INPLACE, MAX_ROWS=... to reorganize such tables. Keep in mind that using MAX_ROWS to set the number of partitions in this way is deprecated;
in NDB 7.5.4 and later, you should use PARTITION_BALANCE instead.
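As a minimal sketch (the table name t_big and the MAX_ROWS value are hypothetical and shown only for illustration), the two variants look like this:

mysql> # Table created with MAX_ROWS: reorganize by repeating the option
mysql> ALTER TABLE t_big ALGORITHM=INPLACE, MAX_ROWS=1000000000;

mysql> # NDB 7.5.4 and later: prefer PARTITION_BALANCE, set through the
mysql> # NDB_TABLE comment option, over MAX_ROWS for controlling partitioning
mysql> ALTER TABLE t_big COMMENT='NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_NODE';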

You need to do this only for tables that already existed at the time the new node group was added. Data in tables created after the new node group is added is distributed automatically; however, data added to any given table tbl that existed before the new nodes were added is not distributed using the new nodes until that table has been reorganized.

8. Reclaim the space freed on the "old" nodes. ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION reorganizes partitions but does not reclaim the space that is freed on the "old" nodes. You can do this by issuing an OPTIMIZE TABLE statement in the mysql client for each NDBCLUSTER table.

This works for space used by variable-width columns of in-memory NDB tables. OPTIMIZE TABLE is not supported for fixed-width columns of in-memory tables; it is also not supported for Disk Data tables.

You can add all the nodes desired, then issue several CREATE NODEGROUP commands in succession to add the new node groups to the cluster.
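For example, if you had started four new data nodes with (hypothetical) node IDs 5 through 8, you could create two new node groups for them by issuing the following in the management client:

ndb_mgm> CREATE NODEGROUP 5,6
ndb_mgm> CREATE NODEGROUP 7,8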

An Example of Adding NDB Cluster Data Nodes Online
This section provides a detailed example showing how to add new NDB Cluster data nodes online, starting with an NDB Cluster having 2 data nodes in a single node group and concluding with a cluster having 4 data nodes in two node groups.

Starting configuration
For purposes of illustration, we assume a minimal configuration, and that the cluster uses a config.ini file containing only the following information:

[root@mgmd mysql-cluster]# cat config.ini
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2      # Number of replicas
DataMemory=1024M    # How much memory to allocate for data storage
IndexMemory=256M    # How much memory to allocate for index storage
                    # For DataMemory and IndexMemory, we have used the
                    # default values. Since the "world" database takes up
                    # only about 500KB, this should be more than enough for
                    # this example NDB Cluster setup.
ServerPort=2202     # This is the default value; however, you can use any
                    # port that is free for all the hosts in the cluster
                    # Note1: It is recommended that you do not specify the port
                    # number at all and simply allow the default value to be used
                    # instead
                    # Note2: The port was formerly specified using the PortNumber
                    # TCP parameter; this parameter is no longer available in NDB
                    # Cluster 7.5.
[ndb_mgmd]
# Management process options:
HostName=10.10.10.102            # Hostname or IP address of MGM node
DataDir=/var/lib/mysql-cluster     # Directory for MGM node log files
[ndbd]
# Options for data node "A":
HostName=10.10.10.104            # Hostname or IP address
NodeId=2                           # Node ID for this data node
DataDir=/usr/local/mysql/data      # Directory for this data node's data files
[ndbd]
# Options for data node "B":
HostName=10.10.10.105            # Hostname or IP address
NodeId=3                           # Node ID for this data node
DataDir=/usr/local/mysql/data      # Directory for this data node's data files
[mysqld]
# SQL node options:
HostName=10.10.10.103            # Hostname or IP address
                                   # (additional mysqld connections can be
                                   # specified for this node for various
                                   # purposes such as running ndb_restore)


We also assume that you have already started the cluster using the appropriate command-line or my.cnf options, and that running SHOW in the management client produces output similar to what is shown here:

[root@mgmd /]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.10.10.104  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0, *)
id=3    @10.10.10.105  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.10.102  (mysql-5.7.48 ndb-7.5.36)

[mysqld(API)]   1 node(s)
id=4    @10.10.10.103  (mysql-5.7.48 ndb-7.5.36)

Finally, we assume that the cluster contains the following tables with data, as shown here:

mysql> use jycs
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-----------------+
| Tables_in_jycs  |
+-----------------+
| btest           |
| city            |
| country         |
| countrylanguage |
| ctest           |
| example         |
| fish            |
+-----------------+
7 rows in set (0.01 sec)

mysql> select count(*) from btest;
+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from city;
+----------+
| count(*) |
+----------+
|     4079 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from country;
+----------+
| count(*) |
+----------+
|      239 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from countrylanguage;
+----------+
| count(*) |
+----------+
|      984 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from ctest;
+----------+
| count(*) |
+----------+
|        3 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from example;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from fish;
+----------+
| count(*) |
+----------+
|        6 |
+----------+
1 row in set (0.00 sec)

This example shows the single-threaded ndbd being used for the data node processes. If you are using the multithreaded ndbmtd, you can apply this example as well, simply substituting ndbmtd for ndbd wherever it appears in the steps that follow.

Step 1: Update the configuration file.
Open the cluster global configuration file in a text editor and add [ndbd] sections corresponding to the 2 new data nodes. (We give these data nodes IDs 5 and 6, and assume that they are to be run on host machines at addresses 10.10.10.106 and 10.10.10.107, respectively.) After you have added the new sections, the contents of the config.ini file should look like what is shown here:

[root@mgmd mysql-cluster]# cat config.ini
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2      # Number of replicas
DataMemory=1024M    # How much memory to allocate for data storage
IndexMemory=256M    # How much memory to allocate for index storage
                    # For DataMemory and IndexMemory, we have used the
                    # default values. Since the "world" database takes up
                    # only about 500KB, this should be more than enough for
                    # this example NDB Cluster setup.
ServerPort=2202     # This is the default value; however, you can use any
                    # port that is free for all the hosts in the cluster
                    # Note1: It is recommended that you do not specify the port
                    # number at all and simply allow the default value to be used
                    # instead
                    # Note2: The port was formerly specified using the PortNumber
                    # TCP parameter; this parameter is no longer available in NDB
                    # Cluster 7.5.
[ndb_mgmd]
# Management process options:
HostName=10.10.10.102            # Hostname or IP address of MGM node
DataDir=/var/lib/mysql-cluster     # Directory for MGM node log files
[ndbd]
# Options for data node "A":
HostName=10.10.10.104            # Hostname or IP address
NodeId=2                           # Node ID for this data node
DataDir=/usr/local/mysql/data      # Directory for this data node's data files
[ndbd]
# Options for data node "B":
HostName=10.10.10.105            # Hostname or IP address
NodeId=3                           # Node ID for this data node
DataDir=/usr/local/mysql/data      # Directory for this data node's data files
[ndbd]
# Options for data node "C":
HostName=10.10.10.106            # Hostname or IP address
NodeId=5                           # Node ID for this data node
DataDir=/usr/local/mysql/data      # Directory for this data node's data files
[ndbd]
# Options for data node "D":
HostName=10.10.10.107            # Hostname or IP address
NodeId=6                           # Node ID for this data node
DataDir=/usr/local/mysql/data      # Directory for this data node's data files
[mysqld]
# SQL node options:
HostName=10.10.10.103            # Hostname or IP address
                                   # (additional mysqld connections can be
                                   # specified for this node for various
                                   # purposes such as running ndb_restore)

Once you have made the necessary changes, save the file.

Step 2: Restart the management server.
Restarting the cluster management server requires that you issue separate commands to stop the management server and then to start it again, as follows:
1. Stop the management server using the management client STOP command, as shown here:

ndb_mgm> 1 stop
Node 1 has shutdown.
Disconnecting to allow Management Server to shutdown

ndb_mgm>

2. Because shutting down the management server causes the management client to terminate as well, you must start the management server from the system shell. For simplicity, we assume that config.ini is in the same directory as the management server binary, but in practice you must supply the correct path to the configuration file. You must also supply the --reload or --initial option so that the management server reads the new configuration from the file rather than from its configuration cache. If your shell's current directory is the same as the directory where the management server binary is located, you can invoke the management server as shown here:

ndb_mgmd -f  config.ini --reload --configdir=/var/lib/mysql-cluster

If your shell's current directory is not the same, you can invoke the management server with the full path to the configuration file, as shown here:

[root@mgmd /]# ndb_mgmd -f /var/lib/mysql-cluster/config.ini --reload --configdir=/var/lib/mysql-cluster
MySQL Cluster Management Server mysql-5.7.48 ndb-7.5.36

If you check the output of SHOW in the management client after restarting the management server, you should now see something like this:

[root@mgmd /]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=2    @10.10.10.104  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0, *)
id=3    @10.10.10.105  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0)
id=5 (not connected, accepting connect from 10.10.10.106)
id=6 (not connected, accepting connect from 10.10.10.107)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.10.102  (mysql-5.7.48 ndb-7.5.36)

[mysqld(API)]   1 node(s)
id=7 (not connected, accepting connect from 10.10.10.103)

Step 3: Perform a rolling restart of the existing data nodes.
This step can be accomplished entirely within the cluster management client using the RESTART command.

In this setup, however, the SQL node needed to be shut down before the data nodes were restarted; otherwise, restarting a data node failed with the following error:

ndb_mgm> 2 restart
Node 2: Node shutdown initiated
Node 2: Node shutdown completed, restarting, no start.
Node 2 is being restarted

ndb_mgm> Node 2: Start initiated (version 7.5.36)
Node 2: Forced node shutdown completed. Occured during startphase 5. Caused by error 2303: 'System error, node killed during  node restart by other node(Internal error, programming error or missing error message, please report a bug). Temporary error,  restart node'.

The data node log shows:

2025-07-01 19:27:47 [ndbd] INFO     -- Node 2 killed this node because it could not copy a subscription during node restart.  Copy subscrip
2025-07-01 19:27:47 [ndbd] INFO     -- NDBCNTR (Line: 303) 0x00000002
2025-07-01 19:27:47 [ndbd] INFO     -- Error handler shutting down system
2025-07-01 19:27:47 [ndbd] INFO     -- Error handler shutdown completed - exiting
2025-07-01 19:27:47 [ndbd] ALERT    -- Node 2: Forced node shutdown completed. Occured during startphase 5. Caused by error  2303: 'System error, node killed during node restart by other node(Internal error, programming error or missing error message,  please report a bug). Temporary error, restart node'.

The message "Node 2 killed this node because it could not copy a subscription during node restart." shows that the failure occurred while copying a subscription, which happened because the SQL node was still running.

Shut down the SQL node:

[root@mysqld data]# service mysql.server stop
Shutting down MySQL..... SUCCESS!

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=2    @10.10.10.104  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0)
id=3    @10.10.10.105  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0, *)
id=5 (not connected, accepting connect from 10.10.10.106)
id=6 (not connected, accepting connect from 10.10.10.107)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.10.102  (mysql-5.7.48 ndb-7.5.36)

[mysqld(API)]   1 node(s)
id=7 (not connected, accepting connect from 10.10.10.103)

Restart the data nodes:

ndb_mgm> 2 restart
Node 2: Node shutdown initiated
Node 2: Node shutdown completed, restarting, no start.
Node 2 is being restarted

ndb_mgm> Node 2: Start initiated (version 7.5.36)
Node 2: Started (version 7.5.36)

ndb_mgm> 3 restart
Node 3: Node shutdown initiated
Node 3: Node shutdown completed, restarting, no start.
Node 3 is being restarted

ndb_mgm> Node 3: Start initiated (version 7.5.36)
Node 3: Started (version 7.5.36)

ndb_mgm>

Step 4: Perform a rolling restart of all cluster API nodes.

[root@mysqld data]# service mysql.server restart
 ERROR! MySQL server PID file could not be found!
Starting MySQL.. SUCCESS!
[root@mysqld data]#

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=2    @10.10.10.104  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0, *)
id=3    @10.10.10.105  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0)
id=5 (not connected, accepting connect from 10.10.10.106)
id=6 (not connected, accepting connect from 10.10.10.107)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.10.102  (mysql-5.7.48 ndb-7.5.36)

[mysqld(API)]   1 node(s)
id=7    @10.10.10.103  (mysql-5.7.48 ndb-7.5.36)

Step 5: Perform an initial start of the new data nodes.
From a system shell on each of the hosts for the new data nodes, start the data node using the --initial option, as shown here:

[root@ndbdc /]# ndbd -c 10.10.10.102 --initial
2025-07-01 19:49:26 [ndbd] INFO     -- Angel connected to '10.10.10.102:1186'
2025-07-01 19:49:26 [ndbd] INFO     -- Angel allocated nodeid: 5

[root@ndbdd /]#  ndbd -c 10.10.10.102 --initial
2025-07-01 19:49:34 [ndbd] INFO     -- Angel connected to '10.10.10.102:1186'
2025-07-01 19:49:34 [ndbd] INFO     -- Angel allocated nodeid: 6

Unlike the case with restarting the existing data nodes, you can start the new data nodes concurrently; you do not have to wait for one to finish starting before starting the other.

Wait until both of the new data nodes have started before proceeding to the next step. Once the new data nodes have started, you can see in the output of the management client's SHOW command that they do not yet belong to any node group:

ndb_mgm> Node 5: Started (version 7.5.36)
Node 6: Started (version 7.5.36)

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=2    @10.10.10.104  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0, *)
id=3    @10.10.10.105  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0)
id=5    @10.10.10.106  (mysql-5.7.48 ndb-7.5.36, no nodegroup)
id=6    @10.10.10.107  (mysql-5.7.48 ndb-7.5.36, no nodegroup)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.10.102  (mysql-5.7.48 ndb-7.5.36)

[mysqld(API)]   1 node(s)
id=7    @10.10.10.103  (mysql-5.7.48 ndb-7.5.36)

Step 6: Create a new node group.
You do this by issuing a CREATE NODEGROUP command in the cluster management client. This command takes as its argument a comma-separated list of the node IDs of the data nodes to be included in the new node group, as shown here:

ndb_mgm> CREATE NODEGROUP 5,6
Nodegroup 1 created

By issuing SHOW again, you can verify that data nodes 5 and 6 have joined the new node group:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=2    @10.10.10.104  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0, *)
id=3    @10.10.10.105  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 0)
id=5    @10.10.10.106  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 1)
id=6    @10.10.10.107  (mysql-5.7.48 ndb-7.5.36, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.10.102  (mysql-5.7.48 ndb-7.5.36)

[mysqld(API)]   1 node(s)
id=7    @10.10.10.103  (mysql-5.7.48 ndb-7.5.36)

Step 7: Redistribute cluster data.
When a node group is created, existing data and indexes are not automatically distributed to the new node group's data nodes; you can see this by issuing the appropriate REPORT command in the management client:

ndb_mgm> ALL REPORT MEMORY
Node 2: Data usage is 0%(173 32K pages of total 32768)
Node 2: Index usage is 0%(80 8K pages of total 32800)
Node 3: Data usage is 0%(173 32K pages of total 32768)
Node 3: Index usage is 0%(80 8K pages of total 32800)
Node 5: Data usage is 0%(17 32K pages of total 32768)
Node 5: Index usage is 0%(0 8K pages of total 32800)
Node 6: Data usage is 0%(17 32K pages of total 32768)
Node 6: Index usage is 0%(0 8K pages of total 32800)

The data can be redistributed among all of the data nodes by performing, for each NDB table, an ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION statement (a query for generating these statements in bulk is sketched after the table list below).

mysql> SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE ENGINE = 'NDBCLUSTER';
+--------------+-----------------------+
| TABLE_SCHEMA | TABLE_NAME            |
+--------------+-----------------------+
| jycs         | btest                 |
| jycs         | city                  |
| jycs         | country               |
| jycs         | countrylanguage       |
| jycs         | ctest                 |
| jycs         | example               |
| jycs         | fish                  |
| mysql        | ndb_apply_status      |
| mysql        | ndb_index_stat_head   |
| mysql        | ndb_index_stat_sample |
| world        | city                  |
| world        | country               |
| world        | countrylanguage       |
+--------------+-----------------------+
13 rows in set (0.02 sec)
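
As a convenience (this is not part of the documented procedure), the ALTER TABLE statements can also be generated from the same INFORMATION_SCHEMA query instead of being typed one by one; a minimal sketch:

mysql> SELECT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME,
    ->               ' ALGORITHM=INPLACE, REORGANIZE PARTITION;') AS stmt
    ->        FROM INFORMATION_SCHEMA.TABLES
    ->        WHERE ENGINE = 'NDBCLUSTER';

In this example, each statement is simply issued by hand, as shown here: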

mysql> ALTER TABLE jycs.btest ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (21.38 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE jycs.city ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (8.68 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE jycs.country ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (8.50 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE jycs.countrylanguage ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (8.56 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE jycs.ctest ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (6.97 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE jycs.example ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (12.81 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE jycs.fish ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (15.82 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE mysql.ndb_apply_status ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (4.68 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE mysql.ndb_index_stat_head ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (5.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE mysql.ndb_index_stat_sample ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (6.23 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE world.city ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (7.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE world.country ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (8.48 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE world.countrylanguage ALGORITHM=INPLACE, REORGANIZE PARTITION;
Query OK, 0 rows affected (7.98 sec)
Records: 0  Duplicates: 0  Warnings: 0


Running ALL REPORT MEMORY again at this point shows that data and indexes are now also stored on the new nodes:

ndb_mgm> ALL REPORT MEMORY
Node 2: Data usage is 0%(161 32K pages of total 32768)
Node 2: Index usage is 0%(76 8K pages of total 32800)
Node 3: Data usage is 0%(161 32K pages of total 32768)
Node 3: Index usage is 0%(76 8K pages of total 32800)
Node 5: Data usage is 0%(113 32K pages of total 32768)
Node 5: Index usage is 0%(58 8K pages of total 32800)
Node 6: Data usage is 0%(113 32K pages of total 32768)
Node 6: Index usage is 0%(58 8K pages of total 32800)

In addition, for each table, the ALTER TABLE statement should be followed by an OPTIMIZE TABLE statement to reclaim wasted space:

mysql> OPTIMIZE TABLE jycs.btest ;
+------------+----------+----------+----------+
| Table      | Op       | Msg_type | Msg_text |
+------------+----------+----------+----------+
| jycs.btest | optimize | status   | OK       |
+------------+----------+----------+----------+
1 row in set (0.06 sec)

mysql> OPTIMIZE TABLE jycs.city;
+-----------+----------+----------+----------+
| Table     | Op       | Msg_type | Msg_text |
+-----------+----------+----------+----------+
| jycs.city | optimize | status   | OK       |
+-----------+----------+----------+----------+
1 row in set (0.01 sec)

mysql> OPTIMIZE TABLE jycs.country;
+--------------+----------+----------+----------+
| Table        | Op       | Msg_type | Msg_text |
+--------------+----------+----------+----------+
| jycs.country | optimize | status   | OK       |
+--------------+----------+----------+----------+
1 row in set (0.00 sec)

mysql> OPTIMIZE TABLE jycs.countrylanguage;
+----------------------+----------+----------+----------+
| Table                | Op       | Msg_type | Msg_text |
+----------------------+----------+----------+----------+
| jycs.countrylanguage | optimize | status   | OK       |
+----------------------+----------+----------+----------+
1 row in set (0.00 sec)

mysql> OPTIMIZE TABLE jycs.ctest;
+------------+----------+----------+----------+
| Table      | Op       | Msg_type | Msg_text |
+------------+----------+----------+----------+
| jycs.ctest | optimize | status   | OK       |
+------------+----------+----------+----------+
1 row in set (0.00 sec)

mysql> OPTIMIZE TABLE jycs.example;
+--------------+----------+----------+----------+
| Table        | Op       | Msg_type | Msg_text |
+--------------+----------+----------+----------+
| jycs.example | optimize | status   | OK       |
+--------------+----------+----------+----------+
1 row in set (0.01 sec)

mysql> OPTIMIZE TABLE jycs.fish;
+-----------+----------+----------+----------+
| Table     | Op       | Msg_type | Msg_text |
+-----------+----------+----------+----------+
| jycs.fish | optimize | status   | OK       |
+-----------+----------+----------+----------+
1 row in set (0.07 sec)

mysql> OPTIMIZE TABLE mysql.ndb_apply_status;
+------------------------+----------+----------+----------+
| Table                  | Op       | Msg_type | Msg_text |
+------------------------+----------+----------+----------+
| mysql.ndb_apply_status | optimize | status   | OK       |
+------------------------+----------+----------+----------+
1 row in set (0.01 sec)

mysql> OPTIMIZE TABLE mysql.ndb_index_stat_head;
+---------------------------+----------+----------+----------+
| Table                     | Op       | Msg_type | Msg_text |
+---------------------------+----------+----------+----------+
| mysql.ndb_index_stat_head | optimize | status   | OK       |
+---------------------------+----------+----------+----------+
1 row in set (0.00 sec)

mysql> OPTIMIZE TABLE mysql.ndb_index_stat_sample;
+-----------------------------+----------+----------+----------+
| Table                       | Op       | Msg_type | Msg_text |
+-----------------------------+----------+----------+----------+
| mysql.ndb_index_stat_sample | optimize | status   | OK       |
+-----------------------------+----------+----------+----------+
1 row in set (0.01 sec)

mysql> OPTIMIZE TABLE world.city;
+------------+----------+----------+----------+
| Table      | Op       | Msg_type | Msg_text |
+------------+----------+----------+----------+
| world.city | optimize | status   | OK       |
+------------+----------+----------+----------+
1 row in set (0.01 sec)

mysql> OPTIMIZE TABLE world.country;
+---------------+----------+----------+----------+
| Table         | Op       | Msg_type | Msg_text |
+---------------+----------+----------+----------+
| world.country | optimize | status   | OK       |
+---------------+----------+----------+----------+
1 row in set (0.01 sec)

mysql> OPTIMIZE TABLE world.countrylanguage;
+-----------------------+----------+----------+----------+
| Table                 | Op       | Msg_type | Msg_text |
+-----------------------+----------+----------+----------+
| world.countrylanguage | optimize | status   | OK       |
+-----------------------+----------+----------+----------+
1 row in set (0.01 sec)

After these statements have been executed, you can see from the output of ALL REPORT MEMORY that the data and indexes are now redistributed among all cluster data nodes, as shown here:

ndb_mgm> ALL REPORT MEMORY
Node 2: Data usage is 0%(161 32K pages of total 32768)
Node 2: Index usage is 0%(76 8K pages of total 32800)
Node 3: Data usage is 0%(161 32K pages of total 32768)
Node 3: Index usage is 0%(76 8K pages of total 32800)
Node 5: Data usage is 0%(113 32K pages of total 32768)
Node 5: Index usage is 0%(58 8K pages of total 32800)
Node 6: Data usage is 0%(113 32K pages of total 32768)
Node 6: Index usage is 0%(58 8K pages of total 32800)

Because only one DDL operation on NDBCLUSTER tables can be executed at a time, you must wait for each ALTER TABLE ... REORGANIZE PARTITION statement to finish before issuing the next one.

It is not necessary to issue ALTER TABLE ... REORGANIZE PARTITION statements for NDBCLUSTER tables created after the new data nodes have been added; data added to such tables is distributed among all data nodes automatically. However, in NDBCLUSTER tables that existed prior to the addition of the new nodes, neither existing nor new data is distributed using the new nodes until these tables have been reorganized with ALTER TABLE ... REORGANIZE PARTITION.
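
As a minimal sketch (the table name t_new is hypothetical), a table created at this point needs no reorganization; rows inserted into it are spread across both node groups from the start:

mysql> CREATE TABLE jycs.t_new (
    ->     id INT NOT NULL PRIMARY KEY,
    ->     val VARCHAR(32)
    -> ) ENGINE=NDBCLUSTER;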
