I recently used LCM to update firmware on a 9-node cluster. One of the hosts never came back. I was able to boot it back into the host OS, but it had no network connectivity. The interfaces were up and had IP addresses assigned; there was just no traffic passing over the bridge.
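For anyone who hits the same symptom (links up, IPs assigned, nothing crossing the bridge), a few read-only checks usually narrow down where an AHV/OVS bridge is dropping traffic. Treat the exact manage_ovs options as assumptions, since they vary a bit between AOS/AHV versions:

# On the affected AHV host:
ovs-vsctl show               # bridge/bond/port layout as OVS sees it
ovs-appctl bond/show         # bond state and which uplink is currently active

# From that host's CVM, if it is reachable:
manage_ovs show_interfaces   # physical NIC link status
manage_ovs show_uplinks      # bridge uplink / bond configuration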
After fighting with it for a while, I decided to eject the node from the cluster and just rebuild it as a new node. I started the removal process, but the node status is stuck at marked_for_removal_but_not_detachable:
nutanix@NTNX-
    Id                        : 0005adca-5f30-1bf1-0000-00000000008d15::21
    Uuid                      : 705089f7-2435-4a35-83fe-603470bd36d6
    Name                      : 10.1.153.15
    IPMI Address              : 10.1.151.249
    Controller VM Address     : 10.1.153.34
    Controller VM NAT Address :
    Controller VM NAT Port    :
    Hypervisor Address        : 10.1.153.15
    Host Status               : marked_for_removal_but_not_detachable
    Oplog Disk Size           : 400 GiB (429,496,729,600 bytes) (2.4%)
    Under Maintenance Mode    : false (LIFE_CYCLE_MANAGEMENT)
    Metadata store status     : Node removed from the metadata store
    Node Position             : Node physical position can't be displayed for this model. Please refer to Prism UI for this information.
    Node Serial (UUID)        :
    Block Serial (Model)      : 0123456789 (NX-8035-G4)
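For reference, the removal state shown above can also be watched from any CVM with a couple of read-only checks; a minimal sketch, assuming ncli host get-remove-status is available in your AOS build:

$ ncli host get-remove-status              # progress/state of any in-flight host removal
$ zeus_config_printer | grep node_status   # one line per node: kNormal, kToBeRemoved, ...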
I followed the "AHV | Node removal stuck after successfully entering maintenance mode" article, but it only confirmed that the node status is indeed still marked_for_removal_but_not_detachable and that the host has already been removed from the metadata ring:
nutanix@NTNX-
Address          Status  State    Load       Owns     Token
                                                      t6t6t6t6bp8xooghfmtbxpfevor1fjqbrz0zwdw7lumrx9gua7ghbpinihff
10.1.153.146     Up      Normal   805.96 MB  11.11%   00000000YYJS51SZT1LE5UJXJWRV7DDGQ59Q9OXIRILNAZ33W8UXUAIII1J6
10.1.153.32      Up      Normal   789.13 MB  11.11%   6t6t6t6t4ihbhbgwptcld3zm1gcapkbfog6wvvxtah1ktljbpt6ovs0onkmrf
10.1.153.31      Up      Normal   894.45 MB  11.11%   DmDmDmDm0000000000000000000000000000000000000000000000000000
10.1.153.35      Up      Normal   818.87 MB  11.11%   KFKFKFKFYFHJTJHIDVV6IQHMWIVFVXXFA5BDGRQRAVJJ777TPMETARIGOLY1
10.1.153.37      Up      Normal   1.45 GB    22.22%   Yryryryrz0qud39su9u4hcgmol0voi5vidgihhwjwrewcvnbxsfunj0uh
10.1.153.33      Up      Normal   1.41 GB    11.11%   FKFKFKFK2TH6RDGNNXUNDUZUM3TGRZWPBCU4THWX48E7URXBW6PKMYZS1X4
10.1.153.30      Up      Normal   1.5 GB     11.11%   MdMdMdMd0000000000000000000000000000000000000000000000000000
10.1.153.36      Up      Normal   812.69 MB  11.11%   t6t6t6t6bp8xooghfmtbxpfevor1fjqbrz0zwdw7lumrx9gua7ghbpinihff
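(The removed host's CVM, 10.1.153.34, is indeed absent from the listing above; only the other eight CVMs still hold tokens. The listing is the Cassandra ring as reported from a CVM, and the usual invocation is:)

$ nodetool -h 0 ring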
It has been sitting in this state ever since. I also tried manually kicking off the node removal with:

ncli host remove-start id=21 skip-space-check=true

It reported that the node removal was started successfully, but as far as I can tell nothing has changed. Any help is greatly appreciated.
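One way to tell whether that remove-start actually queued a task (rather than silently doing nothing) is to check the task list from a CVM; a rough sketch, assuming the Ergon CLI (ecli) is present on your AOS version:

$ ecli task.list                    # a node/host removal task should appear here if one was created
$ ncli host list | grep -i status   # watch whether the host status ever changes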
Best answer by Matthearn:
The article I found carried this warning:

"In Production: Incorrect use of the edit-zeus command could lead to data loss or other cluster complications and should not be used unless under the guidance of Nutanix Support."

It described using "edit-zeus" to manually set the disk-removal status, but the actual code that it specified (changing 17 to 273) may be out of date. I was seeing codes like 4369, 4096, and 4113. Disks that were not scheduled to be removed were all status 0:

$ zeus_config_printer | grep data_migration_status
data_migration_status: 0
data_migration_status: 0
data_migration_status: 0
data_migration_status: 0
data_migration_status: 0
data_migration_status: 0
data_migration_status: 0
data_migration_status: 0
data_migration_status: 4113
data_migration_status: 4369
data_migration_status: 4096
data_migration_status: 4113
data_migration_status: 4369
data_migration_status: 4113
data_migration_status: 4369
data_migration_status: 4113
data_migration_status: 0
data_migration_status: 0
data_migration_status: 0
data_migration_status: 0
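A bare grep like this doesn't show which disk each status line belongs to. To line the non-zero values up with the host being removed, it can help to pull a couple of neighbouring fields as well; a rough example, where field names such as disk_id and service_vm_id are assumptions based on typical zeus config output:

$ zeus_config_printer | grep -E 'disk_id|service_vm_id|data_migration_status'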
The edit-zeus command essentially allows you to replace those values, but it's smart enough to discard your changes if you pick values that don't work. I tried 273 and it wouldn't save it. I tried setting them to 0, 4096, 4113, etc., and then noticed that if I set one to 4369, it generally stayed that way, and also became grayed out in Prism. So I set them all to 4369. Immediately they all went gray, and the host I was trying to remove began to show up in Prism with only an IP address and no CPU/Disk/Memory statistics. It still wouldn't quite disappear, though.

The next step was to run edit-zeus again and look for the "node_status" of the removing node:

$ zeus_config_printer | grep node_status
node_status: kNormal
node_status: kNormal
node_status: kToBeRemoved
node_status: kNormal

I changed "kToBeRemoved" to "kOkToBeRemoved" and the host immediately became grayed out in Prism. A few minutes later it was gone, and the cluster was back down to 3 nodes, healthy and clean.

Hopefully you can do the same, although if you have a 9-node cluster I'm guessing you're *not* running CE and should probably just call support. :)
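For anyone who lands here later, the sequence described above boils down to something like the sketch below. It is only a sketch: the --editor flag is an assumption, the status codes are the ones that happened to work on that CE cluster, and, as the warning quoted earlier says, edit-zeus should really only be run under Nutanix Support guidance.

# Sketch of the sequence above -- lab/CE clusters only, or with Nutanix Support on the line.
$ edit-zeus --editor=vim        # --editor is an assumption; check the options on your build
#   1. In the disk entries belonging to the stuck host, change
#        data_migration_status: 4113 (or 4096)  ->  data_migration_status: 4369
#      edit-zeus silently discards values it considers invalid when you save.
$ edit-zeus --editor=vim
#   2. In that host's node entry, change
#        node_status: kToBeRemoved  ->  node_status: kOkToBeRemoved
$ zeus_config_printer | grep node_status   # re-check afterwards; the removed host should eventually drop out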