
Solved

Storage node architecture


Userlevel 1
Badge +2

Can a Nutanix storage node hold more secondary VM copies than there is existing capacity for on the cluster nodes' disks? If so, how can the storage node be brought down for maintenance?


Best answer by David, January 28, 2021, 20:57



This topic has been closed for comments

8 replies

Userlevel 6
Badge +5

Hi @Thegman

From the Prism Web Console Guide, Storage-only node configuration:

Ensure that the cluster has sufficient capacity in case a storage-only node fails.

Your cluster must have sufficient capacity to handle upgrade operations, during which nodes are restarted.

The minimum number of storage-only nodes in the cluster must be equal to the cluster's replication factor (RF). For example, if the cluster RF is two, add at least two storage-only nodes. If the cluster RF is three, add at least three storage-only nodes.

Assuming you mean storage-only and not storage-heavy nodes: a storage-only node does not store multiple copies of the same data, as that would do nothing for resiliency. Hence the recommendation to have the number of storage-only nodes equal to the RF. That way each storage-only node can hold a copy of the data.

The maintenance procedure for a storage-only node is no different from a regular node, except for the fact that there are no VMs to migrate off of it.
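To make the RF rule above concrete, here is a minimal Python sketch (my illustration, not Nutanix tooling): each of an extent's RF replicas must sit on a distinct node, so you need at least RF storage-only nodes for all copies to fit on them.

```python
# Minimal sketch (not Nutanix tooling) of the storage-only node rule:
# each of an extent's RF replicas must land on a distinct node, so at
# least RF storage-only nodes are needed to hold all copies.

def place_extent(rf: int, storage_only_nodes: list[str]) -> list[str]:
    """Pick RF distinct storage-only nodes to hold one extent's replicas."""
    if len(storage_only_nodes) < rf:
        raise ValueError(
            f"need >= {rf} storage-only nodes for RF={rf}, "
            f"have {len(storage_only_nodes)}"
        )
    return storage_only_nodes[:rf]

print(place_extent(2, ["so-1", "so-2"]))  # ['so-1', 'so-2']

try:
    place_extent(3, ["so-1", "so-2"])     # RF=3 with only two nodes
except ValueError as err:
    print(err)                            # cannot fully protect the extent
```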

Userlevel 1
Badge +2

Hi @Alona

I did mean storage-heavy, not storage-only, but now that you've brought up that option…

A Nutanix cluster running VMs with HA writes data twice, so correct me if I'm wrong, but isn't the purpose of a storage node to house more of those secondary copies? So what I'm looking for is an understanding of how to bring down the storage-heavy node, or the storage-only node, for maintenance, when there isn't enough local disk capacity within the cluster to house the secondary copies stored on the storage node.

For storage-only nodes you state that you wouldn't want just a single instance, so that makes sense to a degree, especially if you have some VMs with a very large disk requirement that would eat up all the storage on a single node; after all, the cluster requires two copies of these VMs. What I'm looking for is the "how does this work"…

What I want to understand is whether all VMs in the cluster retain HA capability during a maintenance event. I assume that when a storage node is brought down, data needs to be shifted around. My question is how that threshold is tracked if there isn't enough capacity on the remaining local disks to write this data. It seems to me that in that scenario some VMs might need to take some downtime.
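To frame the question, here is a back-of-the-envelope Python check with made-up numbers (not Nutanix's actual admission logic, which Curator and Stargate track internally): can the free space on the remaining nodes absorb the replicas resident on the node being taken down?

```python
# Back-of-the-envelope check with hypothetical numbers (not Nutanix's
# internal accounting): can the remaining nodes absorb the replicas that
# live on a storage-heavy node you want to take down for maintenance?

def can_absorb_node(node_used_tib: float,
                    other_nodes_free_tib: list[float]) -> bool:
    """True if the free space on the remaining nodes can hold the data
    currently resident on the departing node, so second copies can be
    re-established elsewhere. Aggregate free space is optimistic: each
    replica must also land on a node that doesn't already hold a copy."""
    return sum(other_nodes_free_tib) >= node_used_tib

# Example: the storage-heavy node holds 40 TiB of replicas; the other
# four nodes have 8 TiB free each, 32 TiB total, which is not enough.
print(can_absorb_node(40.0, [8.0, 8.0, 8.0, 8.0]))  # False
```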

Userlevel 3
Badge +5

I believe, and you might double-check this with Support/Sales Engineers: in the situation you describe, the VMs would continue to function/run (no outage of them executing/reading/writing to disk), however the cluster's resiliency would stay Critical (it doesn't have enough pool capacity to write second copies) until the storage-heavy node is back up (i.e. it couldn't lose any additional disks/nodes/blocks).

This is one of the reasons why storage-heavy nodes need to be added to a cluster in pairs, so storage imbalances during HA events don't end up causing "sufficient_disk_space_check" warnings.

Note that the number of storage-heavy nodes may need to be higher if you're using RF=3.

https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0600000008gVyCAI
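As a practical follow-up, here is a rough sketch of polling the cluster's redundancy state around a maintenance window. It assumes the Prism Element v2 REST endpoint /PrismGateway/services/rest/v2.0/cluster and a cluster_redundancy_state field in the response; the address and credentials are placeholders, and field names can vary by AOS version, so verify against your own Prism before relying on this.

```python
# Sketch: poll cluster redundancy state before/after maintenance.
# Assumes the Prism Element v2 REST endpoint shown below; verify the
# endpoint and response fields against your AOS version.

import requests

PRISM = "https://prism.example.local:9440"  # hypothetical address

def redundancy_state(session: requests.Session) -> dict:
    resp = session.get(
        f"{PRISM}/PrismGateway/services/rest/v2.0/cluster",
        verify=False,  # lab convenience only; use proper certs in prod
    )
    resp.raise_for_status()
    # Field name is an assumption; inspect resp.json() on your cluster.
    return resp.json().get("cluster_redundancy_state", {})

with requests.Session() as s:
    s.auth = ("admin", "password")  # placeholder credentials
    print(redundancy_state(s))
```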

Userlevel 1
Badge +2

Thanks David, that makes sense: why take it down completely when it can still run in Critical mode… you just need to get the work done in time. :)

So a good architecture utilizing storage-heavy nodes would include a JBOD that allows more than one server to attach, with the disks connected and distributed appropriately between two storage-heavy nodes.

Userlevel 3
Badge +5

I'd strongly recommend you check out and use Nutanix Sizer to help right-size workloads like this… it will help illustrate some of these nuances.


Given the amount of re-replication, and depending on the volume of data in question, you may want to separate CVM/storage traffic from guest VM traffic and use 10Gb+ NICs in the nodes… to help Curator with its resiliency scans.

For some background reading:

https://nutanixbible.com/ (the "Awareness Conditions and Tolerance" section)

https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0600000008I1DCAA

https://next.nutanix.com/how-it-works-22/curator-scans-types-and-frequency-and-where-is-my-free-space-33516

Userlevel 1
Badge +2

Hi David, thanks for the reply. As for separating storage from VM traffic: that means another two 10Gb ports per node (now you need four upstream ports per host), and if you do the math on switching costs and data center hardware as you scale HCI, you quickly see that model diverge from a cost perspective. You may avoid having an array in this stack, but the lack of network consolidation, plus having to license each node, catches up quickly as you scale.
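For what it's worth, the cost argument here is easy to model. The toy Python sketch below is mine, and every number in it (port cost, node counts, license fee) is hypothetical; only the shape of the comparison matters.

```python
# Toy cost model for the uplink argument above; all numbers are hypothetical.

def fabric_cost(nodes: int, ports_per_node: int, cost_per_port: float,
                license_per_node: float = 0.0) -> float:
    """Total switch-port plus per-node licensing cost for the fabric."""
    return nodes * (ports_per_node * cost_per_port + license_per_node)

for n in (8, 32, 64):
    shared = fabric_cost(n, 2, 900.0)  # storage and guest traffic share 2 ports
    split = fabric_cost(n, 4, 900.0)   # dedicated storage uplinks: 4 ports/host
    print(f"{n:>2} nodes: shared ${shared:,.0f}  split ${split:,.0f}  "
          f"delta ${split - shared:,.0f}")
```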

Userlevel 3
Badge +5

Oh, I'm well aware of the various vendors' "per-10Gb-port licensing"… You could employ basic VLAN separation of CVM/ESXi from guest VM traffic and use other methods to "tier" your guest VM traffic… I doubt many guest OSes can saturate a 10Gb link for long, so "sharing" storage and guest VM traffic on host uplinks is a small price to pay. (This of course totally depends on the types of loads you have; you know those best.)

As far as scaling goes… I don't know of another 3-tier system that scales like HCI, where with each node you add you can choose to scale the cluster's IO/storage or its compute depending on the node type.

I've been there (3-tier systems) and done the math (fan-in/fan-out ratios, bad partitioning ratios, bully VMs, and "managing SSD cache"); honestly… I really don't want to go back.
Don't take my word for it… I'd suggest reading Josh Odgers (http://www.joshodgers.com/).

I'd also recommend getting in touch with a Nutanix sales engineer to nail down your needs and concerns. The details are everything, and considering how many systems have reference architecture documents and are 100% supported on Nutanix, I'm sure there's a solution that fits your budget!

Userlevel 1
Badge +2

Thanks David, I appreciate the answers. To close this out: HCI is great when you need to scale fast, since you can scale compute and storage at the same time without worrying about the storage front end. But that isn't even the best benefit HCI brings; I think it's the up-front automation, the consolidation into a single stack, and the operational efficiency that make it easier to manage. I'll point out a few things from our own infrastructure, which is a hybrid mix of HCI and traditional with centralized arrays:

1. Many organizations don't need to scale and are actually reducing their footprint. HCI doesn't provide the flexibility to reduce a footprint without a bit of a challenge; if you have moved a large number of workloads to the cloud and no longer require the compute power, you can't just pull a couple of nodes away without addressing the reduction in storage that would accompany a compute reduction.

2. We run over 50 hypervisor nodes with about 35 VMs each against a single array and see an average of 0.5 ms latency with very little spiking above 10 ms, but our Nutanix deployments average 2 ms latency with a substantial amount of spiking that far exceeds what our centralized array exhibits, and these specific hosts don't run half the number of VMs per host that our non-HCI clusters do.

I certainly understand the value of what HCI brings to the table, and it is a perfect fit for some organizations, but not so much for others.
