- 7.2TB total SSD flash (2.4TB x 3 nodes)
- 12TB usable HDD (before dedupe/compression)
- 2.6GHz / 20 cores, 6 physical CPUs: 60 cores total
- 768GB total RAM

We are running pure Acropolis.

The monster VM workload is a Progress database, 7TB in size, with 200GB of RAM allocated to it.

We are setting this up as a PoC to migrate off a standalone ESXi host with a Violin all-flash array. Previously, the VM on VMware was only getting 12,000 IOPS.

What should we do to optimize this VM's performance? Multiple virtual SCSI adapters? Also, is there a way to see whether the working set exceeds the node's cache capacity?
Best answer by vcdxnz001
There shouldn't be anything that needs tuning on the Acropolis side of things; there is no need for multiple vSCSI adapters or anything similar with Acropolis. The one thing I would recommend is setting the container to use inline compression (delay 0). This will have an overall performance benefit as well as saving valuable SSD and HDD capacity.
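For reference, a minimal sketch of how that container setting might be applied from a CVM shell. The container name `ctr1` is made up, and ncli parameter names differ between NOS versions, so treat the flags below as assumptions and verify them against ncli's built-in help first:

```sh
# Hypothetical example: turn on inline compression (delay 0) for container "ctr1".
# Parameter names are assumptions; confirm them for your NOS version before running.
ncli container edit name=ctr1 enable-compression=true compression-delay=0
```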
If you want to see the working set and the IO pattern of the VM, you can do that through the Stargate 2009 pages. Browse to http://<CVM_IP>:2009, scroll to one of the vDisks attached to the VM, then click on the link of the vDisk ID. This will display all of the disk characteristics and IO patterns, including working set size over the last 2 minutes and the last hour. By default this info is protected by a firewall, so you can either change the iptables settings or use links on the CVM to view it in text format.
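If you'd rather not touch the firewall, the text-mode route looks like this in practice; 127.0.0.1 is simply the local loopback on the CVM you have SSH'd into:

```sh
# From an SSH session on any CVM, open the local Stargate diagnostics page:
links http://127.0.0.1:2009
# Navigate to the vDisk list, then follow a vDisk ID link to see its
# IO patterns and working set size (last 2 minutes / last hour).
```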
General recommendations would be to run the latest version of NOS, and to use multiple vDisks in the guest OS where it makes sense, such as splitting the data files across different vDisks and putting the log files on another vDisk / mount point. If using Linux, add elevator=noop and iommu=soft to the boot options, and change max_sectors_kb to 1024; a sketch of both changes follows below.
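To make those Linux guest settings concrete, here is a minimal sketch for a RHEL/CentOS-style guest. The device names (sdb, sdc) are illustrative stand-ins for your data vDisks, and the grub file paths depend on your distribution:

```sh
# 1) In /etc/default/grub, append the recommended options to the kernel line:
#      GRUB_CMDLINE_LINUX="... elevator=noop iommu=soft"
#    then regenerate the grub config and reboot for it to take effect:
grub2-mkconfig -o /boot/grub2/grub.cfg

# 2) Raise the maximum I/O transfer size on each data vDisk (sdb/sdc are examples).
#    This sysfs setting does not persist across reboots; add a udev rule or
#    init script to reapply it.
echo 1024 > /sys/block/sdb/queue/max_sectors_kb
echo 1024 > /sys/block/sdc/queue/max_sectors_kb
```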
I look forward to seeing how this goes. What type of workload or test do you intend to run across the database? It's been a long time since I've done anything at all with Progress, so I'd be interested to hear about your experience. If you need any help, my team and I are here, and your local Nutanix team knows how to reach us. We have team members distributed around the globe in different timezones, so someone is always available to assist.
Kind regards,
Michael