QoS for voice traffic prioritization in Horizon View
Question · Solved · Virtual Desktop Infrastructure

DaemonBehr (Adventurer) · 2015-03-23:

I'm looking at using Cisco IP Communicator on a Horizon View persistent desktop and would like some input on what others suggest for voice traffic prioritization. Some options I see:

1> Saying "It's on 10 GbE, it's fine"

2> Using a separate network (port group and VLAN) specifically for voice on a VSS with 1 GbE uplinks

3> Using a separate network (port group and VLAN) specifically for voice on a VDS with 1 GbE uplinks pinned to the port group

4> Using a separate network (port group and VLAN) specifically for voice on a VDS with 10 GbE uplinks

5> Using a separate network (port group and VLAN) specifically for voice on a VDS with 10 GbE uplinks, plus traffic prioritization on the ToR switches for the voice VLAN using a Differentiated Services Code Point (DSCP) audio priority.

Any thoughts, community?

DaemonBehr (Adventurer) · 2015-03-27 (reply):

I'm actually revisiting this now, as using the 1 GbE ports increases port utilization greatly and reduces the scalability of nodes per switch.

I may have to go back to 2 x 10 GbE uplinks on a VDS (with LBT and NIOC) and hope that there is enough bandwidth to thwart any contention.

Cisco Jabber with VXME is a great option, but the client doesn't want to leave IP Communicator yet.
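Option 5 depends on the voice traffic actually carrying a DSCP marking that the ToR switches can match on. As an illustrative sketch (not from the thread), a UDP sender on Linux can tag its packets with DSCP EF (46), the standard expedited-forwarding class for voice, via the `IP_TOS` socket option:

```python
import socket

# DSCP EF (Expedited Forwarding) = 46. The TOS byte carries the DSCP in
# its upper six bits, so the value passed to IP_TOS is 46 << 2 = 0xB8.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Verify the marking took effect before sending any RTP-like traffic.
assert sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == 0xB8
sock.close()
```

Note that markings set inside the guest only matter if the virtual and physical switches trust them; with option 5 the trust boundary and queuing policy live on the ToR switches.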
Introducing: vGPU on vSphere, installation and troubleshooting tips
Virtual Desktop Infrastructure

aluciani (Chevalier) · 2015-03-26:

Over the last couple of days at Nutanix I've been involved in testing the new vGPU features of vSphere 6 in combination with the new NVIDIA GRID drivers, so that vGPU is also available for desktops delivered via Horizon 6 on vSphere 6.

During this initial phase I worked together with Martijn Bosschaart to get the installation covered, and after an evening of configuring and troubleshooting I thought it would be a good idea to write a blog post on this upcoming feature.

What is vGPU and why do I need it?

vGPU profiles deliver dedicated graphics memory by using the vGPU Manager to assign the configured memory to each desktop. Each VDI instance gets a predetermined amount of resources based on its needs, or better yet, based on the needs of its applications.

By using the vGPU profiles from the vGPU Manager you can share each physical GPU. For example, an NVIDIA GRID K1 card has up to 4 physical GPUs, each of which can host up to 8 users, resulting in 32 vGPU-enabled desktops per K1 card.

Next to the NVIDIA GRID K1 card there's the K2 card, which has 2 high-end Kepler GPUs instead of 4 entry-level Kepler GPUs, but delivers up to 3072 CUDA cores compared to the K1's 768.

vGPU can also deliver tiered performance: a vGPU profile can be configured per VM in such a way that usability is balanced between performance and scalability.
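The per-card density figures above can be checked with a little arithmetic. A minimal sketch; the framebuffer sizes below are the commonly published GRID 1.0 values and are assumptions, not taken from this post (which only states 4 GPUs × 8 users = 32 for the K1):

```python
# GRID K1 density arithmetic. The 4-GPUs and 8-users-per-GPU figures come
# from the post; the 4 GB-per-GPU framebuffer is an assumed spec value.
K1_PHYSICAL_GPUS = 4
K1_FRAMEBUFFER_MB_PER_GPU = 4096   # assumed: 4 GB per physical GPU
MAX_VGPUS_PER_PHYSICAL_GPU = 8     # per-GPU user cap stated in the post

def desktops_per_k1(profile_framebuffer_mb: int) -> int:
    """Upper bound on vGPU desktops one K1 card supports for a profile."""
    per_gpu = min(K1_FRAMEBUFFER_MB_PER_GPU // profile_framebuffer_mb,
                  MAX_VGPUS_PER_PHYSICAL_GPU)
    return per_gpu * K1_PHYSICAL_GPUS

print(desktops_per_k1(256))   # 256 MB (K100-style) profile -> 32
print(desktops_per_k1(1024))  # 1 GB (K140Q-style) profile -> 16
```

This is why the lighter profiles scale to more desktops: density is bounded by framebuffer division first, then by the per-GPU user cap.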
When we look at the available profiles, we can see that the less powerful profiles can be delivered to more desktops than the high-powered ones:

[Image from the original blog post: table of vGPU profiles, not reproduced here.]

All GPU profiles with a Q suffix go through the same certification process as the workstation processors, meaning these profiles should perform at least on par with the current NVIDIA workstation parts.

The K100 and K200 profiles are designed for knowledge workers; they deliver less graphical performance but enhance scalability. Typical use cases for these profiles are far more commonplace than you might expect, and with the growing graphical richness of applications, vGPU itself is becoming a commodity. Just look at Office 2013, Flash/HTML, or Windows 7/8.1 (or even 10) with Aero and all the other eye candy that can be enabled; these are all good use cases for the K100/K200 vGPU profiles.

The installation

Our systems rely on the CVM: the Nutanix CVM runs the Nutanix software and serves all of the I/O operations for the hypervisor and all VMs running on that host. For Nutanix units running VMware vSphere, the SCSI controller, which manages the SSD and HDD devices, is passed directly to the CVM using VMDirectPath I/O (Intel VT-d). In the case of Hyper-V, the storage devices are passed through to the CVM. Below is an example of what a typical node logically looks like:

[Image from the original blog post: logical node diagram, not reproduced here.]

It turned out that our CVM played very nicely with the upgrade from vSphere 5.5 to vSphere 6; it worked exactly as planned (don't you just love a software-defined datacenter?), and I saw the following configuration in our test cluster:

[Image from the original blog post: test cluster configuration, not reproduced here.]

The installation went without any problems, so we could follow the (very detailed) guide to set up the rest of the environment.
Setting up vCenter 6 and Horizon 6.0.1 was fairly easy and well documented, but when we got down to assigning the vGPU profiles to the VM, I could see the profiles, yet starting the VM produced an error message.

Useful commands for troubleshooting

[Image from the original blog post: list of troubleshooting commands, not reproduced here.]

In my case, the block I was testing on had been used for other testing purposes, so when I tried running Xorg it would not start. I checked the vGPU configuration and noticed that the cards were configured for pciPassthru; that was why Xorg wasn't running. To disable pciPassthru: in the vSphere client, select the host and navigate to the Configuration tab > Advanced Settings, click Configure Passthrough in the top left, and deselect at least one device to remove it from being passed through.

After that I had a different error in the logs of my VM:

vmiop_log: error: Initialization: VGX not supported with ECC Enabled.

With some help from Google I found the following explanation: virtual GPU is not currently supported with ECC active. GRID K2 cards ship with ECC disabled by default, but ECC may subsequently have been enabled using nvidia-smi.

Use nvidia-smi to list the status of all GPUs, and check whether ECC is noted as enabled on the GRID K2 GPUs.
Change the ECC status to off on a specific GPU by executing nvidia-smi -i <id> -e 0, where <id> is the index of the GPU as reported by nvidia-smi.

After this change I was able to boot my VM, create a master image, and deploy the Horizon desktops with a vGPU profile via Horizon 6:

https://www.youtube.com/watch?v=UsDry2JY4pg

Note 1: I was performing remote testing with limited bandwidth; as you can see, the desktop still reached up to 66 FPS.

Note 2: Please be aware that although this testing was done on a Nutanix-powered platform, vSphere 6 is not supported by Nutanix at this moment. Support will follow as soon as possible, but be aware of this.

This is a repost from the blog My Virtual Vision by Kees Baggerman.
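The ECC check described above can be scripted. A minimal sketch: the query fields are standard nvidia-smi options, but the sample output string below is illustrative, not captured from a real K2 host:

```python
def gpus_with_ecc_enabled(csv_output: str) -> list:
    """Parse the output of
    `nvidia-smi --query-gpu=index,ecc.mode.current --format=csv,noheader`
    and return the indices of GPUs that currently have ECC enabled."""
    enabled = []
    for line in csv_output.strip().splitlines():
        index, mode = (field.strip() for field in line.split(","))
        if mode == "Enabled":
            enabled.append(int(index))
    return enabled

# Illustrative sample output (hypothetical four-GPU host):
sample = "0, Enabled\n1, Disabled\n2, Enabled\n3, Disabled\n"
for gpu in gpus_with_ecc_enabled(sample):
    # The fix from the post: disable ECC on that GPU (host reboot needed
    # for the new ECC mode to take effect).
    print(f"nvidia-smi -i {gpu} -e 0")
```

On a real host you would feed the function the live command output instead of the sample string.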