Any suggestions on how to do this? I know the container is exported via NFS to ESXi. Could that be used? Would it be able to leverage Stargate for access from anywhere? All I really need is a globally available volume shared across all of my nodes.

Best answer by Jon

I've moved your post from the CE forums to our production product forums.

In general, for Hadoop on Nutanix, I'd recommend checking out these assets, which you can cherry-pick from:

https://portal.nutanix.com/#/page/solutions/details?targetId=RA-2078-Cloudera-with-Nutanix:RA-2078-Cloudera-with-Nutanix
https://portal.nutanix.com/#/page/solutions/details?targetId=RA-2030_Hadoop_with_AHV:RA-2030_Hadoop_with_AHV

We don't specifically have a Spark on Nutanix guide out yet; however, those two are rich with content for the type of solution you might want to roll out.

That said, you are correct that HDFS is, in general, designed for non-redundant storage (like bare metal), so it already has a lot of the same data-protection constructs that Nutanix does. It is worth noting that you can (or should be able to) configure the number of replication copies in Hadoop itself, so that you don't end up with many copies in Hadoop stacked on top of many copies on Nutanix. That's generally where "the rub" comes from when we discuss this with customers.

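The cluster-wide knob for that is dfs.replication in hdfs-site.xml. Purely as a sketch of what it can look like from the Spark side (the app name, namenode hostname, port, and path below are placeholders, and "spark.hadoop.*" options are simply forwarded into the job's Hadoop Configuration), you could lower the HDFS-level copy count for data a Spark job writes:

from pyspark.sql import SparkSession

# Sketch only: lower HDFS-level replication for data written by this job,
# so Hadoop copies are not stacked on top of the copies Nutanix already keeps.
spark = (
    SparkSession.builder
    .appName("hdfs-replication-sketch")
    .config("spark.hadoop.dfs.replication", "2")  # copies HDFS keeps per block (default is 3)
    .getOrCreate()
)

# Placeholder namenode host/port and output path.
df = spark.range(1000)
df.write.mode("overwrite").parquet("hdfs://namenode-host:8020/tmp/replication_demo")

How far you can dial the HDFS-side copies down depends on how your Hadoop distribution handles node failures, which is exactly the trade-off discussed above.
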
That said, we've got customers running Hadoop RF2 on top of Nutanix RF2 (as in the Cloudera case) and it works just fine; it just imposes a bit of overhead, since each block effectively ends up with four physical copies.

To be clear though, you can't expose HDFS directly from Stargate, so you'd always have something like a Hadoop DataNode (or DataNodes, plural) sitting between Nutanix and Spark.

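For completeness, here's what that access path looks like from Spark's point of view (a minimal sketch; the namenode hostname, port, path, and column name are placeholders for your cluster): Spark just addresses an hdfs:// URI and the NameNode/DataNodes handle the I/O, with Nutanix providing the storage underneath the DataNode VMs.

from pyspark.sql import SparkSession

# Sketch only: Spark reads through the HDFS NameNode/DataNode layer;
# it never talks to Nutanix/Stargate storage directly.
spark = SparkSession.builder.appName("hdfs-read-sketch").getOrCreate()

events = spark.read.parquet("hdfs://namenode-host:8020/data/events")  # placeholder path
events.groupBy("event_type").count().show()  # placeholder column name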