k8s Ephemeral Storage

How do I check the total CPU usage?
top -bn 1 -i -c
sar -P 0 -u 1 5
(Note that -P 0 reports processor 0 only; drop the -P 0 to get the average across all CPUs.)
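For a quick scripted figure, here is a minimal sketch that samples the aggregate "cpu" line of /proc/stat twice and derives an approximate busy percentage (it counts only user/nice/system time as busy and ignores iowait, irq and steal):

#!/bin/sh
# Read user/nice/system/idle from the first ("cpu") line of /proc/stat
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
idle=$(( i2 - i1 ))
# Approximate overall utilisation over the 1-second interval
echo "CPU usage: $(( 100 * busy / (busy + idle) ))%"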

I had a similar error. My analysis:

Pods on the same k8s node share the node's ephemeral storage, which (unless configured otherwise) Spark uses to store temporary data for Spark jobs (disk spill and shuffle data). The amount of ephemeral storage on a node is basically the size of the storage available on that k8s node.
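To see how much ephemeral storage a node actually advertises to the scheduler, you can look at its Capacity and Allocatable figures (the node name below is a placeholder):

kubectl describe node <node-name> | grep -E 'Capacity:|Allocatable:|ephemeral-storage'
kubectl get node <node-name> -o jsonpath="{.status.allocatable['ephemeral-storage']}"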

If some executor pods use up all of a node's ephemeral storage, other pods on that node will fail when they try to write to it. In your case the failing pod is the driver pod, but it could have been any other pod on that node. In my case it was an executor that failed with a similar error message.
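If that is what happened, the kubelet normally evicts the offending pods and records why; something like the following can help confirm it (the pod and namespace names are placeholders, and the events must still be within the cluster's retention window):

kubectl get events --all-namespaces --field-selector reason=Evicted
kubectl describe pod <failed-pod> -n <namespace> | grep -i -B1 -A2 ephemeral-storage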

I would try to optimize the Spark code first, before changing the deployment configuration (a spark-submit sketch with the relevant knobs follows the list):

  • reduce disk spill and shuffle writes
  • split transforms if possible
  • and increase the number of executors only as a last resort :)
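As a rough sketch of the knobs involved (the values below are placeholders, not recommendations), shuffle parallelism and memory behaviour are usually steered through spark-submit configs such as:

# Placeholder values; tune for your job
#   spark.sql.shuffle.partitions  - more, smaller shuffle partitions can mean less spill per task
#   spark.memory.fraction         - share of the heap used for execution/storage (default 0.6)
#   spark.executor.instances      - raise only as a last resort
spark-submit \
  --conf spark.sql.shuffle.partitions=400 \
  --conf spark.memory.fraction=0.6 \
  --conf spark.executor.instances=10 \
  <your application jar and usual options>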

If you know upfront the amount of storage each executor needs, you can try setting the resources requests (and not limits) for ephemeral storage to the right amount.
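For Spark on Kubernetes, one way to express that is an executor pod template (a minimal sketch; the file name, container name and the 10Gi figure are assumptions for illustration):

# executor-pod-template.yaml (hypothetical), passed via
#   --conf spark.kubernetes.executor.podTemplateFile=executor-pod-template.yaml   (Spark 3.x)
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: spark-kubernetes-executor        # assumed default executor container name in Spark 3.x
      resources:
        requests:
          ephemeral-storage: "10Gi"          # request only; deliberately no limit

Setting the request guides the scheduler to place executors on nodes with enough free ephemeral storage, while omitting the limit avoids the kubelet evicting an executor the moment it crosses a hard cap.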
