return bytesToMi(requestBytes + pysparkMemoryBytes), nil

jacobsalway (Member, Author) commented on Sep 18, 2024:
I think I'm also missing off-heap bytes in this calculation: https://github.com/apache/spark/blob/7de71a2ec78d
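As the comment notes, the off-heap allocation (spark.memory.offHeap.size, only effective when spark.memory.offHeap.enabled is true) would also need to be added to the request. A minimal sketch of that adjustment follows; bytesToMi, requestBytes and pysparkMemoryBytes are taken from the snippet above, while the helper names, the map-based config access, and the "size is given as plain bytes" parsing are illustrative assumptions rather than the operator's actual API.

// Sketch only: folds spark.memory.offHeap.size into the memory request when
// off-heap memory is enabled. The two config keys are real Spark settings;
// everything else here is a hypothetical stand-in for the operator's code.
package main

import (
	"fmt"
	"strconv"
)

const bytesPerMi = 1024 * 1024

// bytesToMi rounds a byte count up to whole MiB (assumed behaviour of the
// helper with this name in the snippet above).
func bytesToMi(b int64) string {
	return fmt.Sprintf("%dMi", (b+bytesPerMi-1)/bytesPerMi)
}

// offHeapBytes returns spark.memory.offHeap.size when off-heap memory is
// enabled, and 0 otherwise. For simplicity this assumes the size is set in
// plain bytes; real configs may use suffixes such as "1g".
func offHeapBytes(sparkConf map[string]string) int64 {
	if sparkConf["spark.memory.offHeap.enabled"] != "true" {
		return 0
	}
	n, err := strconv.ParseInt(sparkConf["spark.memory.offHeap.size"], 10, 64)
	if err != nil {
		return 0
	}
	return n
}

func main() {
	sparkConf := map[string]string{
		"spark.memory.offHeap.enabled": "true",
		"spark.memory.offHeap.size":    "1073741824", // 1 GiB
	}
	var requestBytes int64 = 4 * 1024 * bytesPerMi // heap plus overhead
	var pysparkMemoryBytes int64 = 512 * bytesPerMi

	total := requestBytes + pysparkMemoryBytes + offHeapBytes(sparkConf)
	fmt.Println(bytesToMi(total)) // prints "5632Mi"
}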
// The calculation below uses executorMemory, not memoryOverhead
math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt, MEMORY_OVERHEAD_MIN)

The goal is to calculate OVERHEAD as a percentage of the real executor memory, as used by RDDs and DataFrames. I will add that when using Spark on YARN…
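For reference, a minimal sketch of that default rule, assuming the well-known Spark defaults MEMORY_OVERHEAD_FACTOR = 0.10 and MEMORY_OVERHEAD_MIN = 384 MiB; the function name and the explicit MiB units are illustrative, not Spark's actual API.

// Sketch of max(factor * executorMemory, minimum): the overhead is a
// percentage of the executor heap, floored at 384 MiB.
package main

import "fmt"

const (
	memoryOverheadFactor = 0.10
	memoryOverheadMinMiB = 384
)

func defaultMemoryOverheadMiB(executorMemoryMiB int) int {
	overhead := int(memoryOverheadFactor * float64(executorMemoryMiB))
	if overhead < memoryOverheadMinMiB {
		return memoryOverheadMinMiB
	}
	return overhead
}

func main() {
	fmt.Println(defaultMemoryOverheadMiB(2048)) // 384 (10% is 204, below the minimum)
	fmt.Println(defaultMemoryOverheadMiB(8192)) // 819
}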