def coalesce(numPartitions: Int, shuffle: Boolean = false)(implicit ord: Ordering[T] = null): RDD[T]
This function repartitions an RDD. The first parameter is the target number of partitions; the second specifies whether to perform a shuffle (default false). Without a shuffle, existing partitions are merged locally, so the partition count can only decrease; with shuffle = true, the data is redistributed across partitions (hash-partitioned under the hood).
Consider the following example:
scala> var data = sc.textFile("/tmp/lxw1234/1.txt")
data: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[53] at textFile at :21

scala> data.collect
res37: Array[String] = Array(hello world, hello spark, hello hive, hi spark)

scala> data.partitions.size
res38: Int = 2  // RDD data has two partitions by default

scala> var rdd1 = data.coalesce(1)
rdd1: org.apache.spark.rdd.RDD[String] = CoalescedRDD[2] at coalesce at :23

scala> rdd1.partitions.size
res1: Int = 1  // rdd1 has a single partition

scala> var rdd1 = data.coalesce(4)
rdd1: org.apache.spark.rdd.RDD[String] = CoalescedRDD[3] at coalesce at :23

scala> rdd1.partitions.size
res2: Int = 2  // To increase the partition count beyond the original,
               // the shuffle parameter must be true; otherwise the
               // partition count is unchanged.

scala> var rdd1 = data.coalesce(4, true)
rdd1: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[7] at coalesce at :23

scala> rdd1.partitions.size
res3: Int = 4
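To see why a shuffle-free coalesce can only shrink the partition count, here is a minimal sketch in plain Scala (no Spark): parent partitions are assigned to contiguous groups, so data never needs to cross partition boundaries. The object and the helper `coalesceGroups` are hypothetical names for illustration, not Spark API.

```scala
// Sketch (plain Scala, no Spark) of how a shuffle-free coalesce merges
// partitions: parent partitions are grouped into contiguous runs, so no
// data movement (shuffle) is required, but the count can only decrease.
object CoalesceSketch {
  // Hypothetical helper: assign `oldParts` parent partitions to groups.
  def coalesceGroups(oldParts: Int, newParts: Int): Seq[Seq[Int]] = {
    val target = math.min(newParts, oldParts) // cannot grow without shuffle
    (0 until oldParts)
      .groupBy(i => i * target / oldParts)    // contiguous run per group
      .toSeq.sortBy(_._1).map(_._2)
  }

  def main(args: Array[String]): Unit = {
    println(coalesceGroups(4, 2)) // 4 partitions merged into 2 groups
    println(coalesceGroups(2, 4)) // capped at 2: matches res2 above
  }
}
```

This mirrors the REPL session: asking coalesce for more partitions than the parent has, without a shuffle, leaves the count unchanged.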
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T]
This function is simply coalesce with its shuffle parameter set to true.
scala> var rdd2 = data.repartition(1)
rdd2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[11] at repartition at :23

scala> rdd2.partitions.size
res4: Int = 1

scala> var rdd2 = data.repartition(4)
rdd2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[15] at repartition at :23

scala> rdd2.partitions.size
res5: Int = 4
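Because repartition always shuffles, it can both grow and shrink the partition count. A simplified model of the hash redistribution it performs can be sketched in plain Scala (no Spark); the object and helper names are hypothetical, and real Spark additionally randomizes starting positions to balance load.

```scala
// Sketch (plain Scala, no Spark) of hash-based redistribution, the
// mechanism behind repartition / coalesce(n, shuffle = true): each
// element is routed to a partition by hashCode modulo numPartitions.
object RepartitionSketch {
  // Hypothetical helper: bucket elements into numPartitions groups.
  def hashPartition[T](data: Seq[T], numPartitions: Int): Map[Int, Seq[T]] =
    data.groupBy(x => math.abs(x.hashCode % numPartitions))

  def main(args: Array[String]): Unit = {
    val lines = Seq("hello world", "hello spark", "hello hive", "hi spark")
    val parts = hashPartition(lines, 4)
    // Every element lands in exactly one of up to 4 partitions.
    parts.toSeq.sortBy(_._1).foreach { case (p, xs) => println(s"$p -> $xs") }
  }
}
```

Unlike the contiguous grouping of a shuffle-free coalesce, this routing can send any element to any partition, which is why it supports increasing the partition count but costs a full shuffle.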
When a Spark job contains too many small tasks, RDD.coalesce can merge partitions to reduce the partition count and thus the task-scheduling overhead; because it avoids a shuffle, it is considerably more efficient than RDD.repartition.