pyspark.RDD.mapPartitionsWithSplit
RDD.mapPartitionsWithSplit(f, preservesPartitioning=False)
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

Deprecated since version 0.9.0: use RDD.mapPartitionsWithIndex() instead.

Examples

>>> rdd = sc.parallelize([1, 2, 3, 4], 4)
>>> def f(splitIndex, iterator): yield splitIndex
>>> rdd.mapPartitionsWithSplit(f).sum()
6
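Since this method is deprecated, new code should call RDD.mapPartitionsWithIndex() instead; it takes the same arguments and has the same semantics. A minimal sketch of the migration, assuming the same SparkContext sc and four-partition RDD as above:

>>> # Same function as before: yield each partition's index.
>>> rdd = sc.parallelize([1, 2, 3, 4], 4)
>>> def f(splitIndex, iterator): yield splitIndex
>>> # Drop-in replacement for mapPartitionsWithSplit:
>>> rdd.mapPartitionsWithIndex(f).sum()
6

The partition indices are 0, 1, 2, and 3, so the sum is 6, matching the deprecated call's output.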