
How To Flatten Nested Lists In Pyspark?

I have an RDD structure like:

rdd = [[[1],[2],[3]], [[4],[5]], [[6]], [[7],[8],[9],[10]]]

and I want it to become:

rdd = [1,2,3,4,5,6,7,8,9,10]

How do I write a map or reduce function to make this work?

Solution 1:

You can, for example, use flatMap with a list comprehension:

rdd.flatMap(lambda xs: [x[0] for x in xs])

or to make it a little bit more general:

from itertools import chain

rdd.flatMap(lambda xs: chain(*xs)).collect()
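
For reference, here is a minimal end-to-end sketch of both variants, assuming a local SparkContext named sc (the app name and sample data are illustrative, not from the original answer):

from itertools import chain
from pyspark import SparkContext

sc = SparkContext("local", "flatten-example")

# The nested structure from the question: a list of lists of one-element lists.
rdd = sc.parallelize([[[1], [2], [3]], [[4], [5]], [[6]], [[7], [8], [9], [10]]])

# Variant 1: each innermost list has exactly one element, so x[0] unwraps it.
print(rdd.flatMap(lambda xs: [x[0] for x in xs]).collect())
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Variant 2: chain(*xs) concatenates the inner lists, so it also works
# when the innermost lists hold more than one element.
print(rdd.flatMap(lambda xs: chain(*xs)).collect())
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

As a side note, chain.from_iterable(xs) is an equivalent, slightly more idiomatic spelling of chain(*xs) that avoids unpacking the outer list.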
