java – Parallel stream calls the Spliterator more times than its limit

I recently found a bug where

StreamSupport.intStream(/* a Spliterator.ofInt */, true)
    .limit(20)

was calling Spliterator.ofInt.tryAdvance more than 20 times. When I changed it to

StreamSupport.intStream(/* a Spliterator.ofInt */, true)
    .sequential()
    .limit(20)

the problem went away. Why does this happen? Is there any way to enforce a strict limit on a parallel stream when tryAdvance has side effects, other than building one into the Spliterator? (This is for testing some methods that return infinite streams, but the tests need to terminate eventually, without the complexity of a "loop for X milliseconds" construct.)

Solution:

There seems to be a fundamental misunderstanding about how limit and trySplit are supposed to interact. The assumption that there should be no more trySplit calls than the specified limit is completely wrong.

The purpose of trySplit is to divide the source data into two parts, in the best case into two halves, as trySplit is supposed to attempt a balanced split. So if you have a source dataset of a million elements, a successful split produces two source datasets of half a million elements each. This is completely unrelated to any limit(20) you may have applied to the stream, except that we know in advance that, if the spliterator has the SIZED|SUBSIZED characteristics, we may drop the second dataset, as the requested first 20 elements can only be found in the first half.

It's easy to work out that in the best case, i.e. with balanced splits, we need fifteen split operations, dropping the upper half each time, before we ever get a split within the first 20 elements at all, which then allows us to process those first 20 elements in parallel.
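That count can be sanity-checked with a few lines of arithmetic (a throwaway sketch; the SplitCount class name is made up, and it mirrors the midpoint computation used below):

```java
public class SplitCount {
    // Count how many balanced halvings of [0, 1_000_000) are needed,
    // keeping only the lower half each time, before the midpoint falls
    // within the first 20 elements (i.e. the next split would land there).
    static int halvings() {
        int fence = 1_000_000, halvings = 0;
        while (fence / 2 > 20) { // midpoint of [0, fence) is fence/2
            fence /= 2;          // drop the upper half
            halvings++;
        }
        return halvings;
    }
    public static void main(String[] args) {
        System.out.println(halvings()
            + " halvings before a split lands inside the first 20 elements");
    }
}
```

This matches the trace below: fifteen splits shrink the range to [0, 30), and only the sixteenth split, [0, 15, 30], falls within the target range.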

This can easily be demonstrated:

import java.util.Spliterators;
import java.util.function.IntConsumer;
import java.util.stream.StreamSupport;

class DebugSpliterator extends Spliterators.AbstractIntSpliterator {
    int current, fence;
    DebugSpliterator() {
        this(0, 1_000_000);
    }
    DebugSpliterator(int start, int end) {
        super(end-start, ORDERED|SIZED|SUBSIZED);
        current = start;
        fence = end;
    }
    @Override public boolean tryAdvance(IntConsumer action) {
        if(current<fence) {
            action.accept(current++);
            return true;
        }
        return false;
    }
    @Override public OfInt trySplit() {
        int mid = (current+fence)>>>1;
        System.out.println("trySplit() ["+current+", "+mid+", "+fence+"]");
        return mid>current? new DebugSpliterator(current, current=mid): null;
    }
}
StreamSupport.stream(new DebugSpliterator(), true)
    .limit(20)
    .forEach(x -> {});

On my machine, it prints:

trySplit() [0, 500000, 1000000]
trySplit() [0, 250000, 500000]
trySplit() [0, 125000, 250000]
trySplit() [0, 62500, 125000]
trySplit() [0, 31250, 62500]
trySplit() [0, 15625, 31250]
trySplit() [0, 7812, 15625]
trySplit() [0, 3906, 7812]
trySplit() [0, 1953, 3906]
trySplit() [0, 976, 1953]
trySplit() [0, 488, 976]
trySplit() [0, 244, 488]
trySplit() [0, 122, 244]
trySplit() [0, 61, 122]
trySplit() [0, 30, 61]
trySplit() [0, 15, 30]
trySplit() [15, 22, 30]
trySplit() [15, 18, 22]
trySplit() [15, 16, 18]
trySplit() [16, 17, 18]
trySplit() [0, 7, 15]
trySplit() [18, 20, 22]
trySplit() [18, 19, 20]
trySplit() [7, 11, 15]
trySplit() [0, 3, 7]
trySplit() [3, 5, 7]
trySplit() [3, 4, 5]
trySplit() [7, 9, 11]
trySplit() [4, 4, 5]
trySplit() [9, 10, 11]
trySplit() [11, 13, 15]
trySplit() [0, 1, 3]
trySplit() [13, 14, 15]
trySplit() [7, 8, 9]
trySplit() [1, 2, 3]
trySplit() [8, 8, 9]
trySplit() [5, 6, 7]
trySplit() [14, 14, 15]
trySplit() [17, 17, 18]
trySplit() [11, 12, 13]
trySplit() [12, 12, 13]
trySplit() [2, 2, 3]
trySplit() [10, 10, 11]
trySplit() [6, 6, 7]

This is, of course, far more than twenty split attempts, but entirely reasonable, as the dataset has to be split down until we have subranges within the desired target range that can be processed in parallel.

We can enforce a different behavior by removing the meta information that leads to this execution strategy:

StreamSupport.stream(new DebugSpliterator(), true)
    .filter(x -> true)
    .limit(20)
    .forEach(x -> {});

Since the Stream API has no knowledge of the predicate's behavior, the pipeline loses its SIZED characteristic, leading to

trySplit() [0, 500000, 1000000]
trySplit() [500000, 750000, 1000000]
trySplit() [500000, 625000, 750000]
trySplit() [625000, 687500, 750000]
trySplit() [625000, 656250, 687500]
trySplit() [656250, 671875, 687500]
trySplit() [0, 250000, 500000]
trySplit() [750000, 875000, 1000000]
trySplit() [250000, 375000, 500000]
trySplit() [0, 125000, 250000]
trySplit() [250000, 312500, 375000]
trySplit() [312500, 343750, 375000]
trySplit() [125000, 187500, 250000]
trySplit() [875000, 937500, 1000000]
trySplit() [375000, 437500, 500000]
trySplit() [125000, 156250, 187500]
trySplit() [250000, 281250, 312500]
trySplit() [750000, 812500, 875000]
trySplit() [281250, 296875, 312500]
trySplit() [156250, 171875, 187500]
trySplit() [437500, 468750, 500000]
trySplit() [0, 62500, 125000]
trySplit() [875000, 906250, 937500]
trySplit() [62500, 93750, 125000]
trySplit() [812500, 843750, 875000]
trySplit() [906250, 921875, 937500]
trySplit() [0, 31250, 62500]
trySplit() [31250, 46875, 62500]
trySplit() [46875, 54687, 62500]
trySplit() [54687, 58593, 62500]
trySplit() [58593, 60546, 62500]
trySplit() [60546, 61523, 62500]
trySplit() [61523, 62011, 62500]
trySplit() [62011, 62255, 62500]

This shows fewer trySplit calls, but no improvement; looking at the numbers reveals that ranges outside the resulting element range (using our knowledge that all elements will pass the filter) are now being processed. Even worse, the range of the resulting elements is covered entirely by a single spliterator, so there is no parallel processing at all for our result elements; all the other threads are processing elements that get dropped afterwards.
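The loss of the size information can be observed directly on a pipeline's spliterator. A small check, using IntStream.range as a stand-in for a SIZED source (the SizedCheck class name is made up):

```java
import java.util.Spliterator;
import java.util.stream.IntStream;

public class SizedCheck {
    static boolean isSized(Spliterator<?> s) {
        return s.hasCharacteristics(Spliterator.SIZED);
    }
    public static void main(String[] args) {
        // A plain range pipeline keeps its SIZED characteristic...
        System.out.println(isSized(
            IntStream.range(0, 1_000_000).spliterator()));
        // ...but inserting a filter, whose behavior the Stream API
        // cannot predict, clears it.
        System.out.println(isSized(
            IntStream.range(0, 1_000_000).filter(x -> true).spliterator()));
    }
}
```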

Of course, we could easily enforce an ideal split for our task by changing

int mid = (current+fence)>>>1;

to

int mid = fence>20? 20: (current+fence)>>>1;

so that

StreamSupport.stream(new DebugSpliterator(), true)
    .limit(20)
    .forEach(x -> {});

results in

trySplit() [0, 20, 1000000]
trySplit() [0, 10, 20]
trySplit() [10, 15, 20]
trySplit() [10, 12, 15]
trySplit() [12, 13, 15]
trySplit() [0, 5, 10]
trySplit() [15, 17, 20]
trySplit() [5, 7, 10]
trySplit() [0, 2, 5]
trySplit() [17, 18, 20]
trySplit() [2, 3, 5]
trySplit() [5, 6, 7]
trySplit() [15, 16, 17]
trySplit() [6, 6, 7]
trySplit() [16, 16, 17]
trySplit() [0, 1, 2]
trySplit() [7, 8, 10]
trySplit() [8, 9, 10]
trySplit() [1, 1, 2]
trySplit() [3, 4, 5]
trySplit() [9, 9, 10]
trySplit() [18, 19, 20]
trySplit() [10, 11, 12]
trySplit() [13, 14, 15]
trySplit() [11, 11, 12]
trySplit() [4, 4, 5]
trySplit() [14, 14, 15]

But this is not a general-purpose spliterator; it would perform badly if the limit were not twenty.
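A more general approach is to bake an arbitrary limit into the spliterator itself by capping the fence up front. The following sketch (the LimitedSpliterator name and the ADVANCES counter are made up for illustration, modeled on the DebugSpliterator above) never produces an element beyond the limit, no matter how the pipeline splits:

```java
import java.util.Spliterators;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntConsumer;
import java.util.stream.StreamSupport;

class LimitedSpliterator extends Spliterators.AbstractIntSpliterator {
    static final AtomicInteger ADVANCES = new AtomicInteger(); // for the demo only
    int current, fence;
    LimitedSpliterator(int start, int end, int limit) {
        // Cap the fence at start+limit: elements past the limit simply
        // do not exist as far as this stream source is concerned.
        this(start, Math.min(end, start + limit));
    }
    private LimitedSpliterator(int start, int end) {
        super(end - start, ORDERED | SIZED | SUBSIZED);
        current = start;
        fence = end;
    }
    @Override public boolean tryAdvance(IntConsumer action) {
        if (current < fence) {
            ADVANCES.incrementAndGet(); // count successful advances
            action.accept(current++);
            return true;
        }
        return false;
    }
    @Override public OfInt trySplit() {
        int mid = (current + fence) >>> 1;
        return mid > current ? new LimitedSpliterator(current, current = mid) : null;
    }
    static long demo() {
        ADVANCES.set(0);
        StreamSupport.intStream(new LimitedSpliterator(0, 1_000_000, 20), true)
            .forEach(x -> {});
        return ADVANCES.get();
    }
    public static void main(String[] args) {
        System.out.println(demo() + " successful tryAdvance calls");
    }
}
```

No limit(20) stage is needed in the pipeline, and tryAdvance succeeds exactly twenty times however the Fork/Join framework splits the work.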

If we could incorporate the limit into the spliterator, or more generally, into the stream source, we wouldn't have this problem. So instead of list.stream().limit(x), you may call list.subList(0, Math.min(x, list.size())).stream(); instead of random.ints().limit(x), use random.ints(x); and instead of Stream.generate(generator).limit(x) you may use LongStream.range(0, x).mapToObj(index -> generator.get()) or use the factory method of this answer.
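The last substitution is easy to verify: with the bound living in the stream source rather than in a limit() stage, a side-effecting generator runs exactly x times even in a parallel pipeline (a small sketch; the BoundedSource name and the counting generator are made up):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;
import java.util.stream.LongStream;

public class BoundedSource {
    // Runs a counting generator through a parallel, source-bounded
    // pipeline and reports how often it was actually invoked.
    static int invocations(int x) {
        AtomicInteger calls = new AtomicInteger();
        // Stand-in for any side-effecting generator.
        Supplier<Integer> generator = calls::incrementAndGet;
        LongStream.range(0, x)
            .parallel()
            .mapToObj(index -> generator.get())
            .forEach(v -> {});
        return calls.get();
    }
    public static void main(String[] args) {
        System.out.println(invocations(20));
    }
}
```

Because LongStream.range(0, x) is a SIZED source of exactly x elements, mapToObj runs once per element and there are no excess generator invocations to discard.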

Generally, applying a limit can be quite expensive for parallel streams with an arbitrary stream source/spliterator, which is even documented. Well, and having side effects in tryAdvance is a bad idea in the first place.
