Thanks, Augusto! You mentioned worse scores for gcc 8.2 than for gcc 4.8, but I
don't see that. The way I look at these numbers, gcc 8.2 looks best, followed by
gcc 4.8, and gcc 5.4 is worst. Did you mean to say gcc 5.4 instead of gcc 8.2?
Overall, I'm not concerned by the numbers. The differences are small, and the
reason we want to move to newer compilers is for code clarity, not primarily for
performance. It is normal that satisficing configurations fluctuate a lot more
than optimal ones because one can solve a satisficing problem just by lucky
tie-breaking, whereas for optimal planning luck can only help you on the last f
layer. (Simplifying a bit -- of course you can also get lucky in how exactly
LM-Cut resolves ties in the landmark selection etc.)
Schedule is one of the domains that have proven more susceptible than most to
tie-breaking in the past. There are some actions there that look promising in
the delete relaxation but will screw you up if you apply them, and whether or
not a heuristic like h^FF falls into this trap is essentially determined by
arbitrary tie-breaking.
The relevant parts of the code don't have explicit randomness, but some parts of
the code do things like break ties based on memory addresses etc., and these can
move around arbitrarily with compiler changes. Perhaps it would be good to
improve the robustness of the code further in the future because these
fluctuations are somewhat annoying. But this is of course not what this issue is
about.
So from my perspective, this is ready to be closed.
Augusto, if you agree, feel free to set this to "resolved".