I ran some experiments to check what this change alone means. The expectation is that the number of landmarks goes up: more landmark preconditions mean more additional landmarks in the backchaining mechanism of RHW. This probably implies higher landmark generation times and can also impact search time and memory because more landmarks need to be processed and stored for each state. It also gives more information to the search, though, possibly rendering the heuristics more accurate and potentially resulting in fewer state expansions before finding a solution. There is no guarantee, though, that more landmarks have a purely positive effect. For satisficing planning this is hard to predict upfront anyway. For optimal planning, it depends on the heuristic used: with optimal cost partitioning over landmarks, the heuristic can only become more accurate, but with uniform cost partitioning, more landmarks can also have a negative effect.
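To illustrate the last point, here is a small toy sketch (not Fast Downward code; the landmark and operator names and costs are made up) of why an additional landmark can lower the uniform-cost-partitioning heuristic, while optimal cost partitioning can never get worse:

```python
def uniform_cp(landmarks, achievers, cost):
    """Uniform cost partitioning over landmarks: each operator's cost is
    split evenly among the landmarks it achieves; each landmark then
    contributes the cheapest share among its achievers."""
    shares = {}
    for op in cost:
        covered = [lm for lm in landmarks if op in achievers[lm]]
        for lm in covered:
            shares[(op, lm)] = cost[op] / len(covered)
    return sum(min(shares[(op, lm)] for op in achievers[lm])
               for lm in landmarks)

# Landmark A is only achieved by o1; landmark B by o1 or the cheap o2.
achievers = {"A": {"o1"}, "B": {"o1", "o2"}}
cost = {"o1": 1.0, "o2": 0.1}

print(uniform_cp(["A"], achievers, cost))       # 1.0
print(uniform_cp(["A", "B"], achievers, cost))  # 0.6
# The new landmark B steals half of o1's cost from A but is itself
# bounded by the cheap achiever o2, so the heuristic drops from 1.0 to
# 0.6. Optimal cost partitioning could assign all of o1's cost to A and
# keep the heuristic at 1.0.
```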
The experiment data can be found here: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1183/data/
SATISFICING:
I've tested LAMA-first with and without preferred operators. In both cases, coverage remains the same, which I consider a good thing because it means having to maintain more landmarks does not increase search time enough to affect coverage. Without preferred operators, we see a slight increase in search time, which is likely due to the number of landmarks (and orderings) going up. The expansion and evaluation scores increase by more than 10 points, which I found interesting, but the costs of the found plans increase slightly, which is a negative effect. With preferred operators, though, the found plans are generally cheaper, and in this case the search time is also slightly improved. I don't see anything worrying in these experiments.
ANYTIME:
When considering full LAMA, coverage decreases by 2 with the change. I'm not worried about this because in the base version it solved 1 task more than LAMA-first and in the v1 version it solved 1 task fewer than LAMA-first, so I assume this is just random noise. Overall, costs decrease slightly, which could also just be noise (one configuration randomly being slightly faster and finishing more iterations of the iterated search, thus reporting a cheaper solution), so I don't want to highlight this as a positive effect, just an observation. Everything else also looks fine.
OPTIMAL:
I've tested RHW landmarks with optimal cost partitioning, RHW landmarks combined with reasonable orderings with uniform cost partitioning, and BJOLP (RHW + hm(m=1) landmarks with uniform cost partitioning). The number of landmarks appears to remain unchanged in all of these configurations. Thus, they should yield the exact same performance, which they pretty much do. Only in the first configuration is there a drop in coverage by 2, but I again assume this is due to noise.
In summary, I would say there's nothing worrying about this little change in the code. Let me know if you agree or have a different view on the matter. I would be happy to merge this soon. Here's the pull request: https://github.com/aibasel/downward/pull/258