Issue1183

Title: Compute landmark preconditions accurately in RHW.
Priority: bug
Status: resolved
Nosy List: clemens, jendrik, malte, salome

Created on 2025-03-30.17:57:24 by clemens, last changed by clemens.

Messages
msg11831 (view) Author: clemens Date: 2025-03-31.19:36:18
Thanks a lot, Malte! I've merged this now.

While writing the commit message, I summarized what effect the change has on the number of landmarks and I would like to put this information here for reference as well:
- ~69% more landmarks in flashfill-sat18-adl
- ~126% more landmarks in miconic-fulladl
- ~88% more landmarks in miconic-simpleadl
- ~36% more landmarks in settlers-sat18-adl
In all other domains, the change has no effect. It is no surprise that all affected domains are ADL domains since the change only triggers for conditional effects.
msg11830 (view) Author: malte Date: 2025-03-31.13:58:05
Looks good to me.
msg11826 (view) Author: clemens Date: 2025-03-30.18:28:51
I ran some experiments to check what this change alone means. The expectation is that the number of landmarks goes up: more landmark preconditions mean more additional landmarks in the backchaining mechanism of RHW. This probably implies higher landmark generation times and can also impact search time and memory because more landmarks need to be processed and stored for each state. It also gives more information to the search, though, possibly rendering the heuristics more accurate and potentially resulting in fewer state expansions before finding a solution. There is no guarantee, though, that more landmarks have a purely positive effect. For satisficing planning, this is hard to predict upfront anyway. For optimal planning, it depends on the heuristic used: with optimal cost partitioning over landmarks, the heuristic can only become more accurate, but with uniform cost partitioning more landmarks can also have a negative effect.

The experiment data can be found here: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1183/data/

SATISFICING:
I've tested LAMA-first with and without preferred operators. In both cases, coverage remains the same, which I consider a good thing because it means having to maintain more landmarks does not increase search time enough to affect coverage. Without preferred operators, we see a slight increase in search time, likely due to the number of landmarks (and orderings) going up. The expansion and evaluation scores increase by >10 points, which I found interesting, but the costs of the found plans increase slightly, which is a negative effect. With preferred operators, the found plans are generally cheaper, though, and apparently in this case the search time is also slightly improved. I don't see anything worrying in these experiments.

ANYTIME:
When considering full LAMA, coverage decreases by 2 with the change. I'm not worried about this because in the base version it solved 1 task more than LAMA-first and in the v1 version it solved 1 task fewer than LAMA-first, so I assume this is just random noise. Overall, costs decrease slightly, which could also just be noise (one configuration randomly being slightly faster and finishing more iterations of iterated search, thus reporting a cheaper solution), so I don't want to highlight this as a positive effect, just an observation. Everything else also looks fine.

OPTIMAL:
I've tested RHW landmarks with optimal cost partitioning, RHW landmarks combined with reasonable orderings with uniform cost partitioning, and BJOLP (RHW + hm(m=1) landmarks with uniform cost partitioning). The number of landmarks appears to remain unchanged in all of these configurations. Thus, they should yield the exact same performance, which they pretty much do. Only in the first configuration is there a drop in coverage by 2, but I again assume this is due to noise.

In summary, I would say there's nothing worrying about this little change in the code. Let me know if you agree or have a different view on the matter. I would be happy to merge this soon. Here's the pull request: https://github.com/aibasel/downward/pull/258
msg11825 (view) Author: clemens Date: 2025-03-30.17:57:24
While working on the refactoring of landmark code (issue992) I discovered an inaccuracy in the RHW landmark generation code. One part aims to compute conditions (i.e., atoms) which need to hold in order for a given landmark to become true. The reasoning works as follows: 

Assume the landmark is a disjunctive fact landmark L = (a_1 or ... or a_n) where a_i are atoms. (RHW only generates disjunctive or simple landmarks, where simple means the special case n=1.) Further, let {o_1, ..., o_m} be the set of achievers of L, i.e., the operators that have any of the atoms in {a_1, ..., a_n} as an effect. Then the intersection of the preconditions of {o_1, ..., o_m} is an approximation of the preconditions of the landmark L. A more accurate approximation additionally includes effect conditions in the reasoning. Specifically, the precondition of o_i can be extended with the intersection of all effect conditions of effects in o_i which add any a_j. RHW does consider this, but makes a mistake in doing so: it takes the intersection of effect conditions of more than just the effects that add any a_j. The code is not very obvious in this part, and what happens in detail doesn't make much sense, so I won't try to explain it further.
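To make the reasoning above concrete, here is a minimal Python sketch of the intended (corrected) computation. The operator representation (dicts with hypothetical "precondition"/"effects"/"conditions" keys) is an illustrative stand-in, not Fast Downward's actual data structures:

```python
def shared_preconditions(landmark_atoms, operators):
    """Approximate the preconditions of the disjunctive landmark
    L = (a_1 or ... or a_n): intersect, over all achievers of L, the
    operator precondition extended by the intersection of the effect
    conditions of exactly those effects that add some a_j.
    (The buggy version intersected conditions over more effects
    than just the achieving ones.)"""
    landmark_atoms = set(landmark_atoms)
    result = None
    for op in operators:
        # Effects of this operator that add some atom of the landmark.
        achieving = [eff for eff in op["effects"]
                     if eff["atom"] in landmark_atoms]
        if not achieving:
            continue  # This operator is not an achiever of L.
        # Intersect effect conditions over the achieving effects only.
        shared_conditions = set.intersection(
            *(set(eff["conditions"]) for eff in achieving))
        extended_precondition = set(op["precondition"]) | shared_conditions
        # Intersect the extended preconditions across all achievers.
        result = (extended_precondition if result is None
                  else result & extended_precondition)
    return result if result is not None else set()
```

Every atom returned this way must hold before L can become true, so each is itself a candidate landmark for the backchaining step.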

In my attempt at refactoring the code in issue992, I changed the implementation of this part to make it clearer and "accidentally" made the approximation more accurate. This has the effect that the new implementation finds more landmarks. However, the refactoring issue should not change the semantics of the implementation. Since the current version doesn't make much sense, I'm creating this issue to change the generated landmarks separately, so that in the refactoring we can then expect the number of landmarks to remain unchanged.
History
Date                 User     Action  Args
2025-03-31 19:36:19  clemens  set     status: chatting -> resolved; messages: + msg11831
2025-03-31 13:58:05  malte    set     messages: + msg11830
2025-03-30 18:28:51  clemens  set     status: unread -> chatting; nosy: + malte, jendrik, salome; messages: + msg11826
2025-03-30 17:57:24  clemens  create