Issue247

Title: LM heuristic crashes and/or wrongly reports dead ends
Priority: bug
Status: chatting
Superseder: (none)
Nosy List: clemens, emilkeyder, erez, malte, remo, rpgoldman, salome, silvia, thomas
Assigned To: (none)
Keywords: (none)

Created on 2011-06-29.18:41:04 by malte, last changed by malte.

Summary
When fixing this, check that it also fixes issue442. If yes, close it as we close this one. If not, update that issue.
Files
File name                 Uploaded                      Type
minimized-problem.sas     clemens, 2022-02-10.08:27:44  application/octet-stream
robert-tony-domain.pddl   malte, 2011-06-29.18:41:15    application/octet-stream
robert-tony-problem.pddl  malte, 2011-06-29.18:41:23    application/octet-stream
robert-tony.soln          malte, 2011-06-29.18:41:31    application/octet-stream
Messages
msg11103 Author: malte Date: 2023-07-05.15:58:48
I updated the summary with the relationship to issue442.
msg10605 Author: malte Date: 2022-02-11.13:58:08
Some follow-up thoughts to our recent Zoom discussion regarding the handling of derived variables in the landmark code.

In principle, tasks with derived variables can be overapproximated by tasks without derived variables in the following way. (I'm writing "set" for the non-default value of a derived variable that must be justified by an axiom and "cleared" for the default value that it has if no other value is derived.)

1. Treat every derived variable as a regular variable.
2. Treat every axiom "COND => v := set" as an operator with precondition "COND", effect "v := set", and cost 0.
3. For every derived variable v, introduce an operator with no preconditions, effect "v := cleared", and cost 0.
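
For concreteness, here is a minimal sketch of steps 1.-3. in Python. Everything in it (the Operator and Axiom classes, compile_away_axioms, the dict-based task encoding) is made up for illustration and is not Fast Downward's actual task interface:

    from dataclasses import dataclass

    @dataclass
    class Operator:
        name: str
        precondition: dict  # variable -> required value
        effect: dict        # variable -> new value
        cost: int

    @dataclass
    class Axiom:
        condition: dict    # variable -> required value
        variable: str      # the derived variable being set
        set_value: object  # its non-default ("set") value

    def compile_away_axioms(operators, axioms, cleared_values):
        # Step 1 is implicit: derived variables are simply kept as
        # regular variables of the resulting task.
        result = list(operators)
        # Step 2: turn each axiom into a zero-cost operator.
        for ax in axioms:
            result.append(Operator("axiom-%s" % ax.variable,
                                   dict(ax.condition),
                                   {ax.variable: ax.set_value}, 0))
        # Step 3: add a zero-cost operator that resets each derived
        # variable to its default ("cleared") value.
        for var, default in cleared_values.items():
            result.append(Operator("clear-%s" % var, {}, {var: default}, 0))
        return result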

In the initial state, we could set each derived variable either to its actual value in the initial state or to "cleared". Both would be OK, and perhaps "beauty" arguments can be made for both. I'll assume the actual value in the following; otherwise, the overapproximation argument needs to be adjusted a little bit to account for axiom applications from the initial state.

I think we currently do 1.+2. in some of the landmark code, but I don't think we do 3. This is a source of bugs, though not the source of the bug we've been discussing in the previous message.

If we do 1.+2.+3., we would end up with an overapproximation in the sense that every transition in the original task corresponds to a sequence (consisting of one actual operator application followed by 0 or more applications of the artificial operators introduced by 2.+3.) of transitions of the same cost.

The overapproximated task can also have lots of spurious transitions (e.g. h* could be arbitrarily lower) because we can selectively choose whether or not to apply axioms.

For this kind of overapproximation, unless I'm missing something, it holds that every landmark or landmark ordering of the overapproximation *that only refers to non-derived variables* is also a landmark/landmark ordering of the original task.

I think one way to prove this is to start from a real solution, consider the "induced" overapproximated solution that we get by interpreting the axioms faithfully, and then project both to the non-derived variables. The two solutions are then the same except that the second one repeats some states (because the intermediate steps involving axioms only change derived variables, which we project away), and for the types of orderings we're considering, that means that every ordering that is always satisfied by the second solution is also satisfied by the first. (There is something generic that can be proved here related to the notion of "stuttering equivalence" that is used in the theory underlying ample sets, a variation of stubborn sets. Not really important here, but potentially an interesting connection if we want to publish something on this. We have to be careful because I think not all our landmark ordering types are invariant under stuttering equivalence.)

I think so far we have only discussed that greedy-necessary orderings leading to derived variables, as derived from the 1.+2.+3. compilation, are potentially wrong for the real task, but I think with this perspective it's possible to construct other problematic cases, such as natural orders between derived variables. (But perhaps that's already what you knew; I don't think I have access to the code used for the experiment Remo referred to, so I cannot check.)

I wonder if it's also possible to construct cases where *landmarks* of the overapproximation (now involving derived variables) are *not* landmarks of the original task. For simple landmarks I think that isn't the case because from one real state to the other, the derived variables have to change monotonically, so every atom that is present in some intermediate state is also present in either the start or end of this subpath, and these states are also present in the real transition system. With this observation, I think also disjunctive landmarks of the overapproximation must be landmarks of the real task. But I don't think the same is true for conjunctive landmarks because it could be the case that a conjunction is true only in some intermediate state but not in either end of the path.

In any case, to me all this suggests a compilation-based approach:

A. Transform the task with derived variables into an overapproximated task without derived variables.
B. Apply landmark factory to that.
C. Post-process to discard landmarks and/or orderings that might only hold for the overapproximation but not the real task.
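
As a sketch of what step C could look like, assuming a hypothetical landmark graph interface (landmarks, orderings, facts; none of this is the actual Fast Downward API), a post-processing pass might discard conjunctive landmarks over derived variables and all orderings that point to derived variables, along the lines discussed above:

    def postprocess(graph, is_derived):
        for lm in list(graph.landmarks):
            # Conjunctive landmarks involving derived variables may hold
            # only in intermediate states of the overapproximation.
            if lm.is_conjunctive and any(is_derived(var)
                                         for var, value in lm.facts):
                graph.remove_landmark(lm)
        for src, dst, kind in list(graph.orderings):
            # Orderings into derived variables (e.g. greedy-necessary
            # ones) are not guaranteed to transfer to the original task.
            if any(is_derived(var) for var, value in dst.facts):
                graph.remove_ordering(src, dst, kind)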

In our current implementation, we effectively aim to interleave B and C, i.e., the idea is never to add landmarks or orderings that we would end up discarding. This certainly has efficiency advantages. Still, I think it would be good to look into implementing it in this strict compilation-based way because the code would be simpler, and the approach would be compositional and could hence apply to every landmark factory out of the box.

Before we do this, I think we need to think a little bit more on the theory side about whether this is exactly the compilation we want and which kinds of landmarks and orderings are or aren't preserved by the transformation. [Digression because I think this might be relevant to this theory: one thought I had related to our discussion of greedy-necessary orderings was whether it would be useful to add a "<=" version of some of the "<" orderings we already have at least on the theory side -- this was also already something that came up for reasonable orders, I think.]

I think this would be worth writing up in technical report/paper form, including proofs and counterexamples for the cases where we have to discard landmarks or orderings.

I also note that the compilation-based approach is not limited to landmarks. h* in the overapproximation is an admissible estimate of h* in the original, so it could also be used to apply our admissible heuristics to tasks with derived predicates.
msg10596 Author: remo Date: 2022-02-11.10:26:00
We ran an experiment to evaluate the crude solution Salomé described in msg10572; here are the results:
https://ai.dmi.unibas.ch/_tmp_files/christen/issue247-v1-satisficing-issue247-base-issue247-v1-compare.html

The fix does not seem to have a significant negative impact, suggesting that a somewhat more thought-through implementation of the fix (e.g. preventing the generation of the orders in question instead of preventing the *add_edge* function from adding them) could be an overall benefit.

Nevertheless, after yesterday's meeting between Malte, Salomé, Clemens and me, we concluded that we do not yet know how to handle this problem in a principled way and adding a temporary fix now could make it harder to understand the issue later on. Consequently, we want to postpone this issue until there is a clearer picture of what the landmark code should do and how it can be accomplished.
msg10572 Author: salome Date: 2022-02-10.11:42:11
Clemens, Remo and I continued investigating this issue. The core problem is that lm_rhw generates wrong orderings in the presence of axioms. In this concrete example, it created a greedy_necessary ordering from a non-derived variable x to a derived variable y, while the plan found by other configurations makes x and y true at the same time. We assume that this happens because axioms are treated like normal operators, but we have not yet looked too deeply into the code.

We set up an experiment where we (very crudely) simply disallow orderings x->y if y is a derived variable. While this would solve the issue here, it is a very rough approximation and might remove valid orderings. For example, a greedy necessary ordering from axiom1 to ¬axiom1 can be correct (and happens in the task we investigated). 
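
For illustration, the crude guard could look like this (hypothetical add_edge function and node representation, not the actual code):

    def add_edge(graph, src, dst, edge_type, is_derived):
        # Crude fix: refuse any ordering whose target refers to a
        # derived variable. As noted above, this can also drop valid
        # orderings, e.g. one from axiom1 to the negation of axiom1.
        if any(is_derived(var) for var, value in dst.facts):
            return
        graph.add_ordering(src, dst, edge_type)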

As a next step we should discuss in more detail in which cases orderings can be derived when derived variables are involved.
msg10568 Author: clemens Date: 2022-02-10.08:27:44
Remo and I started to look into this yesterday. Using the Minimizer, we were able to shrink the problem significantly while maintaining the underlying issue. I attached the resulting SAS+ task.

We tried the commands from Malte's original msg1393 and got the following results:

$ ./fast-downward.py --search "astar(blind())"
=> Solution found.

$ ./fast-downward.py --search "astar(lmcount(lm_rhw()))"
=> Search stopped without finding a solution.

$ ./fast-downward.py --search "astar(lmcount(lm_rhw(use_orders=false)))"
=> Solution found.

$ ./fast-downward.py --search "astar(lmcount(lm_hm(m=1)))"
=> Tried to use unsupported feature. (h^m landmarks don't support axioms)

$ ./fast-downward.py --search "astar(lmcount(lm_zg()))"
=> Solution found.
(The assertion that fired during landmark construction was removed in issue1041.)

The minimized problem we're looking at now does not have conditional effects. Therefore, conditional effects are presumably not the source of the issue, or at least not the only one. The fact that the problem only occurs when keeping landmark orderings is the most promising lead in my opinion.
msg1457 Author: malte Date: 2011-08-04.19:42:58
> The first stage of refactoring is ready to be merged, so I'll make that a
> priority (which hopefully means I'll get to it this week).

Well, that was too optimistic, but it is now merged.
msg1411 Author: emilkeyder Date: 2011-06-30.13:14:00
Hmm, at least in the case of the h^m landmarks, adding support for conditional
effects is not going to be easy. It would mean basically rewriting it from
scratch to handle sets of fluents where part of the set appears in an action
precondition and part of it in a conditional effect. So I think I'll go the
cowardly route and copy in something similar to Raz's function for now.

I'm working on some other stuff in the h^m landmarks code though, so I'll wait 
until I have that done and merge it in then - unless you feel that there's a 
hurry for this.
msg1408 Author: malte Date: 2011-06-30.10:44:25
It may indeed be better not to mess with the landmark code while it is being
refactored. (But we should still look into it to find out what is going on here.)

The first stage of refactoring is ready to be merged, so I'll make that a
priority (which hopefully means I'll get to it this week).
msg1404 Author: silvia Date: 2011-06-30.09:55:03
> Silvia, is lm_zg supposed to support conditional effects?

From memory, I'd say yes, but I can only verify this on Monday when
I'm back from my holiday.

Silvia
msg1402 Author: erez Date: 2011-06-30.08:43:48
I seem to remember that someone is working on refactoring the landmarks code.
If so, it seems like it should be handled as part of the refactoring (something
like having each landmark generation method declare what PDDL features it can 
handle, and showing an error otherwise).
I could also add this check now, if anyone thinks it's urgent.
msg1397 Author: malte Date: 2011-06-29.22:01:34
Ah, I think you put us on the right track here!

Our documentation on
http://www.fast-downward.org/HeuristicSpecification#Landmark_count_heuristic
says: "conditional effects: supported if admissible=false". I know Silvia
evaluated LAMA on tasks with conditional effects. However, I guess this support
is only there for the parts that Silvia implemented, not for lm_hm.

Silvia, is lm_zg supposed to support conditional effects?


It's fine to not support them in parts of the code, but in such cases we should
bail out with an error message rather than silently ignoring them; see
MergeAndShrinkHeuristic::verify_no_axioms_no_cond_effects in
raz_mas_heuristic.cc for an example. (Really, this function should be available
globally, rather than being stuck in this place.)
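
To illustrate the idea, a globally available check could look roughly like this (a hypothetical sketch in Python for brevity; the real code is C++, and these names are made up):

    class UnsupportedFeatureError(Exception):
        pass

    def verify_supported_features(task, allow_axioms=False,
                                  allow_cond_effects=False):
        # Bail out with a clear error instead of silently ignoring
        # unsupported task features.
        if task.axioms and not allow_axioms:
            raise UnsupportedFeatureError("axioms are not supported")
        if not allow_cond_effects and any(op.has_conditional_effects()
                                          for op in task.operators):
            raise UnsupportedFeatureError(
                "conditional effects are not supported")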

So one thing we can do is document that only some LM generation methods support
conditional effects and let the others reject tasks that have them.

Of course, we could also open a separate issue to add such support, which should
not be too hard (famous last words?). Anyone interested in looking into that? I
can give advice, but probably won't be able to provide code.


Conditional effects are probably only part of the story here, though.
The main problem for LM_RHW here is very likely derived variables, which the
translator introduces in this case (I guess because of universally quantified
conditions in the original input).

Our documentation says "axioms: supported if admissible=false (but may behave
stupidly and be unsafe)", where "unsafe" means: can falsely claim a problem to
be unsolvable. So the outcome we get here might just be a known limitation
rather than an actual bug. One potential bug is that an alternation search that
uses both safe and unsafe heuristics should be safe; I think it isn't in this
case because the LM-count heuristic doesn't own up to being unsafe. But that
should be easy to fix, and fixing it would address the actual problem here.


In summary:

1) lm_hm should probably bail out when faced with conditional effects or axioms,
or at least print a clear warning message, also on cerr, that it is unsafe in
this case (like the one for issue240). It should not fail an assertion.

2) Optionally, we should open an issue for adding conditional effect support to h^m.

3) We might also open an issue for proper derived predicate support for all LM
generation methods.

4) We should clarify what the exact support for conditional effects for LM
heuristics is in the documentation.

5) The LM count heuristic should only claim that it is safe for those tasks
where this is actually true.
msg1396 Author: emilkeyder Date: 2011-06-29.20:27:28
Does the landmark heuristic, or any of the landmark generation methods currently 
implemented, actually support conditional effects? I don't remember ever seeing 
conditional effects mentioned in the context of landmarks, and after a quick look 
through the code I'm not sure how to access them either. Where should I look for 
the documentation?
msg1394 Author: malte Date: 2011-06-29.18:42:39
Tentatively assigning to Emil hoping that you can look into the h^m landmark
thing. If not, please reassign to "- no selection -".

Everyone else (Erez? Silvia?), if you have any insights about LM_RHW or LM_ZG,
that would be great.
msg1393 Author: malte Date: 2011-06-29.18:41:03
When working on issue240, I think I found a bug with the landmark heuristic.
Here is what Robert and Tony tried to do (after translate/preprocess, and using
debug mode; paste command as a single line):

$ make debug && ./downward-1-debug --heuristic "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,cost_type=2,lm_cost_type=2))" --search "lazy_greedy([hlm,hff], preferred=[hlm,hff])" < output

This fails, reporting that the search space was exhausted. However, when using
just hff (e.g. change "[hlm,hff]" to "[hff]" in both places), the problem is
solved, hinting at wrong dead ends reported by h^LM.

The problem also appears when using the plain LM heuristic in various forms.
Here are some simple examples:

$ ./downward-1-debug --search 'astar(blind())' < output
=> Solves the task.

$ ./downward-1-debug --search 'astar(lmcount(lm_rhw(reasonable_orders=false)))' < output
=> Exhausts the search space.

$ ./downward-1-debug --search 'astar(lmcount(lm_hm(m=1)))' < output
=> LM construction claims problem is unsolvable.

$ ./downward-1-debug --search 'astar(lmcount(lm_zg()))' < output
=> LM construction assert-fails.

Possibly a clue, possibly not: Disabling all orderings in LM_RHW leads to the
problem being solved:
$ ./downward-1-debug --search 'astar(lmcount(lm_rhw(reasonable_orders=false,no_orders=true)))' < output

However, this doesn't help with the other LM generation methods, which already
fail during LM graph construction.
History
Date User Action Args
2023-07-05 15:58:48  malte       set     messages: + msg11103; summary: When fixing this, check that it also fixes issue442. If yes, close it as we close this one. If not, update that issue.
2022-03-07 16:39:46  malte       set     assignedto: emilkeyder ->
2022-02-11 13:58:08  malte       set     messages: + msg10605
2022-02-11 10:26:00  remo        set     messages: + msg10596
2022-02-10 11:42:11  salome      set     messages: + msg10572
2022-02-10 08:27:44  clemens     set     files: + minimized-problem.sas; nosy: + salome, thomas, clemens, remo; messages: + msg10568
2011-08-04 19:42:58  malte       set     messages: + msg1457
2011-06-30 13:14:00  emilkeyder  set     messages: + msg1411
2011-06-30 10:44:25  malte       set     messages: + msg1408
2011-06-30 09:55:03  silvia      set     messages: + msg1404
2011-06-30 08:43:48  erez        set     messages: + msg1402
2011-06-29 22:01:35  malte       set     messages: + msg1397
2011-06-29 20:27:28  emilkeyder  set     messages: + msg1396
2011-06-29 18:42:39  malte       set     assignedto: emilkeyder; messages: + msg1394
2011-06-29 18:41:31  malte       set     files: + robert-tony.soln
2011-06-29 18:41:23  malte       set     files: + robert-tony-problem.pddl
2011-06-29 18:41:15  malte       set     files: + robert-tony-domain.pddl
2011-06-29 18:41:04  malte       create