Created on 2023-01-20.11:02:43 by clemens, last changed by clemens.
File name | Uploaded | Type
result.sas | clemens, 2025-01-22.16:12:19 | application/octet-stream
result.sas | clemens, 2025-02-03.20:32:59 | application/octet-stream
msg11793 (view) |
Author: clemens |
Date: 2025-02-14.11:26:02 |
|
Sure! I've tried to cover everything in the commit message as well, but here comes a slightly more detailed version.
SUMMARY (hopefully helpful for writing the change log)
=======
The option `only_causal_landmark` from the landmark factories `lm_exhaust` and `lm_rhw` no longer exists. On the one hand, we introduce a new boolean option `use_unary_relaxation` for `lm_exhaust` which has a similar effect (see below). On the other hand, `lm_rhw` loses some freedom in the configuration space with this change.
The old and new option for `lm_exhaust` have the same effect on search behavior, even though they work differently and yield different landmark graphs than before. Previously, non-causal landmarks were removed from the landmark graph in a post-processing step, which required a second relaxed exploration on top of the relaxed exploration used to come up with the (possibly non-causal) landmarks in the first place. Now both are done in a single relaxed exploration, which reduces landmark generation time to 25-30% of what it used to be.
While the resulting landmark graph with the flag set to true is identical to before, this is no longer the case when the flag is set to false. This is on purpose and doesn't make a difference in the search behavior. In particular, we no longer add atoms that hold in the initial state to the landmark graph (even though they are landmarks), in order to make the configurations with and without the option more similar (these landmarks are not causal). Since all landmarks that hold in the initial state are progressed to "past and not future" before computing the heuristic value of the initial state anyway, not adding these atoms doesn't impact the search behavior. (This is only true because `lm_exhaust` does not compute landmark orderings.) With this, the overall number of landmarks is halved over the entire benchmark set, which has a positive effect on the memory footprint without changing the search behavior in these configurations.
In configurations using `lm_rhw`, we didn't observe significant differences between variants running with `only_causal_landmarks=true` and those running with `only_causal_landmarks=false`. Thus, we decided to remove the option completely to reduce code complexity and simplify the configuration space. Removing the option from `lm_rhw` affects the Fast Downward Stone Soup portfolios seq-sat-fdss-2018 and seq-sat-fdss-2023, which used some configurations with the flag turned on and others with it turned off. After removing it, all configurations correspond to the previous version as if the option were turned off. Experimentally, we verified that this has no negative effect on the portfolios' overall performance:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/issue1074-v5-sat-eval/issue1074-v5-portfolios-issue1074-base-issue1074-v5-compare.html
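For the change log, the effect on the option syntax could be summarized roughly as follows (landmark factory expressions only; whatever heuristic call wraps them is unchanged):

    lm_exhaust(only_causal_landmarks=true)   ->  lm_exhaust(use_unary_relaxation=true)
    lm_exhaust(only_causal_landmarks=false)  ->  lm_exhaust(use_unary_relaxation=false)
    lm_rhw(only_causal_landmarks=...)        ->  lm_rhw()   (option removed; corresponds to the old value false)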
Maybe a word on the renaming of the option: While working on this issue, we realized that the causal landmarks of a task are exactly the landmarks of a certain transformation of the original problem. Specifically, for this equivalence to hold, the task transformation requires three steps:
(1) delete relaxation
(2) compile the initial state away such that nothing holds initially and introduce an operator which adds all initial atoms
(3) split every operator into a set of unary operators which have the same precondition as the original and each have a different one of its effects
Step (2) is necessary to prevent atoms that hold initially but are not causal from becoming landmarks. Step (3) is necessary to disable side effects.
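For illustration, here is a minimal Python sketch of this transformation (made-up names such as `Operator` and `init-op`, not the actual data structures; operators are assumed delete-relaxed already, covering step (1), and conditional effects are ignored):

    from typing import NamedTuple

    class Operator(NamedTuple):
        name: str
        pre: frozenset    # precondition atoms, e.g. ("a", "T")
        add: frozenset    # add effects only, i.e. step (1) already applied

    def unary_relaxation(operators, initial_atoms):
        result = []
        # Step (2): nothing holds initially; one operator adds all initial atoms.
        result.append(Operator("init-op", frozenset(), frozenset(initial_atoms)))
        # Step (3): split every operator into one unary operator per effect.
        for op in operators:
            for i, atom in enumerate(sorted(op.add)):
                result.append(Operator(f"{op.name}_{i + 1}", op.pre, frozenset([atom])))
        return result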
|
msg11790 (view) |
Author: malte |
Date: 2025-02-14.10:04:41 |
|
Hi Clemens, sounds good. Can you summarize what has changed (command-line option syntax, changes to performance of relevant configurations) in such a way that we can turn it into a change log entry?
|
msg11788 (view) |
Author: clemens |
Date: 2025-02-14.09:50:36 |
|
I think we're ready to merge. I have implemented the suggestions from Salomé's review and then started a final round of experiments. In particular, I wanted to make sure nothing goes wrong in the configurations which should not be affected (i.e., `lm_rhw` and `lm_zg` which are relaxation-based but don't use the unary relaxation). Also, since the option was used in the portfolios Fast Downward Stone Soup 2018 and 2023, I needed to verify that dropping it does not affect the portfolios in a negative way. So here are the results:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/issue1074-v5-opt-eval/issue1074-v5-opt-compare.html
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/issue1074-v5-sat-eval/issue1074-v5-sat-issue1074-base-issue1074-v5-compare.html
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/issue1074-v5-sat-eval/issue1074-v5-portfolios-issue1074-base-issue1074-v5-compare.html
Everything looks pretty good I would say. Even the portfolios benefit from the change (finding cheaper plans sometimes) because landmark generation time is reduced. The only really surprising result was in `lm_exhaust` without unary relaxation, which showed more landmarks than previously. It turned out that this was due to an unwanted change I had accidentally left in the code. I reran this configuration after removing that part:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/issue1074-v6-opt-eval/issue1074-v6-opt-issue1074-base-issue1074-v6-compare.html
Now there are significantly fewer landmarks (the total number approximately halved). The landmarks that are no longer added all hold in the initial state and would be marked as "past and not future" in the initial progression step before computing the landmark heuristic for the initial state. Thus, they don't make any difference in heuristic quality (as can be seen from the equal number of expanded states), but their absence leads to a lower memory footprint. So overall this is a positive effect, despite the difference in the number of landmarks standing out in red in the report.
|
msg11783 (view) |
Author: clemens |
Date: 2025-02-11.12:36:33 |
|
Moving back to the actual core of this issue: Since removing non-causal landmarks did make a difference in performance for lm_exhaust, we decided that we want to keep the option in that case. However, in lm_rhw it did not make a difference, so we want to remove the option there. (In all other landmark factories the option was already not available.)
Further, with our new understanding, we no longer want to call these kinds of landmarks causal, but rather landmarks of the unary relaxation, which is a transformation of the problem in which each operator is split into one operator per effect (potentially extending preconditions with the effect conditions).
With these changes, we can now directly (exhaustively) compute landmarks of the unary relaxation rather than computing all landmarks of the delete relaxation and only afterwards removing those that are not landmarks of the unary relaxation. So landmark computation time should decrease. This is confirmed by our first experiment of this implementation which can be found here: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/issue1074-v4-opt-eval/issue1074-v4-opt-issue1074-base-issue1074-v4-compare.html
As a sanity check, and out of curiosity, I also ran another experiment comparing to the other configurations which should be equivalent, namely lm_hm(m=1, use_orders=false) and lm_zg(use_orders=false). See results here: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/issue1074-v4-comparing-configurations-eval/issue1074-v4-comparing-configurations-only-causal-compare.html The experiment still includes the option to compute the weaker first achievers (based on unary relaxation as well) for lm_exhaust, so it yields the exact same results as lm_hm(m=1, use_orders=false) if the corresponding option is set. Without it, it yields the exact same results as lm_zg(use_orders=false) which uses the stronger first achiever approximation. The option to compute the weaker first achievers will be removed though, for the patch to be included in the main repository. If we want to investigate the effects of first achievers further, we can open a new issue for this.
Also, since this unary relaxation introduces a new flavor of exploration, we might want to open another issue aiming to consolidate the different kinds of relaxed reachability analyses we support in the code and decide which ones make the most sense and are worth keeping around (to avoid having too many similar things in slightly different variations). We don't see this as part of this issue, though.
I have now also cleaned up the code, so this is ready for a review: https://github.com/aibasel/downward/pull/250
|
msg11767 (view) |
Author: salome |
Date: 2025-02-04.14:31:19 |
|
We took another close look at the difference in first achiever computation between lm_hm and lm_exhaust. lm_hm marks an operator o as first achiever for a landmark lm iff the node representing o in the relaxed task graph does not have lm as landmark. However, the relaxed task graph only computes causal landmarks, which means o is marked as first achiever for lm even if lm is a noncausal landmark for o.
In the example in msg11765, exactly this happens: b=F is a landmark of o1 (since we need to apply o2 to achieve o1's precondition a=F, and o2 has the effect b=F), but it is not a causal landmark because b=F in itself is not relevant for being able to apply o1. Thus the lm_hm computation does not count b=F as a (causal) landmark of o1, resulting in o1 being marked as a first achiever.
Another way to think of this is that lm_hm would behave the same if we split up all operators into unary operators (i.e., an operator with n effects is split into n operators which all have the same precondition and one of the effects). In this scenario we have the following operators:
- o1_1 = <{a=F,b=T},{a=T}>
- o1_2 = <{a=F,b=T},{b=F}>
- o2_1 = <{a=T,b=T},{a=F}>
- o2_2 = <{a=T,b=T},{b=F}>,
and o1_2 is indeed a first achiever of b=F (by applying o2_1 and then o1_2).
We will run another experiment that simulates the first achiever computation of lm_hm in lm_exhaust to verify that with this first achiever computation, lm_hm and lm_exhaust with only causal landmarks do in fact behave the same.
|
msg11765 (view) |
Author: clemens |
Date: 2025-02-03.20:32:59 |
|
Salomé and I walked through the code and the example together and we think we have a better understanding now. Computing the possible and first achievers is an approximation. Apparently, the relaxation-based landmark factories use a tighter condition for first achievers than the hm landmark factory, which isn't a mistake in itself. It is unfortunate, though, that this leads to different heuristics for the same landmarks.
The main underlying flaw is that achievers are computed by the landmark factories, not by the heuristics which use them in the end. The reason is probably efficiency: while the post-processing of the relaxation-based landmark factories could just as well be done in the heuristic, for lm_hm the first achievers are computed along the way together with the landmarks, so doing it (again) in the heuristic could introduce some overhead. Doing it anyway would have the advantage that a given landmark has a fixed set of achievers, no matter which landmark factory came up with it. (This is for example relevant for the lm_merged landmark factory: if two of the merged landmark graphs contain the same landmark with differently computed achievers, one of them is picked to be added to the merged graph, without considering the achievers of the other. The smart thing to do would be to take the intersection of both, or to only compute achievers (deterministically) after merging; the intersection idea is sketched below.)
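To make the intersection idea concrete, here is a minimal sketch with made-up data structures (the real lm_merged works on landmark graph objects, not dictionaries):

    def merge_first_achievers(graphs):
        # graphs: list of dicts mapping a landmark (e.g. an atom) to the set of
        # first achievers computed for it by one landmark factory.
        merged = {}
        for graph in graphs:
            for landmark, achievers in graph.items():
                if landmark in merged:
                    # Assuming both inputs over-approximate the true first
                    # achievers, the intersection is tighter but still safe.
                    merged[landmark] &= achievers
                else:
                    merged[landmark] = set(achievers)
        return merged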
These are some thoughts on the bigger picture. Now I'll add some details on the lower level to document our findings of the difference in the approximation techniques. So here is a breakdown of the example (from Machetli) and the algorithms.
Consider the following planning task:
- two binary variables {a,b} with dom(a)=dom(b)={T,F}
- initial state: {a=T,b=T}
- goal: {b=F}
- operators: {o1,o2} with
* o1 = <{a=F,b=T},{a=T,b=F}>
* o2 = <{a=T,b=T},{a=F,b=F}>
The only plan is <o2> but both o1 and o2 lead to a goal state in the induced transition system. The causal landmarks of this task are L1=(a=T), L2=(b=T) and L3=(b=F). (L1 and L2 are preconditions of o2 in the only plan and L3 is a goal condition.) L4=(a=F) is also a landmark because it holds in the goal state reached by all plans, but it is not causal because it is a side effect of o2 which is not required in the goal state.
Now for the approximations. Let's start with the easier case of lm_exhaust using the post-processing. There, the reasoning is something along the following lines:
- We know L3 is a (causal) landmark.
- The possible achievers of L3 are o1 and o2 because both have the effect (b=F).
- Let's exclude all possible achievers of L3 and compute which atoms are still relaxed reachable.
- Since all operators are excluded, only {a=T,b=T} are relaxed reachable.
- The only possible achiever of L3 for which all preconditions are relaxed reachable is o2. (For o1, (a=F) is missing.)
- Hence, o2 is the only first achiever of L3.
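In Python-like pseudocode, this post-processing might look roughly as follows (illustrative names only, not the actual implementation; operators are triples of name, precondition atoms and add effects):

    def relaxed_reachable(init, operators, excluded):
        reached = set(init)
        changed = True
        while changed:
            changed = False
            for name, pre, effects in operators:
                if name not in excluded and pre <= reached and not effects <= reached:
                    reached |= effects
                    changed = True
        return reached

    def first_achievers(landmark_atom, init, operators):
        possible = {name for name, pre, effects in operators if landmark_atom in effects}
        reachable = relaxed_reachable(init, operators, excluded=possible)
        return {name for name, pre, effects in operators
                if name in possible and pre <= reachable}

    # For the example task this yields {"o2"} for the landmark (b=F), as above.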
In the case of lm_hm (with m=1) the approximation follows along with the algorithm to compute the landmarks in the first place, so let's do that:
- The algorithm annotates every atom x with a label L(x) representing which atoms are prerequisites to reach x.
- Initially, all atoms are considered prerequisites for all atoms except those true initially, which only require themselves, so
x | *a=T* | *b=T* | a=F | b=F
-----|-------|-------|-------------------|-------------------
L(x) | {a=T} | {b=T} | {a=T,b=T,a=F,b=F} | {a=T,b=T,a=F,b=F}
(The *x* indicate that *x* is marked reached in the delete relaxation.)
- Since a=T and b=T are preconditions of o2 and reached in the relaxation, the operator o2 is applied. Since it can only be applied once its preconditions hold, the union of the landmarks of its preconditions are landmarks for applying o2:
o | o1 | *o2*
-----|-------------------|-----------
L(o) | {a=T,b=T,a=F,b=F} | {a=T,b=T}
- Applying o2 achieves two new atoms and we update their landmark table entries by intersecting the previous entry with the entry for o2 (and add themselves because they are always landmarks for themselves):
x | *a=T* | *b=T* | *a=F* | *b=F*
-----|-------|-------|---------------|---------------
L(x) | {a=T} | {b=T} | {a=T,b=T,a=F} | {a=T,b=T,b=F}
- Since (b=F) was not a landmark for o2 but is its effect, we now consider o2 a first achiever for (b=F).
- Now all preconditions of o1 are relaxed reachable, so we apply o1 and update its landmarks the same way as above: take the union of the landmarks of all its preconditions:
o | *o1* | *o2*
-----|---------------|-----------
L(o) | {a=T,b=T,a=F} | {a=T,b=T}
- Its effects are (a=T) and (b=F), so let's update their table entries as before: intersect the previous entry with the new entry for o1 (and add themselves):
x | *a=T* | *b=T* | *a=F* | *b=F*
-----|-------|-------|---------------|---------------
L(x) | {a=T} | {b=T} | {a=T,b=T,a=F} | {a=T,b=T,b=F}
- Since (b=F) was not a landmark for o1 but is its effect, we now consider o1 a first achiever for (b=F).
- No new operators became applicable, so the algorithm terminates.
We can see that the set of first achievers for (b=F) is {o1,o2}, which is different from the relaxation-based post-processing, where it was {o2}. We can further observe that the hm-method also approximates based on the delete relaxation, but less strictly: it does not exclude all achievers of a landmark at once, but applies them one after the other until nothing new is applicable. This approximation always results in a superset of first achievers compared to the approach discussed first.
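For completeness, here is a compact sketch of the propagation just walked through (illustrative only; the actual lm_hm code is structured differently and also supports m>1):

    def h1_landmarks_and_first_achievers(atoms, init, operators):
        # operators: list of (name, pre, effects) with pre/effects as sets of atoms.
        L = {x: ({x} if x in init else set(atoms)) for x in atoms}
        reached = set(init)
        first_achievers = {x: set() for x in atoms}
        changed = True
        while changed:
            changed = False
            for name, pre, effects in operators:
                if not pre <= reached:
                    continue
                L_op = set().union(*(L[p] for p in pre))  # landmarks for applying o
                for e in effects:
                    if e not in L_op:
                        first_achievers[e].add(name)      # o can achieve e "first"
                    new = (L[e] & L_op) | {e}             # intersect and add e itself
                    if e not in reached or new != L[e]:
                        L[e] = new
                        reached.add(e)
                        changed = True
        return L, first_achievers

    # On the example task this returns first_achievers[(b, F)] == {"o1", "o2"}.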
(Sorry for the spam. If it doesn't help anybody else, I think at least for me it was helpful to summarize our findings here.)
The question remains: What should we do with the gained knowledge? Both approximations are valid and it is nice that one of them happens along with the computations of the landmarks. Unfortunately, it is the worse one that happens along the way. We've discussed several options:
(a) Compute first and possible achievers all the same for all landmark factories. Maybe let the heuristics do it on their own if they require it and save some time in case the heuristics don't care about achievers.
(b) Leave achiever computation up to the landmark factories, but add a post-processing in the case of lm_hm to ensure they all end up with the same first achievers for simple landmarks.
(c) Accept the fact that different landmark factories yield different first achievers.
We would like to discuss this with Malte (and others who might be interested) as part of the ongoing sprint.
|
msg11744 (view) |
Author: clemens |
Date: 2025-01-22.16:12:19 |
|
Indeed the tasks with different numbers of landmarks are unsolvable, so that's one problem out of the way. I agree that it would be nice to have the same behavior in both cases but this seems less important to me at the moment.
I've looked deeper into the other problem, which is different heuristic values despite the same number of landmarks. In all tasks I've looked at, it was always true that they yielded the exact same set of landmarks. As suspected, the difference lies in the achievers, more specifically the first achievers. Computing them is in fact *not* done the same way in all landmark factories. There's one method for all relaxation-based landmark factories (lm_rhw, lm_zg and lm_exhaust) which computes both first_achievers and possible_achievers in a post-processing step after computing the landmarks. For lm_hm (the only non-relaxation-based landmark factory apart from lm_reasonable_orders_hps and lm_merged, which compute no landmarks themselves), first_achievers are computed along with the landmarks and only possible_achievers are computed in a post-processing step.
I have used Machetli to get a really small problem instance where this issue arises (result.sas is attached). I'm convinced that the issue is on the side of lm_hm which overestimates the set of first achievers for some reason. I could not figure out where in the code this happens, though. While I believe that I have a general understanding of how the landmark generation algorithm works in theory, understanding the code is a different story.
Is somebody willing to look at the code together?
|
msg11742 (view) |
Author: malte |
Date: 2025-01-20.09:36:24 |
|
> The data does not confirm the previously stated claim that 1.+2. = 3. This of
> course does not imply that it is wrong in theory, but only that there's more to
> be investigated before we can draw further conclusions.
I think these tasks cannot be minimized further. I think they are the trivially unsolvable tasks that the translator generates when it detects unsolvability during grounding. They have one state variable, which is binary and has a different initial and goal value, and they have no actions.
So it isn't a meaningful difference, but perhaps we should still look at which of the two behaviours we prefer and then decide which one we want. I guess from the definition of causal landmarks, in a known unsolvable task every atom should be a landmark because it is indeed a precondition or goal in every plan (vacuously). But perhaps it doesn't matter that much as long as we make sure that we get infinite heuristic values on tasks that we know to be unsolvable from landmark analysis. (But then if we document that we compute the causal landmarks of the delete relaxation, we should also document this exception.)
It indeed sounds like we should look into the achievers.
|
msg11741 (view) |
Author: clemens |
Date: 2025-01-20.09:25:36 |
|
What I forgot to mention before: Landmark generation is indeed faster with hm(m=1), but this does not positively impact the overall planner time too much in the current implementation, although hm(m=1) is faster on average (e.g., https://ai.dmi.unibas.ch/_visualizer/?c=eJx9kEtOxDAQRK8SeY3tCSAh5Rzso56kSVryT-3Ob0Zzd2wQZINYul7Zr-S7wiBMmHtHWVTXqIZyXrC9vL3qK2TUzmvcZ1iy6JiEPLguBnfooUTgmj_qs29_quqpUTtIMVwXwfp8chACcl8wVnr8SxPHhCx13we5r8osknJnLZAZPZklUPGaYbY97qVbLgbJhdoxbmEDHu3vQjuCwHnU63PdqXEFZ0-TEWCz36qeMUWW9yNV80sJVuRMMdQdrbnUylA-JvreU-g3CrmQ--OMYZoYJ5DI3-TxCbmsgB0%3D).
|
msg11740 (view) |
Author: clemens |
Date: 2025-01-20.09:21:52 |
|
Thank you for your inputs. Previously, I did not consider the connection between exhaust limited to causal landmarks and hm(m=1), but it sounds absolutely reasonable. I ran the suggested comparison experiment, here are the results: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/issue1074-v2-opt-eval/issue1074-v2-opt-only-causal-compare.html
The data does not confirm the previously stated claim that 1.+2. = 3. This of course does not imply that it is wrong in theory, but only that there's more to be investigated before we can draw further conclusions.
For one, the number of landmarks found by the two methods is not always identical. Mostly it is, but there are two tasks in the mystery domain (prob07 and prob18) where exhaust finds one additional landmark. This is probably a task for the minimizer to help us point out where this difference comes from. Maybe there's a tricky corner case that is dealt with differently in the two methods, but with my current understanding this is probably a bug and should therefore be fixed independent of what we do with the causal landmark filtering.
Another observation is that heuristic values are different even when both configurations yield the same number of landmarks. This means the cost partitionings end up having different results, which can only be the case if the landmarks end up having different achiever sets (assuming the same number of landmarks means that the landmarks are the same atoms). It would surprise me if they indeed have different achievers, because I thought they were computed the same way independent of which landmark factory was used, but maybe I'm misremembering. This will definitely also require some further digging in the code.
Finally, in terms of coverage the exhaust approach is superior. It solves 4 more problems for both uniform and optimal cost partitioning. Before taking this too seriously, though, we should figure out where the differences come from. I currently trust in Malte's reasoning in the previous message, so I lean towards a faulty implementation, but as of now I'm not ruling out that we've overlooked something on the theory side either. I will think about it some more and investigate the implementation further. Any of your thoughts that might help track this down would also be much appreciated.
|
msg11739 (view) |
Author: malte |
Date: 2025-01-17.20:39:28 |
|
Thanks! Interesting and somewhat unexpected, but that's why we run the experiments.
Can you add results for hm(m=1) landmarks, and ideally compare them directly to lm_exhaust without causal landmarks in the comparison view?
The following is from memory, and perhaps I'm forgetting some relevant detail, but here is how I think lm_exhaust, causal landmark removal and hm(m=1) work, roughly:
1. lm_exhaust performs a relaxed exploration for each atom X in the problem to check if it's a landmark; this is done by checking if the relaxed task is unsolvable if we remove actions that achieve X. (With perhaps some special-casing if X is in the initial state.)
2. causal landmark removal performs another relaxed exploration for every atom that has been detected to be a landmark and removes it if it's not a causal landmark; this is done by checking if the relaxed task is unsolvable if we remove actions that have X as a precondition (plus perhaps some special stuff if X is a goal; see the sketch after this list)
3. hm(m=1) landmarks compute the same set of landmarks as 1.+2., but with a specialized algorithm that only does a single relaxed exploration that handles all atoms at the same time. (This is much more expensive than each individual exploration from 1. and 2., but computes everything at once.)
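In rough, self-contained Python (hypothetical helper names, not the actual code; operators are triples of name, precondition atoms and add effects of the delete-relaxed task), checks 1. and 2. amount to something like:

    def reach(init, operators, skip):
        reached, changed = set(init), True
        while changed:
            changed = False
            for name, pre, eff in operators:
                if name not in skip and pre <= reached and not eff <= reached:
                    reached |= eff
                    changed = True
        return reached

    def is_landmark(atom, init, goal, operators):            # check 1.
        if atom in init:
            return True          # atoms true initially are trivially landmarks
        achievers = {n for n, _, eff in operators if atom in eff}
        return not goal <= reach(init, operators, skip=achievers)

    def is_causal_landmark(atom, init, goal, operators):     # check 2.
        if atom in goal:
            return True
        consumers = {n for n, pre, _ in operators if atom in pre}
        return not goal <= reach(init, operators, skip=consumers)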
I'd be curious to see how 1.+2. compares to 3. If 1.+2. is just an inferior version of 3., that would add weight to the suggestion of removing 2.
I don't remember if hm(m=1) also computes orderings. If yes, it would make sense to also run a version of it (and compare it to 1.+2.) with orderings disabled to make them even more similar because IIRC 1.+2. does not compute orderings (and that's also what it looks like in the reports).
That 1.+2. does better than 1. is not necessarily expected, but can make sense because of poor cost partitioning or because of overhead when we have many landmarks that are ultimately not useful. But if it is a decent set of landmarks worth keeping around, then the way we compute it seems really stupid. Like Clemens wrote, it's quite pointless to go over every atom and first run the check in 1. and then the check in 2., rather than running the check in 2. immediately.
|
msg11738 (view) |
Author: clemens |
Date: 2025-01-17.16:42:46 |
|
Here are the results from my experiments: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1074/data/
There are (currently) three reports:
- v1-sat containing lama-first with and without preferred operators
- v1-anytime containing full lama
- v1-opt containing the cross product of {exhaustive, rhw} landmark generation and {uniform, optimal} cost partitioning
Notably, filtering for causal landmarks seems to fail on tasks with conditional effects, so many fewer problems are solved over the entire benchmark suite. I've removed those runs to clean up the reports, though.
Generally, removing the non-causal landmarks seems to have a positive effect, so I'm not so sure anymore we actually want to remove this option. As Malte has said in the offline meeting, though, this performance increase might be caused by the lower number of landmarks rather than the fact that all considered landmarks are causal. If so, one could also simply drop half of the landmarks that were found (independent of whether they are causal or not) to achieve a performance increase. We did not test this hypothesis, but I can well imagine it holds.
To give more details on the results:
- Most domains have non-causal landmarks, so the change affects many problems.
- lama-first (with and without preferred operators) finds lower-cost plans in most problems of the Miconic domain, but in no other domains.
- In lama-first, the search time score improves by 1-2 points while the overall time score decreases by 11-12 points in total (indicating that precomputation is more expensive but pays off a little bit when focussing on the search).
- lama-anytime also finds cheaper plans in Miconic. Other domains show different costs as well, but this might as well be grid noise, as the differences are small and may be due to finishing fewer or more search iterations which we have observed in the past for other issues.
- With exhaustive landmark generation, dropping all non-causal landmarks again leads to solving 15-17 more tasks, depending on the cost partitioning strategy. If this is the main reason to keep this option, we could also consider building it directly into the exhaustive landmark generation, which currently does two independent reachability analyses (if I remember correctly). I think it should be possible to simply produce all causal landmarks exhaustively instead of also producing non-causal ones and dropping them again later.
- seq-opt-bjolp also seems to be slightly faster with only causal landmarks, but gains less than 1 point in the search time score.
These are just a few observations. Is there something you would like to know in more detail? How should we continue with this issue? I generally like the idea of removing the option completely since doing so makes the code simpler and there is no configuration for which we currently recommend using this option. However, I'm hesitant to do this as the experiments show improved performance if the option is turned on.
|
msg11737 (view) |
Author: clemens |
Date: 2025-01-16.08:29:14 |
|
In an offline meeting yesterday, Florian, Malte, Salomé and I discussed the option to get rid of the "only causal landmarks" support instead of moving it to its own factory. This is motivated by the fact that there is no good reason to remove non-causal landmarks ("side effects"), as they still denote correct information and should be handled correctly by the landmark progression. Further, removing all non-causal landmarks from a landmark graph seems to be a rather expensive operation, so doing this might do more harm than good.
I am hijacking this issue to do some experimentation on whether any configuration benefits from the option that is currently implemented before we make any concrete decisions.
|
msg10933 (view) |
Author: clemens |
Date: 2023-01-20.11:02:43 |
|
Some landmark factories provide an option to ensure that the returned landmark graph only contains causal landmarks (i.e., landmarks that either hold in the goal or make the relaxed planning task unsolvable when removing all actions satisfying the landmark in their precondition). This option is applied in the form of a post-processing step, removing all non-causal landmarks from the landmark graph. We have recently implemented similar post-processing steps (e.g., reasonable orderings) as a separate landmark factory instead of an option. Such a factory takes as input the result of another landmark factory and outputs the processed (in this case filtered) landmark graph. In the command-line call, this results in nested landmark factories rather than an additional set of options. We would like to implement the post-processing for causal landmarks in this style as well.
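For illustration, the nested style would look roughly like this on the command line (`lm_reasonable_orders_hps` is the existing example of the pattern; `lm_filter_causal` is a made-up placeholder name for the proposed factory):

    lm_reasonable_orders_hps(lm_rhw())      (existing nested post-processing factory)
    lm_filter_causal(lm_exhaust())          (proposed style; hypothetical factory name)
    lm_exhaust(only_causal_landmarks=true)  (current style: option on the factory itself)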
|
|
Date | User | Action | Args
2025-02-14 11:26:02 | clemens | set | status: chatting -> resolved; messages: + msg11793
2025-02-14 10:04:41 | malte | set | messages: + msg11790
2025-02-14 09:50:36 | clemens | set | messages: + msg11788
2025-02-11 12:36:34 | clemens | set | messages: + msg11783
2025-02-04 14:31:19 | salome | set | messages: + msg11767
2025-02-03 20:32:59 | clemens | set | files: + result.sas; messages: + msg11765
2025-01-22 16:12:19 | clemens | set | files: + result.sas; messages: + msg11744
2025-01-20 09:36:24 | malte | set | messages: + msg11742
2025-01-20 09:25:36 | clemens | set | messages: + msg11741
2025-01-20 09:21:52 | clemens | set | messages: + msg11740
2025-01-17 20:39:28 | malte | set | messages: + msg11739
2025-01-17 16:42:46 | clemens | set | messages: + msg11738
2025-01-16 08:29:14 | clemens | set | messages: + msg11737; title: Filter Causal Landmarks in Landmark Factory -> Remove Option to Filter Causal Landmarks or Move it to Separate Landmark Factory
2023-01-20 11:02:43 | clemens | create |