Issue1070

Title: Avoid computing landmark set in every state when using preferred operators
Priority: wish
Status: resolved
Superseder:
Nosy List: clemens, jendrik, malte, salome, silvan
Assigned To:
Keywords:
Summary: This is part of issue987.

Created on 2022-12-12.11:58:08 by clemens, last changed by salome.

Messages
msg10942 (view) Author: salome Date: 2023-01-25.13:53:05
The follow up issue is issue1075.
msg10930 (view) Author: clemens Date: 2023-01-18.16:33:06
The issue is merged, thanks everyone! 
I did not yet open the follow-up issue on the ideas of changing the semantics.
msg10922 (view) Author: clemens Date: 2023-01-13.14:41:24
I updated the changelog and also addressed the other comments. I'll merge on Monday unless somebody objects in the meantime.
msg10919 (view) Author: malte Date: 2023-01-13.09:55:57
I commented on the pull request. Once the comments are addressed, this looks ready to merge.
msg10916 (view) Author: clemens Date: 2023-01-12.18:44:55
In this message, I would like to share an observation that is only partly related with the goal of this issue, but I would still like to record this here. Maybe you wondered why I skipped v5. Well, I introduced a silly bug when I tried to revert the semantic changes. I only realized this after running the experiments, and since I would like to share some impression of it with you, I didn't just overwrite v5 but fixed the bug in a new version.

So I accidentally still had one semantic change in v5. Originally, amongst other reasons, a landmark was considered interesting if it was never reached before but all its parents are reached. Due to a forgotten negation, what I had in v5 was that a landmark is interesting if it was never reached and one of its parents is *not* reached. There's no good reason for a rule like this in my opinion, but the experiments proved me wrong: using this rule, LAMA solves 11 tasks in the Maintenance domain in less than 1 second where it previously ran out of memory, and the overall planner performance doesn't seem to be affected too much by this change. Full LAMA even finds better costs in summary, although these values are dominated by the large costs in the Parcprinter domain (they are still lower overall even if we ignore Parcprinter).
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v5-lama-first-eval/issue1070-v5-lama-first-issue1070-base-issue1070-v5-compare.html
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v5-lama-eval/issue1070-v5-lama-issue1070-base-issue1070-v5-compare.html

I did look into what could happen in Maintenance, but haven't been very successful so far. I also don't understand the multi-queue system with preferred operators well enough to have any good idea what could be happening there. The main reason why I wanted to share this is that the behavior seems so random. In particular, the logic of the buggy code seems absolutely stupid but still produces results as good as or even better than the "correct" version. This is one more argument that it should be worth considering how we could be smart about this, because apparently a random approach is about as good as the current implementation, which tries to be smart. We should probably continue this discussion in another thread, though.
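For concreteness, the two conditions can be contrasted in a small sketch. This is hypothetical Python for illustration only; the actual implementation is C++, and all names here are made up:

```python
# Intended rule vs. the accidental v5 rule (hypothetical reconstruction).
def intended_rule(reached, lm, parents):
    # Interesting iff never reached and ALL parents are reached.
    return not reached[lm] and all(reached[p] for p in parents[lm])


def accidental_v5_rule(reached, lm, parents):
    # The forgotten negation flipped the parent condition to:
    # never reached and SOME parent is NOT reached.
    return not reached[lm] and any(not reached[p] for p in parents[lm])
```

With parents A (reached) and B (not reached) of an unreached landmark C, the intended rule rejects C while the accidental rule accepts it.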
msg10914 (view) Author: clemens Date: 2023-01-12.18:24:18
I implemented the change and ran experiments.
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v6-lama-first-eval/issue1070-v6-lama-first-issue1070-base-issue1070-v6-compare.html
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v6-lama-eval/issue1070-v6-lama-issue1070-base-issue1070-v6-compare.html
The values for state expansions, evaluations, and generations (identical in all domains where the translator is deterministic) suggest that v6 indeed does not touch the semantics. For lama-first, the search time score improves by ~10 points, and overall the changes seem to have a positive effect. In the full LAMA runs, the plan quality improves in several domains, which I think is due to the slight speed-up. Also, the reasons for terminating the planner (still) shift from out-of-time towards out-of-memory.

Since the code became cleaner and the experiments look good to me, I'm in favor of merging this version v6 (after revising the changelog and maybe some more cleanup). If someone is willing to do a code review, that would be much appreciated; the current version of the code can be found in the old pull request: https://github.com/aibasel/downward/pull/141.
msg10910 (view) Author: clemens Date: 2023-01-11.11:50:31
Salomé and I had a quick discussion about how to proceed. Our suggestion is to stick with the pure refactoring within this issue, as originally suggested by Silvan. This means reverting the code to what we had in v1 (except that we want to keep some changes that happened along the way). If nobody objects, I will start to do this soon.

We would still like to work on clearing things up on the semantic level as well. While we originally thought this would be straightforward and work well enough right away, it turns out it needs more thought. The suggestion in Malte's last message to try out simpler definitions of what should be considered a preferred operator is more exploratory than what we originally anticipated. Therefore, I now think it makes much more sense to do this in a separate issue along with thorough experiments.
msg10906 (view) Author: malte Date: 2022-12-22.16:22:05
I agree we probably shouldn't merge the complete change right now with these results. It's up to you if you prefer to split off the v1 change. In that case, can we again have a clean experiment and pull request before we merge?

Different results for the same code at different times can happen depending on what else is running on the grid at the same time. It's the reason why we always make sure to run configurations in "important" experiments together and randomize runs. From the order of magnitude of what you describe, I'm not sure if that's a sufficient explanation though. I didn't dig deeper.

I saw no problems in the v3 vs. v4 diff.

I suggest looking more into what goes wrong in some of the tasks that are no longer solved after the change. In the lama-first base vs. v4 experiment, for example, we lose two childsnack tasks that were previously solved in ~2 seconds.

More generally, the old code was pretty convoluted and arbitrary regarding what was or wasn't considered a preferred operator, but the new code still has aspects of this (for example the treatment of simple vs. disjunctive landmarks, and the consideration of landmark leaves vs. all landmarks). If it turns out the new rules don't work better than the old ones, perhaps it makes sense to consider other, simpler rules than either.
msg10903 (view) Author: clemens Date: 2022-12-22.08:50:32
In case somebody wants to check what changed between v3 and v4: https://github.com/ClemensBuechner/downward/commit/b39d9e19b539f20d8d5e21aea1362a93c4af73d0
I'm convinced this did not affect the performance significantly, and also the numbers in the reports including v3 and the new reports including v4 seem to be consistent.
msg10902 (view) Author: clemens Date: 2022-12-22.08:44:13
As we've seen before in other landmark issues, the "hoped to be final" experiment does not look as good as what we've seen so far. Here are the two reports:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v4-lama-first-eval/issue1070-v4-lama-first-issue1070-base-issue1070-v4-compare.html
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v4-lama-eval/issue1070-v4-lama-issue1070-base-issue1070-v4-compare.html

Worst of all, the overall coverage is reduced by 8 tasks both for lama-first and lama. I was under the impression that the opposite was the case and we actually gain some coverage, based on the v1-v3 reports where coverage increased by 15 tasks, and my memory that coverage was unaffected in base-v1. Looking into these reports revealed that although my memory was correct, there was a difference between the two v1 runs in the different reports, which explains why we now observe a decrease in base-v4. (I have no clue about the reason for the inconsistency between the two v1 runs.)

Despite the reduced coverage, time scores are improved in v4 for lama-first. However, expansion, evaluation, generation, and memory scores are worse than before. Also, the costs are in general worse in the newer version. What remains true from what we've seen before is that there is a shift from search-out-of-time towards search-out-of-memory in case of failure.

I can also share the links to some scatter plots:
cost lama-first: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v4-lama-first-eval/scatter-relative/issue1070-base-issue1070-v4/issue1070-v4-lama-first-issue1070-base-issue1070-v4-cost-lama-first-pref.png
search-time lama-first: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v4-lama-first-eval/scatter-relative/issue1070-base-issue1070-v4/issue1070-v4-lama-first-issue1070-base-issue1070-v4-search_time-lama-first-pref.png
cost lama: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v4-lama-eval/scatter-relative/issue1070-base-issue1070-v4/issue1070-v4-lama-issue1070-base-issue1070-v4-cost-lama-pref.png

To me, the results don't look like a clear merge anymore, unfortunately. I will therefore not push to merge this before Christmas (unless there are still enough people working to have a proper discussion about what to do). I'm also open to reconsidering merging only v1 for now, because I think we were happy with those results. Any opinions?
msg10901 (view) Author: clemens Date: 2022-12-21.16:17:24
Thank you Malte for your feedback. I've incorporated your code review and opened issue1072 to start the discussion about conjunctive landmarks.

There is no data yet for base-vs-v3. I have started a new experiment base-vs-v4 comparing with the latest version with all requested changes.
msg10898 (view) Author: malte Date: 2022-12-21.11:31:40
I'm done commenting. Some comments are about clarification/consistency regarding what the code does now. A few are small polishing that are a bit incidental, but I think should be done in this issue given which code/which comments it touched. One is about conjunctive landmarks and I think needs discussion. I'm available today for discussion before and after the lectures.
msg10897 (view) Author: malte Date: 2022-12-21.11:12:19
Do we have experiment reports that compare the code versions before and after the change of this issue, i.e., base vs. v3 (rather than v1 vs. v3)? I think these would be useful to link here. I also think it can be a good idea to give a quantitative comment in the change log, but it should comment on what was changed in this issue, not on the coverage difference between an intermediate version and the final version.
msg10896 (view) Author: clemens Date: 2022-12-20.19:39:31
Of course, that's what I suggested anyway. In the end, everything before Christmas is fine by me, but I would like to avoid reminding ourselves of what we did after the holidays.
msg10895 (view) Author: malte Date: 2022-12-20.19:00:19
I'd like to have a quick look at the pull request and experimental results, if only to understand better what was done. Can you wait until tomorrow with merging?
msg10894 (view) Author: clemens Date: 2022-12-20.15:00:42
Thanks everyone. I rebased onto the latest release, made some last changes to the documentation, and added the changelog entry. I'll wait until tomorrow with merging. If Silvan finds the time to do a proper review or somebody else has objections, please keep me posted.
msg10893 (view) Author: salome Date: 2022-12-20.13:12:14
I had a look at the code and only left one remark about a now outdated comment. Once that is fixed and the changelog has been written, I'm all for merging v3.
msg10892 (view) Author: silvan Date: 2022-12-20.10:32:29
I'm for going ahead with all changes; I think you are much more into this than I am. I'm not sure I'll be able to review v3 before the Christmas break, but I can try.
msg10891 (view) Author: clemens Date: 2022-12-20.09:56:37
I would be very happy if we could come to a decision about merging v1 or v3 before Christmas. 

@Silvan: Could you live with doing everything at once? 
- If yes, would you like to also review v3 (i.e., the current pull request)?
- If no, do you agree that v1 is ready to be merged after writing a changelog entry?

@Salomé: You might have the best understanding of my changes among the participants of this discussion. Did you have a look at either v1 or v3 more closely and agree with them, or do you see need for further changes? (Your opinion of v1 is only really necessary if we decide against v3 in this issue.)
msg10890 (view) Author: jendrik Date: 2022-12-19.18:32:40
I only had a very superficial glance at the code. I'd love to see the code merged though :-)
msg10889 (view) Author: clemens Date: 2022-12-19.15:34:44
I've generated the scatter plots suggested by Salomé.

Costs of LAMA: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v3-lama-eval/scatter-absolute/issue1070-v1-issue1070-v3/issue1070-v3-lama-issue1070-v1-issue1070-v3-cost-lama-pref.png

Costs of LAMA-first: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v3-lama-first-eval/scatter-absolute/issue1070-v1-issue1070-v3/issue1070-v3-lama-first-issue1070-v1-issue1070-v3-cost-lama-first-pref.png

Search Time of LAMA-first: https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v3-lama-first-eval/scatter-absolute/issue1070-v1-issue1070-v3/issue1070-v3-lama-first-issue1070-v1-issue1070-v3-search_time-lama-first-pref.png

To me, the costs look very much scattered around the diagonal and I don't see a particularly systematic trend in either LAMA or LAMA-first. In terms of search time, the Schedule domain catches the eye: there, the change consistently has a positive effect. Other than that, the differences seem rather random to me.


Commenting on another point mentioned by Salomé:

> I could also see us merging v2 even if we don't do the semantic change since it looks like it is still better than base and it would enable us to get rid of the landmark set.

We got rid of the landmark set in v1 already. The difference of v2 compared to v1 is that v2 derives whether a landmark is reached from its current status (reached, not reached, or needed again), while in v1 this information comes from the bitset of reached landmarks stored in the landmark status manager for each state (which is the basis for computing landmark statuses). I personally would stick with v1 if we decide against the semantic change (or deal with it in a separate issue).

Regarding the question whether we should deal with this in two separate issues: Salomé and I had an offline discussion with Malte and he didn't have any preferences in this regard. My personal opinion right now: if it is already decided that we want to merge v1 but we're not sure about the semantic change yet, then let's merge v1 soon and continue the discussion of the semantic change in a separate issue. If we already agree on doing the semantic change as well, then I wouldn't split it up at this point. It really touches the same bit of code and would basically make the changes from v1 obsolete.

Now I'm not sure how to proceed. Who gets to decide when (or what) is ready to be merged? If I interpreted correctly, Silvan agrees with v1 but didn't consider v3 yet. I got some comments on the pull request for v3 from Jendrik (which I incorporated already), but I don't know how deeply he looked into it. I'm not sure whether Salomé has had a look at the code.
msg10888 (view) Author: salome Date: 2022-12-16.12:32:44
Regarding the results:

v1 definitely looks good, so I would at least merge it. I could also see us merging v2 even if we don't do the semantic change, since it looks like it is still better than base and it would enable us to get rid of the landmark set. Having a report comparing base and v2 would be helpful to decide this, but I think that's only necessary if we decide against the semantic change.

The semantic change also looks mostly great, with the small caveat that lama (anytime) gets worse costs overall. It would maybe be helpful to see a scatter plot comparing costs so we can better estimate whether it is a few extreme outliers or a general trend. I'd also be interested in seeing a scatter plot for search_time (and maybe also cost) on lama-first for the same reason. But all in all I like the semantic change; it makes sense to me from a theoretical perspective, and I'd say that experimentally we have more benefits than drawbacks from it.
msg10887 (view) Author: salome Date: 2022-12-16.12:24:43
I personally don't really see a benefit in opening a new issue, since it touches the exact same code, and I'd rather discuss here a bit longer whether the semantic change makes sense than merge a change that might anyway be overwritten/extended if we actually do change the semantics.

To give some more explanation on the semantics: 
The function landmark_is_interesting is used to determine the preferred operators. Basically, an operator is preferred if it achieves an "interesting" landmark. There is a bit more going on (specifically, if we can achieve a simple, i.e., non-disjunctive, landmark, then we only consider operators achieving simple landmarks), but this is not relevant for our suggested change.

Currently the code says a landmark is interesting if (a) it is not reached and all its parents are reached, or (b) all landmarks have been reached but this landmark is required again. While it makes sense to focus on landmarks that are "next in order" (i.e., where all parents are already reached), we found it strange that required-again landmarks are excluded from this consideration except when all landmarks have already been reached.
Some minor details which maybe give a better idea on what exactly happens:
 - A landmark can only be required again after it has been reached.
 - Reached does *not* have the semantics "has been true in the past", but rather "has been true after all its predecessors have been true". This only makes a difference with (obedient) reasonable orderings, because for all other orderings A->B, reaching B before A implies there can be no plan on this path. (Actually, current landmark generation methods will only find non-reasonable orderings A->B where it is impossible to achieve B before A.)
 - There are only two ways a landmark can become required again: (1) we have a greedy-necessary ordering A->B (which says that A must be true one step before B becomes true) and A is reached but B is not (which implies that we must achieve A again in order to achieve B), or (2) it is a goal landmark that is currently not true. This means that if all landmarks are reached already, a landmark can only be required again if it is a goal landmark.

The suggested semantic change gets rid of the distinction between "non-reached" and "reached but required again" landmarks. So a landmark is interesting if it must be achieved in the future (either because it is not reached or because it is required again) and all its parents have been reached already. This covers both (a) (with the change from "reached" to "needs to be achieved in the future") and (b) from above.
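To make the contrast concrete, here is an illustrative Python sketch of both rules. The real implementation is C++; the status values and names below are made up, and a needed-again landmark is treated as having been reached at some point (as in the reached bitset):

```python
# Hypothetical landmark statuses for illustration.
REACHED, NOT_REACHED, NEEDED_AGAIN = "reached", "not_reached", "needed_again"


def is_interesting_old(lm, status, parents, all_reached):
    # Old rule: (a) not reached and all parents reached, or
    # (b) all landmarks reached and this one is required again.
    if all_reached:
        return status[lm] == NEEDED_AGAIN
    return (status[lm] == NOT_REACHED
            and all(status[p] != NOT_REACHED for p in parents[lm]))


def is_interesting_new(lm, status, parents):
    # Proposed rule: the landmark must still be achieved in the future
    # (not reached, or needed again) and all parents were reached at
    # some point (they may themselves be needed again now).
    return (status[lm] in (NOT_REACHED, NEEDED_AGAIN)
            and all(status[p] != NOT_REACHED for p in parents[lm]))
```

Under the old rule, a needed-again landmark is never interesting while some other landmark is still unreached; under the new rule it is, as long as its parents have been reached.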
msg10886 (view) Author: clemens Date: 2022-12-15.17:54:33
In an offline discussion with Salomé some days ago, she actually suggested doing everything in a single issue, so maybe let's wait for some more input on this. I'm definitely fine with reverting my changes and discussing them again in a separate thread, but I also don't see much benefit in doing so.
msg10885 (view) Author: silvan Date: 2022-12-15.17:43:02
> What is it that you don't understand? The change in the semantics? I can definitely try to explain that again, I didn't fully understand myself when I wrote that message.
You mentioned specific methods that would compute specific things and I just thought it would be easier to understand what you wrote when seeing the code.

Regarding v1, I don't have any comments, this looks like a good thing to do.

Regarding the change of semantics, I'm not so deeply into the existing semantics in the first place, so I'd say that another landmark expert should also discuss this. And I think it would be helpful to do this in a separate issue.
msg10884 (view) Author: clemens Date: 2022-12-15.17:15:47
Sure, I just created one: https://github.com/aibasel/downward/pull/141
When doing so, I realized the commit messages are missing the [issue X] tags; I'll add that. Note that the current version of the branch now contains the semantic change. If you're rather interested in v1, you can instead look at the following diff: https://github.com/aibasel/downward/compare/main...ClemensBuechner:downward:b5d8fddcf.

What is it that you don't understand? The change in the semantics? I can definitely try to explain that again; I didn't fully understand it myself when I wrote that message.
msg10883 (view) Author: silvan Date: 2022-12-15.14:52:20
This sounds interesting, but I'm not sure I understood what you actually did. Do you have a pull request?

> Then there is of course the question whether we could actually improve the procedure by rethinking about the semantics. But maybe this should be deferred to a separate issue? Any opinions on that?
I would say that if the refactoring done in v1 doesn't change semantics and improves performance, it should be merged. Changing semantics as in the latest version should be done in a different issue in my opinion.
msg10881 (view) Author: clemens Date: 2022-12-15.07:56:52
It turned out that the second change makes things worse again:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v2-lama-first-eval/issue1070-v2-lama-first-b5d8fddcf-1b66e6032-compare.html
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v2-lama-eval/issue1070-v2-lama-b5d8fddcf-1b66e6032-compare.html
So if we want to keep the semantics, I don't think it is worth considering, and I would rather go back to v1, which still gets a BitsetView of reached landmarks and passes it through the functions instead of using the landmark status manager.

However, after that change, the semantic change was very easy to implement and I just did it out of curiosity and ran experiments for that as well. I couldn't really believe my eyes when I saw the results for lama-first:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v3-lama-first-eval/issue1070-v3-lama-first-issue1070-v1-issue1070-v3-compare.html
In summary, everything seems significantly better than before -- I didn't expect that big of a difference. What I find especially impressive: expansion and evaluation scores go up by more than 80 points, and time scores go up by 35 points. The memory footprint is also reduced. The costs are only reduced because of Parcprinter; when subtracting the gain from Parcprinter, the costs found in the first iteration of LAMA are actually worse than before. Also, in some particular instances the runtimes go up significantly (e.g., termes-sat18-strips/p19.pddl from 49.4 seconds to 510.5), but examples exist for the opposite direction as well (e.g., the entire Schedule domain is consistently solved faster; in the extreme case of probschedule-51-1.pddl, total time decreases from 202.4 seconds to 0.3). Finally, the new version solves 15 more tasks than before.

I also have some data on the full LAMA configuration.
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v3-lama-eval/issue1070-v3-lama-issue1070-v1-issue1070-v3-compare.html
There, we of course also observe the coverage increase. The costs overall are slightly worse here compared to v1. Furthermore, we are again shifting from timeouts towards more memory-outs. And that's about it with the data I have parsed for LAMA. So here, the results don't look as convincing as for lama-first... Would anybody happen to know some more useful data we could parse for the anytime search happening in LAMA?

What do you think about these results in general? Are there other configurations we should look into? (I believe preferred operators should not be considered in optimal planning, is that correct?)
msg10877 (view) Author: clemens Date: 2022-12-14.08:50:05
Addendum: I wanted to write lama can get to *lower* weights for WA* in some cases.
msg10876 (view) Author: clemens Date: 2022-12-14.08:45:46
Here are first experimental results. 
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v1-lama-first-eval/issue1070-v1-lama-first-641d70b36-b5d8fddcf-compare.html
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue1070/data/issue1070-v1-lama-eval/issue1070-v1-lama-641d70b36-b5d8fddcf-compare.html

The only configurations I tested are lama-first and lama, with preferred operators activated, because the changes only affect the computation of preferred operators. In this version, the semantics of the code are exactly as in the original. The changes speed up the code a bit, leading to slightly higher coverage (+1 with lama-first, +3 with lama). The time scores for lama-first increase by 7.83 (total time) and 9.73 (search time), and there are fewer time-outs and more memory-outs. (Memory is not affected much, but also shows a positive trend both in absolute value and score.)

The scores for expansions, evaluations, and generated states are negative. However, looking closer at the data, the number of expansions, evaluations, and generations are identical whenever both versions are able to solve the tasks. This confirms that the change is a pure refactoring without unintentional changes in the semantics.

In the full lama runs, we can furthermore observe a slight improvement in the costs found overall. Even though the change is small, it occurs in several domains, and is not just because of a few problem instances. I also attribute this to the fact that the code is faster now, because I assume the anytime procedure gets further and can get to weights for WA* in some cases.

I already see the changes made so far as an improvement and think it could be worth merging them in order to clean up the landmark code. I have another commit I would like to try, which still does not change the semantics but also avoids passing a certain parameter to all functions for computing preferred operators -- I'm not convinced this will make the code more efficient, but maybe more readable. I'll start experiments with that change to see whether it's worth considering before merging.

Then there is of course the question whether we could actually improve the procedure by rethinking about the semantics. But maybe this should be deferred to a separate issue? Any opinions on that?
msg10875 (view) Author: clemens Date: 2022-12-12.13:40:37
I've started refactoring. The code is very convoluted and I'm not completely sure what the intended semantics are, and whether maybe some of it got lost in previous refactorings.

Specifically, the function LandmarkCountHeuristic::landmark_is_interesting is where I'm stuck right now. First, it checks whether all landmarks have been reached in the past. If yes, it checks which landmarks are true in the goal but not in the current state and considers only those interesting. (I'll get to the "if not" case further down.) This is basically equivalent to considering the "needed again" landmarks interesting; a landmark is needed again if it does not hold in the current state and is a greedy-necessary predecessor of an unachieved landmark or required by the goal. The former case is impossible when all landmarks are reached already.

If not all landmarks are reached yet, the function further distinguishes two cases: 
(1) If the currently considered landmark is reached, then it is not considered interesting.
(2) Otherwise, the landmark is considered interesting only if all its parents are reached.

I would argue it makes sense to consider not only the un-reached landmarks interesting here but also those that are "needed again". I believe this would also make the first distinction obsolete, namely whether all landmarks are reached or not. However, I'm not sure whether this breaks some intentions that are maybe somewhat implicit here: does it, for example, make sense to only consider the needed-again stuff interesting after everything has been reached once? Then actions that achieve "new stuff" are preferable to actions that re-achieve stuff that also has to hold in the future but held in the past already.

Also: would the answer to the above question change if there are additional sources of "needed again"? This is particularly relevant with the landmark progression in mind, where, for example, reasonable orderings can lead to required-again landmarks.
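The "needed again" definition described above can be sketched as follows. This is hypothetical Python with made-up names (the real code is C++); it also encodes the fact that a landmark can only be required again after it has been reached:

```python
def landmark_is_needed_again(lm, holds_now, reached, gn_children, is_goal_lm):
    # A landmark is needed again if it was reached, does not hold in the
    # current state, and (a) it is a greedy-necessary predecessor of an
    # unachieved landmark, or (b) it is required by the goal.
    if holds_now[lm] or not reached[lm]:
        return False
    return (is_goal_lm[lm]
            or any(not reached[child] for child in gn_children[lm]))
```

When all landmarks are reached, case (a) can never fire, which matches the observation that only goal landmarks can be needed again at that point.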
msg10873 (view) Author: clemens Date: 2022-12-12.11:58:08
The function LandmarkCountHeuristic::convert_to_landmark_set is annotated with a comment saying that "This function exists purely so we don't have to change all the functions in this class that use LandmarkSets for the reached LMs (HACK)." Today, the only logic relying on this set is finding "interesting" landmarks to compute preferred operators. Instead of using such a set, the corresponding code could simply look up whether a landmark is reached in the landmark status manager itself. I don't expect a large speed-up from this change, but it would (in my opinion) contribute towards having cleaner code in the landmarks part of Fast Downward.
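A minimal sketch of the refactoring idea, assuming a status manager with a per-landmark reached flag (Python for illustration; all names are hypothetical, the real code is C++):

```python
class LandmarkStatusManager:
    """Stand-in for the real status manager; a bitset in the C++ code."""

    def __init__(self, num_landmarks):
        self._reached = [False] * num_landmarks

    def mark_reached(self, lm_id):
        self._reached[lm_id] = True

    def is_reached(self, lm_id):
        return self._reached[lm_id]


# Old style: materialize a set of reached landmarks per state and pass it
# around (the HACK the comment refers to).
def convert_to_landmark_set(manager, num_landmarks):
    return {i for i in range(num_landmarks) if manager.is_reached(i)}


# New style: no intermediate set; query the status manager directly.
def landmark_is_reached(manager, lm_id):
    return manager.is_reached(lm_id)
```

The point is not the lookup cost of either variant but avoiding the per-state set construction entirely.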
History
Date User Action Args
2023-01-25 13:53:05  salome   set  messages: + msg10942
2023-01-18 16:33:06  clemens  set  status: in-progress -> resolved; messages: + msg10930
2023-01-13 14:41:24  clemens  set  messages: + msg10922
2023-01-13 09:55:57  malte    set  messages: + msg10919
2023-01-12 18:44:55  clemens  set  messages: + msg10916
2023-01-12 18:24:18  clemens  set  messages: + msg10914
2023-01-11 11:50:31  clemens  set  messages: + msg10910
2022-12-22 16:22:05  malte    set  messages: + msg10906
2022-12-22 08:50:32  clemens  set  messages: + msg10903
2022-12-22 08:44:13  clemens  set  messages: + msg10902
2022-12-21 16:17:24  clemens  set  messages: + msg10901
2022-12-21 11:31:40  malte    set  messages: + msg10898
2022-12-21 11:12:19  malte    set  messages: + msg10897
2022-12-20 19:39:31  clemens  set  messages: + msg10896
2022-12-20 19:00:19  malte    set  messages: + msg10895
2022-12-20 15:00:42  clemens  set  messages: + msg10894
2022-12-20 13:12:14  salome   set  messages: + msg10893
2022-12-20 10:32:29  silvan   set  messages: + msg10892
2022-12-20 09:56:37  clemens  set  messages: + msg10891
2022-12-19 18:32:40  jendrik  set  messages: + msg10890
2022-12-19 15:34:44  clemens  set  messages: + msg10889
2022-12-16 12:32:44  salome   set  messages: + msg10888
2022-12-16 12:24:43  salome   set  messages: + msg10887
2022-12-15 17:54:33  clemens  set  messages: + msg10886
2022-12-15 17:43:02  silvan   set  messages: + msg10885
2022-12-15 17:15:47  clemens  set  messages: + msg10884
2022-12-15 14:52:20  silvan   set  messages: + msg10883
2022-12-15 08:01:32  clemens  set  summary: This is part of issue987.
2022-12-15 08:01:22  clemens  set  messages: - msg10882
2022-12-15 08:01:17  clemens  set  status: chatting -> in-progress; messages: + msg10882
2022-12-15 07:56:52  clemens  set  messages: + msg10881
2022-12-14 08:50:05  clemens  set  messages: + msg10877
2022-12-14 08:45:46  clemens  set  messages: + msg10876
2022-12-12 13:40:37  clemens  set  messages: + msg10875
2022-12-12 11:58:08  clemens  create