Issue551

Title: Use new task interface in landmark heuristics
Priority: feature
Status: resolved
Superseder:
Nosy List: florian, jendrik, malte, manuel, silvan
Assigned To: manuel
Keywords:
Optional summary:

Created on 2015-07-15.11:48:19 by florian, last changed by malte.

Messages
msg5802 (view) Author: malte Date: 2016-11-02.09:47:04
The performance difference between code produced with -O0 and -O3 can be quite
large. The last time I checked, the difference was a factor of 3-10, but it
depends a lot on the code in question, so if we're interested in what it is in
our case, we should measure. I think if we want to use -O0, this should be
discussed with everyone.
msg5801 (view) Author: manuel Date: 2016-11-02.09:27:43
Apple answered that the bug is triggered by a crash in the global numbering pass
of LLVM. As a workaround they proposed to alter the optimization setting for the
file that exposes the bug.

At this point a small but important fact might have been lost in our discussion.
The bug is only triggered when compiling Fast Downward into a debuggable binary.
Regarding optimizations and debugging, the compiler documentation says the following:

-O0 Reduce compilation time and make debugging produce the expected results. 
This is the default.

Now, there are two options:

1. Set optimization level to -O0 for debugging.
2. Update Xcode and Mac OS X to recent versions.

I would recommend both options, in order to be consistent with the compiler's
documentation and to be sure that FD compiles on a recent Mac OS X. For testing
the minimum required compiler version, we can maintain different Xcode versions
on a recent Mac OS X. While it is easy to switch between compiler versions, it is
difficult to downgrade to an older Mac OS X version.
msg5799 (view) Author: malte Date: 2016-10-28.19:23:47
From the numbers I assume 7.3.1 is a bugfix release of 7.3, so I think we should
rather update to 7.3.1 than 7.3. Regarding 7.3.1 vs. 8: given that updates cause
work, we perhaps want to make as few updates as possible. For this it would
probably make sense to go to the newest version, unless we suspect it's not very
stable yet (considering how new it is and that it is a new major version). I
have no strong preference, just pick whatever you prefer among 7.3.1 and 8.


Regarding Florian's point on requiring other people to update their compiler:

I think serious Mac users are often close to the cutting edge because there is a
bit of an immediate updating culture, and also it's usually personal/workstation
hardware, not server hardware. So I think in the Mac world it's less of a
problem to require a rather new version than, say, in the Linux world, where
server infrastructure for things like sciCORE tends to move quite slowly.

If there is an easy workaround for the bug, we can still try to maintain support
with older versions, so it would be good to know more about what causes this and
if/how it can be avoided. But if we can't get this information or if it cannot
be easily avoided, I suggest we require a recent version.

In that case, though, we should do some of the following to warn Mac users:

1. Write an email to the public mailing list about this (after it's fixed).
2. Update our Mac build documentation to mention the minimum required version
numbers and that there are compiler crashes with older versions, such as version
X (whatever our current version is).
3. Update the CMake files to warn or abort on Macs with too old compiler versions.

I think 1.+2. would be enough.
msg5791 (view) Author: florian Date: 2016-10-27.13:03:40
Do we know what exactly triggers the bug and is there a known workaround for it?

If we update the buildbot to a 2016 compiler, we also require everyone else using a 
Mac to do the same. Compared to Linux (g++ 4.8, from 2013) and Windows (VC 12, from 
2013), this seems like a large change.

Manuel, could you link the bug report here for reference, or is it one of those 
Apple-ID-login-only things?
msg5790 (view) Author: manuel Date: 2016-10-27.12:40:17
The bug reported in msg5774 is fixed in XCode 7.3.

Since Apple has confirmed the bug, we should update XCode to a newer version on
our MacOSX build bot server.

These are the release dates of the relevant versions:
Xcode 7.3: March 21, 2016
Xcode 7.3.1: May 3, 2016
Xcode 8: September 12, 2016

Malte, shall we update to the latest version or do you prefer to use the oldest
version that successfully compiles our code?
msg5784 (view) Author: malte Date: 2016-10-26.19:10:16
Thanks! Can you also open an issue for the work that still needs to be done?
(How to deal with task transformations in general path-dependent heuristics such
as the landmark heuristic.)
msg5782 (view) Author: jendrik Date: 2016-10-26.19:04:49
issue685 deals with restoring the "transform" argument.
msg5779 (view) Author: malte Date: 2016-10-26.11:58:59
I see the problem, but one of the next things we should do is get rid of the
"cost_type" arguments of heuristics, and then we will need "transform" for LAMA
and similar configurations.

So I think we should put "transform" back in. It's really meant to be an
argument of the heuristic base class (Heuristic), so it shouldn't disappear in a
derived class like the landmark-count heuristic. With our current design we
cannot really enforce that base class arguments are always included in derived
classes, but conceptually they really should be.

The problem of how to deal with task transformations in path-dependent
heuristics needs to be solved more generally. I suggest opening an issue for
that, perhaps referring back to the discussion in this issue for context.
msg5778 (view) Author: jendrik Date: 2016-10-26.10:48:45
This was intentional, but maybe it wasn't the best solution to the following 
problem: the landmark heuristics currently can't handle arbitrary task 
transformations since they need GlobalStates for updating the path-dependent 
information. Allowing arbitrary task transformations for landmark heuristics  
still needs a substantial amount of work.

Since we currently have only one task transformation that users can specify on 
the commandline, having the "transform" argument wouldn't hurt. But as soon as we 
add another user-definable transformation (e.g. the TNF transformation from 
issue576), users could invoke the landmark heuristics incorrectly.

If you don't like this intermediate solution, I'm happy to restore the 
"transform" parameter.
msg5777 (view) Author: malte Date: 2016-10-26.10:31:01
Merging this removed the "transform" parameters of lmcount and lm_ff_syn:

http://www.fast-downward.org/Doc/Heuristic?action=diff&rev1=73&rev2=74
http://www.fast-downward.org/Doc/Synergy?action=diff&rev1=13&rev2=14

I think this wasn't intentional, right? If it was unintentional, I suggest we
open a new issue to fix this.
msg5776 (view) Author: manuel Date: 2016-10-25.17:59:17
Yes, it should. Good catch!
msg5775 (view) Author: malte Date: 2016-10-25.17:53:42
Should this:

$ g++ -m32 -o2 issue551-mac-bug.cc

be this:

$ g++ -m32 -O2 issue551-mac-bug.cc

instead? ("-o2" would set the result file name to "2" instead of setting the
optimization level).

Thanks for filing the bug report!
msg5774 (view) Author: manuel Date: 2016-10-25.17:22:39
Building Fast Downward fails on our MacOS system because of a segmentation fault
when compiling the FD code.

The code in issue551-mac-bug.cc
(http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-mac-bug.cc) reduces FD to a
program where the failure still happens when compiling it to a 32 bit binary
with optimization level 2 or 3.

$ g++ -m32 -o2 issue551-mac-bug.cc

After Florian, Jendrik, and I had investigated the code, I wrote a bug report
to Apple, since we believe the source of the error is not in our code.

$ g++ --version
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: i386-apple-darwin14.5.0
Thread model: posix

on 

Mac OS X 10.10.5 (14F1713)
msg5771 (view) Author: manuel Date: 2016-10-24.11:52:47
It is done! Thank you all for the help and inputs.
msg5770 (view) Author: silvan Date: 2016-10-24.11:28:05
> From my point of view this can be merged now. Malte, Silvan, what do you think?

I also think that it's time to merge this one! :-)

I think it would be useful to rename two classes:
HMLandmarks -> LandmarkFactoryHM
LandmarkGraphMerged -> LandmarkFactoryMerged
to match the naming patterns of the other factories. I'll do this in a separate
(default) commit after you have merged the branch.
msg5766 (view) Author: malte Date: 2016-10-21.17:18:06
Looks good to me. Thank you to you and Manuel for the work (and patience)!
msg5765 (view) Author: jendrik Date: 2016-10-21.17:14:26
Fixing the get_task_from_options() problem was easier than imagined since we only 
need to create a special Options object for Exploration. It only needs the keys 
"transform", "cost_type" and "cache_estimates".

The code changes in the non-landmark non-CEGAR parts of the code should be 
minimal now. Cleaning up the cost_type code is issue684. I added a comment for 
FactPair::no_fact and removed the "extern" from the template function.

From my point of view this can be merged now. Malte, Silvan, what do you think?
msg5754 (view) Author: jendrik Date: 2016-10-19.12:13:03
Thanks!

I suggest we discuss how to make sure that we always get the same AbstractTask 
from the same options object in the Fast Downward meeting on Friday.
msg5753 (view) Author: manuel Date: 2016-10-19.11:08:36
We discussed also running experiments with the seq-sat-lama-2011 configuration
in order to see if the increase in landmark generation time affects the quality
measure.

Here are the results, which look good:
http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-v5-seq-sat-lama-2011-issue551-base-issue551-v5-compare.html
msg5751 (view) Author: malte Date: 2016-10-18.12:25:08
As discussed yesterday, given the special situation of the landmarks code, the
increase in landmark generation time isn't a show-stopper. However, before
merging this we need to review the code changes in the non-landmark non-CEGAR
parts of the code, in particular heuristic.{h,cc} and abstract_task.{h,cc}.
msg5714 (view) Author: jendrik Date: 2016-10-12.11:13:02
Thanks! I think the plots for the two optimal configurations bjolp and bjolp-ocp 
look good. Relative landmark generation time goes up, but usually only in the 
cases where it is below one second. The additional memory consumption is also 
uncritical in my opinion. Search time for bjolp even has a negative trend (i.e., 
search is faster). For bjolp-ocp, search time shows neither a visible negative 
nor a positive trend.

For the satisficing configurations there is still work to do. 

The increase in landmark generation time is too high for lama_first (using 
lm_merged and lm_rhw), lm_exhaust and lm_rhw. Maybe we can also speed up lm_zg a 
bit. lm_hm looks fine to me.

Search time shows mixed results: it has a negative trend for lama-first, but a 
slightly positive trend for lm_rhw (which seems to be a contradiction). 
lm_exhaust and lm_zg have slightly positive trends (slower) and lm_hm has a 
negative trend (faster). Judging by these results, I think we should try to 
reduce the search time, but I wouldn't see it as problematic if we don't succeed, 
since the results are already mixed. Most of the additional runtime probably 
stems from the GlobalState to State conversion, something we'll have to think 
about later anyway.

Let's discuss offline how to speed things up.
msg5707 (view) Author: manuel Date: 2016-10-03.10:48:48
The scatter plots:
http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-base-issue551-v4-opt.zip
http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-base-issue551-v4-sat.zip

The tables with memory and time attributes:
http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-v4-lama-opt-issue551-base-issue551-v4-compare-memory-times.html
http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-v4-lama-sat-issue551-base-issue551-v4-compare-memory-times.html
msg5704 (view) Author: jendrik Date: 2016-09-30.14:08:10
You can use the "landmarks_generation_time" attribute for this.
msg5703 (view) Author: malte Date: 2016-09-30.14:02:56
In addition to looking at total time, I'd be interested in runtime measurements
(and corresponding scatter plots) for the landmark computation time.
msg5700 (view) Author: jendrik Date: 2016-09-30.13:41:07
I think we have to take a closer look at the increase in runtime (and maybe later 
also do the same for memory). Could you please produce relative scatter plots for 
total_time for all tested configurations? If you're using the latest 
common_setup.py (e.g. from issue591) you can produce a relative scatter plot with 

exp.add_scatter_plot_step(relative=True, attributes=["total_time"])

If it turns out that the increase in runtime is not acceptable, we'll probably 
have to reduce the number of times we access the task interface and precompute 
some information instead.
msg5699 (view) Author: manuel Date: 2016-09-30.12:19:18
After some iterations of bugfixing, the results of the experiments look good.

http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-v4-lama-opt-issue551-base-issue551-v4-compare.html
http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-v4-lama-sat-issue551-base-issue551-v4-compare.html

The configuration lama-first has minor differences in number of expansions for
very few tasks. I noticed that the number of expansions changes for the affected
instances when FD is compiled on different platforms. This already happens in
the default branch.
msg5690 (view) Author: jendrik Date: 2016-09-28.23:45:12
I think such a discussion could be very helpful indeed. Let's schedule a meeting 
on the research group mailing list.
msg5687 (view) Author: malte Date: 2016-09-28.22:34:03
Please proceed in this way if this solves the immediate problem, but more
generally this is an example of what I mean when I say that we need to think
more globally about the "plumbing" between our various components, especially
when the same object can be used in multiple contexts because they can be
assigned to variables (as is the case for landmark factories, if I remember
correctly).

I don't feel very comfortable making these decisions on an ad-hoc, case by case
basis. We should probably not delay this question for much longer, but rather
find an opportunity to discuss it properly. I think the correct way to discuss
topics like these is in a dedicated design meeting (i.e., not one of the regular
Fast Downward meetings). If you also think this is a useful thing worth doing
soon, let's schedule such a meeting for October. If you think that now is not
the best time for this topic, I'm also happy to defer it.
msg5679 (view) Author: jendrik Date: 2016-09-28.13:59:54
Florian and I discussed this again and we'll try to find a solution that allows 
us to keep the lm_cost_type option in the factories, probably by passing the 
AbstractTask shared_ptr to the factories.
msg5678 (view) Author: jendrik Date: 2016-09-28.13:41:31
I added a note to the developer docs.

For the issue at hand we'll implement the proposed solution for now (passing a 
cost-adapted TaskProxy to the factories).
msg5676 (view) Author: malte Date: 2016-09-28.11:55:43
Perhaps it would be useful to write down somewhere in our design notes when to
use shared_ptr. We already had the guiding rule "to indicate ownership". There
seems to be a second case "for abstract tasks, if we need to create a delegating
task based on it" (even if it's only a temporary).

Conceptually, it's less clear that this is desirable, but with our current
design you cannot create a delegating task without (co-) owning the task. This
is a bit ugly and non-obvious, so it would be good to document it somewhere to
make it less likely that we stumble over this again in the future.
msg5673 (view) Author: florian Date: 2016-09-28.11:24:45
This seems to be related to a problem we ran into with issue598 (see msg5670). In 
both cases we want to create a derived task that adapts the cost in places where we 
only had access to a TaskProxy before.
msg5672 (view) Author: jendrik Date: 2016-09-28.00:01:08
OK, maybe I can clarify things by being more concrete. 

Currently, each landmark factory has its lm_cost_type option. This doesn't fit 
well with the task interface, since ideally we want to pass a const TaskProxy to 
a factory. Since we can't adapt costs for TaskProxy objects, we need to adapt the 
costs outside of the factories to be able to pass task proxies.

(In principle, we could also store the lm_cost_type in each factory, pass an 
AbstractTask to LandmarkFactory::compute_lm_graph(), let the method adapt the 
costs and pass a cost-adapted TaskProxy to the generate_landmarks() method of the 
child classes. However, this doesn't work since the generate_landmarks() method 
of lm_merged calls the compute_lm_graph() method of the base class again.)

One solution would be to move the lm_cost_type option to the landmark count 
heuristic and let it pass the cost-adapted task proxy to the landmark factory. 
The only drawback we see for this solution is that we would not be able to use 
different lm_cost_types for different factories that are combined with lm_merged. 
However, as said before, only one factory currently uses lm_cost_type at all.
The benefit of this solution is that the landmark factories would not have to 
know about AbstractTask, CostAdaptedTask and cost_type.
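The division of responsibilities described above can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration, not Fast Downward's actual classes: the real AbstractTask, CostAdaptedTask and TaskProxy have much richer interfaces, and total_operator_cost is an invented helper standing in for whatever the factory computes.

```cpp
#include <memory>
#include <vector>

// Hypothetical, heavily simplified stand-ins for illustration only.
enum class OperatorCost { NORMAL, ONE };

struct AbstractTask {
    std::vector<int> operator_costs;
};

// Delegating task that adapts costs, in the spirit of CostAdaptedTask.
// Note that it (co-)owns its parent via shared_ptr, as discussed in msg5676.
struct CostAdaptedTask : AbstractTask {
    CostAdaptedTask(const std::shared_ptr<AbstractTask> &parent,
                    OperatorCost cost_type)
        : parent(parent) {
        for (int cost : parent->operator_costs)
            operator_costs.push_back(
                cost_type == OperatorCost::ONE ? 1 : cost);
    }
    std::shared_ptr<AbstractTask> parent;
};

// Read-only view; it cannot adapt costs itself.
struct TaskProxy {
    const AbstractTask &task;
    int get_operator_cost(int index) const {
        return task.operator_costs[index];
    }
};

// The factory never sees cost types: it only receives a TaskProxy whose
// costs have already been adapted by the heuristic.
inline int total_operator_cost(const TaskProxy &proxy, int num_operators) {
    int total = 0;
    for (int i = 0; i < num_operators; ++i)
        total += proxy.get_operator_cost(i);
    return total;
}
```

In this setup the heuristic owns lm_cost_type, constructs the CostAdaptedTask once, and hands only the proxy to the factory.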
msg5669 (view) Author: malte Date: 2016-09-27.17:05:26
I don't understand the suggestion. What do you mean by "all landmark factories
would have to use the same cost type"? LAMA currently uses multiple landmark
factories with different cost types.

Your second paragraph is also very unclear to me. I don't know what you mean by
"the landmark graph" and "the landmark heuristic" in general. For example, when
I generate landmarks for operator-counting constraints (not sure if this is
currently implemented, but it is a natural thing to do), there is no landmark
heuristic involved. Similarly when using landmarks for enriching the states as
in "When landmarks met abstractions", when using landmarks for search control as
in Jörg Hoffmann's work, etc.
msg5667 (view) Author: jendrik Date: 2016-09-27.16:41:23
One remaining task for this issue is to get rid of "lm_cost_type" in the landmark 
factories (currently only used by LandmarkFactoryRpgSasp). Ideally, we would like 
to pass a TaskProxy to the landmark factories (then the factories wouldn't need 
to know anything about adjusting costs). However, then we would have to know 
about lm_cost_type outside the factories, e.g. in the landmark heuristic. We 
could in fact move the lm_cost_type parameter to lmcount(); however, then all 
landmark factories would have to use the same cost type. Would you be fine with 
this, Malte?

This brought us to the question whether we really need lm_cost_type. In other 
words, why can't we just use the same costs for the landmark graphs as for the 
landmark heuristic? LAMA, e.g., always uses the same cost type for both (but a 
different cost type for the search).
msg5657 (view) Author: manuel Date: 2016-09-22.11:27:44
Here are the results of the last experiment: 
http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-v2-lama-sat-issue551-base-issue551-v2-compare.html
http://ai.cs.unibas.ch/_tmp_files/heusner/issue551-v2-lama-opt-issue551-base-issue551-v2-compare.html

I tried to figure out the reason for the different number of expansions when
running the lama-first configuration on the different revisions. 

In a first step, I narrowed down the lama-first configuration and found the
following minimal configuration where there is still a difference in the number
of expansions:

--heuristic "hlm,hff=lm_ff_syn(lm_rhw())" --search
"lazy_greedy([hff,hlm],preferred=[hlm])"

Because I expected to find the source of this change in the computation of
preferred operators in lmcount, I tested the following configuration: 

--heuristic "hlm=lmcount(lm_rhw())" --search "lazy_greedy([hlm],preferred=[hlm])" 

Surprisingly, this configuration shows no difference in the number of expansions
between the concerned revisions.

In a second step, I had a look at the diff between revisions issue551-base and
62d207e1f9c52856704c4c6d3353a9535e4843e6. Between these two revisions the number
of expansions changes for lama-first run on some problem instances.

I did not find the reason for the change in number of expansions so far. From
what I have observed I assume it has to do with LamaFFSynergy and preferred
operators.
msg5640 (view) Author: malte Date: 2016-09-18.23:43:57
I would have used negative numbers for axioms (-1 - axiom_id) since this makes
the axiom IDs independent of the number of operators and would mean that the
OperatorProxy-to-ID conversion function doesn't need to obtain the task proxy.

We should also decide where we want to say "operator_or_axiom" in all contexts
like these. Right now we tend to use "Operator" when we mean "(real) operator or
axiom", e.g. in names like OperatorProxy.
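A minimal sketch of the encoding suggested above, with invented helper names (the actual conversion functions live in the pull request's src/search/landmarks/util.h): real operators keep their non-negative IDs, axioms are mapped into the negative range via -1 - axiom_id, so decoding needs neither the number of operators nor a task proxy.

```cpp
// Hypothetical helper names for illustration; operators and axioms share
// one integer, with axioms encoded as -1 - axiom_id.
inline int encode_operator_or_axiom(bool axiom, int id) {
    return axiom ? -1 - id : id;
}

inline bool is_axiom(int encoded_id) {
    return encoded_id < 0;
}

inline int decode_axiom_id(int encoded_id) {
    return -1 - encoded_id;
}
```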
msg5629 (view) Author: jendrik Date: 2016-09-15.14:08:35
One part of the patch that we should discuss is that we currently
store operator and axiom ids in one integer where we used to store a
GlobalOperator pointer (e.g. in ExUnaryOperator). The code uses the 
new id-conversion functions in util.h (see below) to convert back and 
forth between id and OperatorProxy. Malte, do you prefer this solution, 
or would you rather like us to store OperatorProxys? Or do you have a 
different idea?

(In ExUnaryOperator we can't just store the operator id for operators and -1 
for axioms like in UnaryOperator, since the code needs to know which axiom is 
stored.)

https://bitbucket.org/manuel_h/downward-issues-issue551/pull-requests/1/issue551/diff#chg-src/search/landmarks/util.h
msg5628 (view) Author: jendrik Date: 2016-09-15.13:49:05
I had another look at the pull request, left some comments and pushed some changes myself. Could you please take care of the remaining 
comments, Manuel?
msg5470 (view) Author: silvan Date: 2016-06-28.12:22:57
Florian also mentioned that we may have problems with the (path dependent)
landmark heuristic when using a task for the landmarks (landmark
graph/exploration) that is not identical to the task used by the search. This
problem is probably not easy to solve and rather a general problem with path
dependent heuristics.
msg5469 (view) Author: silvan Date: 2016-06-28.12:13:00
After merging issue592, Manuel created a new clean pull request for this issue
(against default branch):
https://bitbucket.org/manuel_h/downward-issues-issue551/pull-requests/1/issue551/diff

I am done with my first round of comments.
msg5423 (view) Author: florian Date: 2016-06-03.20:25:33
Added a cross reference to issue592 to the summary.
msg5124 (view) Author: malte Date: 2016-01-20.16:02:09
Sure.
msg5123 (view) Author: florian Date: 2016-01-20.16:00:54
Yes, there is an issue for that (issue559), I added a cross reference there.

If that's ok, I will add the notify_* methods in issue416, because with these
methods we may be able to fix issue416 before fixing this one.
msg5120 (view) Author: malte Date: 2016-01-20.12:08:49
I think there would be a certain logic/consistency in adding such an
initialization method. Path-dependent heuristics need to know about the search
graph as it is being explored. We already have a notification method for the
transitions ("reach_state"), but a notification method for the root is missing.

Let's find a less generic name than "initialize" for this, though. I would see
this as a complement to "reach_state" and suggest that we pick names for these
two that communicate that they have a parallel purpose. For example, how about
"notify_initial_state" and "notify_state_transition"? We can keep the
reach_state name as an alias for the time being.
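The pair of notification methods proposed here might look roughly like this; a sketch under the naming assumption from this message, with State and the derived class heavily simplified (the recording class is invented for illustration):

```cpp
#include <vector>

struct State { int id; };  // stand-in for a search-level state

class Heuristic {
public:
    virtual ~Heuristic() = default;
    // Complementary notifications for path-dependent heuristics:
    // one for the root of the search graph ...
    virtual void notify_initial_state(const State &) {}
    // ... and one for every transition explored by the search.
    virtual void notify_state_transition(const State & /*parent*/,
                                         int /*op_id*/,
                                         const State & /*child*/) {}
};

// A path-dependent heuristic overrides both; this toy version just
// records which states it has been told about.
class PathDependentHeuristic : public Heuristic {
public:
    std::vector<int> notified_states;
    void notify_initial_state(const State &state) override {
        notified_states.push_back(state.id);
    }
    void notify_state_transition(const State &, int,
                                 const State &child) override {
        notified_states.push_back(child.id);
    }
};
```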


This is also related to one of the other big open problems we still have, which
is thinking about what happens if the same component is used in different
contexts, like heuristics that are used in multiple searches. Let's call
plug-ins that support this "sharable plugins". We need to make it clear which
sharing usages the heuristic is supposed to support, and which ones would be
errors. And then we should make sure that the erroneous usages are reported
rather than just leading to corrupt results. If two searches that run in a row
(or concurrently, or in an interleaved fashion) use the same landmark heuristic
object, how should it behave? This affects all heuristics with internal state --
even if it's just the heuristic value cache that all heuristics have.

Do we already have an issue for this? If yes, we should add a link to this
discussion. If not, it would be good to open one. For what it's worth, I feel
uneasy that we use shared_ptr for all new plugin types now because that fails to
communicate that in many cases it's really important that they are *not* shared
with anyone. Perhaps we can use type names like SharablePlugin and
SomethingelsePlugin to make this really clear, and have one of them return
shared_ptr and the other one unique_ptr.
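The naming idea at the end of the message could be expressed through the factory return types; all type names below are invented for illustration:

```cpp
#include <memory>

struct Evaluator {};  // stand-in for some plugin component

// Hypothetical naming scheme: the factory's return type makes the
// ownership contract explicit.
struct SharablePlugin {
    // May be co-owned by several searches.
    static std::shared_ptr<Evaluator> create() {
        return std::make_shared<Evaluator>();
    }
};

struct ExclusivePlugin {
    // Has exactly one owner; accidental sharing becomes a compile error.
    static std::unique_ptr<Evaluator> create() {
        return std::unique_ptr<Evaluator>(new Evaluator());
    }
};
```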
msg5119 (view) Author: florian Date: 2016-01-20.10:29:52
Ok, this means that three of the four occurrences are not problematic, because
they currently use GlobalStates from their parameters and should keep those
parameters (or something similar) in the long run.

What about the constructor that uses the initial state from a global variable,
though? It should get the initial state from the search, but when it is
constructed it does not know about the search yet. Do we need an initialization
method that has the initial state as a parameter?
msg5117 (view) Author: malte Date: 2016-01-19.19:48:22
The path-dependent heuristics must be able to process information of the
original task. There is no way to implement reach_state otherwise because the
information that must be processed requires understanding search-level operators.

More generally, a few times we discussed the point that it's not sufficient to
think of "abstract tasks" the heuristics operate on. We need "task
transformations" because inputs (states) and outputs (preferred operators) need
to be translated somewhere. For path-dependent heuristics, the landmark storage
and updating in reach_state is closely related to the "input" side.

How to best resolve this is not clear at the moment. Perhaps the easiest way
would be to split off path-dependent and path-independent heuristics as we
discussed a few times before. In that case, path-independent heuristics don't
need virtual methods on "search-level" states. Path-dependent heuristics will
need more flexibility.

I don't think this is a problem at the moment. We will hopefully see more
clearly what the design for this should look like when we have converted all
heuristics and can revisit the current hacks in Heuristic.
msg5116 (view) Author: florian Date: 2016-01-19.19:42:10
I saw a problem with the direction in which we moved the interface of most of
the other heuristics:

Where we previously implemented compute_heuristic(const GlobalState &), we
started converting the global state to a local state in the first line and then
called the overload compute_heuristic(const State &). I thought the long-term
plan was to drop compute_heuristic(const GlobalState &) from all classes except
the base class and do the conversion there. In this case, the landmark heuristic
wouldn't see the state provided by the search, only the base class would. Did
you plan to make both methods virtual and override the one for global states in
the landmark heuristic?

The other issue is mainly a technicality: where we currently use the global
initial state in the initialization, we now have to find a way to access the
initial state of the task used in the search.
msg5115 (view) Author: malte Date: 2016-01-19.19:29:53
> However, this would mean that every state is registered and stored twice: once
by the
> search and once by the heuristic. Is there a way we can avoid this?

The *only* (real) reason why we have state registries, per-state information
etc. is so that the landmark heuristic doesn't have to store the states for its
internal map but can use the state IDs provided by the search. :-)

So yes, the landmark heuristic should use the state IDs provided by the search.
I think this is actually fine. Conceptually, it seems correct to me that this
works on the level of states from the search, not transformed states. The same
is true for states stored in the heuristics' heuristic caches, I think.

We need to document this properly, but at the moment I don't think there is a
problem there. I hope I'm not overlooking one. :-)
msg5109 (view) Author: florian Date: 2016-01-19.10:20:50
When working on issue416, we noticed that there is a problem with the per state
information in the landmark heuristic. Currently, four methods access it:

LandmarkCountHeuristic::LandmarkCountHeuristic
--> LandmarkStatusManager::set_landmarks_for_initial_state
    --> LandmarkStatusManager::get_reached_landmarks

LandmarkCountHeuristic::compute_heuristic
--> LandmarkCountHeuristic::set_exploration_goals
    --> LandmarkStatusManager::get_reached_landmarks

LandmarkCountHeuristic::get_heuristic_value
--> LandmarkStatusManager::update_lm_status
    --> LandmarkStatusManager::get_reached_landmarks

LandmarkCountHeuristic::reach_state
--> LandmarkStatusManager::update_reached_lms
    --> LandmarkStatusManager::get_reached_landmarks

All of these methods use GlobalState objects as their parameter, which they
either get from the search or from g_initial_state. It is important that the
states that are used all come from the same registry.

If we switch the heuristic to use the task interface and have a potentially
transformed task inside the heuristic, I don't see a way around storing
information for local states. However, this would mean that every state is
registered and stored twice: once by the search and once by the heuristic. Is
there a way we can avoid this?
msg4748 (view) Author: manuel Date: 2015-11-10.10:04:19
Consider issue592 for this issue.
msg4369 (view) Author: florian Date: 2015-07-15.11:48:19
The landmark heuristics should use the task they get from the options object
(as a TaskProxy) instead of the global task.
History
Date                 User     Action  Args
2016-11-02 09:47:04  malte    set     messages: + msg5802
2016-11-02 09:27:43  manuel   set     messages: + msg5801
2016-10-28 19:23:47  malte    set     messages: + msg5799
2016-10-27 13:03:40  florian  set     messages: + msg5791
2016-10-27 12:40:17  manuel   set     messages: + msg5790
2016-10-26 19:10:16  malte    set     messages: + msg5784
2016-10-26 19:04:49  jendrik  set     messages: + msg5782
2016-10-26 11:58:59  malte    set     messages: + msg5779
2016-10-26 10:48:45  jendrik  set     messages: + msg5778
2016-10-26 10:31:01  malte    set     messages: + msg5777
2016-10-25 17:59:17  manuel   set     messages: + msg5776
2016-10-25 17:53:42  malte    set     messages: + msg5775
2016-10-25 17:22:39  manuel   set     messages: + msg5774
2016-10-24 11:52:47  manuel   set     status: in-progress -> resolved; messages: + msg5771
2016-10-24 11:28:05  silvan   set     status: reviewing -> in-progress; messages: + msg5770
2016-10-21 17:18:06  malte    set     messages: + msg5766
2016-10-21 17:14:26  jendrik  set     status: in-progress -> reviewing; messages: + msg5765
2016-10-19 12:13:03  jendrik  set     messages: + msg5754
2016-10-19 11:08:36  manuel   set     messages: + msg5753
2016-10-18 12:25:08  malte    set     messages: + msg5751
2016-10-12 11:13:02  jendrik  set     messages: + msg5714
2016-10-03 10:48:48  manuel   set     messages: + msg5707
2016-09-30 14:08:10  jendrik  set     messages: + msg5704
2016-09-30 14:02:56  malte    set     messages: + msg5703
2016-09-30 13:41:07  jendrik  set     messages: + msg5700
2016-09-30 12:19:18  manuel   set     messages: + msg5699
2016-09-28 23:45:12  jendrik  set     messages: + msg5690
2016-09-28 22:34:03  malte    set     messages: + msg5687
2016-09-28 13:59:54  jendrik  set     messages: + msg5679
2016-09-28 13:41:31  jendrik  set     messages: + msg5678
2016-09-28 11:55:43  malte    set     messages: + msg5676
2016-09-28 11:24:45  florian  set     messages: + msg5673
2016-09-28 00:01:08  jendrik  set     messages: + msg5672
2016-09-27 17:05:26  malte    set     messages: + msg5669
2016-09-27 16:41:23  jendrik  set     messages: + msg5667
2016-09-22 11:27:44  manuel   set     messages: + msg5657
2016-09-18 23:43:57  malte    set     messages: + msg5640
2016-09-15 14:08:35  jendrik  set     messages: + msg5629
2016-09-15 13:49:05  jendrik  set     messages: + msg5628
2016-06-28 12:22:57  silvan   set     messages: + msg5470
2016-06-28 12:13:00  silvan   set     messages: + msg5469; summary: Wait for issue592. ->
2016-06-03 20:25:33  florian  set     messages: + msg5423; summary: Wait for issue592.
2016-05-18 14:08:33  silvan   set     nosy: + silvan
2016-01-20 16:02:09  malte    set     messages: + msg5124
2016-01-20 16:00:54  florian  set     messages: + msg5123
2016-01-20 12:08:49  malte    set     messages: + msg5120
2016-01-20 10:29:52  florian  set     messages: + msg5119
2016-01-19 19:48:22  malte    set     messages: + msg5117
2016-01-19 19:42:10  florian  set     messages: + msg5116
2016-01-19 19:29:53  malte    set     messages: + msg5115
2016-01-19 10:20:50  florian  set     messages: + msg5109
2015-11-10 10:04:19  manuel   set     messages: + msg4748
2015-10-27 16:34:48  manuel   set     status: unread -> in-progress; nosy: + manuel; assignedto: manuel
2015-07-15 17:14:05  jendrik  set     nosy: + jendrik
2015-07-15 11:48:19  florian  create