Issue123

Title:       FF/LM synergy is (at least sometimes) much slower than using the two heuristics independently
Priority:    feature
Status:      resolved
Superseder:
Nosy List:   erez, malte, silvan
Assigned To: silvan
Keywords:    1.0
Summary:

Created on 2010-09-07.15:46:18 by malte, last changed by silvan.

Messages
msg3684 (view) Author: silvan Date: 2014-10-06.22:34:38
Okay, so finally I can call this one merged (even if not really resolved,
unfortunately).
msg3676 (view) Author: malte Date: 2014-10-06.14:31:33
I think we shouldn't leave a head open -- this would look too much like an open
TODO in the repository. I think the common solution in cases like this is to
merge into default, but keep none of the code from the issue branch in the
repository. This is called a "null merge". There is some info here:
https://docs.python.org/devguide/faq.html#how-do-i-make-a-null-merge but we
should adapt them to our situation (e.g. we want to merge the experiment stuff
even if we don't want to merge the search code changes).
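
For reference, a minimal sketch of such a null merge with Mercurial, roughly
following the linked devguide recipe (the branch name and commit message below
are just assumptions and would need to be adapted):

  # assuming we are on the default branch and the issue branch is called "issue123"
  hg update default
  hg merge --tool=internal:fail issue123   # start the merge, but take no file contents
  hg revert --all --rev .                  # keep none of the changes from the issue branch
  hg resolve --all --mark                  # mark everything as resolved
  hg commit -m "Null-merge the issue123 branch."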
msg3675 (view) Author: silvan Date: 2014-10-06.14:15:06
Should I simply close the branch in the repository? This leaves the extra head,
but we don't want to merge anything, do we?
msg3672 (view) Author: malte Date: 2014-10-06.12:35:12
Not mandatory.
msg3671 (view) Author: silvan Date: 2014-10-06.12:34:40
I can do it, but the scripts don't use the common_setup from the experiments
directory. Is that mandatory? Maybe I could profit from changing to that setup
anyway, but I don't want to maintain two setups in parallel.
msg3657 (view) Author: malte Date: 2014-10-05.13:09:03
OK, Silvan, before we close this, given that the configurations are tricky, I
suggest we add an experiment for this to the repository for future reference,
and then mark this as resolved. Can you add an experiment to bitbucket and make
a pull request?
msg3656 (view) Author: erez Date: 2014-10-05.00:31:54
I don't even remember raising this issue (I always thought it was Silvia).
In any case, I think we can leave this alone for now - we will need to do 
thorough testing after the landmark code refactoring.
msg3631 (view) Author: malte Date: 2014-10-04.19:20:10
Erez, you raised the issue back in the day. Do you think there's still something
here that needs addressing? (Independently of this issue we should refactor the
landmark code, but that is a separate story.)
msg3599 (view) Author: silvan Date: 2014-09-29.12:35:48
Results:

http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-09-28-issue123-compare-lama11-comp.html

http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-09-28-issue123-compare-lama11-first-it-comp.html

The results look quite bad -- bad enough that I even wonder whether the statement
from the creation of this issue, i.e. msg508 (what a low number!), still holds. For
example, in the mentioned blocks domain, the separated config is not at all
faster than the synergy.

Malte's last statement before the revival (msg2600) was that "it's time to
read the code". Should we do this, or "live with the synergy", as there
apparently indeed is a synergy when using the synergy?
msg3589 (view) Author: malte Date: 2014-09-28.19:06:48
> Shouldn't we remove the cost_types from this heuristic definition because it
> doesn't change the behaviour and potentially confuses people (as it did in my
> case)?

Yes, this would be good. I also checked the behaviour w.r.t. axioms now to make
sure that there is no funny business from that end.

Function get_adjusted_action_cost always returns 0 for axioms, independently of
the cost type, and they also don't affect the test if a given task is unit-cost.
(That is, a task with unit-cost operators plus axioms is considered unit-cost
even though parts of the planner treat axioms as cost-0 operators.) This makes
sense to me and means that the cost-type option really doesn't matter for
unit-cost tasks, even if it has axioms.

To change the LAMA config for the future, we would need to change it within
issue414 or after merging issue414. Can you get in touch with Jendrik for this
or remember to change it after issue414 is merged? Within issue414 might be a
good idea because we need to run an experiment there anyway to make sure we
didn't mess the configs up.
msg3588 (view) Author: silvan Date: 2014-09-28.18:59:04
Okay, thanks for the clarification! Point 3 and 4 helped a lot.

Reading point 5, I still wonder why the unit-cost configuration of lama uses a
specification of cost types:

        "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,"
        "                         lm_cost_type=plusone,cost_type=plusone))",

Shouldn't we remove the cost_types from this heuristic definition because it
doesn't change the behaviour and potentially confuses people (as it did in my
case)?

Anyways, let me propose the following lama2011-first-iteration configuration:

    'lama-2011-first-it': [
        "--heuristic",
        "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,"
        "                         lm_cost_type=one,cost_type=one))",
        "--search",
        "lazy_greedy([hff,hlm],preferred=[hff,hlm],cost_type=one)"
    ],

Does this look correct to you?
msg3587 (view) Author: malte Date: 2014-09-28.18:47:18
I don't fully understand the questions. :-) Let me have one more go at trying to
explain the whole picture view. If this isn't clear, let's perhaps discuss this
in person at the next opportunity since writing takes so much time.

Main points:

1. Ignoring action costs during planning (= treating all actions as if their
cost was 1) tends to find solutions faster but produce poorer solutions. Hence
we do this in the first iteration so that we can find *some* solution before the
timeout, and then switch to taking action costs into account later on.

2. Considering actions with cost 0 in the heuristic leads to poor guidance: for
example, if all actions cost 0, then h*(s) = 0 for all solvable states, so
search is essentially blind even with excellent heuristics. Therefore, for the
*heuristic computation*, in the cases where we don't ignore action costs and the
input has actions of different costs, we treat all actions with cost X as if
they had cost X+1 instead.

3. We see no compelling reason to do the same "plusone" thing for the cost_type
option of the *search*, which is used for computing g values. Here we think that
accurate g values serve us better in general. However, it *is* important that
the g and h values are roughly "on the same scale": for example, if we treated
actions as cost 1 in the heuristics but used the real costs for the g values, then in
a task where, say, all actions have costs between 1000 and 1001, the importance of
the g values in WA* would be blown up by a factor of 1000, which is almost the
same thing as ignoring the heuristic.

4. Hence, we have the following rules: if the heuristic uses unit cost, the
search should also use unit cost; if the heuristic uses plusone, the search
should use the actual costs (which is the default setting if no cost_type is
specified).

5. If the input problem only has unit-cost actions, all this doesn't matter
because all cost_type settings behave the same then. (I'm not sure if this is
also true if the task has axioms, because some parts treat axioms as actions of
cost 0, and perhaps the cost_type settings interfere with that. I'd need to have
a careful look at the code to know what exactly happens there. But I would say
this is unrelated to this issue.)
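
To make rule 4 from above concrete, here is a minimal sketch of the two allowed
pairings as downward command lines (the ff heuristic and lazy_greedy are just
illustrative choices here, not a proposal for the LAMA config):

  # heuristic uses unit cost -> search also uses unit cost
  ./downward --heuristic "hff=ff(cost_type=one)" \
             --search "lazy_greedy([hff],preferred=[hff],cost_type=one)"

  # heuristic uses plusone -> search uses the actual costs (the default, so no cost_type)
  ./downward --heuristic "hff=ff(cost_type=plusone)" \
             --search "lazy_greedy([hff],preferred=[hff])"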
msg3585 (view) Author: silvan Date: 2014-09-28.18:32:57
> Anyway, the second config doesn't look correct. It defines heuristics hlm2,hff2
> but uses hlm1,hff1. Can you quote the correct ones instead?

Sorry, here it is:

"--heuristic",
        "hlm1,hff1=lm_ff_syn(lm_rhw(reasonable_orders=true,"
        "                           lm_cost_type=one,cost_type=one))",

Anyways, those excerpts are simply copied from the lama2011 config. The
difference is that the heuristic for the unit-cost case uses plusone, whereas
(the now correctly copied) config for the non unit-cost case uses one.

I still did not understand why the search uses different cost types. This is the
second difference in the first iteration of lama, depending on the cost type of
the task. For the non unit-cost case, the search uses unit cost (this matches
the heuristic also using unit-cost for the first iteration), but isn't it enough
if the heuristic reports all operator costs as 1?
msg3584 (view) Author: malte Date: 2014-09-28.18:21:11
> Well, the lama2011 configuration (also in your branch) uses plusone for the
> unit-cost case, and apparently (according to your answer) there is no reason
> for it. On the other hand, it also doesn't hurt probably.

It's a bit subtle. I wrote that "It's odd that the *search algorithms* use
PLUSONE, though. Shouldn't this just be used in the *heuristics*?". (Emphasis
added.)

The "correct" thing for the general-cost case (after the initial unit-cost
search) is to use the actual costs for the search algorithms and PLUSONE for the
heuristics, and it looks like this is what the LAMA config you quoted does (the
default if a search algorithm doesn't specify cost_type is to use the actual
costs).

This is different from the ones in the comments earlier which also use PLUSONE
for the search algorithms (or at least some of them do).


> One question remains: I also wanted to repeat the "first iteration of lama"
> experiment, but I am not sure if we need to distinguish between the unit-cost
> and the non-unit-cost case here as well. The first iterations in both cases are
> similar, although not identical:
>
>"--heuristic",
>        "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,"
>        "                         lm_cost_type=plusone,cost_type=plusone))",
>"--search", "lazy_greedy([hff,hlm],preferred=[hff,hlm])"
>
>vs
>
>"--heuristic",
>        "hlm2,hff2=lm_ff_syn(lm_rhw(reasonable_orders=true,"
>        "                           lm_cost_type=plusone,cost_type=plusone))",
>"--search", "lazy_greedy([hff1,hlm1],preferred=[hff1,hlm1],
>                         cost_type=one,reopen_closed=false)"
>
> The difference is the cost type of the heuristic (where we could simply take
> plusone, because it does not matter for the unit-cost case), but also for the
> search, where in the non-unit-cost case, the cost type used is that for unit
> cost (?), and reopen_closed is set to false.

I assume the first config is for the unit-cost case and the second one is for
the general-cost case?
Anyway, the second config doesn't look correct. It defines heuristics hlm2,hff2
but uses hlm1,hff1. Can you quote the correct ones instead?

In any case, they should be identical, and the differences you see should be due
to options that have no influence (e.g. cost_type=XXX makes no difference on
unit-cost tasks, and reopen_closed=false is the default value for reopen_closed
in lazy_greedy).

> So, my questions are:
> - do we want to run the first iteration of lama as a standalone configuration
> besides running the full configuration, and if yes,

Yes, and it should be run in a separate experiment (or at least: separate
report) because comparing anytime to non-anytime configurations with lab leads
to awkward effects.

> - what does this configuration look like? Previously in this issue, we have
> taken the unit-cost case's configuration, but I think we could simply mimic the
> "one configuration for both cases" (with --if...) also for the first iteration
> configuration.

Let's fix and discuss the two configurations first before we answer this.
msg3583 (view) Author: silvan Date: 2014-09-28.18:10:28
Well, the lama2011 configuration (also in your branch) uses plusone for the
unit-cost case, and apparently (according to your answer) there is no reason for
it. On the other hand, it also doesn't hurt probably.

Just for the issue tracker, this is the config:

'lama-2011': [
        "--if-unit-cost",
        "--heuristic",
        "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,"
        "                         lm_cost_type=plusone,cost_type=plusone))",
        "--search", """iterated([
                         lazy_greedy([hff,hlm],preferred=[hff,hlm]),
                         lazy_wastar([hff,hlm],preferred=[hff,hlm],w=5),
                         lazy_wastar([hff,hlm],preferred=[hff,hlm],w=3),
                         lazy_wastar([hff,hlm],preferred=[hff,hlm],w=2),
                         lazy_wastar([hff,hlm],preferred=[hff,hlm],w=1)
                         ],repeat_last=true,continue_on_fail=true)""",
        "--if-non-unit-cost",
        "--heuristic",
        "hlm1,hff1=lm_ff_syn(lm_rhw(reasonable_orders=true,"
        "                           lm_cost_type=one,cost_type=one))",
        "--heuristic",
        "hlm2,hff2=lm_ff_syn(lm_rhw(reasonable_orders=true,"
        "                           lm_cost_type=plusone,cost_type=plusone))",
        "--search", """iterated([
                         lazy_greedy([hff1,hlm1],preferred=[hff1,hlm1],
                                     cost_type=one,reopen_closed=false),
                         lazy_greedy([hff2,hlm2],preferred=[hff2,hlm2],
                                     reopen_closed=false),
                         lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=5),
                         lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=3),
                         lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=2),
                         lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=1)
                         ],repeat_last=true,continue_on_fail=true)""",
    ],

One question remains: I also wanted to repeat the "first iteration of lama"
experiment, but I am not sure if we need to distinguish between the unit-cost
and the non-unit-cost case here as well. The first iterations in both cases are
similar, although not identical:

"--heuristic",
        "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,"
        "                         lm_cost_type=plusone,cost_type=plusone))",
"--search", "lazy_greedy([hff,hlm],preferred=[hff,hlm])"

vs

"--heuristic",
        "hlm2,hff2=lm_ff_syn(lm_rhw(reasonable_orders=true,"
        "                           lm_cost_type=plusone,cost_type=plusone))",
"--search", "lazy_greedy([hff1,hlm1],preferred=[hff1,hlm1],
                         cost_type=one,reopen_closed=false)"

The difference is the cost type of the heuristic (where we could simply take
plusone, because it does not matter for the unit-cost case), but also for the
search, where in the non-unit-cost case, the cost type used is that for unit
cost (?), and reopen_closed is set to false. So, my questions are:
- do we want to run the first iteration of lama as a standalone configuration
besides running the full configuration, and if yes,
- what does this configuration look like? Previously in this issue, we have
taken the unit-cost case's configuration, but I think we could simply mimic the
"one configuration for both cases" (with --if...) also for the first iteration
configuration.

Regarding the separated version, I'll simply split up the configuration we
settle on for the lama/synergy configuration.

(Sorry for the long text with configurations again, I just want to run the
correct configuration from the start :-) )
msg3582 (view) Author: malte Date: 2014-09-28.17:22:05
> Malte, I think I don't have access rights to that repository of yours.
> At your bitbucket profile, I only see ipc2008 and pyperplan.

Sorry, I didn't realize it was private. I've given access to the new Basel group
account now. Erez, let me know if you'd also like access to that repository.
(There isn't much exciting in it at the moment; I use it for issues I'm working on.)

> So, for the unit-cost case, we don't need to specify any cost types? And yes,
> I don't really understand why searches and heuristics may use cost types...

For unit-cost inputs all cost types behave the same. (I'm not sure if that
answers the question. If not, please rephrase. :-))
msg3578 (view) Author: silvan Date: 2014-09-28.16:19:18
Malte, I think I don't have access rights to that repository of yours. At your
bitbucket profile, I only see ipc2008 and pyperplan.

So, for the unit-cost case, we don't need to specify any cost types? And yes, I
don't really understand why searches and heuristics may use cost types...
msg3553 (view) Author: malte Date: 2014-09-26.14:21:49
For LAMA, there is now a single variant that covers both cases in the issue414
branch -- check file src/driver/aliases.py in the malte/downward bitbucket
repository in that branch. (You can simply use this option string with the
current codebase, it doesn't rely on issue414 features.)

I don't think anyone has sanity-checked it yet, though, so it would be good if
you could check that it makes sense.

Other comments:

- Yes, numerical values for the enums should be avoided. It's probably not worth
bothering changing this in the default branch if we'll hopefully merge issue414
soon anyway.

- In the unit-cost cases, all three cost types (should) behave the same, because
"PLUSONE" doesn't trigger for unit-cost problems. (Or at least it shouldn't...)
It's odd that the search algorithms use PLUSONE, though. Shouldn't this just be
used in the heuristics?
msg3549 (view) Author: silvan Date: 2014-09-26.00:33:27
In the FD master branch, in the downward script, lama-2011 is defined as follows
(we should probably adapt 1 and 2 to ONE and PLUSONE as in the issue123 branch):

    'synergy-unit': [
        "--heuristic",
        "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,lm_cost_type=2,cost_type=2))",
        "--search",
        "iterated([lazy_greedy([hff,hlm],preferred=[hff,hlm]),"
            "lazy_wastar([hff,hlm],preferred=[hff,hlm],w=5),"
            "lazy_wastar([hff,hlm],preferred=[hff,hlm],w=3),"
            "lazy_wastar([hff,hlm],preferred=[hff,hlm],w=2),"
            "lazy_wastar([hff,hlm],preferred=[hff,hlm],w=1)],"
            "repeat_last=true,continue_on_fail=true)"
    ],

    'synergy-non-unit': [
        "--heuristic",
        "hlm1,hff1=lm_ff_syn(lm_rhw(reasonable_orders=true,lm_cost_type=1,cost_type=1))",
        "--heuristic",
        "hlm2,hff2=lm_ff_syn(lm_rhw(reasonable_orders=true,lm_cost_type=2,cost_type=2))",
        "--search",
        "iterated([lazy_greedy([hff1,hlm1],preferred=[hff1,hlm1],cost_type=1,reopen_closed=false),"
            "lazy_greedy([hff2,hlm2],preferred=[hff2,hlm2],reopen_closed=false),"
            "lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=5),"
            "lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=3),"
            "lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=2),"
            "lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=1)],"
            "repeat_last=true,continue_on_fail=true)"
    ],

While it seems strange to me to use cost_type=PLUSONE in the unit case, we
changed the lama-2011 config 14 months ago in the issue123 branch to the
following "separated" configs:

    'separated-unit': [
        "--heuristic",
        "hlm=lmcount(lm_rhw(reasonable_orders=true),pref=true)",
        "--heuristic",
        "hff=ff()",
        "--search",
        "iterated([lazy_greedy([hff,hlm],preferred=[hff,hlm]),"
            "lazy_wastar([hff,hlm],preferred=[hff,hlm],w=5),"
            "lazy_wastar([hff,hlm],preferred=[hff,hlm],w=3),"
            "lazy_wastar([hff,hlm],preferred=[hff,hlm],w=2),"
            "lazy_wastar([hff,hlm],preferred=[hff,hlm],w=1)],"
            "repeat_last=true,continue_on_fail=true)"
    ],

    'separated-non-unit': [
        "--heuristic",
        "hlm1=lmcount(lm_rhw(reasonable_orders=true,lm_cost_type=ONE,cost_type=ONE),pref=true,cost_type=ONE)",
        "--heuristic",
        "hff1=ff(cost_type=ONE)",
        "--heuristic",
        "hlm2=lmcount(lm_rhw(reasonable_orders=true,lm_cost_type=PLUSONE,cost_type=PLUSONE),pref=true,cost_type=PLUSONE)",
        "--heuristic",
        "hff2=ff(cost_type=PLUSONE)",
        "--search",
        "iterated([lazy_greedy([hff1,hlm1],preferred=[hff1,hlm1],cost_type=ONE,reopen_closed=false),"
            "lazy_greedy([hff2,hlm2],preferred=[hff2,hlm2],cost_type=PLUSONE,reopen_closed=false),"
            "lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=5,cost_type=PLUSONE),"
            "lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=3,cost_type=PLUSONE),"
            "lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=2,cost_type=PLUSONE),"
            "lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=1,cost_type=PLUSONE)],"
            "repeat_last=true,continue_on_fail=true)"
    ],

This does not mention cost types at all for the unit case.

I can't really make sense of those configs; what's correct and what isn't?
msg3548 (view) Author: silvan Date: 2014-09-26.00:09:05
There are two lama-2011 variants: one for unit cost, one for non-unit cost.
Which should I take? Or both?

I think the configs I listed stem from msg1614.
msg3547 (view) Author: malte Date: 2014-09-25.21:12:23
The "synergy" and "separated" configs look a bit odd in more ways than one. We
don't usually simply iterate lazy WA* with a weight of 1. Perhaps we should
replace them by the lama-2011 config and a separated variant?
msg3546 (view) Author: silvan Date: 2014-09-25.20:49:46
Reviving this one, I am not 100% sure if the configs to be tested are correct:

synergy:

'synergy': [
    '--heuristic',
    'hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true))',
    '--search', 'iterated(lazy_wastar([hlm,hff],w=1,
        preferred=[hlm,hff]),repeat_last=true)'],
'synergy-first-it': [
    "--heuristic",
    "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,
        lm_cost_type=ONE,cost_type=ONE))",
    "--search",
    "lazy_greedy([hff,hlm],preferred=[hff,hlm],cost_type=ONE)"],

separated:

'separated': [
    '--heuristic',
    'hlm=lmcount(lm_rhw(reasonable_orders=true),pref=true)',
    '--heuristic',
    'hff=ff()',
    '--search',
    'iterated(lazy_wastar([hlm,hff],w=1,preferred=[hlm,hff]),
        repeat_last=true)'],
'separated-first-it': [
    "--heuristic",
    "hlm=lmcount(lm_rhw(reasonable_orders=true,lm_cost_type=ONE,
        cost_type=ONE),pref=true,cost_type=ONE)",
    "--heuristic",
    "hff=ff(cost_type=ONE)",
    "--search",
    "lazy_greedy([hff,hlm],preferred=[hff,hlm],cost_type=ONE)"],

Am I right assuming that the "synergy" and "separated" configs also need the
cost_types included? In the last iteration of this issue (14 months ago), we
ended up testing the first iteration of the searches, and then found that we
must use those cost types...
msg2600 (view) Author: malte Date: 2013-08-01.10:56:54
Thanks, this does clear up a few questions! Based on this data, I think we can
remove fd14176e4003-synergy from future experiments in this issue and focus on
the implementation differences between the two FF heuristics and/or the way that
the relaxed explorations for FF and landmarks in exploration.cc interact. In
other words, I guess it's time to read the code.
msg2599 (view) Author: silvan Date: 2013-08-01.10:51:57
Here they come:
http://ai.cs.unibas.ch/_tmp_files/sieverss/ssearch-compare-full-abs-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/ssearch-compare-full-abs-p.html

The synergy configuration on the newer revision (73e...) is indeed a bit
"better", but still similar to the separated version.
msg2584 (view) Author: malte Date: 2013-07-31.21:09:42
> For 1.) the changes are very small and really only refrain from using the
> removed synergy, so I am not sure what to do in that case.

As discussed offline, one potential pitfall is that we're comparing to a
slightly older version of the default branch. I had a look at the revision
history, and there were a few (small, but still) changes to the landmark code
that might affect the results.

Can you add a third configuration to the experiment? (I guess this means we
can't use the special comparison report any more, but since it's only a few
configs, I think it's still easy to interpret.)

What I'd like to see is something like the report in msg2575, but with a third
configuration added:

- same parameters as in fd14176e4003-synergy
- but with a different planner revision: 73e1477edf7b

If that doesn't show us anything, I think we need to look at the FF heuristic
implementations and more generally the way the relaxed explorations are set up
and used by the two planner versions.
msg2575 (view) Author: silvan Date: 2013-07-29.10:15:36
There it is (these experiments run through very quickly, so I produce tons of
new data as long as desired ;-)):
http://ai.cs.unibas.ch/_tmp_files/sieverss/ssearch-compare-full-configs-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/ssearch-compare-full-configs-p.html

How should we proceed in having "another close look" at the code? For 2.) it may
be enough to actually look at the code again, go through it and try to figure
out if and where there are some differences, but for 1.) the changes are very
small and really only refrain from using the removed synergy, so I am not sure
what to do in that case.
msg2574 (view) Author: malte Date: 2013-07-29.00:05:22
Thanks! Looking at the data, I find it hard to understand why we get a massive
slowdown in some domains, but not in others, even when the evaluations look
comparable in the two cases. The more I look at the results, the stranger they are.

It's not that the results are terrible (though it would be nice if they were a
bit better). Rather, what concerns me is that it's hard to find a plausible
explanation for the results.

Also, looking at the previous experiments on all domains, it seems that some
psr-middle/psr-large instances cause problems for the old codebase with the
synergy, but not for the new codebase without. (See the unexplained_errors
there, which only concern one of the two codebases, and which were
landmark-related if I recall correctly.)

All this makes me wonder if there are some behaviour differences we're missing at
the moment; either bugs that are present in only one of the two cases, or
behaviour differences in FF tie-breaking, preferred operator generation,
dead-end handling or whatever that mask certain bugs in one case, but not in
another.

Therefore, I'd like to:

1. have another close look at the differences between the code revisions we're
comparing

2. have another close look at the differences between the two FF heuristic
implementations (regular vs. synergy)

before we call this one done.


To get a better clue of what is going on, it would be nice to have exactly the
same data as for the previous experiment (msg2573) also for the other domains
(i.e., same configurations, only this time again with the complete suite), given
that the previous experiments we have on the complete suite cannot really be
used to compare expansions or runtime, only coverage, due to the way that the
anytime results are handled by lab. (But if you're tired of running experiments
for this issue, I can fully understand, and I think we can look into this
further with the data we have already. :-))
msg2573 (view) Author: silvan Date: 2013-07-28.23:12:03
I understood that but I thought you wanted to keep it like this for the very
same reason (although I have to admit I couldn't think of any reason not to
change it) :)

There are new results:
http://ai.cs.unibas.ch/_tmp_files/sieverss/ssearch-compare-configs-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/ssearch-compare-configs-p.html

You find the logs in my home directory:
experiments/executed/issue123/ssearch-synergy
experiments/executed/issue123/ssearch-separated
msg2572 (view) Author: malte Date: 2013-07-28.22:18:38
Configurations for the experiment look good -- hope I didn't miss anything.

Regarding the downward script, I simplified the unit-cost case as discussed over
email. (Pushed.) Due to the special-case behaviour of PLUSONE on unit-cost
tasks, the configurations before and after this change are equivalent, but using
PLUSONE in this case (where it has no effect and no motivation) is a bit
confusing/misleading to the reader and was only in the downward script for
historical copy-paste reasons.
msg2571 (view) Author: silvan Date: 2013-07-28.22:03:51
I will soon start two experiments, one for the old revision with the following
config:
CONFIGS = {
    'synergy': ["--heuristic", "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true,"
                "lm_cost_type=ONE,cost_type=ONE))", "--search",
                "lazy_greedy([hff,hlm],preferred=[hff,hlm],cost_type=ONE)"]
}
and one for the new revision with the following config:
CONFIGS = {
    'separated': ["--heuristic", "hlm=lmcount(lm_rhw(reasonable_orders=true,"
                  "lm_cost_type=ONE,cost_type=ONE),pref=true,cost_type=ONE)",
                  "--heuristic", "hff=ff(cost_type=ONE)", "--search",
                  "lazy_greedy([hff,hlm],preferred=[hff,hlm],cost_type=ONE)"]
}

If you spot an error, please let me know :)
(You were right about the last experiment I posted)

Yes, I remember having read an issue where Gabi and Salomé made the same
observation about Sokoban instances. But I also seem to remember that they didn't
find a solution (yet) and backed out the patch for the issue because the
problem only occurred after merging the branch (?).

I pushed another update to the downward script, could you check it please? (It
is not being used in the next experiment, so there is no hurry.)
msg2570 (view) Author: malte Date: 2013-07-28.20:21:42
Thanks! I probably still wasn't clear enough, though. The purpose of the
experiment is to see what the negative impact of merging this issue is in terms
of coverage, especially on the problems involving action costs (IPC 2008 and IPC
2011).

So we should compare the old way of doing things (old code using the synergy) to
the new way of doing things (new code not using the synergy). From a look at the
properties file, it looks like the old code being run in the experiment is not
using the synergy. Right?

Other notes:

1. There seems to be a bug somewhere (in the landmark heuristic?), as some
Sokoban instances are reported unsolvable, which is wrong. We should look into
this. It looks a bit familiar; I think Gabi and Salomé noticed this recently, too.

2. The configuration for the new code is still not quite correct. The cost_type
option should also be set for the search algorithm, and to be future-proof, it
would also be good to set it for the lmcount heuristic, since this is what will
matter once we've cleaned up the mess described in
http://www.fast-downward.org/OptionCaveats. So this:

    "commandline_config": [
      "--heuristic",
"hlm=lmcount(lm_rhw(reasonable_orders=true,lm_cost_type=ONE,cost_type=ONE),pref=true)",

      "--heuristic", 
      "hff=ff(cost_type=ONE)", 
      "--search", 
      "lazy_greedy([hff,hlm],preferred=[hff,hlm])"
    ] 

should be:

    "commandline_config": [
      "--heuristic",
"hlm=lmcount(lm_rhw(reasonable_orders=true,lm_cost_type=ONE,cost_type=ONE),pref=true,cost_type=ONE)",

      "--heuristic", 
      "hff=ff(cost_type=ONE)", 
      "--search", 
      "lazy_greedy([hff,hlm],preferred=[hff,hlm],cost_type=ONE)"
    ]

(so two additional cost_type flags).

I don't think it will make a big difference, though, since the cost_type of the
lmcount heuristic is currently only used in the admissible heuristic case, I
think, and greedy search shouldn't care much about the cost_type. (We might use
g values for tie-breaking, though, in which case the cost_type matters even for
greedy search -- I don't remember.)
msg2569 (view) Author: silvan Date: 2013-07-28.18:57:54
http://ai.cs.unibas.ch/_tmp_files/sieverss/ipc-compare-single-search-revisions-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/ipc-compare-single-search-revisions-p.html

There are now values for total time in the transport-sat11 domain.
msg2567 (view) Author: malte Date: 2013-07-28.15:05:00
> I am not sure what you mean with "experiment using only the first search
> algorithms of the two planners (the one that ignores costs)": you want to take
> the search from the first iteration and make it the single search?

Yes.

> By "the two planners" you mean the unit-cost and the non-unit-cost case, 
> right?

No, I meant before/after the changes in this issue, i.e. the old version of the
code with the synergy vs. the new one without.

The first search in LAMA-2011 always ignores cost, so we don't need to
distinguish unit/nonunit: the correct configuration is the one with all
cost_type settings set to ONE.
msg2566 (view) Author: silvan Date: 2013-07-28.14:44:30
Okay, I thought there is no need to keep obviously wrong results. I overwrote
the experiment data but I still have a copy of the old results, you can find
them here:
http://ai.cs.unibas.ch/_tmp_files/sieverss/old-ipc-compare-revisions-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/old-ipc-compare-revisions-p.html

I hope you have now access to my home directory. You find the results from the
last experiment at /inafi/sieverss/experiments/executed/issue123/ipc-compare

I am not sure what you mean with "experiment using only the first search
algorithms of the two planners (the one that ignores costs)": you want to take
the search from the first iteration and make it the single search? By "the two
planners" you mean the unit-cost and the non-unit-cost case, right? But then I
do not understand how we can take both configs but only "the one that ignores
costs".
msg2565 (view) Author: malte Date: 2013-07-28.12:47:10
Hmm, it's hard to evaluate this if the old results are no longer there. :-( 

Generally, please don't overwrite results that are linked from the issues with
new ones -- otherwise the issue tracker loses its purpose of showing what the
effect of the changes is.

It seems that the results are a bit better now. Where before we had this:

> All in all, I'm a little bit concerned by the behaviour on the IPC 11
> (satisficing) domains. Apart from the floortiles and nomystery domains, the
> old version has 5 unsolved instances among 240 tasks. The new one has 14.

We now have numbers of 6 vs. 12, which is not great, but a bit better. But maybe
this is also random noise.

Regarding access to the logs, what I did is run "chmod g+rx ~" on maia to give
the group read access to my home directory there.

Regarding the missing total_time etc., we probably need to run an non-anytime
config to see more details.

Can we do an additional experiment using only the first search algorithms of the
two planners (the one that ignores costs) on only the IPC-2008 and IPC-2011
satisficing suite ("IPC08_SAT" + "IPC11_SAT")?

(Sorry that it is taking so long to get this landed, but LAMA-2011 performance
is one of the things we should take care to protect, and domains with general
action costs are quite important...)
msg2564 (view) Author: silvan Date: 2013-07-28.11:53:20
The results are now updated.
I am afraid there aren't any changes as far as transport-sat11 is concerned.
I had a look at some log-files, and at least for the first two instances, both
configurations find at least one plan. In the error-log, it says
line 196: 79053 CPU time limit exceeded "$PLANNER" --heuristic
"hlm1,hff1=lm_ff_syn(lm_rhw(
                    reasonable_orders=true,lm_cost_type=1,cost_type=1))"
--heuristic "hlm2,hff2=lm_ff_syn(lm_rhw(
                    reasonable_orders=true,lm_cost_type=2,cost_type=2))"
--search "iterated([
                    lazy_greedy([hff1,hlm1],preferred=[hff1,hlm1],
                                cost_type=1,reopen_closed=false),
                    lazy_greedy([hff2,hlm2],preferred=[hff2,hlm2],
                                reopen_closed=false),
                    lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=5),
                    lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=3),
                    lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=2),
                    lazy_wastar([hff2,hlm2],preferred=[hff2,hlm2],w=1)],
                    repeat_last=true,continue_on_fail=true)" "$@" < $TEMPFILE
So maybe no time is reported because the planner gets killed by lab?

I don't know how I can make the logs available in case you want to have a look;
any ideas?
msg2561 (view) Author: silvan Date: 2013-07-27.12:54:21
Sorry about that! :(

Experiments are running and should be finished in a few hours; I'll post the
results in the early evening.
msg2560 (view) Author: malte Date: 2013-07-27.10:37:15
Ah, I found a small bug. Fixed and pushed. Can you rerun the experiment on the
new revision? It should hopefully produce better results on non-unit-cost tasks now.
msg2559 (view) Author: malte Date: 2013-07-27.10:31:35
Thanks! There seems to be something weird with the results -- for example, in
transport-sat11, we don't have any entries for total_time, even though there
should be many commonly solved tasks.

All in all, I'm a little bit concerned by the behaviour on the IPC 11
(satisficing) domains. Apart from the floortiles and nomystery domains, the old
version has 5 unsolved instances among 240 tasks. The new one has 14. Is there a
way to look at the logs?

Also, can we find out what is going on with "total time" (and maybe other
measurements)? (Maybe ask Jendrik?)
msg2558 (view) Author: silvan Date: 2013-07-27.10:21:11
fd14176e4003 is the old one (same basis revision as in the previous
experiments), with the synergy.
msg2557 (view) Author: malte Date: 2013-07-27.09:48:46
Which version is which?
msg2556 (view) Author: silvan Date: 2013-07-27.01:37:48
I am still not sure if the experiments ran through that quickly, but maybe our
grid is indeed that fast... ;)

Here they are:
http://ai.cs.unibas.ch/_tmp_files/sieverss/ipc-compare-revisions-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/ipc-compare-revisions-p.html
msg2555 (view) Author: malte Date: 2013-07-26.18:30:14
After a bit of digging, I found out why the cost_type of the landmark factories
cannot be removed: the options of the factory are passed on to the "Exploration"
instance that is generated for certain kinds of FF-style relaxed explorations,
e.g. to compute preferred operators of the LM heuristic. Exploration subclasses
Heuristic and therefore needs a cost_type option. Also, we indeed need it in
practice, too, to compute the preferred operators correctly.

So it's OK that cost_type needs to remain for the time being, until we have more
properly decoupled the landmark factories, landmark graphs and exploration
stuff. We should probably update the documentation to make this a bit clearer,
though.
msg2553 (view) Author: silvan Date: 2013-07-26.12:24:36
Oh and I forgot about the OptionCaveats:
http://www.fast-downward.org/OptionCaveats

Also according to this information, cost_type should be removed from the
landmark factories, as we have lm_cost_type and cost_type (for the heuristic). I
left this there until it is clear how to proceed with the contained information.
msg2552 (view) Author: silvan Date: 2013-07-26.12:18:28
2. I removed all appearances of the synergy from the documentation, but there
are two places where they (so far) remained:
- first, the automatically generated documentation (I thought I'd better not
touch it)
www.fast-downward.org/AutomaticDocGenTestPage?highlight=(syn)

- second, in the LandmarksDefinition documentation
http://www.fast-downward.org/LandmarksDefinition?highlight=%28syn%29
I was surprised to read "cost_type (enum): action cost adjustment for FF
heuristic (see HeuristicSpecification#common) - only applies when used with
LAMA/FF synergy" and wondered if I couldn't remove it from the code as well, but
I failed to do so, because when removing the source of the option cost_type for
the factories, namely the line "Heuristic::add_options_to_parser(parser);" in
landmark_graph.cc (which adds the option cost_type, among others), the search
engine could not be parsed anymore, because apparently the parser tried to
retrieve cost_type but it wasn't set. I looked around a bit but I can't figure
out why the search engine should rely on a call to
Heuristic::add_options_to_parser(parser); in a factory class, as (of course)
also the lmcount heuristic includes this call. Any thoughts on this?
msg2551 (view) Author: silvan Date: 2013-07-26.10:26:09
1. I pushed the changes. I replaced the calls to the synergy with separate calls
to both heuristics. In the case where two synergies with different cost types
have been used (and thus two instances of the ff-heuristic), I was not
completely sure if using one ff-heuristic instance is enough after defining two
lm-count-heuristics separately, but I couldn't think of any reason to declare
two different ff-heuristic instances, as they do not get passed the cost type
anyway.

3. You already took care of this in the maillist, thanks.

2. I will have a look at this now.
msg2549 (view) Author: malte Date: 2013-07-25.16:20:19
Looks good! I removed some now unused code (pushed to your repository) and
merged with the latest changes to default. It's not merged back yet because
there are a few more action items besides the C++ code. Silvan, can you take
care of these, too?

1. Change uses of lm_ff_syn in src/search/downward so that seq-sat-lama-2011
works again.

2. Update documentation on www.fast-downward.org so that it no longer mentions
this feature (best to search for lm_ff_syn -- I think it's used in some
examples, too).

3. Let Jendrik and Florian know about the change so that Jendrik can make the
update in lab (which uses lm_ff_syn in "showcase_examples") and Florian can make
the update in his code that conducts general before/after experiments on lots of
planner configurations.

I don't think we need to change the broken code in new-scripts, since those are
deprecated anyway.
msg2548 (view) Author: silvan Date: 2013-07-24.16:25:49
You should have been sent an email, granting you access to the repository.
You can find it here:
https://bitbucket.org/SilvanS/fd-issue123

There are some rather "unnecessary" commits included like adding an experiment
script and in the end, removing it again, but I think overall, there are only a
few code changes compared to the default branch.
msg2547 (view) Author: silvan Date: 2013-07-23.15:12:00
I just needed to include the attributes in the report, so if you click the links
below again you will find the updated reports.
msg2546 (view) Author: malte Date: 2013-07-23.14:15:01
Looks good to me. Where can I find the code to merge it?

(BTW, for future experiments, it would be useful to include the score_...
fields. There is no need to generate new data for this one, though.)
msg2542 (view) Author: silvan Date: 2013-07-09.12:08:07
New experiments results are available:

Comparison between the separated and the synergy configurations:
http://ai.cs.unibas.ch/_tmp_files/sieverss/compare-sep-syn-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/compare-sep-syn-p.html
The synergy should definitely be removed.

Comparison for the separated configuration before and after the removal of the
synergy implementation from the code:
http://ai.cs.unibas.ch/_tmp_files/sieverss/separated-revisions-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/separated-revisions-p.html
The results are less clear in this case, but I still think the overall
performance remains similar.

Comparisons for all three configuration setups:
http://ai.cs.unibas.ch/_tmp_files/sieverss/compare-all-d.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/compare-all-p.html
msg2540 (view) Author: silvan Date: 2013-07-02.17:12:22
I want to re-do these experiments and have better evaluations, but currently the
grid is not working.

I'll post updates asap.
msg2521 (view) Author: silvan Date: 2013-06-25.17:55:15
Ok, I ran one experiment for both configurations before removing the synergy
and one for the separated configuration after removing the synergy. They can be
found here:
http://ai.cs.unibas.ch/_tmp_files/sieverss/issue123-abs-before.html (40MB!)
http://ai.cs.unibas.ch/_tmp_files/sieverss/issue123-abs-after.html

The reports are just the standard reports, as I was not sure what exactly to
include or not (and I am not really familiar with lab yet). If you would like to
have more elaborate reports, let me know.

Observations (before experiment): there are more domains in which the synergy
solves 1 instance more than the separated version than the other way round, but
a few domains in which the separated one solves several instances more than the
synergy (especially miconic-fulladl 93 vs 85).

Comparing the after-experiment: the separated configuration loses one coverage
in total and unfortunately performs slightly worse concerning most of the other
important measures such as expansions, generated nodes and search time.
msg2485 (view) Author: silvan Date: 2013-06-02.19:05:30
Okay, we can probably do this. I'll have a look at this issue after the ICAPS.
msg2483 (view) Author: malte Date: 2013-06-02.15:43:05
Now that we have better ways of running such experiments, could we do another
before/after comparison and then resolve this one?
msg2230 (view) Author: silvan Date: 2012-05-22.14:58:19
Sure, on
/home/downward/silvan/hiwi/downward/new-scripts/exp-ss-issue123-{separated,synergy}.
msg2229 (view) Author: malte Date: 2012-05-22.14:44:02
The large coverage differences e.g. in elevators-sat11-strips look suspicious.
Can I find the experiment logs somewhere?
msg2223 (view) Author: silvan Date: 2012-05-21.14:51:53
There is some probably not-so-good news: on the most recent IPC 11 domains, the
separated version performs notably worse than the lama synergy, in contrast to the
previous results (see message from December 13, 2011).

New results can be found here:
http://www.informatik.uni-freiburg.de/~downward/exp-ss-issue123-synergy-eval-abs-d.html
http://www.informatik.uni-freiburg.de/~downward/exp-ss-issue123-separated-eval-abs-d.html

Also on some of the "older" domains, the separated versions solve slightly fewer
problems.
As a comparison, remember the previous results:
http://www.informatik.uni-freiburg.de/~downward/exp-ss-lama-synsepcombined-abs-d.html
(first column: separated version with a wrong configuration, not using preferred
operators, second column: separated version with the correct configuration and
third column: the synergy)

I guess I'll need to discuss this topic with Malte in the next days.
msg2184 (view) Author: malte Date: 2012-05-09.13:34:36
Update: we're running a new set of experiments on this right now. If the results
are satisfactory, we will merge afterwards.
msg1993 (view) Author: malte Date: 2011-12-13.15:55:37
Interesting, thanks! We should discuss this with the whole dev group. I've sent
an email to the internal mailing list.
msg1992 (view) Author: silvan Date: 2011-12-13.15:27:37
The experiment on the grid is done and can be found here (I put both the old and
the new version for the separate test into the same file):
http://www.informatik.uni-freiburg.de/~downward/exp-ss-lama-synsepcombined-abs-d.html
http://www.informatik.uni-freiburg.de/~downward/exp-ss-lama-synsepcombined-abs-p.html

The "problem" seems to be fixed with the correct call for the separated
heuristics case (i.e. with pref=true for the lm-count heuristic). All important
statistics as evaluations/expansions and search time/total time are now more or
less even for both cases (for the exception of domain depot, where the separated
version only needs half of evaluations/expansions than the synergy or the "old"
separated version does, which seems a bit strange to me).

The question therefore is whether the synergy comes with any advantages over
using two separate heuristics. In total, the separated version even solves a few
more instances than the synergy version (1098 vs 1093) and it needs less time in
total (geometric mean: 3.3497 vs 3.3556). Although these are really small
differences, it may not be worth supporting the synergy version (according to
Malte, removing it would also allow for some simplifications in other parts of
the code).
msg1983 (view) Author: malte Date: 2011-12-06.12:45:44
Thanks, Silvan! The grid will very likely be very busy until (close to) the
ICAPS deadline on Dec 16.
msg1981 (view) Author: silvan Date: 2011-12-06.11:36:18
I am still waiting for the experiment on the grid to run through (the grid seems
to be quite busy right now), but a local test for blocks-8-1 yields the
following results:

- For the synergy:
Expanded 643835 state(s).
Evaluated 643836 state(s).
Evaluations: 1287672
Generated 2030115 state(s).
Total time: 64.6s

- Separated (with pref=true):
Expanded 643835 state(s).
Evaluated 643836 state(s).
Evaluations: 1287672
Generated 2030115 state(s).
Total time: 69.96s

So, first the time difference has become smaller and is now (slightly) favoring
the synergy approach, second the search statistics for expanded, generated etc.
are now exactly equal (which was not the case before), and I also expect no
more differences concerning the number of solved instances in the grid experiment.
msg1979 (view) Author: malte Date: 2011-11-30.12:22:26
The "pref=true" setting of the lmcount heuristic is quite a stumbling block (as
we just saw), so I added it to http://www.fast-downward.org/OptionCaveats and
opened the new issue306 to remind us that this should be improved at some point.
msg1977 (view) Author: silvan Date: 2011-11-30.10:32:32
Malte and I had another look at this problem today and (hopefully) found the
reason for it: the LM synergy is using preferred operators for both the lm
heuristic and the ff heuristic because the code always sets the option to true,
whereas in the separated case, the command line posted by Erez does not set the
preferred operators option to true and thus the lm count heuristic does *not*
compute them. This should explain the time "advantage" over the synergy.

I am going to run a few local tests with the correct option and then also
another experiment on all domains.
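
For comparison, a sketch of the separated command line from msg1614 with the
missing option added (only pref=true for the lmcount heuristic is new; the rest
is unchanged):

  ./downward --heuristic "hlm=lmcount(lm_rhw(reasonable_orders=true),pref=true)" \
             --heuristic "hff=ff()" \
             --search "iterated(lazy_wastar([hlm,hff],w=1,preferred=[hlm,hff]),repeat_last=true)"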
msg1894 (view) Author: malte Date: 2011-11-08.19:36:28
Assigning this to Silvan for now. (Feel free to assign to me.)

Update: Silvan and I discussed this a bit today and had a preliminary look at
the code. The "what happens" seems to be pretty clear here; the "why does it
happen" is not so clear yet. A first inspection appears to show that the synergy
code performs relaxed exploration a reasonable number of times and with a
reasonable number of goals, i.e., the functionality seems to be implemented as
it is supposed to be. Possible explanations (maybe not fully exhaustive, but
somewhat exhaustive):

(1) The relaxed exploration in exploration.cc is much more expensive per logical
step (i.e. per iteration of the outer loop) than the one used by the regular FF
heuristic.

(2) The relaxed exploration in exploration.cc needs to perform significantly
more steps than with the regular FF heuristic.

(3) It is not the relaxed exploration that is at fault.

Current plan:
- Look at profiler output to see what it says w.r.t (3).
- Report the number of computation steps in the various relaxed explorations to
say something meaningful about (1) vs. (2).

Regarding (1), it appears fairly clear that the relaxed exploration in
exploration.cc is at least *somewhat* more expensive than the one performed by
the FF heuristic, since there is some overhead related to h^max and depth
computations that are not really needed. If it turns out that this code is the
culprit, we might consider adding an alternative streamlined relaxed exploration
implementation for testing purposes.
msg1892 (view) Author: silvan Date: 2011-11-08.10:37:35
I ran an experiment on all domains; the results can be viewed here (be careful
though, the detailed file (abs-p) is about 21 MB, which in my case requires at
least 500 MB of memory in Firefox):
http://www.informatik.uni-freiburg.de/~downward/exp-ss-lama-synergy-eval-abs-d.html
http://www.informatik.uni-freiburg.de/~downward/exp-ss-lama-synergy-eval-abs-p.html

Some key observations:
- The total time is still an issue; it can be confirmed for all domains.

- There are three domains in which even the coverage differs (WORK-lm_ff_rhw
solves more instances than WORK-lm_synergy_rhw), namely miconic-fulladl (92 vs
85), pipesworld-tankage (27 vs 20) and woodworking-opt08-strips (22 vs 15).
Having a look at woodworking, the separate heuristics need a lot less
evaluations/expansions than the synergy version, probably explaining the higher
number of solved instances. I don't know the reason for the differences, though.

- Comparing evaluations/expansions generally, there are quite some differences
(in both ways, i.e. in half of the domains, WORK-lm_ff_rhw evaluates/expands
less than WORK-lm_synergy_rhw and in the other half, it is the other way round),
but in most of the domains, the differences are not significant. Anyway, they
cannot (solely) explain the total time differences, as WORK-lm_ff_rhw
outperforms WORK-lm_synergy_rhw in all domains.
msg1839 (view) Author: erez Date: 2011-10-25.16:45:34
I don't know what's causing this, I only know that it's still there.
I would be happy if Silvan takes a look.
msg1838 (view) Author: malte Date: 2011-10-25.16:44:32
Hi Erez, any news on this? Do we know what is causing this? Otherwise maybe
Silvan could look into this -- he's working on the landmark code again.
msg1647 (view) Author: erez Date: 2011-08-15.16:45:49
It seems like this is pretty consistent:

+ total_time +
|| total_time                    | ""WORK-lama_sep"" | ""WORK-lama_syn"" |
| **blocks (18)**                | **1.68**         | 2.42             |
| **depot (4)**                  | **14.85**        | 19.41            |
| **logistics00 (10)**           | **1.62**         | 2.1              |
| **satellite (4)**              | **0.7**          | 0.75             |
| **SUM**                        | **18.85**        | 24.68            |
msg1618 (view) Author: erez Date: 2011-08-14.14:14:35
I'll run something on the grid
msg1616 (view) Author: malte Date: 2011-08-14.12:34:08
Confirmed -- I get 40s vs. 23s on that task on my notebook. Should we run a
larger experiment on this then, to see if this happens across the board?
msg1614 (view) Author: erez Date: 2011-08-14.10:03:01
I'm afraid this is still an issue. Here are the times I get now (to exhaust the 
search space).
Using synergy: 26.99 sec
Separately: 16.79 sec

command lines:
synergy: 
./downward --heuristic "hlm,hff=lm_ff_syn(lm_rhw(reasonable_orders=true))" --
search 
"iterated(lazy_wastar([hlm,hff],w=1,preferred=[hlm,hff]),repeat_last=true)" 


separate:
./downward --heuristic "hlm=lmcount(lm_rhw(reasonable_orders=true))" --heuristic 
"hff=ff()" --search 
"iterated(lazy_wastar([hlm,hff],w=1,preferred=[hlm,hff]),repeat_last=true)"
msg1612 (view) Author: malte Date: 2011-08-13.22:20:51
Hi Erez, can you take over this? I'd suggest testing this again with the current
code (where the underlying implementations have changed) on the problems you
mention and report the respective times.

If they are better now, I suggest marking this as resolved without further
action (i.e. no need for experiments on a large suite, I think -- but if you
prefer to do them, feel free).
msg1028 (view) Author: malte Date: 2011-01-05.05:15:51
This should be retested with the new (as of IPC) implementations of these
heuristics.
msg508 (view) Author: erez Date: 2010-09-08.13:38:52
Using lmcount and FF synergy seems to be significantly slower than using them 
separately (at least on a few problems I checked).
Example: on blocks-8-1, both methods evaluate the same number of states, but
using synergy takes 46.9 seconds, and without synergy (just defining the
heuristics separately) it takes 29.3 seconds.
I also see slowdowns just using lazy wastar, but the iterated search is much 
longer, which makes it more noticeable.
msg501 (view) Author: malte Date: 2010-09-07.15:46:18
Split off from issue20.

Our current h_add heuristic is a "poor man's h_add" (see comments in the code)
and shouldn't really be called h_add in its current form. As far as I know, the
exploration that is used for the LAMA/FF synergy is based on a proper h_add
implementation, since that version of the FF heuristic is a proper FF(h_add).

We should:
 * unify this code, i.e., don't have more than one implementation of the same
   thing
 * replace our h_add with a proper h_add, maybe leaving the "poor man's" around
   for a while to make comparisons
 * optionally replace our FF with LAMA's FF(h_add), or allow variations of FF
   such as FF(h_add), FF(h_max) and the default FF(level) (identical to
   FF(h_max) for unit-cost problems)
History
Date                 User    Action  Args
2014-10-06 22:34:38  silvan  set     status: testing -> resolved; messages: + msg3684
2014-10-06 14:31:33  malte   set     messages: + msg3676
2014-10-06 14:15:06  silvan  set     messages: + msg3675
2014-10-06 12:35:12  malte   set     messages: + msg3672
2014-10-06 12:34:40  silvan  set     messages: + msg3671
2014-10-05 13:09:03  malte   set     messages: + msg3657
2014-10-05 00:31:54  erez    set     messages: + msg3656
2014-10-04 19:20:10  malte   set     messages: + msg3631
2014-09-29 12:35:48  silvan  set     messages: + msg3599
2014-09-28 19:06:49  malte   set     messages: + msg3589
2014-09-28 18:59:04  silvan  set     messages: + msg3588
2014-09-28 18:47:18  malte   set     messages: + msg3587
2014-09-28 18:32:57  silvan  set     messages: + msg3585
2014-09-28 18:21:11  malte   set     messages: + msg3584
2014-09-28 18:10:28  silvan  set     messages: + msg3583
2014-09-28 17:22:05  malte   set     messages: + msg3582
2014-09-28 16:19:18  silvan  set     messages: + msg3578
2014-09-26 14:21:49  malte   set     messages: + msg3553
2014-09-26 00:33:27  silvan  set     messages: + msg3549
2014-09-26 00:09:05  silvan  set     messages: + msg3548
2014-09-25 21:12:23  malte   set     messages: + msg3547
2014-09-25 20:49:46  silvan  set     messages: + msg3546
2013-08-01 10:56:54  malte   set     messages: + msg2600
2013-08-01 10:51:57  silvan  set     messages: + msg2599
2013-07-31 21:09:42  malte   set     messages: + msg2584
2013-07-29 10:15:36  silvan  set     messages: + msg2575
2013-07-29 00:05:22  malte   set     messages: + msg2574
2013-07-28 23:12:03  silvan  set     messages: + msg2573
2013-07-28 22:18:38  malte   set     messages: + msg2572
2013-07-28 22:03:51  silvan  set     messages: + msg2571
2013-07-28 20:21:42  malte   set     messages: + msg2570
2013-07-28 18:57:54  silvan  set     messages: + msg2569
2013-07-28 15:05:00  malte   set     messages: + msg2567
2013-07-28 14:44:30  silvan  set     messages: + msg2566
2013-07-28 12:47:10  malte   set     messages: + msg2565
2013-07-28 11:53:20  silvan  set     messages: + msg2564
2013-07-27 12:54:21  silvan  set     messages: + msg2561
2013-07-27 10:37:15  malte   set     messages: + msg2560
2013-07-27 10:31:35  malte   set     messages: + msg2559
2013-07-27 10:21:11  silvan  set     messages: + msg2558
2013-07-27 09:48:46  malte   set     messages: + msg2557
2013-07-27 01:37:48  silvan  set     messages: + msg2556
2013-07-26 18:30:14  malte   set     messages: + msg2555
2013-07-26 12:24:36  silvan  set     messages: + msg2553
2013-07-26 12:18:28  silvan  set     messages: + msg2552
2013-07-26 10:26:09  silvan  set     messages: + msg2551
2013-07-25 16:20:19  malte   set     messages: + msg2549
2013-07-24 16:25:49  silvan  set     messages: + msg2548
2013-07-23 15:12:01  silvan  set     messages: + msg2547
2013-07-23 14:15:01  malte   set     messages: + msg2546
2013-07-09 12:08:07  silvan  set     messages: + msg2542
2013-07-02 17:12:22  silvan  set     messages: + msg2540
2013-06-25 17:55:15  silvan  set     messages: + msg2521
2013-06-02 19:05:30  silvan  set     messages: + msg2485
2013-06-02 15:43:05  malte   set     messages: + msg2483
2012-05-22 14:58:19  silvan  set     messages: + msg2230
2012-05-22 14:44:02  malte   set     messages: + msg2229
2012-05-21 14:51:54  silvan  set     messages: + msg2223
2012-05-09 13:34:36  malte   set     messages: + msg2184
2011-12-13 15:55:37  malte   set     messages: + msg1993
2011-12-13 15:27:38  silvan  set     messages: + msg1992
2011-12-06 12:45:44  malte   set     messages: + msg1983
2011-12-06 11:36:19  silvan  set     messages: + msg1981
2011-11-30 12:22:26  malte   set     messages: + msg1979
2011-11-30 10:32:32  silvan  set     messages: + msg1977
2011-11-08 19:36:28  malte   set     assignedto: erez -> silvan; messages: + msg1894
2011-11-08 10:37:35  silvan  set     messages: + msg1892
2011-10-25 16:45:34  erez    set     messages: + msg1839
2011-10-25 16:44:33  malte   set     messages: + msg1838
2011-10-25 16:41:45  malte   set     nosy: + silvan
2011-08-15 16:45:49  erez    set     messages: + msg1647
2011-08-14 14:14:36  erez    set     messages: + msg1618
2011-08-14 12:35:10  malte   set     title: proper h_add and FF(h_add) heuristic implementations -> FF/LM synergy is (at least sometimes) much slower than using the two heuristics independently
2011-08-14 12:34:08  malte   set     messages: + msg1616
2011-08-14 10:03:02  erez    set     messages: + msg1614
2011-08-13 22:20:51  malte   set     assignedto: malte -> erez; messages: + msg1612
2011-01-05 05:15:51  malte   set     status: chatting -> testing; messages: + msg1028
2010-10-15 19:05:06  malte   set     assignedto: malte
2010-09-08 13:38:52  erez    set     messages: + msg508
2010-09-07 15:46:18  malte   create