The new landmark heuristic code from the emil-new branch seems to have some
rather nice properties compared to the original admissible landmark code:
1. reduced memory usage
2. reduced runtime when using LP-based cost partitioning
3. reduced number of expansions in some cases
Point 1. is mentioned in issue74, which reports that overall memory usage of
the planner improves by roughly a factor of 10 in some informal tests.
Point 2. is something I just tested informally, comparing the same two planner
versions as in issue74 but with optimal cost partitioning, on blocksworld 9-1.
The new code reduces runtime from 88.41 seconds to 2.68 seconds while the
number of evaluations stays the same.
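As background for readers of this thread, optimal cost partitioning over landmarks can be phrased as a small LP: maximize the sum of per-landmark values h_L, where each h_L is bounded by the cost c(a, L) assigned to every achiever a of L, and each action's assigned costs must sum to at most its original cost. Here is a hypothetical toy sketch using scipy (the landmarks, achievers, and costs are made up for illustration; this is not the planner's actual LP code):

```python
# Toy sketch of LP-based optimal cost partitioning over landmarks
# (a hypothetical illustration, not the actual planner implementation).
# Maximize sum of h_L subject to:
#   h_L <= c(a, L)            for every achiever a of landmark L
#   sum_L c(a, L) <= cost(a)  for every action a
# with all variables nonnegative.
from scipy.optimize import linprog

# Made-up instance: landmark L1 is achieved only by action a;
# landmark L2 is achieved by actions a and b; cost(a) = 4, cost(b) = 1.
# Variable order: [c_a_L1, c_a_L2, c_b_L2, h_L1, h_L2].
objective = [0, 0, 0, -1, -1]      # linprog minimizes, so negate h terms
A_ub = [
    [1, 1, 0, 0, 0],               # c_a_L1 + c_a_L2 <= cost(a) = 4
    [0, 0, 1, 0, 0],               # c_b_L2 <= cost(b) = 1
    [-1, 0, 0, 1, 0],              # h_L1 <= c_a_L1
    [0, -1, 0, 0, 1],              # h_L2 <= c_a_L2
    [0, 0, -1, 0, 1],              # h_L2 <= c_b_L2
]
b_ub = [4, 1, 0, 0, 0]
res = linprog(objective, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 5, method="highs")
optimal_h = -res.fun               # optimal cost-partitioned heuristic value
# Here the LP puts 3 of a's cost on L1 and 1 on L2, giving h = 4.0,
# whereas splitting a's cost uniformly would only give 2 + min(2, 1) = 3.
```

The toy instance also illustrates why the LP variant can yield strictly higher h values than uniform splitting, at the price of solving one LP per evaluated state.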
Point 3. is something that worries me slightly since I don't have an explanation
for it. We see in msg673 of issue74 some rather large differences in the number
of expanded states between the two branches when using uniform cost
partitioning. I don't think we changed anything in the LM generation method, and
the number of LMs and orderings is the same in both cases. Also, the search code
is exactly the same in both cases; the only difference is in the landmarks code.
I'd like to know why the heuristic values differ here, since this can be either
(a) a bug, or
(b) a contribution.
It would be good to know which. :-)
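For reference, uniform cost partitioning as I understand it can be sketched as follows: each action's cost is split evenly among the landmarks it achieves, and the heuristic value is the sum, over landmarks, of the cheapest partitioned cost of any achiever. This is a hypothetical illustration with made-up landmarks and costs, not the actual code in either branch:

```python
# Hypothetical sketch of uniform cost partitioning over landmarks
# (illustration only, not the implementation in either branch).
def uniform_cost_partitioning(landmarks, achievers, cost):
    # achievers: dict mapping landmark -> set of actions achieving it
    # cost: dict mapping action -> original action cost
    # Count how many landmarks each action achieves.
    num_achieved = {a: 0 for a in cost}
    for lm in landmarks:
        for a in achievers[lm]:
            num_achieved[a] += 1
    # Each landmark contributes the cheapest partitioned achiever cost.
    return sum(
        min(cost[a] / num_achieved[a] for a in achievers[lm])
        for lm in landmarks
    )

# Made-up example: a achieves L1 and L2, b achieves only L2.
h = uniform_cost_partitioning(
    landmarks=["L1", "L2"],
    achievers={"L1": {"a"}, "L2": {"a", "b"}},
    cost={"a": 2, "b": 3},
)
# a's cost 2 is split over two landmarks (1 each), b keeps its full 3;
# h = 1 + min(1, 3) = 2.0
```

One thing a sketch like this makes visible: the result depends on the achiever sets, so if the two branches compute achievers (e.g. first achievers vs. all achievers) even slightly differently, the h values can differ with identical landmarks and orderings.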
So, in particular in light of the LM journal paper we want to write, it would be
good to
(a) run some detailed experiments comparing the new code to the IJCAI paper
code in terms of coverage, memory usage, expansions, and runtime.
(b) find out why exactly we seem to get different h values between the two
code versions in the example task above.
Erez, do you have time to deal with this?