Issue140

Title: compare performance of new landmark code to code in Erez & Carmel's paper
Priority: wish
Status: resolved
Superseder:
Nosy List: erez, malte
Assigned To:
Keywords:
Optional summary:

Created on 2010-11-02.01:14:02 by malte, last changed by erez.

Messages
msg806 (view) Author: erez Date: 2010-12-10.13:46:42
I think we can close this.
msg802 (view) Author: malte Date: 2010-12-10.13:39:11
Erez, can we close this?
msg682 (view) Author: erez Date: 2010-11-03.10:44:52
There is more to this than just action landmarks. See msg681 in issue74.
msg676 (view) Author: erez Date: 2010-11-02.05:20:25
I think the explanation for point 3 is the new action landmark discovery method,
which is not only more memory-efficient but also more informative.
In the old IJCAI version, there was no way to discover that an action landmark
was "needed again" (except by using LM-A*). In the new version, this happens easily.
I will try to confirm this by disabling action landmarks and comparing.
msg675 (view) Author: malte Date: 2010-11-02.02:30:57
One more example of points 1 and 2: on Blocks-9-1 with optimal cost
partitioning, the new code reduces memory usage from 356652 KB to 38188 KB and
runtime from 2462.62s to 92.71s, while leaving everything else the same (214844
evaluated states and 247357 evaluations).
msg674 (view) Author: malte Date: 2010-11-02.01:14:02
The new landmark heuristic code from the emil-new branch seems to have some
rather nice properties compared to the original admissible landmarks:

 1. reduced memory usage
 2. reduced runtime when using LP-based cost partitioning
 3. reduced number of expansions in some cases

Point 1. is mentioned in issue74, which shows improvements of overall memory
usage of the planner around a factor of 10 or so in some informal tests.

Point 2. is something I just tested informally, comparing the same two planner
versions as in issue74 but with optimal cost partitioning, on Blocksworld
9-1. The result is that the new code reduces runtime from 88.41 seconds to 2.68
seconds while keeping the number of evaluations the same.
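For reference, the LP behind optimal cost partitioning maximizes the total cost assigned to landmarks, subject to no action's cost being exceeded by the landmarks it achieves. Here is a minimal sketch using scipy; all names (`landmarks`, `achievers`, `cost`) are illustrative, not the planner's actual data structures:

```python
from scipy.optimize import linprog

def optimal_cost_partitioning(landmarks, achievers, cost):
    """Sketch of LP-based optimal cost partitioning.

    landmarks: list of landmark ids
    achievers: dict mapping each landmark to the set of actions achieving it
    cost:      dict mapping each action to its cost

    Variables:   one nonnegative cost c_lm per landmark.
    Constraints: for each action a, the costs of the landmarks it
                 achieves must sum to at most cost(a).
    Objective:   maximize the sum of landmark costs (the heuristic value).
    """
    actions = sorted({a for lm in landmarks for a in achievers[lm]})
    # One inequality row per action: sum of its landmarks' costs <= cost(a).
    a_ub = [[1.0 if a in achievers[lm] else 0.0 for lm in landmarks]
            for a in actions]
    b_ub = [cost[a] for a in actions]
    # linprog minimizes, so negate the objective to maximize.
    res = linprog(c=[-1.0] * len(landmarks), A_ub=a_ub, b_ub=b_ub,
                  bounds=(0, None))
    return -res.fun
```

E.g., two landmarks whose only achiever is a single action of cost 3 together get heuristic value 3.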

Point 3. is something that worries me slightly since I don't have an explanation
for it. We see in msg673 of issue74 some rather large differences in the number
of expanded states between the two branches when using uniform cost
partitioning. I don't think we changed anything in the LM generation method, and
the number of LMs and orderings is the same in both cases. Also, the search code
is exactly the same in both cases; the only difference is in the landmarks code.
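For concreteness, under uniform cost partitioning the heuristic value is fully determined by the landmark set, the achiever sets, and the action costs. A minimal sketch of the computation (names are illustrative, not the planner's actual API):

```python
def uniform_cost_partitioning(landmarks, achievers, cost):
    """Sketch of uniform cost partitioning for landmarks.

    Each action's cost is split evenly among the landmarks it achieves;
    each landmark is then worth the cheapest share among its achievers,
    and the heuristic value is the sum over all landmarks.
    """
    # Count how many landmarks each action achieves.
    num_covered = {}
    for lm in landmarks:
        for a in achievers[lm]:
            num_covered[a] = num_covered.get(a, 0) + 1
    return sum(min(cost[a] / num_covered[a] for a in achievers[lm])
               for lm in landmarks)
```

Since this is deterministic given identical inputs, differing h values under uniform cost partitioning would point at the achiever sets (or which landmarks count as action landmarks) differing between the two branches.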

I'd like to know why the heuristic values differ here, since this can be either

 (a) a bug, or
 (b) a contribution.

It would be good to know which. :-)


So, in particular in light of the LM journal paper we want to write, it would be
good to

 (a) run some detailed experiments comparing the new code to the IJCAI paper
     code in terms of coverage, memory usage, expansions, and runtime.
 (b) find out why exactly we seem to get different h values between the two
     code versions in the example task above.

Erez, do you have time to deal with this?
History
Date User Action Args
2010-12-10 13:46:42  erez   set     status: chatting -> resolved; messages: + msg806
2010-12-10 13:39:11  malte  set     messages: + msg802
2010-11-03 10:44:52  erez   set     messages: + msg682
2010-11-02 05:20:26  erez   set     messages: + msg676
2010-11-02 02:30:57  malte  set     messages: + msg675
2010-11-02 01:14:02  malte  create