Created on 2010-08-05.10:45:15 by erez, last changed by erez.

msg444
Author: erez
Date: 2010-08-09.16:14:51

This was a false alarm - sorry.

msg443
Author: gabi
Date: 2010-08-09.16:13:51

I took a look but did not find a problem with the command-line flags. I need an
example, or I will regard this as solved.

msg420
Author: erez
Date: 2010-08-05.16:02:49

I documented the heuristic cache behavior in the wiki and opened Issue108.
Now to the ignoring part :-)
Also, there is currently a problem with reading the command-line flags for
iterated search; I asked Gabi to take a look.

msg416
Author: malte
Date: 2010-08-05.15:14:38

So the current behaviour is recomputation?
Then I'd suggest
* documenting on the wiki that currently there is no heuristic caching
* opening an issue for this
* and then ignoring it for now
I agree that a cache based on a unique state id is a good solution, or at least
the best I can come up with at the moment. :-)

msg415
Author: erez
Date: 2010-08-05.15:09:00

I think the best way to support this is by reusing the same heuristic object
and enabling the heuristic cache. It's probably best if we do proper heuristic
caching, as Malte suggested a long time ago, by assigning an id to each state
(via the SearchSpace) and having a vector in each heuristic.
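A minimal sketch of that idea (State and SearchSpace are the planner's existing
classes; get_id() and the exact layout are hypothetical, not the actual code):

    #include <vector>

    // The SearchSpace hands out a unique id per state; each heuristic keeps a
    // vector of cached values indexed by that id.
    class Heuristic {
        std::vector<int> cache;  // -2 marks "not computed yet"
    public:
        int evaluate(const State &state, SearchSpace &search_space) {
            int id = search_space.get_id(state);   // hypothetical accessor
            if (id >= (int) cache.size())
                cache.resize(id + 1, -2);          // grow lazily for new states
            if (cache[id] == -2)
                cache[id] = compute_heuristic(state);  // compute once, reuse later
            return cache[id];
        }
    protected:
        virtual int compute_heuristic(const State &state) = 0;
    };

With this, the cached values survive across search phases as long as the same
heuristic object is reused.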

msg414
Author: malte
Date: 2010-08-05.14:59:51

I think we need to decide in general which kind of information should be
preserved between different search algorithm runs and how this should be preserved.
This is true even without history-dependent heuristics, so let's consider that
simpler case first. For example, if we'd invoke a LAMA-style RWA* search, IIRC,
LAMA would throw away the search space between iterations (of course; that's
what RWA* is about), but heuristic values would be cached between iterations so
that they would never need to be recomputed.
Do we currently match this behaviour?
If yes, how is this done?
If not, how could this be done in a clean way?

msg413
Author: gabi
Date: 2010-08-05.14:45:03

Yes, but you need to predefine heuristics if you want to use one
instance both for the heuristic estimates and for the preferred
operators within one search. Otherwise, it will create two instances.
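For illustration, something along these lines (the preferred= option name is an
assumption here, not taken from this issue):

    ./downward --heuristic hff=ff() --search lazy_wastar(hff,weight=2,preferred=hff)

With the predefined hff, a single instance supplies both the estimates and the
preferred operators; writing ff() in both places would create two instances.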

msg412
Author: erez
Date: 2010-08-05.14:39:55

I'll start working on this.
Note that using predefined heuristics is now optional. We can do either
./downward --heuristic hff=ff() --search iterated(lazy_wastar(hff,weight=10),lazy_wastar(hff,weight=5))
or
./downward --search iterated(lazy_wastar(ff(),weight=10),lazy_wastar(ff(),weight=5))
So that's up to the user now.

msg411
Author: gabi
Date: 2010-08-05.14:37:19

I implemented the dry runs but have not tested them thoroughly yet; this is
best done after the integration into the iterated search.
I think the easiest way to use this is to do the dry run and to remember the
start positions of the sub-searches in the config vector. With this and the
config vector, it is possible to create the sub-searches on demand. Erez, I
assign this back to you.
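A minimal sketch of that on-demand creation (config, engine_start_positions and
create_search_engine are hypothetical names, just to show the shape of the idea):

    // The dry run records where each sub-search begins in the config vector.
    // The real engine for a phase is only parsed and constructed when the
    // iterated search actually reaches that phase.
    SearchEngine *IteratedSearch::create_phase_engine(int phase) {
        int start = engine_start_positions[phase];
        int end = start;
        return create_search_engine(config, start, end, /*dry_run=*/false);
    }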
One remaining problem is that the predefined heuristics are still used in
several searches. I guess this is a problem for history-aware heuristics,
but fine, for example, for pattern database heuristics. How should we proceed
with this?

msg410
Author: malte
Date: 2010-08-05.11:49:05

Right, all side effects should be skipped in dry run mode. Where evaluators etc.
are generated, I guess this is most easily accomplished by passing on the
dry_run flag to their constructors.
I've only looked at some heuristic factories so far, not the ones for the
search engines; the ones I've seen tend to have the important stuff at the end.
(Maybe there is a better way to implement this. The main things to avoid are
duplicating code in a way that can easily become inconsistent and anything
that makes writing the parse code significantly harder.)

msg409
Author: erez
Date: 2010-08-05.11:22:12

That's possible, but I think it might create objects (like heuristics) that
will just be lost afterwards.

msg408
Author: malte
Date: 2010-08-05.11:02:51

If this is the best way to do it (I don't know the code), one way to implement
it without duplicating too much code would be to add a dry_run flag to the
create_... functions. When set, they'd just skip the final "new" call and
return 0 instead.
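A minimal sketch of that pattern (the function name, parameter list and the
LazyWeightedAStar class are hypothetical, just for illustration):

    // Parse the options as usual, but in dry-run mode stop before constructing
    // the engine, so the configuration is only validated and nothing is created.
    SearchEngine *create_lazy_wastar(const std::vector<std::string> &config,
                                     int start, int &end, bool dry_run) {
        // ... parse the options from config[start..], setting end ...
        if (dry_run)
            return 0;  // no side effects: nothing is allocated
        return new LazyWeightedAStar(/* parsed options */);
    }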

msg407
Author: erez
Date: 2010-08-05.10:45:15

I've implemented iterated search in r4634.
Currently, all the SearchEngine objects are parsed and created at the beginning.
This is not only wasteful, it also causes a problem if I want to repeat the last
phase of the search (as LAMA does with w=1, with improving bounds), since the
same SearchEngine object is used for more than one search.
I would like to just read the string describing the search engine and create
the SearchEngine object only when it's needed.
Gabi - can you do this?
I will also implement support for a g-bound in eager and EHC search.

Date                 User   Action  Args
2010-08-09 16:14:51  erez   set     status: in-progress -> resolved; messages: + msg444
2010-08-09 16:13:51  gabi   set     messages: + msg443
2010-08-05 16:02:49  erez   set     messages: + msg420
2010-08-05 15:14:38  malte  set     messages: + msg416
2010-08-05 15:09:00  erez   set     messages: + msg415
2010-08-05 14:59:51  malte  set     messages: + msg414
2010-08-05 14:45:03  gabi   set     messages: + msg413
2010-08-05 14:39:55  erez   set     messages: + msg412
2010-08-05 14:37:19  gabi   set     assignedto: gabi -> erez; messages: + msg411
2010-08-05 11:49:06  malte  set     messages: + msg410
2010-08-05 11:22:12  erez   set     messages: + msg409
2010-08-05 11:02:51  malte  set     messages: + msg408
2010-08-05 10:45:15  erez   create