Issue564

Title: reuse per-task information
Priority: wish
Status: chatting
Superseder: (none)
Nosy List: florian, jendrik, malte
Assigned To: (none)
Keywords: (none)
Summary: (none)

Created on 2015-07-26.17:04:57 by florian, last changed by malte.

Messages
msg8902 Author: malte Date: 2019-06-14.12:02:01
It is only partially solved. Large objects related to a task get released when
the task dies, but there is no mechanism for releasing them earlier.

Florian wrote "Ideally, we would like to compute them only once per task and
remove them after we are done with them." I don't think we can currently remove
them once we are done with them, only once we are done with the task altogether.

Taking the causal graph as an example: I think we never need it once all
heuristics have been initialized, but it will nevertheless survive until the
search has completed (I think).

I don't think there is an easy way to address this because we cannot easily
predict how long an object will be needed.
msg8899 Author: jendrik Date: 2019-06-14.11:45:23
I think the problem from msg4470 has been solved by the PerTaskInformation class
and its subscriber mechanism that ensures the information relating to deleted
tasks is also deleted.
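
Roughly, the subscriber idea works like the following sketch (class and member
names here are made up for illustration and do not match the actual code):

    #include <functional>
    #include <memory>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Hypothetical stand-in for AbstractTask; only the destruction hook is shown.
    class Task {
        std::vector<std::function<void(const Task *)>> on_destroy;
    public:
        ~Task() {
            // Tell every subscriber that this task is going away.
            for (const auto &callback : on_destroy)
                callback(this);
        }
        void subscribe(std::function<void(const Task *)> callback) {
            on_destroy.push_back(std::move(callback));
        }
    };

    // Per-task cache: builds an entry on first access and erases it again
    // when the corresponding task is destroyed.
    template<typename Entry>
    class PerTaskCache {
        std::unordered_map<const Task *, std::unique_ptr<Entry>> entries;
    public:
        Entry &get(Task &task) {
            auto it = entries.find(&task);
            if (it == entries.end()) {
                it = entries.emplace(&task, std::make_unique<Entry>()).first;
                // Drop the entry once the task dies. (A real implementation
                // also has to unsubscribe when the cache itself is destroyed.)
                task.subscribe([this](const Task *t) {entries.erase(t);});
            }
            return *it->second;
        }
    };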

The problem from msg6251 is still relevant though. I'm renaming this issue to
reflect this.
msg6251 Author: malte Date: 2017-04-27.18:17:19
Ideally we should also allow reusing large data structures in cases where the
tasks are slightly different, but the same data structure can be used anyway.

An example of this is the successor generator used by the landmark heuristic to
compute preferred operators. It cannot in general reuse the successor generator
of the search algorithm that uses the heuristic because the two tasks might not
be identical. In particular, in LAMA one of them might use an unmodified task,
while the other one might use a task with plus-one cost. But that is not really
a good reason to invest all the effort of building a new successor generator
because these tasks are close enough that the same successor generator can be
used for both.

Another "almost-match" case also arises within LAMA, where the second and
subsequent searches work with different action costs than the first one, but
that is not really a good reason to recompute the successor generator.

Ditto for causal graphs: no real need to recompute them if only the action costs
are different.
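
One possible direction, sketched here with made-up names (this is not existing
code): let cost-adapting task transformations expose the task whose operator
structure they leave untouched, and key the successor generator cache on that
task rather than on the transformed one:

    #include <memory>
    #include <unordered_map>

    // Made-up task classes; only the part relevant for sharing is shown.
    class Task {
    public:
        virtual ~Task() = default;
        // The task whose operator structure (preconditions/effects) this task
        // shares. Tasks that only change action costs point to their parent.
        virtual const Task *get_structural_task() const {
            return this;
        }
    };

    class CostAdaptedTask : public Task {
        const Task *parent;
    public:
        explicit CostAdaptedTask(const Task *parent) : parent(parent) {}
        const Task *get_structural_task() const override {
            // Only the costs differ, so the parent's successor generator fits.
            return parent->get_structural_task();
        }
    };

    struct SuccessorGenerator {
        // Built from the operator structure only; costs are irrelevant here.
    };

    class SuccessorGeneratorCache {
        std::unordered_map<const Task *,
                           std::shared_ptr<SuccessorGenerator>> cache;
    public:
        std::shared_ptr<SuccessorGenerator> get(const Task &task) {
            // The unmodified task and its cost-adapted variant map to the same
            // key, so they share one successor generator.
            auto &entry = cache[task.get_structural_task()];
            if (!entry)
                entry = std::make_shared<SuccessorGenerator>();
            return entry;
        }
    };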
msg4470 Author: florian Date: 2015-07-26.17:04:57
While integrating the preprocessor into the search code (issue26), we switched
the classes that previously depended on the global task (causal graph, successor
generator, DTGs) to the new task interface. Ideally, we would like to compute
them only once per task and remove them after we are done with them.

The causal graph currently uses a factory with a local cache to only generate
one instance per AbstractTask, but its entries are never deleted. The successor
generator is not cached at all, which means that we create two instances in the
call "astar(ipdb())" (one for the search, one for ipdb's sampling).
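
For illustration, a heavily simplified sketch of the kind of factory described
above (names and details are not exact):

    #include <memory>
    #include <unordered_map>

    class AbstractTask {};

    class CausalGraph {
    public:
        explicit CausalGraph(const AbstractTask &) {
            // Expensive construction elided.
        }
    };

    // One causal graph per task, but the cache entries are never erased,
    // so they stay alive until the program exits.
    const CausalGraph &get_causal_graph(const AbstractTask &task) {
        static std::unordered_map<const AbstractTask *,
                                  std::unique_ptr<CausalGraph>> cache;
        auto &entry = cache[&task];
        if (!entry)
            entry = std::make_unique<CausalGraph>(task);
        return *entry;
    }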

We should re-think the way this kind of information is stored and when it should
be removed.
History
Date                 User     Action  Args
2019-06-14 12:02:01  malte    set     messages: + msg8902
2019-06-14 11:45:23  jendrik  set     messages: + msg8899
                                      title: Better way of storing per-task information -> reuse per-task information
2017-04-27 18:17:19  malte    set     messages: + msg6251
2015-07-26 19:04:40  jendrik  set     nosy: + jendrik
2015-07-26 17:04:57  florian  create