Issue559

Title: Solve component coordination problem
Priority: meta
Status: in-progress
Superseder:
Nosy List: florian, jendrik, malte, manuel, mkatz, patfer, silvan
Assigned To:
Keywords:

Created on 2015-07-21.15:02:48 by silvan, last changed by malte.

Summary
TODO:

- Discuss prototype at https://bitbucket.org/SilvanS/fd-dev/pull-requests/52
- Reuse per-task information (which includes Evaluators now) (issue564)

Some ancillary things to keep in mind:

- Can we enable reuse of heuristics across different searches? See issue121.
Messages
msg8908 (view) Author: malte Date: 2019-06-14.16:38:57
> The problem I described in msg8897 is related to issue564.

It is related, but I think it's important to realize that it's not the same
thing. Using a single heuristic object in "astar(lmcut())" rather than three
different ones is not something our current code does, whereas issue564 is about
a different form of missed opportunity for reuse.

It may or may not be a good idea to consider these two different kinds of reuse
together. I think the first may be easier to address than the second and/or
require different mechanisms. No action needed on this right now, I just wanted
to make sure the distinction doesn't get lost.
msg8903 (view) Author: jendrik Date: 2019-06-14.12:11:04
I have updated the part about the component coordination problem in the wiki and
 added a TODO list to the summary of this issue. (With this, I think we can
consider the story for the sprint done.)
msg8900 (view) Author: jendrik Date: 2019-06-14.11:46:21
The problem I described in msg8897 is related to issue564.
msg8897 (view) Author: jendrik Date: 2019-06-13.19:45:36
A first prototype version is now finished (see link in msg8876). Obviously,
there are some rough edges, but I think the prototype is in a state in which we
can discuss whether the pieces fit together.

I think the main open question is when and how we want to reuse plugins instead
of rebuilding them. For example, the prototype caches builds of the blind()
plugin. This ensures that we only have one heuristic object for "astar(blind())"
even though the heuristic builder is called for the f-evaluator, h-evaluator and
tiebreaking-evaluator. However, this currently doesn't work with task
transformations: for "astar(adapt_costs(cost_type=one), evaluator=blind())",
each call to the builder sees a different AbstractTask and we can't cache anything.
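
To make the caching idea above concrete, here is a minimal C++ sketch of a builder that caches per AbstractTask. All names and interfaces (BlindEvaluatorBuilder, build, ...) are hypothetical and not the actual prototype code: repeated calls with the same task return the same evaluator object, while a call with a different task (e.g. one produced by adapt_costs) misses the cache, which is exactly the problem described above.

    #include <map>
    #include <memory>

    class AbstractTask {};
    class Evaluator {};

    class BlindEvaluatorBuilder {
        // One cached evaluator per task the builder has been called with.
        std::map<const AbstractTask *, std::shared_ptr<Evaluator>> cache;
    public:
        std::shared_ptr<Evaluator> build(const AbstractTask &task) {
            auto it = cache.find(&task);
            if (it != cache.end())
                return it->second;  // same task: reuse the evaluator object
            auto evaluator = std::make_shared<Evaluator>();
            cache.emplace(&task, evaluator);  // first call for this task
            return evaluator;
        }
    };
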
msg8896 (view) Author: malte Date: 2019-06-13.13:54:40
I would treat this as a work-in-progress name.
msg8884 (view) Author: silvan Date: 2019-06-12.15:49:43
Also in the landmark and open list classes, we use the name factory.
msg8881 (view) Author: silvan Date: 2019-06-12.14:42:57
In the wiki, we wrote that the factories could be called "builders". In
merge-and-shrink, we already have factories called "factories", so this is
something to keep in mind.
msg8876 (view) Author: silvan Date: 2019-06-12.10:31:50
We started by implementing an exemplary factory for search engine/eager
search:
https://bitbucket.org/SilvanS/fd-dev/pull-requests/52/issue559-prototype/diff

This was pretty straightforward. The next step will be to do the same for a
minimal set of evaluators and open lists.
msg8834 (view) Author: malte Date: 2019-06-06.11:54:56
I noticed we have an old issue (issue121) which is about reusing heuristic
values in LAMA between different iterations of RWA* that use the same heuristic.
In non-unit-cost problems, the first iteration uses different heuristics from
later ones because it ignores action costs, but later iterations all work with
the same heuristic. Currently, the heuristic values are needlessly recalculated
in each search.

I think that when working on this meta-issue, the question of a "reuse context"
should be considered, even if we don't want to implement such reuse immediately.
It may be hard to retrofit it after the fact.
msg8467 (view) Author: jendrik Date: 2019-01-16.20:21:48
We discussed this further during our retreat and concluded that we would like to
try out the proposal from msg7674. When somebody tackles this, it's probably
best to first try it out for a small subset of evaluators and disable most
plugins to see if the design works.
msg7699 (view) Author: malte Date: 2018-09-21.10:24:40
Another random thought: in the factory-based model, perhaps PerStateInformation
doesn't need to support multiple state registries any more? Not 100% sure, but
it would be nice to get rid of this extra level of indirection when accessing
per-state data.

If we want to investigate this option, one thing we could do is change the
current code to test at runtime whether the PerStateInformation is only ever
used with one registry, and abort if it is used with more than one.
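
A rough sketch of the proposed runtime check, assuming a much-simplified PerStateInformation (this is not the real Fast Downward class, and check_registry is a made-up hook, purely an illustration of the idea): the object remembers the first state registry it is used with and aborts if it ever sees a different one.

    #include <cstdlib>
    #include <iostream>

    class StateRegistry {};

    template<typename Entry>
    class PerStateInformation {
        const StateRegistry *registry = nullptr;
    public:
        // Called on every access that involves a registry.
        void check_registry(const StateRegistry &current_registry) {
            if (!registry) {
                registry = &current_registry;  // remember the first registry
            } else if (registry != &current_registry) {
                std::cerr << "PerStateInformation used with more than one "
                          << "state registry" << std::endl;
                std::abort();  // abort as suggested above
            }
        }
    };
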
msg7675 (view) Author: malte Date: 2018-09-20.20:46:20
> A possible disadvantage is that this kind of transformation is specific to a
> particular kind of argument: "adapt_costs" here can only adapt evaluators,
> whereas our current design would allow the same transformation to be applied
> in different contexts.

Actually, it is probably possible to combine both approaches if we want (which I
don't advocate at the moment because of YAGNI, but it's perhaps good to think
about what the scope of possibilities is).

The idea is that we can reintroduce the notion of transformations, but they are
still external to the evaluator they are applied to, like so:

    transformed_evaluator(transformation=adapt_costs_transformation(one),
                          evaluator=ff())

Here, "adapt_costs_transformation" is a transformation (i.e., something that
knows how to map tasks to tasks and do some back-and-forth conversions), and
"transformed_evaluator" is an evaluator that allows evaluating another evaluator
(here: "ff()") under a generic transformation (here:
"adapt_costs_transformation(one)").

I think this gives us the expressive power of both approaches: it is still clear
which functionality lives where, and the notion of "transformations" is still
well encapsulated in the sense that in a planner with no notion of
transformations, the whole transformation mechanism could be added to the
planner externally by defining a new plugin type "transformation" and a new
evaluator "transformed_evaluator" without touching pre-existing plugins like "ff()".

And with this model, transformations can be applied in other contexts, too: for
example, we can add the landmark factory "transformed_landmark_factory" and
immediately use it with all transformations that support the back-and-forth
conversions necessary for making landmark factories work.
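
As an illustration only, here is a C++ sketch of what a generic "transformed_evaluator" could look like. It is simplified to evaluating whole tasks rather than states, and every class name (Transformation, Evaluator, TransformedEvaluator) is a placeholder, not existing Fast Downward API: the wrapper just glues a transformation and an inner evaluator together.

    #include <memory>
    #include <utility>

    class AbstractTask {};

    class Transformation {
    public:
        virtual ~Transformation() = default;
        // Maps a task to the task the inner evaluator should see.
        virtual std::shared_ptr<AbstractTask> transform(
            const std::shared_ptr<AbstractTask> &task) const = 0;
    };

    class Evaluator {
    public:
        virtual ~Evaluator() = default;
        virtual int compute_estimate(
            const std::shared_ptr<AbstractTask> &task) const = 0;
    };

    class TransformedEvaluator : public Evaluator {
        std::shared_ptr<Transformation> transformation;
        std::shared_ptr<Evaluator> inner;  // e.g. ff()
    public:
        TransformedEvaluator(std::shared_ptr<Transformation> transformation,
                             std::shared_ptr<Evaluator> inner)
            : transformation(std::move(transformation)),
              inner(std::move(inner)) {}

        int compute_estimate(
                const std::shared_ptr<AbstractTask> &task) const override {
            // Evaluate the inner evaluator on the transformed task; the
            // back-and-forth conversions (states, preferred operators)
            // would also live here.
            return inner->compute_estimate(transformation->transform(task));
        }
    };
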
msg7674 (view) Author: malte Date: 2018-09-20.20:37:38
Some food for thought: in the factory-based world, transformations as objects
passed into evaluators are perhaps not actually necessary.

In our current model, we combine an evaluator with a transformation like this:

    ff(transform=adapt_costs(cost_type=one))

Here, "ff" is an evaluator class and has a "transform" parameter, where
"adapt_costs" in a transformation class.

In the factory-based world, we could instead make this more compositional, where
"ff" doesn't have to know about the adaptation and "adapt_costs" is itself an
evaluator class that takes an evaluator as an argument:

    adapt_costs(cost_type=one,evaluator=ff())

Here, the whole notion of a transformation doesn't have to be a planner-level
concept. It just happens to be the case that "adapt_costs" plays around with the
tasks it is applied to before passing them on to its inner evaluator "ff()", and
it applies some back-transformations to the results it gets from ff() (to
convert back preferred operators) before passing them on to *its* caller.

I think this is a more elegant design, but it is only possible in the
factory-based setting where tasks are not bound during initial construction (as
currently) but only bound later, because "ff()" is ultimately applied to a task
which is transformed. This is not apparent from the syntax "ff()", so this won't
work if we require the task to be passed into the constructor of "ff()".

Advantages of the compositional view are that inner heuristics like ff() don't
need to know anything about the concept of transformations, and it is very clear
where the back-and-forth translation of states and preferred operators must
happen (within "adapt_costs").

We also get composition of transformations for free without having to devise
something like a "chain" transformation as discussed previously.

A possible disadvantage is that this kind of transformation is specific to a
particular kind of argument: "adapt_costs" here can only adapt evaluators,
whereas our current design would allow the same transformation to be applied in
different contexts. However, we currently only have real use cases for adapting
evaluators, and transformations being specific to a particular kind of object
can also make it easier to ensure that they are only applied in ways that
make sense/are safe.
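
For comparison, a minimal sketch (hypothetical names, simplified to per-task evaluation, not actual Fast Downward code) of the compositional view described above: "adapt_costs" is just another evaluator that rewrites the task itself before delegating to its inner evaluator, so no planner-level notion of a transformation is needed.

    #include <memory>
    #include <utility>

    class AbstractTask {};

    class Evaluator {
    public:
        virtual ~Evaluator() = default;
        virtual int compute_estimate(
            const std::shared_ptr<AbstractTask> &task) const = 0;
    };

    class AdaptCostsEvaluator : public Evaluator {
        std::shared_ptr<Evaluator> inner;  // e.g. ff()

        std::shared_ptr<AbstractTask> adapt_costs(
                const std::shared_ptr<AbstractTask> &task) const {
            // Would return a view of 'task' with adjusted action costs;
            // details omitted in this sketch.
            return task;
        }
    public:
        explicit AdaptCostsEvaluator(std::shared_ptr<Evaluator> inner)
            : inner(std::move(inner)) {}

        int compute_estimate(
                const std::shared_ptr<AbstractTask> &task) const override {
            // Delegate to the inner evaluator on the cost-adapted task; the
            // back-conversion of preferred operators would also happen here.
            return inner->compute_estimate(adapt_costs(task));
        }
    };
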
msg7673 (view) Author: malte Date: 2018-09-20.20:33:32
One observation: in the factory-based world, transformations as objects passed
into evaluators are perhaps not actually necessary.

Currently, we think of specifying an evaluator like this:

    ff(transform=adapt_costs(cost_type=one))

Here, "ff" is an evaluator class and has a "transform" parameter, where
"adapt_costs" in a transformation class.

In the factory-based world, we could instead make this more compositional, where
"ff" doesn't have to know about the adaptation and "adapt_costs" is itself an
evaluator class that takes an evaluator as an argument:

    adapt_costs(cost_type=one,evaluator=ff())

Here, the whole notion of a transformation doesn't have to be a planner-level
concept. It just happens to be the case that adapt_costs plays around with the
tasks it is applied to before passing them on to its inner evaluator "ff()", and
it applies some back-transformations to the results it gets from ff() (to
convert back preferred operators) before passing them on to *its* caller.

I think this is a more elegant design, but it is only possible in the
factory-based setting where tasks are not bound during initial construction (as
currently) but only bound later, because "ff()" is ultimately applied to a task
which is transformed, but this is not apparent from the syntax "ff()".

Advantages of this view are that inner heuristics like ff() don't need to know
anything about the concept of transformations, and it is very clear where the
back-and-forth translation must happen (within "adapt_costs").

We also get composition of transformations for free without having to devise
something like a "chain" transformation as discussed previously.

A possible disadvantage is that this kind of transformation is specific to a
particular kind of argument: "adapt_costs" here can only adapt evaluators,
whereas our current design would allow the same transformation to be applied in
different contexts. However, we currently only have real use cases for adapting
evaluators, and transformations being specific to a particular kind of object
can also make it easier to ensure that they are only applied in ways that
make sense/are safe.
msg7672 (view) Author: malte Date: 2018-09-20.20:01:52
Some further brief notes on the discussion we had today (some of this replicates
information from the messages below, included for completeness):

1. "task transformations" as given to the option parser are currently
represented as tasks in our components. This is not enough: we need access to
both the source and the target of the transformation.

Therefore, we should actually store a task *transformation* there, which
describes how to go from one task to another, rather than just the transformed
task itself.

The only place where we need to pass in a task proper "from the outside" is when
we want to invoke a search algorithm, i.e., when calling SearchEngine::search
(or whatever it is called).

2. In the medium term, we don't want to introduce task transformations for
search algorithms. If we want to eventually support it, we first need to clarify
what exactly is meant by that. Does the user want a plan for the original or
transformed task? On which task are the evaluators evaluated? The search
algorithm has a potpourri of responsibilities, and if we introduce task
transformations for search algorithms eventually, it is even possible that we
support different transformations for different aspects of the algorithm.

3. The one place in search algorithms where we currently use something like task
transformations is in the adjusted action costs used for the g evaluators. (I
*think* this is the only place.) The proposal is to support this functionality
by moving more of the handling of g values out of the search algorithm and into
the g evaluator, which (as an evaluator) supports transformations.

4. Getting task transformations, not tasks, from the option parser means that we
no longer have access to the task upon construction. This should actually be a
blessing because the notion of "the task" is ill-defined at this point in the
code, as explained by Silvan in the first message of this issue. So applying this
change will push us towards a more factory-based view of things, which is what
we need anyway, for example for component reuse.

On the negative side, it has the potential of introducing lots of boilerplate
code: the hierarchy of "task-independent" factories that will make up something
like the f-evaluator of A* will likely need to be mirrored by a hierarchy of
"task-specific evaluators" that are instantiated during search. It would be nice
if we could somehow minimize the boilerplate for the simple case.

It is also an interesting question *when* to construct these objects. It may be
a minor concern, but it would be nice if the time at which expensive
computations happen were predictable so that the log output order doesn't become
too confusing and we can still reliably time certain parts of the code without
worrying that we also capture heavy initializations of other objects.

5. In the "factories mirrored by concrete objects" world, our current "reuse" of
variables like "h" in places like "eager_greedy([h],preferred=[h])" can perhaps
be addressed by striving to reuse the same "concrete" object whenever we have
the same factory. But we have to be careful only to do this if the context
actually allows it. If the two concrete "h"s operate on different tasks, we
clearly cannot reuse one object for both.

6. The factory-based view perhaps also gives us an elegant way of addressing the
"reparsing" hacks that we currently need for iterated_search.{h,cc}. Perhaps it
also means we don't need a (non-help-mode) dry run for the option parser any
more because processing the command line and constructing the factories will
actually be cheap. With the current option parser design, we will still need the
help-mode dry run, though.

Random thoughts about class names for factory vs. concrete:

EvaluatorFactory vs. Evaluator?
Evaluator vs. TaskEvaluator?

other ideas?
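
Purely as an illustration of the factory/concrete split from points 4 and 5 and the naming question above, a C++ sketch with placeholder names (EvaluatorFactory and TaskEvaluator are taken from the discussion, not settled API; the interfaces are invented for the sketch):

    #include <memory>

    class AbstractTask {};
    class State {};

    // Task-specific object, constructed only once the task is known.
    class TaskEvaluator {
    public:
        virtual ~TaskEvaluator() = default;
        virtual int compute_estimate(const State &state) const = 0;
    };

    // Task-independent object built from the command line; it mirrors the
    // structure of the option string and only produces concrete evaluators
    // when a task is supplied (e.g. from SearchEngine::search).
    class EvaluatorFactory {
    public:
        virtual ~EvaluatorFactory() = default;
        virtual std::shared_ptr<TaskEvaluator> create(
            const std::shared_ptr<AbstractTask> &task) = 0;
    };

Point 5 could then amount to caching the concrete object per factory and task and returning the cached one when both match.
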
msg7671 (view) Author: malte Date: 2018-09-20.19:44:28
Extracting from Florian's msg7659 at issue833:

================================================================================

The current assumption that the task of a heuristic or search engine is known
when it is constructed is not compatible with general task transformations and
predefined objects. For example, consider a heuristic h with a task
transformation f_h that is nested in two heuristics. If the other heuristics use
transformations f_1 and f_2, then the inner heuristic would have to compute
values for both f_h(f_1(root)) and f_h(f_2(root)). This is not possible with our
current model.

We discussed this today and plan to move towards code where task transformations
are represented as functions on tasks (mapping tasks to other tasks) and where
we use factories to pass the relevant task to objects like heuristics when they
are used. Once this is done, we can pass g_root_task to SearchEngine::search,
which will use the factories to generate the required objects for the given task.

================================================================================
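
A tiny illustration of the nested-transformation example above, with transformations modelled as plain functions on tasks (all types and the helper function are hypothetical): the task the inner heuristic works on depends on which outer transformation it is nested in, so it can only be bound when the factories are invoked, not when the command line is parsed.

    #include <functional>
    #include <memory>

    class AbstractTask {};

    using TaskPtr = std::shared_ptr<AbstractTask>;
    // A task transformation maps a task to another task.
    using TaskTransformation = std::function<TaskPtr(const TaskPtr &)>;

    // The task the inner heuristic actually has to evaluate: f_h(f_1(root))
    // when nested in the first heuristic, f_h(f_2(root)) in the second.
    TaskPtr inner_task(const TaskTransformation &f_h,
                       const TaskTransformation &f_outer,
                       const TaskPtr &root) {
        return f_h(f_outer(root));
    }
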
msg5122 (view) Author: malte Date: 2016-01-20.15:59:33
It's not about initialization: it's about what happens if two searches use the
same heuristic concurrently and hence share path-dependent information.
msg5121 (view) Author: florian Date: 2016-01-20.15:56:41
This also affects how path-dependent heuristics should be initialized (see msg5120).
msg4419 (view) Author: silvan Date: 2015-07-21.15:02:48
A "component" in the sense of this issue is an object that can be created by the
user on the command line, such as an FF heuristic, a merge strategy, a landmark
factory or a search algorithm. Currently, we have two kinds of components: ones
that can be assigned to a variable (e.g. landmark factories, heuristics) and
ones that cannot (e.g. merge strategies). One question is whether we want to get
rid of this distinction. Currently it serves a purpose because some of our
components cannot work when used in two contexts at the same time, and not being
able to assign them to variables prevents them from being used in two contexts.

One of the general problems in planner architecture that we still have to solve
is: how do we ensure that different components of the planner fit together
properly? For example, a search algorithm and a heuristic somehow need to be
able to exchange information about the task (the search algorithm must tell the
heuristic which state to consider, and the heuristic must tell the search
algorithm the set of preferred operators), but they might each be working on a
different task object. We have partially solved this by thinking of "task
transformations" instead of tasks, but this is still rather half-baked.

There are also issues related to using the same component in two different
contexts without having interference due to its attributes. For example, if we
want to use the same landmark graph in two heuristics, I think this would
currently cause problems because there is information about ongoing computations
stored within the landmark graph object. (This could be resolved, for example,
by making sure that something like this never happens -- e.g. by making all
references to other components const.)

For now, we don't really have a good idea how to address this. The main purpose
of this issue is to have a reference point that we can use to annotate parts of
the code that relate to this issue. When working on this issue, we should make
sure to search the code comments for references to this issue number.
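
To illustrate the const-reference idea mentioned in parentheses above (class names are placeholders, not the actual landmark code): if heuristics only hold a const view of the shared landmark graph, state about ongoing computations cannot end up inside the shared object and has to live in the heuristics themselves, so sharing the graph between two heuristics stays safe.

    #include <memory>
    #include <utility>

    class LandmarkGraph {
        // Only immutable information about the landmarks lives here.
    };

    class LandmarkHeuristic {
        // Const view of the shared graph: this heuristic cannot store any
        // per-computation state in it.
        std::shared_ptr<const LandmarkGraph> landmark_graph;
    public:
        explicit LandmarkHeuristic(std::shared_ptr<const LandmarkGraph> graph)
            : landmark_graph(std::move(graph)) {}
    };
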
History
Date                | User    | Action | Args
2019-06-14 16:38:57 | malte   | set    | messages: + msg8908
2019-06-14 12:11:04 | jendrik | set    | messages: + msg8903; summary: Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. -> TODO: - Discuss prototype at https://bitbucket.org/SilvanS/fd-dev/pull-requests/52 - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121.
2019-06-14 11:46:21 | jendrik | set    | messages: + msg8900
2019-06-13 19:45:36 | jendrik | set    | status: chatting -> in-progress; messages: + msg8897
2019-06-13 13:54:40 | malte   | set    | messages: + msg8896
2019-06-12 15:49:43 | silvan  | set    | messages: + msg8884
2019-06-12 14:42:57 | silvan  | set    | messages: + msg8881
2019-06-12 11:58:50 | manuel  | set    | nosy: + manuel
2019-06-12 10:31:50 | silvan  | set    | messages: + msg8876
2019-06-06 11:54:56 | malte   | set    | messages: + msg8834; summary: Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121.
2019-02-27 19:18:57 | mkatz   | set    | nosy: + mkatz
2019-01-16 20:21:48 | jendrik | set    | messages: + msg8467
2018-10-04 10:13:37 | patfer  | set    | nosy: + patfer
2018-09-21 10:24:40 | malte   | set    | messages: + msg7699
2018-09-20 20:46:20 | malte   | set    | messages: + msg7675
2018-09-20 20:37:38 | malte   | set    | messages: + msg7674
2018-09-20 20:33:32 | malte   | set    | messages: + msg7673
2018-09-20 20:01:52 | malte   | set    | messages: + msg7672
2018-09-20 19:44:28 | malte   | set    | messages: + msg7671
2016-01-20 15:59:33 | malte   | set    | messages: + msg5122
2016-01-20 15:56:41 | florian | set    | messages: + msg5121
2015-07-22 01:36:49 | florian | set    | nosy: + florian
2015-07-21 20:48:16 | jendrik | set    | nosy: + jendrik
2015-07-21 15:02:48 | silvan  | create |