Issue559

Title: Solve component coordination problem
Priority: meta
Status: in-progress
Superseder: (none)
Nosy List: clemens, florian, jendrik, malte, mkatz, patfer, salome, silvan, simon
Assigned To: (none)
Keywords: (none)
Summary:
TODO:
- rework OptionParser to parse the let syntax (issue1040)
- Use Options objects only in Features (issue1082)
- Reuse per-task information (which includes Evaluators now) (issue564)

Some ancillary things to keep in mind:

- Can we enable reuse of heuristics across different searches? See issue121.

TODOs for final implementation:
1. Check that goalcount heuristic benefits from the changes in this issue.
Converting GlobalStates to States made it slower (see issue554) and the change 
in issue348 should imply that the performance goes up again if we avoid copying 
the state due to a task transformation.

Created on 2015-07-21.15:02:48 by silvan, last changed by simon.

Files:
blackboard-2023-07-19.png (image/png), uploaded by malte on 2023-07-19.13:45:16
Messages
msg11500 (view) Author: simon Date: 2024-01-02.18:23:25
I continued on the prototype.
- It has been rebased onto the current main branch of Fast Downward.
- IteratedSearch no longer has a ComponentMap field and also works in a nested manner.
- The parameters of the (TaskIndependent)Components match the wiki (besides the additional string parameter 'name', which always comes right in front of the verbosity parameter; related to issue1082).

The next step would be to discuss whether this is the direction we want to go for the rest of the code and how to tackle it.

PR: https://github.com/salome-eriksson/downward/pull/2
msg11481 (view) Author: simon Date: 2023-11-02.18:39:15
yes, I agree with your phrasing.
msg11480 (view) Author: malte Date: 2023-11-02.16:12:58
> We talked about the ComponentMap and the IteratedSearch. We said that the
> IteratedSearch should not have one big ComponentMap for all iterations but
> rather that for each iteration a new ComponentMap is used.

I would phrase this a bit differently, unless the discussion evolved further until I left. The way I understood the discussion, the IteratedSearch shouldn't have a component map at all; the existence and life cycle of the component map should be internal to the create methods of the domain-independent components.
msg11478 (view) Author: simon Date: 2023-11-02.15:51:34
We had a meeting about the prototype.

We talked about the ComponentMap and the IteratedSearch. We said that the IteratedSearch should not have one big 
ComponentMap for all iterations but rather that for each iteration a new ComponentMap is used.

We also talked about the constructors of the Components. We want the unpacked constructors to have the same 
parameters as in the wiki.
For example: https://www.fast-downward.org/Doc/OpenList#Tie-breaking_open_list

The next big step is to do this for all Components. This is also relevant for the Python interface (issue757), and there is already a dedicated issue for that: issue1082.
msg11445 (view) Author: simon Date: 2023-10-09.15:44:01
About the general design of the prototype:

Each component (EagerSearch, SumEvaluator, LandMarkCutHeuristic, CostAdaptedTask, etc.) has a corresponding factory 
(TaskIndependentEagerSearch, TaskIndependentSumEvaluator, TaskIndependentLandMarkCutHeuristic, TaskIndependentCostAdaptedTask, etc.). 
These factories have an analogous inheritance hierarchy.
The existing inheritance hierarchy is extended by the classes 'Component' and 'TaskIndependentComponent'.

Parsing the command line no longer results in a SearchEngine that has some Components as fields. Instead it results in a 
TaskIndependentSearchEngine with some TaskIndependentComponents.
To produce a Component from a TaskIndependentComponent, the function 'create_task_specific_root(task)' is called. It first creates a 
ComponentMap, i.e., an unordered_map from TaskIndependentComponent pointers to Component pointers. Then it calls 
'create_task_specific(task, component_map)' for each TaskIndependentComponent in its fields to obtain the parameters it needs to call 
the constructor of the component it is supposed to produce.

If 'create_task_specific' is called with a ComponentMap, it first checks whether the TaskIndependentComponent has already produced 
its Component, by checking whether its pointer is in the ComponentMap. If so, it extracts and returns the pointer to the Component 
from the ComponentMap.
If not, it actually creates the Component by calling 'create_task_specific(task, component_map)' for each TaskIndependentComponent 
in its fields (you can see the recursion there).
Then it inserts the pointer to the produced Component, keyed by the pointer to itself, into the ComponentMap. This way, produced 
Components can be reused.
Finally, it returns the pointer to the Component.
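As a rough sketch, the memoization scheme described above could look like this in C++. The function and class names follow this message (create_task_specific_root, create_task_specific, ComponentMap); the AbstractTask/Component stubs and the DummyHeuristic leaf are made up for illustration and are not the real Fast Downward classes:

```cpp
#include <memory>
#include <unordered_map>

struct AbstractTask {};  // stand-in for the planner's task class

class Component {
public:
    virtual ~Component() = default;
};

class TaskIndependentComponent;
using ComponentMap = std::unordered_map<
    const TaskIndependentComponent *, std::shared_ptr<Component>>;

class TaskIndependentComponent {
public:
    virtual ~TaskIndependentComponent() = default;

    // Entry point: sets up a fresh ComponentMap for this instantiation.
    std::shared_ptr<Component> create_task_specific_root(const AbstractTask &task) {
        ComponentMap component_map;
        return create_task_specific(task, component_map);
    }

    // Memoized creation: reuse the Component if this factory was seen before.
    std::shared_ptr<Component> create_task_specific(
            const AbstractTask &task, ComponentMap &component_map) {
        auto it = component_map.find(this);
        if (it != component_map.end())
            return it->second;  // already produced: return the cached Component
        std::shared_ptr<Component> component = create(task, component_map);
        component_map.emplace(this, component);
        return component;
    }

protected:
    // Subclasses recursively call create_task_specific on each of their
    // TaskIndependentComponent fields here to build constructor arguments.
    virtual std::shared_ptr<Component> create(
            const AbstractTask &task, ComponentMap &component_map) = 0;
};

// Illustrative leaf factory (invented for this sketch).
class DummyHeuristic : public Component {};

class TaskIndependentDummyHeuristic : public TaskIndependentComponent {
protected:
    std::shared_ptr<Component> create(const AbstractTask &, ComponentMap &) override {
        return std::make_shared<DummyHeuristic>();
    }
};
```

Because the cache key is the factory pointer, a factory reached via multiple paths in the DAG yields a single shared Component within one instantiation, while a second call to create_task_specific_root starts from an empty map and produces fresh objects.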

(TaskIndependent)IteratedSearch is a special case. It does not call 'create_task_specific(task, component_map)' for each of its 
engines when 'create_task_specific_root(task)' is called. Instead it produces an IteratedSearch that contains a list of 
TaskIndependentSearchEngines as a field and only instantiates each engine when its phase begins.
msg11439 (view) Author: simon Date: 2023-10-06.18:58:25
step 3) "avoid duplicating the same task dependent object" is handled now.

See the prototype at:
https://github.com/salome-eriksson/downward/pull/2

As a prototype it is restricted to:
- search engines:
  - EagerSearch ('eager', 'astar')
  - IteratedSearch ('iterated')  // but not the nesting of multiple iterated searches inside one iterated search.
- task transformations:
  - CostAdaptedTask ('adapt_costs')
- open lists:
  - TieBreakingOpenList ('tiebreaking')
- evaluators:
  - SumEvaluator ('sum')
  - GEvaluator ('g')
  - LandMarkCutHeuristic ('lmcut')
msg11367 (view) Author: simon Date: 2023-09-11.16:56:17
While working on the prototype we encountered an oddity with the classes OpenList 
and OpenListFactory. With the new pattern we create a new class 
`TaskIndependentOpenListFactory`... but the TaskIndependent prefix already implies 
a factory. The templates make it more complicated: we cannot easily turn the 
class 'OpenListFactory' into a task-independent object.

At the moment we work with the less beautiful TaskIndependentOpenListFactory. 
We should discuss how this could be improved. For this discussion we can also 
talk about the templating of OpenList (splitting it into an EdgeOpenList and 
a StateOpenList) in general.
msg11348 (view) Author: simon Date: 2023-09-06.10:40:00
The next step is 3) "avoid duplicating the same task dependent object"
msg11347 (view) Author: malte Date: 2023-09-05.16:06:42
Does this mean that the next step is a code review, or is there more to do on your side next?
msg11346 (view) Author: simon Date: 2023-09-05.15:42:30
I worked further on 2) "Create TaskIndependent... classes" to bring it into a 
clean state.

See the Pull Request https://github.com/salome-eriksson/downward/pull/2
msg11236 (view) Author: simon Date: 2023-07-28.14:00:19
Claudia and I worked on the prototype.

We finished '1) Replace the "options" constructor with actual parameters and 
overwrite the create_component function from the features to use those 
constructors instead.'. 
We did not replace the constructors but overloaded them.
See the Pull Request https://github.com/salome-eriksson/downward/pull/2

We also started on 2) but did not get to a clean state yet.
The current state is visible at https://github.com/SimonDold/downward/tree/ti-branch
We decided to name the 'build/specialize/... function' create_task_specific().
msg11133 (view) Author: salome Date: 2023-07-19.17:21:04
The new prototype branch can be found here: https://github.com/salome-eriksson/downward/tree/issue559-prototype

We will follow the same strategy as below and at first restrict the build to only A*+lmcut. We split up the implementation in three parts:
(The details are mostly for us to remember how we wanted to implement it ;).)

1) Replace the "options" constructor with actual parameters and overwrite the create_component function from the features to use those constructors instead.
(This will still happen on the concrete/task dependent objects)

2) Create TaskIndependent... classes and change Features to be of this type instead. These classes all have a build/specialize/... function that recursively generates the task dependent object. The change from task independent to task dependent should be implemented in the function parse_cmd_line_aux.
(At this point the build function will simply create an object without checking for duplicates.)

3) Implement a way to avoid duplicating the same task dependent object, for example by passing some form of hash map through the build function in which all already created objects are referenced.
msg11132 (view) Author: malte Date: 2023-07-19.13:45:16
We had a meeting about this during the Fast Downward sprint today. (There were about 8 people present.) I am attaching a photo of the blackboard.

We essentially confirmed the design from the previous message. The final result of the option-parsing code will be a data structure (mathematically, a DAG) that represents a planning algorithm where the planning task has not been fixed yet. This data structure is what we called a "builder" before. We now want to call it a "task-independent" algorithm/component. For example, we would say "TaskIndependentFFHeuristic" rather than "FFHeuristicBuilder".

The task-independent components can then be instantiated with a task in order to create algorithm/component objects that work on a specific task. This produces the objects of the kinds we currently have. At least for the purposes of the prototype, these "task-specific" classes will keep their current names (e.g. "FFHeuristic") to avoid unnecessary changes to very large portions of the code while things are still in prototype stage.

There are two important aspects to keep in mind when going from task-independent to task-specific objects:

1. If we instantiate the same task-independent object with the same task by following multiple paths in the DAG that lead to the same node, we want to return the same task-specific object, not multiple copies.

2. Note "with the same task" in the previous point. While instantiating, a component may decide to pass on a different task from the one that was passed to it. For example, we can define a task-independent heuristic called "unit_cost_heuristic" that takes another task-independent heuristic as a parameter. When it is instantiated with task Pi, it derives a unit-cost task Pi' from Pi and passes Pi' on to its child components. Therefore, the same DAG node may be reached on multiple paths but with different tasks, and reuse only happens for matching tasks. (We compare nodes and tasks by identity.)

The blackboard photo shows an example of this where the same task-independent lmcut object is reached on three different paths (left-hand side), where reuse of the task-dependent object is possible in two of them but not the third (right-hand side).


We also discussed where the boundary between Options objects vs. strongly typed parameters in constructors should be: the TaskIndependentXXX classes will already work with strongly typed parameters, i.e., the conversion from options objects to strongly typed parameters happens at the point where the option parsing code generates its final representation. I think this means that the code that goes from Options to strong types lives inside the feature classes. In particular, if we were to create a task-independent object for the h^3 heuristic (h^m with m=3) programmatically, we would construct it like this:

    auto h = TaskIndependentHMHeuristic(3);

and not like this:

    Options options;
    options.set<int>("m", 3);
    auto h = HMHeuristic(options);

Another way to put this is that the options objects don't exist outside the option parser world.

A major reason why we currently let the options objects live much longer is that it allows easily passing on options in bulk. Changing this can lead to a lot of boilerplate code when there are many options. One way to address this could be to use classes that encapsulate the parameter set of a component that takes many parameters in a single data structure; or perhaps template trickery could help reduce boiler-plate code here. It's something we decided not to worry about for the prototype. We will live with boilerplate code if necessary.

The next step is then to work on a new prototype. We plan to look at the existing prototype for this, but we expect that we will likely start anew from the current main branch due to the option parser reimplementation that happened since the previous prototype.

Note also that this has effects on where we should check for illegal option values, for example passing m=-1 to the h^m heuristic. If we only check for this in the option-parsing code, then these checks will be circumvented if the option-parsing code is circumvented, e.g. in the eventually planned Python interface, which will directly create TaskIndependentXXX objects.
msg10480 (view) Author: malte Date: 2022-02-01.13:12:42
Removing manuel and rik from the nosy list because their email addresses no longer exist.
msg10472 (view) Author: malte Date: 2022-01-31.13:40:40
New issue makes sense.
msg10470 (view) Author: florian Date: 2022-01-30.14:07:12
The current prototype (https://github.com/silvansievers/downward/pull/3/files) can parse expressions like
  --search "let(h, blind(), astar(var(h)))"
but the hack mentioned in msg10028 is still there: we abuse the predefinitions to recognize whether a string is a variable like "h" or something that should be parsed like "blind()".

Ideally, we would be able to parse expressions like
  --search "let(h, blind(), astar(h))"
(as above but without the "var" around "h") and recognize that "h" is a variable from the context.

Rewriting the option parser from scratch seems like a plausible option at this point. We could start with a variant that only supports the option strings we have so far and rewrite the old predefinitions to let expressions in the driver or before parsing in the search component. Should I create a new issue for this?
msg10028 (view) Author: florian Date: 2021-02-10.21:26:23
We discussed this further. It looks like the problem mentioned in msg9542 (reuse of a heuristic) is actually handled in the prototype by caching the created objects in the builders and creating at most one object per task.

However, we are still interested to move towards a design that uses "let" and "var" as described in msg9542. One advantage would be that we could limit the caching to builders for "var" expressions. All other builders could use a more straight-forward implementation of actually building and returning an object on each call of build(). We might also be able to get rid of the smart pointers and return the actual objects in those cases. The builders for "var" expressions would be different as they would have to ensure that the same object is returned when a variable is used in several places. Since a variable can be used in two places where the task is different (e.g., eager_greedy([h, cost_adapted(h)])), we would have to create a new object for each task.

So in the end, we want to end up with a configuration that interprets a command line like
  --evaluator h1=ff() --evaluator h2=cg() --search eager_greedy([h1,h2],preferred=[h1,h2])
as
  let(h1, ff(), let(h2, cg(), eager_greedy([h1, h2], preferred=[h1,h2])))

For now, we don't want to change the actual command line syntax but we plan to actually construct the second string from the first internally.
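The rewriting step could be sketched as follows; wrap_in_lets is a hypothetical helper invented for this illustration, not planner code:

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical helper: wrap a search expression in nested let(...) calls,
// one per predefinition, so the first predefinition ends up outermost.
std::string wrap_in_lets(
        const std::vector<std::pair<std::string, std::string>> &predefinitions,
        const std::string &search_expression) {
    std::string result = search_expression;
    // Wrap from the last predefinition outwards.
    for (auto it = predefinitions.rbegin(); it != predefinitions.rend(); ++it)
        result = "let(" + it->first + ", " + it->second + ", " + result + ")";
    return result;
}
```

For the example above, wrap_in_lets({{"h1", "ff()"}, {"h2", "cg()"}}, ...) produces the nested let string shown, which can then be handed to the normal parser.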

We then plan to parse the string into a set of nested builders, where "let" is parsed into a class like

  template<typename VarType, typename ReturnType>
  class LetBuilder {
      string var_name;
      Builder<VarType> element_builder;
      Builder<ReturnType> nested_builder;
  public:
      ReturnType build(VariableAssignment &context, Task task) {
          context.push(var_name, element_builder);
          ReturnType result = nested_builder.build(context, task);
          context.pop();
          return result;
      }
  };

Variables that occur in the string (h1, h2 in the example above) would be parsed into a class like this

  template<typename VarType>
  class VarBuilder {
      string var_name;
      PerTaskInfo<VarType> cache;
  public:
      VarType build(VariableAssignment &context, Task task) {
          if (!cache[task]) {
              const Builder<VarType> &builder = context.get(var_name);
              cache[task] = builder.build(context, task);
          }
          return cache[task];
      }
  };

All other tokens would be parsed into builders that create the respective objects directly as it is already done in the prototype (but without the cache).


The way the option parser works at the moment is not ideal for this setup and a larger rewrite would be useful there. For the prototype, we want to keep the changes small and construct the VarBuilder in the function predefine_plugin (in option_parser.h). This will then store the VarBuilder in the predefinitions object instead of the actual builder (e.g., for the predefinition "--evaluator "h=ff()", we'd store VarBuilder("h") instead of FFHeuristicBuilder()). The way the variables are resolved currently, this means that VarBuilders will be used in the correct places. We will then construct the string that includes "let" calls for all predefinitions and use the normal parser to parse it. In order for this to work, we also have to add parse methods for "let" which should be possible through our normal plugin mechanism.
msg9885 (view) Author: florian Date: 2021-01-28.15:52:44
Add TODO item from issue348 to summary.
msg9583 (view) Author: silvan Date: 2020-07-10.16:11:20
Another update to the pull request after rewriting history again: https://github.com/silvansievers/downward/pull/3
msg9569 (view) Author: silvan Date: 2020-07-09.18:36:28
Updated pull request on my fork of the official repository: https://github.com/silvansievers/downward/pull/2

(I intentionally did not make this a pull request to the official repo, but only to my fork, because this is not intended to be integrated.)
msg9542 (view) Author: malte Date: 2020-07-08.13:42:34
We had a meeting to discuss the prototype. If I remember correctly, the participants were Florian, Jendrik, Malte, Rik and Silvan. I am adding Rik to the nosy list, but of course feel free to remove yourself again.


Summary of discussion
=====================

The architecture of the prototype looks good in general, but one problem it doesn't currently solve is how to deal with variables on the command-line, as in the variable h in configurations like

    --evaluator "h=ff()"
    --search "lazy_greedy([h],preferred=[h])"

where we don't want to evaluate every state twice. This can happen explicitly in user-specified configurations like these and implicitly (not quite with the same mechanism) in plugins like "astar" that create the equivalent of variables under the hood. The discussion focused mostly on how to address this problem.

The suggestion we came up with was for the builders to keep track of the variable context, i.e., the currently existing variables and what they are assigned to. So every build function would be passed two parameters: a task to operate on, and a context, which is essentially a mapping from variable names to their bindings.

We discovered later in the discussion that these two parameters cannot always be resolved at the same time, but we did not fully work out the details. There was a suggestion that variable references need to be resolved earlier than tasks, and there was a suggestion that we could resolve them as early as the constructor of the builder, but we haven't sufficiently explored this yet.

We discussed the scope of variables and how to implement this cleanly in our existing framework, and this led to the suggestion that everything including the introduction and dereferencing of a variable could be represented internally as an expression. This would involve "let" expressions and "var" dereferencing expressions as in

    let("h", ff(), lazy_greedy([var("h")], preferred=[var("h")]))

for the configuration above. To emphasize, this would be an *internal* representation. There is no need to change the user-visible syntax at this point, although it may be useful at a later stage to allow users to explicitly control variable scope, which they currently cannot do. We expressed awareness of Greenspun's Tenth Law. :-)

[Something not mentioned in the meeting: I think with these expressions we can now also implement something like the astar plug-ins more cleanly and possibly even as a mechanical AST transformation, which I think we cannot currently do because sharing of evaluators "in code" is not currently quite the same thing as sharing by variable assignment. If we do this as a mechanical AST transformation, we have to be careful about variable names/shadowing, but I think this can be resolved. At worst, we need to have some concept of a "fresh variable". At best, name resolution rules and shadowing sort this out automatically in the correct way.]

"let" expressions add to the context and "var" expressions look up binding in the context. We originally thought that the context in an example like the above could be simply a mapping from "h" to an "FFHeuristic" object, but we then saw that this will not work in general because the same "h" might be applied to different tasks once task transformations happen, for example in

    let("h", ff(), eager_greedy([cost_adapt(one, var("x")), var("x")])

Our idea for now is to store mappings from tasks to evaluators for the variables. So in this example, our context would become something along the lines of

   "h" => (value=AST("ff()"),
           {CostAdaptedTask(unit, root_task) => <some FFHeuristic instance>,
            root_task => <some other FFHeuristic instance>})

References to the same variable binding would then return identical objects when applied to the same task, but not when applied to different tasks.

Another issue is that we would like "let" and "var" to be polymorphic. A "var" is in some sense parameterized by the variable type ("Evaluator" in the example). A "let" is in some sense parameterized by the variable type and expression result type ("Evaluator" and "SearchEngine" [sic]) in the example. This is more an implementation challenge than a conceptual challenge. One possible solution is to resolve the actual types fairly late and work with "Any" objects (or a yet-to-be-introduced common baseclass for objects like Evaluators, MergeStrategies etc.) for the most part.


Outcomes/decisions
==================

We want to create a new prototype that implements something like the above. Florian mentioned he might take a stab at it.

I suggest that we again try this out on a subset of the existing plug-ins first and disable all other plug-ins while prototyping. I suggest to use the following plug-ins in order, i.e., first make (1) work, then make (1)+(2) work etc.:

(1) some A* configuration using a nontrivial heuristic, such as astar(lmcut())

(2) some lazy_greedy configuration with a single heuristic and preferred operators, such as the lazy_greedy example above [This introduces explicit variable assignments as a complication.]

(3) lama-first [This introduces landmark graphs as a complication, which are currently variable-assignable and currently have a nasty implementation hack.]

(4) full LAMA [This introduces iterated search as a complication, which currently uses a very unusual way of instantiating the algorithms and will probably require further discussion in moving to this new world.]

I replaced eager_greedy with lazy_greedy here compared to what I suggested on the day.



Google Doc notes
================

During our meeting, we created some notes on Google Doc with pieces of pseudo-code etc. that were part of the discussion. These are just unstructured scratch notes that served as discussion aids. I'm including them here almost verbatim. [I am not including the notes on the "unique"-based formulation because this is not the direction we decided on moving into, and I made some other little edits.]

Bear in mind that most or all of these were created before we discovered that variables will not always refer to the same task.


let("x", ff(), sum_evaluator("x", "x"))
let("x", ff(), sum_evaluator(var("x"), var("x")))
x = ff()
sum_evaluator(x, x)

let("a", ff(), let("b", ff()),
  x(var("a"), var("b"), var("a"), var("b")))

a = ff()
b = ff()
x(a, b, a, b)


astar(eval):
    h = eval.build()
    g = g_evaluator()
    f = sum_evaluator(g, h)
    return eager_greedy(tie_breaking(f, h))

SumEvaluatorBuilder::build(context, task) {
    return SumEvaluator(
           first_builder.build(context, task),
           second_builder.build(context, task));
}

VarEvaluatorBuilder::build(context, task) {
    return context[variable_name].build_or_reuse(task);
}

template<typename VarType, typename ResultType>
ResultType let(string, VarType, ResultType)

let::build(context, task){
    context = context.copy();
    context[var_name] = CachedBuilder(context, assigned_value);
    return expression.build(context, task);
}

astar(lm_count(transform=project([0,1,2])))

astar(project_evaluator([0, 1, 2], lm_count()))

let(x, ff(), astar(cost_adapt(1, var(x))))

let(x, ff(), eager_greedy([cost_adapt(1, var(x)), var(x)]))

astar(lm_count())

CostAdaptorEvaluator::build(context, task):
msg9364 (view) Author: jendrik Date: 2020-06-30.19:15:35
Add GitHub pull request link to summary.
msg8908 (view) Author: malte Date: 2019-06-14.16:38:57
> The problem I described in msg8897 is related to issue564.

It is related, but I think it's important to realize that it's not the same
thing. Using a single heuristic object in "astar(lmcut())" rather than three
different ones is not something our current code does, whereas issue564 is about
a different form of missed opportunity for reuse.

It may or may not be a good idea to consider these two different kinds of reuse
together. I think the first may be easier to address than the second and/or
require different mechanisms. No action needed on this right now, I just wanted
to make sure the distinction doesn't get lost.
msg8903 (view) Author: jendrik Date: 2019-06-14.12:11:04
I have updated the part about the component coordination problem in the wiki and
 added a TODO list to the summary of this issue. (With this, I think we can
consider the story for the sprint done.)
msg8900 (view) Author: jendrik Date: 2019-06-14.11:46:21
The problem I described in msg8897 is related to issue564.
msg8897 (view) Author: jendrik Date: 2019-06-13.19:45:36
A first prototype version is now finished (see link in msg8876). Obviously,
there are some rough edges, but I think the prototype is in a state in which we
can discuss whether the pieces fit together.

I think the main open question is when and how we want to reuse plugins instead
of rebuilding them. For example, the prototype caches builds of the blind()
plugin. This ensures that we only have one heuristic object for "astar(blind())"
even though the heuristic builder is called for the f-evaluator, h-evaluator and
tiebreaking-evaluator. However, this currently doesn't work with task
transformations: for "astar(adapt_costs(cost_type=one), evaluator=blind())",
each call to the builder sees a different AbstractTask and we can't cache anything.
msg8896 (view) Author: malte Date: 2019-06-13.13:54:40
I would treat this as a work-in-progress name.
msg8884 (view) Author: silvan Date: 2019-06-12.15:49:43
Also in the landmark and open list classes, we use the name factory.
msg8881 (view) Author: silvan Date: 2019-06-12.14:42:57
In the wiki, we wrote that the factories could be called "builders". In
merge-and-shrink, we already have factories called "factories", so this is
something to keep in mind.
msg8876 (view) Author: silvan Date: 2019-06-12.10:31:50
We started with implementing an exemplary factory for search engine/eager
search:
https://bitbucket.org/SilvanS/fd-dev/pull-requests/52/issue559-prototype/diff

This was pretty straight-forward. The next step will be to do the same for a
minimal set of evaluators and open lists.
msg8834 (view) Author: malte Date: 2019-06-06.11:54:56
I noticed we have an old issue (issue121) which is about reusing heuristic
values in LAMA between different iterations of RWA* that use the same heuristic.
In non-unit-cost problems, the first iteration uses different heuristics from
later ones because it ignores action costs, but later iterations all work with
the same heuristic. Currently, the heuristic values are needlessly recalculated
in each search.

I think that when working on this meta-issue, the question of a "reuse context"
should be considered, even if we don't want to immediately implement such a
reuse. It may be hard to retrofit it after the fact.
msg8467 (view) Author: jendrik Date: 2019-01-16.20:21:48
We discussed this further during our retreat and concluded that we would like to
try out the proposal from msg7674. When somebody tackles this, it's probably
best to first try it out for a small subset of evaluators and disable most
plugins to see if the design works.
msg7699 (view) Author: malte Date: 2018-09-21.10:24:40
Another random thought: in the factory-based model, perhaps PerStateInformation
doesn't need to support multiple state registries any more? Not 100% sure, but
it would be nice to get rid of this extra level of indirection when accessing
per-state data.

If we want to investigate this option, one thing we could do is change the
current code to test at runtime whether the PerStateInformation is only ever
used with one registry, and abort if it is used with more than one.
msg7675 (view) Author: malte Date: 2018-09-20.20:46:20
> A possible disadvantage is that this kind of transformations is specific to a
> particular kind of argument: "adapt_costs" here can only adapt evaluators,
> whereas our current design would allow the same transformation to be applied
> in different contexts.

Actually, it is probably possible to combine both approaches if we want (which I
don't advocate at the moment because of YAGNI, but it's perhaps good to think
about what the scope of possibilities is).

The idea is that we can reintroduce the notion of transformations, but they are
still external to the evaluator that is being applied, like so:

    transformed_evaluator(transformation=adapt_costs_transformation(one),
                          evaluator=ff())

Here, "adapt_costs_transformation" is a transformation (i.e., something that
knows how to map tasks to task and do some back-and-forth conversions), and
"transformed_evaluator" is an evaluator that allows evaluating another evaluator
(here: "ff()") under a generic transformation (here:
"adapt_costs_transformation(one)").

I think this gives us the expressive power of both approaches, it is still clear
which functionality lives where, and the notion of "transformations" is still
well encapsulated in the sense that in a planner with no notion of
transformations, the whole transformation mechanism could be added to the
planner externally by defining a new plugin type "transformation" and a new
evaluator "transformed_evaluator" without touching pre-existing plugins like "ff()".

And with this model, transformations can be applied in other contexts, too: for
example, we can add the landmark factory "transformed_landmark_factory" and
immediately use it with all transformations that support the back-and-forth
conversions necessary for making landmark factories work.
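The combined design could be sketched as follows. This is illustrative
Python, not the planner's C++ code; the class names mirror the plugin syntax
above, and a dict with a "costs" field stands in for a real task:

```python
class AdaptCostsTransformation:
    """Knows how to map tasks to tasks and do back-and-forth conversions."""

    def __init__(self, cost_type):
        self.cost_type = cost_type

    def transform(self, task):
        # "one" means every operator gets cost 1 in the transformed task.
        if self.cost_type == "one":
            return {**task, "costs": [1] * len(task["costs"])}
        return task

    def convert_back(self, result):
        # In a real planner, preferred operators of the transformed task
        # would be mapped back to the original task's operators here.
        return result


class TransformedEvaluator:
    """Evaluates another evaluator under a generic transformation."""

    def __init__(self, transformation, evaluator):
        self.transformation = transformation
        self.evaluator = evaluator

    def compute(self, task, state):
        transformed_task = self.transformation.transform(task)
        result = self.evaluator.compute(transformed_task, state)
        return self.transformation.convert_back(result)


class SumOfCostsEvaluator:
    """Toy stand-in for ff(): sums the (possibly adapted) operator costs."""

    def compute(self, task, state):
        return sum(task["costs"])


task = {"costs": [3, 5, 2]}
evaluator = TransformedEvaluator(
    transformation=AdaptCostsTransformation("one"),
    evaluator=SumOfCostsEvaluator())
print(evaluator.compute(task, state=None))  # 3 operators at cost 1 -> 3
```

Note that the inner evaluator and the transformation stay completely
decoupled; only TransformedEvaluator knows about both.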
msg7674 (view) Author: malte Date: 2018-09-20.20:37:38
Some food for thought: in the factory-based world, transformations as objects
passed into evaluators are perhaps not actually necessary.

In our current model, we combine an evaluator with a transformation like this:

    ff(transform=adapt_costs(cost_type=one))

Here, "ff" is an evaluator class and has a "transform" parameter, where
"adapt_costs" in a transformation class.

In the factory-based world, we could instead make this more compositional, where
"ff" doesn't have to know about the adaptation and "adapt_costs" is itself an
evaluator class that takes an evaluator as an argument:

    adapt_costs(cost_type=one,evaluator=ff())

Here, the whole notion of a transformation doesn't have to be a planner-level
concept. It just happens to be the case that "adapt_costs" plays around with the
tasks it is applied to before passing them on to its inner evaluator "ff()", and
it applies some back-transformations to the results it gets from ff() (to
convert back preferred operators) before passing them on to *its* caller.

I think this is a more elegant design, but it is only possible in the
factory-based setting where tasks are not bound during initial construction (as
currently) but only bound later, because "ff()" is ultimately applied to a task
which is transformed. This is not apparent from the syntax "ff()", so this won't
work if we require the task to be passed into the constructor of "ff()".

Advantages of the compositional view are that inner heuristics like ff() don't
need to know anything about the concept of transformations, and it is very clear
where the back-and-forth translation of states and preferred operators must
happen (within "adapt_costs").

We also get composition of transformations for free without having to devise
something like a "chain" transformation as discussed previously.

A possible disadvantage is that this kind of transformations is specific to a
particular kind of argument: "adapt_costs" here can only adapt evaluators,
whereas our current design would allow the same transformation to be applied in
different contexts. However, we currently only have real use cases for adapting
evaluators, and transformations being specific to a particular kind of object
can also help make it easier to restrict that they are only applied in ways that
make sense/are safe.
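The compositional view, including the free composition of transformations,
could be sketched like this (again illustrative Python with made-up names; a
dict with a "costs" field stands in for a task):

```python
class CostAdaptingEvaluator:
    """Sketch of adapt_costs(cost_type=..., evaluator=...): itself an
    evaluator; the inner evaluator knows nothing about transformations."""

    def __init__(self, cost_type, evaluator):
        self.cost_type = cost_type
        self.evaluator = evaluator

    def compute(self, task, state):
        # Play around with the task before passing it on...
        if self.cost_type == "one":
            task = {**task, "costs": [1] * len(task["costs"])}
        elif self.cost_type == "plus_one":
            task = {**task, "costs": [c + 1 for c in task["costs"]]}
        # ...and hand the result back to *our* caller. Back-conversion of
        # preferred operators would happen right here in a real planner.
        return self.evaluator.compute(task, state)


class SumEvaluator:
    """Toy stand-in for ff(): knows nothing about cost adaptation."""

    def compute(self, task, state):
        return sum(task["costs"])


task = {"costs": [3, 5, 2]}
# Composition of transformations comes for free by nesting evaluators:
evaluator = CostAdaptingEvaluator(
    cost_type="one",
    evaluator=CostAdaptingEvaluator(
        cost_type="plus_one", evaluator=SumEvaluator()))
print(evaluator.compute(task, state=None))  # [1,1,1] -> [2,2,2] -> 6
```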
msg7673 (view) Author: malte Date: 2018-09-20.20:33:32
One observation: in the factory-based world, transformations as objects passed
into evaluators are perhaps not actually necessary.

Currently, we think of specifying an evaluator like this:

    ff(transform=adapt_costs(cost_type=one))

Here, "ff" is an evaluator class and has a "transform" parameter, where
"adapt_costs" in a transformation class.

In the factory-based world, we could instead make this more compositional, where
"ff" doesn't have to know about the adaptation and "adapt_costs" is itself an
evaluator class that takes an evaluator as an argument:

    adapt_costs(cost_type=one,evaluator=ff())

Here, the whole notion of a transformation doesn't have to be a planner-level
concept. It just happens to be the case that adapt_costs plays around with the
tasks it is applied to before passing them on to its inner evaluator "ff()", and
it applies some back-transformations to the results it gets from ff() (to
convert back preferred operators) before passing them on to *its* caller.

I think this is a more elegant design, but it is only possible in the
factory-based setting where tasks are not bound during initial construction (as
currently) but only bound later, because "ff()" is ultimately applied to a task
which is transformed, but this is not apparent from the syntax "ff()".

Advantages of this view are that inner heuristics like ff() don't need to know
anything about the concept of transformations, and it is very clear where the
back-and-forth translation must happen (within "adapt_costs").

We also get composition of transformations for free without having to devise
something like a "chain" transformation as discussed previously.

A possible disadvantage is that this kind of transformations is specific to a
particular kind of argument: "adapt_costs" here can only adapt evaluators,
whereas our current design would allow the same transformation to be applied in
different contexts. However, we currently only have real use cases for adapting
evaluators, and transformations being specific to a particular kind of object
can also help make it easier to restrict that they are only applied in ways that
make sense/are safe.
msg7672 (view) Author: malte Date: 2018-09-20.20:01:52
Some further brief notes of the discussion we had today (some of this replicated
information below for completeness):

1. "task transformations" as given to the option parser are currently
represented as tasks in our components. This is not enough: we need access to
both the source and the target of the transformation.

Therefore, we should actually store a task *transformation* there, which
describes how to go from one task to another, rather than just the transformed
task itself.

The only place where we need to pass in a task proper "from the outside" is when
we want to invoke a search algorithm, i.e., when calling SearchEngine::search
(or whatever it is called).

2. In the medium future, we don't want to introduce task transformations for
search algorithms. If we want to eventually support it, we first need to clarify
what exactly is meant by that. Does the user want a plan for the original or
transformed task? On which task are the evaluators evaluated? The search
algorithm has a potpourri of responsibilities, and if we introduce task
transformations for search algorithms eventually, it is even possible that we
support different transformations for different aspects of the algorithm.

3. The one place in search algorithms where we currently use something like task
transformations is in the adjusted action costs used for the g evaluators. (I
*think* this is the only place.) The proposal is to support this functionality
by moving more of the handling of g values out of the search algorithm and into
the g evaluator, which (as an evaluator) supports transformations.

4. Getting task transformations, not tasks, from the option parser means that we
no longer have access to the task upon construction. This should actually be a
blessing because the notion of "the task" is ill-defined at this place of the
code as explained by Silvan in the first message of this issue. So applying this
change will push us towards a more factory-based view of things, which is what
we need anyway, for example for component reuse.

On the negative side, it has the potential of introducing lots of boilerplate
code: the hierarchy of "task-independent" factories that will make up something
like the f-evaluator of A* will likely need to be mirrored by a hierarchy of
"task-specific evaluators" that are instantiated during search. It would be nice
if we could somehow minimize the boilerplate for the simple case.

It is also an interesting question *when* to construct these objects. It may be
a minor concern, but it would be nice if the time at which expensive
computations happen were predictable so that the log output order doesn't become
too confusing and we can still reliably time certain parts of the code without
worrying that we also capture heavy initializations of other objects.

5. In the "factories mirrored by concrete objects" world, our current "reuse" of
variables like "h" in places like "eager_greedy([h],preferred=[h])" can perhaps
be addressed by striving to reuse the same "concrete" object whenever we have
the same factory. But we have to be careful only to do this if the context
actually allows it. If both concrete "h"s operate on different tasks, clearly
they cannot reuse.

6. The factory-based view perhaps also gives us an elegant way of addressing the
"reparsing" hacks that we currently need for iterated_search.{h,cc}. Perhaps it
also means we don't need a (non-help-mode) dry run for the option parser any
more because processing the command line and constructing the factories will
actually be cheap. With the current option parser design, we will still need the
help-mode dry run, though.

Random thoughts about class names for factory vs. concrete:

EvaluatorFactory vs. Evaluator?
Evaluator vs. TaskEvaluator?

other ideas?
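Points 4 and 5 could be sketched as follows (illustrative Python; the names
EvaluatorFactory and TaskEvaluator are just the naming candidates listed
above, not settled API, and the caching policy is only one possibility):

```python
class EvaluatorFactory:
    """Task-independent: constructed from command-line options only. The
    task-specific object is built on demand, and using the same factory with
    the same task reuses the same concrete object (point 5)."""

    def __init__(self, name):
        self.name = name
        self._cache = {}  # id(task) -> concrete evaluator

    def create(self, task):
        key = id(task)
        if key not in self._cache:
            # Expensive initializations happen here, at a predictable time
            # (when the search sets itself up), not at parse time.
            self._cache[key] = TaskEvaluator(self.name, task)
        return self._cache[key]


class TaskEvaluator:
    """Task-specific: bound to one task for its whole lifetime."""

    def __init__(self, name, task):
        self.name = name
        self.task = task


task_a, task_b = object(), object()
h = EvaluatorFactory("ff")
# Same factory + same task -> same concrete object, as in
# eager_greedy([h],preferred=[h]) ...
assert h.create(task_a) is h.create(task_a)
# ... but different tasks must get different concrete evaluators.
assert h.create(task_a) is not h.create(task_b)
print("ok")
```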
msg7671 (view) Author: malte Date: 2018-09-20.19:44:28
Extracting from Florian's msg7659 at issue833:

================================================================================

The current assumption that the task of a heuristic or search engine is known
when it is constructed is not compatible with general task transformations and
predefined objects. For example, consider a heuristic h with a task
transformation f_h that is nested in two heuristics. If the outer heuristics
use transformations f_1 and f_2, then the inner heuristic would have to compute
values for both f_h(f_1(root)) and f_h(f_2(root)). This is not possible with
our current model.

We discussed this today and plan to move towards code where task transformations
are represented as functions on tasks (mapping tasks to other tasks) and where
we use factories to pass the relevant task to objects like heuristics when they
are used. Once this is done, we can pass g_root_task to SearchEngine::search
which will use the factories to generate the required objects for the given task.

================================================================================
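This example can be made concrete with a small sketch: transformations are
plain functions on tasks, and the inner heuristic stores its transformation
f_h rather than a fixed task, so it can be bound to differently transformed
root tasks when used. Illustrative Python with toy cost-sum "heuristic
values"; the name HeuristicFactory is made up:

```python
def f_h(task):  # the inner heuristic's own transformation
    return {**task, "costs": [c + 10 for c in task["costs"]]}

def f_1(task):  # transformation used by the first outer heuristic
    return {**task, "costs": [c * 2 for c in task["costs"]]}

def f_2(task):  # transformation used by the second outer heuristic
    return {**task, "costs": [c + 1 for c in task["costs"]]}


class HeuristicFactory:
    """Stores a transformation (a function on tasks) instead of a task."""

    def __init__(self, transformation):
        self.transformation = transformation

    def create(self, task):
        # Bind to a task only when actually used, so one definition can
        # serve both f_h(f_1(root)) and f_h(f_2(root)).
        return sum(self.transformation(task)["costs"])  # toy "heuristic"


root_task = {"costs": [3, 5, 2]}
inner = HeuristicFactory(f_h)
print(inner.create(f_1(root_task)))  # on f_h(f_1(root)): [16,20,14] -> 50
print(inner.create(f_2(root_task)))  # on f_h(f_2(root)): [14,16,13] -> 43
```

At the top level, SearchEngine::search would receive g_root_task and trigger
exactly these create calls.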
msg5122 (view) Author: malte Date: 2016-01-20.15:59:33
It's not about initialization: it's about what happens if two searches use the
same heuristic concurrently and hence share path-dependent information.
msg5121 (view) Author: florian Date: 2016-01-20.15:56:41
This also affects how path dependent heuristics should be initialized (see msg5120).
msg4419 (view) Author: silvan Date: 2015-07-21.15:02:48
A "component" in the sense of this issue is an object that can be created by the
user on the command line, such as an FF heuristic, a merge strategy, a landmark
factory or a search algorithm. Currently, we have two kinds of components: ones
that can be assigned to a variable (e.g. landmark factories, heuristics) and
ones that cannot (e.g. merge strategies). One question is whether we want to get
rid of this distinction. Currently it serves a purpose because some of our
components cannot work when used in two contexts at the same time, and not being
able to assign them to variables prevents them from being used in two contexts.

One of the general problems in planner architecture that we still have to solve
is: how do we ensure that different components of the planner fit together
properly. For example, a search algorithm and a heuristic somehow need to be
able to exchange information about the task (the search algorithm must tell the
heuristic which state to consider, and the heuristic must tell the search
algorithm the set of preferred operators), but they might both be working on
different task objects. We have partially solved this by thinking of "task
transformations" instead of tasks, but this is still rather half-baked.

There are also issues related to using the same component in two different
contexts without having interference due to its attributes. For example, if we
want to use the same landmark graph in two heuristics, I think this would
currently cause problems because there is information about ongoing computations
stored within the landmark graph object. (This could be resolved, for example,
by making sure that something like this never happens -- e.g. by making all
references to other components const.)

For now, we don't really have a good idea how to address this. The main purpose
of this issue is to have a reference point that we can use to annotate parts of
the code that relate to this issue. When working on this issue, we should make
sure to search the code comments for references to this issue number.
History
Date User Action Args
2024-01-02 18:23:26simonsetmessages: + msg11500
2023-11-02 18:39:15simonsetmessages: + msg11481
2023-11-02 16:12:58maltesetmessages: + msg11480
2023-11-02 15:51:35simonsetmessages: + msg11478
summary: TODO: - rework OptionParser to parse the let syntax (issue1040) - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. TODOs for final implementation: 1. Check that goalcount heuristic benefits from the changes in this issue. Converting GlobalStates to States made it slower (see issue554) and the change in issue348 should imply that the performance goes up again if we avoid copying the state due to a task transformation. -> TODO: - rework OptionParser to parse the let syntax (issue1040) - Use Options objects only in Features (issue1082) - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. TODOs for final implementation: 1. Check that goalcount heuristic benefits from the changes in this issue. Converting GlobalStates to States made it slower (see issue554) and the change in issue348 should imply that the performance goes up again if we avoid copying the state due to a task transformation.
2023-10-09 15:44:01simonsetmessages: + msg11445
2023-10-06 18:58:25simonsetmessages: + msg11439
2023-09-11 16:56:17simonsetmessages: + msg11367
2023-09-06 10:40:00simonsetmessages: + msg11348
2023-09-05 16:06:42maltesetmessages: + msg11347
2023-09-05 15:42:30simonsetmessages: + msg11346
2023-07-28 14:00:19simonsetmessages: + msg11236
2023-07-19 17:21:04salomesetmessages: + msg11133
2023-07-19 16:16:12simonsetnosy: + simon
summary: TODO: - rework OptionParser to parse the let syntax (issue1040) - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. TODOs for final implementation: 1. Check that goalcount heuristic benefits from the changes in this issue. Converting GlobalStates to States made it slower (see issue554) and the change in issue348 should imply that the performance goes up again if we avoid copying the state due to a task transformation. -> TODO: - rework OptionParser to parse the let syntax (issue1040) - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. TODOs for final implementation: 1. Check that goalcount heuristic benefits from the changes in this issue. Converting GlobalStates to States made it slower (see issue554) and the change in issue348 should imply that the performance goes up again if we avoid copying the state due to a task transformation.
2023-07-19 13:48:25salomesetnosy: + salome
2023-07-19 13:45:16maltesetfiles: + blackboard-2023-07-19.png
nosy: - salome
messages: + msg11132
2023-07-19 13:04:14salomesetnosy: + salome
2022-02-01 13:12:42maltesetnosy: - manuel, rik
messages: + msg10480
2022-02-01 10:05:11patfersetsummary: TODO: - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. TODOs for final implementation: 1. Check that goalcount heuristic benefits from the changes in this issue. Converting GlobalStates to States made it slower (see issue554) and the change in issue348 should imply that the performance goes up again if we avoid copying the state due to a task transformation. -> TODO: - rework OptionParser to parse the let syntax (issue1040) - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. TODOs for final implementation: 1. Check that goalcount heuristic benefits from the changes in this issue. Converting GlobalStates to States made it slower (see issue554) and the change in issue348 should imply that the performance goes up again if we avoid copying the state due to a task transformation.
2022-01-31 13:40:40maltesetmessages: + msg10472
2022-01-30 14:07:12floriansetmessages: + msg10470
2021-02-10 21:26:23floriansetmessages: + msg10028
2021-02-09 11:07:25clemenssetnosy: + clemens
2021-01-28 15:52:44floriansetmessages: + msg9885
summary: TODO: - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. -> TODO: - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. TODOs for final implementation: 1. Check that goalcount heuristic benefits from the changes in this issue. Converting GlobalStates to States made it slower (see issue554) and the change in issue348 should imply that the performance goes up again if we avoid copying the state due to a task transformation.
2020-07-10 16:11:20silvansetmessages: + msg9583
2020-07-09 18:36:28silvansetmessages: + msg9569
summary: TODO: - Discuss prototype at https://bitbucket.org/SilvanS/fd-dev/pull-requests/52 New link: https://github.com/aibasel/test-git/pull/1 - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. -> TODO: - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121.
2020-07-08 13:42:34maltesetnosy: + rik
messages: + msg9542
2020-07-01 10:31:20silvansetsummary: TODO: - Discuss prototype at https://bitbucket.org/SilvanS/fd-dev/pull-requests/52 New link: https://github.com/silvansievers/test-git/pull/1 - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. -> TODO: - Discuss prototype at https://bitbucket.org/SilvanS/fd-dev/pull-requests/52 New link: https://github.com/aibasel/test-git/pull/1 - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121.
2020-06-30 19:15:35jendriksetmessages: + msg9364
summary: TODO: - Discuss prototype at https://bitbucket.org/SilvanS/fd-dev/pull-requests/52 - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. -> TODO: - Discuss prototype at https://bitbucket.org/SilvanS/fd-dev/pull-requests/52 New link: https://github.com/silvansievers/test-git/pull/1 - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121.
2019-06-14 16:38:57maltesetmessages: + msg8908
2019-06-14 12:11:04jendriksetmessages: + msg8903
summary: Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121. -> TODO: - Discuss prototype at https://bitbucket.org/SilvanS/fd-dev/pull-requests/52 - Reuse per-task information (which includes Evaluators now) (issue564) Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121.
2019-06-14 11:46:21jendriksetmessages: + msg8900
2019-06-13 19:45:36jendriksetstatus: chatting -> in-progress
messages: + msg8897
2019-06-13 13:54:40maltesetmessages: + msg8896
2019-06-12 15:49:43silvansetmessages: + msg8884
2019-06-12 14:42:57silvansetmessages: + msg8881
2019-06-12 11:58:50manuelsetnosy: + manuel
2019-06-12 10:31:50silvansetmessages: + msg8876
2019-06-06 11:54:56maltesetmessages: + msg8834
summary: Some ancillary things to keep in mind: - Can we enable reuse of heuristics across different searches? See issue121.
2019-02-27 19:18:57mkatzsetnosy: + mkatz
2019-01-16 20:21:48jendriksetmessages: + msg8467
2018-10-04 10:13:37patfersetnosy: + patfer
2018-09-21 10:24:40maltesetmessages: + msg7699
2018-09-20 20:46:20maltesetmessages: + msg7675
2018-09-20 20:37:38maltesetmessages: + msg7674
2018-09-20 20:33:32maltesetmessages: + msg7673
2018-09-20 20:01:52maltesetmessages: + msg7672
2018-09-20 19:44:28maltesetmessages: + msg7671
2016-01-20 15:59:33maltesetmessages: + msg5122
2016-01-20 15:56:41floriansetmessages: + msg5121
2015-07-22 01:36:49floriansetnosy: + florian
2015-07-21 20:48:16jendriksetnosy: + jendrik
2015-07-21 15:02:48silvancreate