Created on 20101217.23:45:04 by malte, last changed by salome.
msg11693
Author: salome 
Date: 20240917.14:04:00 

I tested all problems with astar(blind()): For bwnc_5_1.pddl and 6cups.pddl we find a solution almost immediately, while for bwnc_10_1.pddl the translator is done in 0.09s but the search takes longer than a minute (at which point I interrupted), which I presume is simply because the state space is big. graphprob.pddl still gets stuck in the translator, more precisely at "Completing instantiation..."
I also tried astar(hmax()) (which by default only approximates cycles, so it can lead to exponential runtime), and it can also solve bwnc_5_1 and 6cups in under 1 second. But for bwnc_10_1 it eventually exited with code 9 (after freezing my computer for a bit, presumably due to swapping). Using the trivial overapproximation for all negations (i.e., astar(hmax(axioms=approximate_negative))) also leads to an immediate search start.
To summarize, for blind search the performance issue is solved on all problems except graphprob. For heuristics with the axioms option, bwnc_10_1 still leads to exponential explosion, but this can be circumvented by using axioms=approximate_negative. The performance issue with graphprob does not seem to be related to negated axioms, but rather to the axiom being complicated and involving a lot of variables.

msg11656
Author: malte 
Date: 20240719.14:12:36 

Salomé: added you to the nosy list, but this doesn't mean you have to work on this.
Is this resolved now that issue454 is merged? Can we run the test cases with the current code? (I didn't reread everything below, but I think at least the blind heuristic shouldn't have these performance issues any more.)

msg11612
Author: haz 
Date: 20240708.20:40:22 

Adding myself to the nosy list.

msg11568
Author: malte 
Date: 20240201.11:51:28 

It would be nice to try this splitting at some point, and then it would be useful to have some benchmarks to test the new code that exercise this (= axioms that have many existentially quantified variables in the rules but comparatively low arity of the derived predicate itself). If anyone has more examples like this, I'd be interested. But of course I am aware that the original examples are 13+ years old.

msg11567
Author: gabi 
Date: 20240201.11:46:15 

Regarding msg8279: Malte and I just had a look into this idea. On second look, without the explicit full parameters in the rule head, we would run into issues during instantiation because we don't immediately know for which external parameters we need to instantiate the axiom. We might be able to reconstruct this somehow from the model, but it will definitely be more complicated than the change looks at first glance.

msg8279
Author: malte 
Date: 20181213.10:37:58 

I'm seeing some improvements in graph{domain,prob}.pddl with a change to
get_axiom_predicate, although it needs to be changed a bit more than in my
previous message. With this:
def get_axiom_predicate(axiom):
    name = axiom
    variables = [par.name for par in
                 axiom.parameters[:axiom.num_external_parameters]]
    return pddl.Atom(name, variables)
the relaxed exploration phase of grounding terminates reasonably quickly, with
731493 relevant atoms, which is not impossibly high. However, we still get stuck
in the next stage (Completing instantiation). Part of the problem may be the
action with 7 parameters plus a further universally quantified variable in the
effect, but I haven't looked into it further. Since the whole thing didn't
finish, I also didn't have a chance to check if this change actually works. But
it does seem to indicate that there might be some easily obtainable gains here.

msg8278
Author: malte 
Date: 20181213.10:15:10 

BTW, normalize.py also has a function "get_action_predicate" with the same (if
I'm not overlooking some important context) needless addition of existentially
quantified variables to the parameter set.

msg8277
Author: malte 
Date: 20181213.10:13:01 

Hi Patrik, yes, the decomposition can be improved. It was just meant to be an
example of the general principle.
I don't think you can easily get rid of every variable that only occurs in the
body, but you can easily get rid of every variable that only occurs in the body
and only *once*. In general, as you say, a tree decomposition is needed to see
what can be projected away at which stage. If I have an axiom with no variables
in the head but a complete co-occurrence graph, I don't think there is much that
can be simplified, at least not with this kind of approach. But perhaps this is
already what you meant.
The translator already does something like that in its grounding phase (using a
greedy algorithm to create the decomposition), but it doesn't take the idea as
far as it could. In particular, it only uses this to minimize the size of
intermediate results, but (if I recall correctly) still computes the set of all
assignments to all variables occurring in the body where all facts of the body
are reachable (according to relaxed reachability). This is wasteful, and I
wonder if improving this could be as simple as changing
def get_axiom_predicate(axiom):
    name = axiom
    variables = [par.name for par in axiom.parameters]
    if isinstance(axiom.condition, pddl.ExistentialCondition):
        variables += [par.name for par in axiom.condition.parameters]
    return pddl.Atom(name, variables)
in normalize.py to:
def get_axiom_predicate(axiom):
    name = axiom
    variables = [par.name for par in axiom.parameters]
    return pddl.Atom(name, variables)
But even if that's the case, there is more that could be done to find better
decompositions, and there is more that could be done for actions (where it's
more complicated because we want to produce a fully grounded action in the final
output plan, but we could still do a lot better in intermediate computations,
for example of delete relaxation heuristics).
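The rule of thumb above ("every variable that only occurs in the body and only *once*") is easy to mechanize. As a hedged sketch (the function name and the tuple-based atom representation are invented here for illustration, not the translator's data structures), applied to the variables of the temp2 axiom that appears later in this thread:

```python
from collections import Counter

def project_away_candidates(head_vars, body_atoms):
    """Return variables that occur only in the body, and there only once."""
    counts = Counter(v for atom in body_atoms for v in atom)
    return {v for v, n in counts.items() if v not in head_vars and n == 1}

# Variable tuples of the temp2 axiom discussed in msg8266 below:
head = {"?from", "?to", "?dest1"}
body = [
    ("?from", "?to"),                       # NegatedAtom triedrelax
    ("?from", "?cursuccessor", "?dest1"),   # Atom successor
    ("?from", "?dest1", "?mycurrentcost"),  # Atom cost
    ("?from", "?to", "?dest1"),             # NegatedAtom relaxed
    ("?from", "?to"),                       # Atom cost(..., h1)
]
print(sorted(project_away_candidates(head, body)))
# ['?cursuccessor', '?mycurrentcost']
```

Both reported variables occur in exactly one body atom, so each could be hidden behind an auxiliary axiom of lower arity.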

msg8275
Author: patrik 
Date: 20181213.00:57:37 

You should be able to factor out any existential parameter in an axiom (i.e.,
any parameter that appears only in the body). For example, in temp2, you can
replace successor(?from, ?cursuccessor, ?dest1) with a new axiom tmp3(?from,
?dest1), whose body consists of this atom; doing the same for cost(?from,
?dest1, ?mycurrentcost), you end up with three axioms with three parameters,
instead of one with five.
I implemented something similar for action preconditions (it only works for
delete-free problems), specifically to handle grounding of the caldera domain
from the last IPC. The idea is to find the existential parameters, build a
co-occurrence graph (an edge between any pair of parameters that co-occur in a
predicate, including in the head of the axiom), then tree-decompose this graph
and root the tree in the node that corresponds to the derived predicate. The
edges of the decomposition (which are intersections between parameter subsets)
become the "interfaces", i.e., in this case the new temp axioms that you add.
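The co-occurrence graph described above can be built in a few lines. This is a hypothetical illustration (the tuple-based atom encoding and the function name are made up here; the tree-decomposition and rooting steps are omitted):

```python
from itertools import combinations

def cooccurrence_graph(atoms):
    """atoms: iterable of variable-name tuples -> adjacency-set dict."""
    graph = {}
    for atom in atoms:
        # Connect every pair of distinct variables sharing an atom.
        for u, v in combinations(sorted(set(atom)), 2):
            graph.setdefault(u, set()).add(v)
            graph.setdefault(v, set()).add(u)
    return graph

# Atoms of the temp2 axiom (head first), taken from msg8266 below:
atoms = [
    ("?from", "?to", "?dest1"),             # head: temp2
    ("?from", "?to"),                       # triedrelax
    ("?from", "?cursuccessor", "?dest1"),   # successor
    ("?from", "?dest1", "?mycurrentcost"),  # cost
    ("?from", "?to", "?dest1"),             # relaxed
    ("?from", "?to"),                       # cost(..., h1)
]
g = cooccurrence_graph(atoms)
print(sorted(g["?cursuccessor"]))  # ['?dest1', '?from']
```

Here ?cursuccessor only co-occurs with ?from and ?dest1, which is why it can be hidden behind a smaller auxiliary axiom such as tmp3(?from, ?dest1).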

msg8266
Author: malte 
Date: 20181212.15:51:58 

Regarding msg8262, this looks like it needs some genuine new ideas. One of our
Master's students is considering work on something like this for their Master's
thesis beginning some time next spring. This may be an interesting example
domain to keep as a use case for them.
The first idea I had in mind for this would be some automated schema splitting
to reduce the arity of schematic axioms and operators. In the example, the
problematic axiom is likely the one with 8 parameters:
Axiom relaxdone(?from: object, ?to: object)
  Conjunction
    NegatedAtom triedrelax(?from, ?to)
    Atom relaxing(?dest1, ?currentrelaxingcost1)
    Atom successor(?from, ?cursuccessor, ?dest1)
    Atom cost(?from, ?dest1, ?mycurrentcost)
    Atom cost(?to, ?dest1, ?ncost)
    NegatedAtom relaxed(?from, ?to, ?dest1)
    Atom plus(?ncost, h1, ?mynewcost)
    Atom cost(?from, ?to, h1)
    NegatedAtom =(?currentrelaxingcost1, ?mynewcost)
The same information could be expressed using axioms with at most 5 parameters,
for example like this:
Axiom temp1(?to, ?dest1)
  Conjunction
    Atom relaxing(?dest1, ?currentrelaxingcost1)
    Atom cost(?to, ?dest1, ?ncost)
    Atom plus(?ncost, h1, ?mynewcost)
    NegatedAtom =(?currentrelaxingcost1, ?mynewcost)
Axiom temp2(?from, ?to, ?dest1)
  Conjunction
    NegatedAtom triedrelax(?from, ?to)
    Atom successor(?from, ?cursuccessor, ?dest1)
    Atom cost(?from, ?dest1, ?mycurrentcost)
    NegatedAtom relaxed(?from, ?to, ?dest1)
    Atom cost(?from, ?to, h1)
Axiom relaxdone(?from, ?to)
  Conjunction
    Atom temp1(?to, ?dest1)
    Atom temp2(?from, ?to, ?dest1)
I may have messed up the details, but perhaps the idea is clear.
And of course 5 parameters might still be too many.

msg8263
Author: malte 
Date: 20181212.15:39:13 

To make the threading clear, my previous comment (msg8260) was a reply to Gabi's
msg8257.
Regarding Gabi's msg8258, this looks like an example of the exponential
CNF-to-DNF transform and would be resolved by what I mention in points #2 (don't
create negative axioms) or #3 (use a linear algorithm for transforming CNF to DNF).

msg8262
Author: gabi 
Date: 20181212.15:38:22 

graphdomain contains a somewhat complicated axiom that probably causes the problem:
(:derived (relaxdone ?from ?to)
  (or
    (exists (?dest - node ?currentrelaxingcost - hopcount)
      (and
        (not (triedrelax ?from ?to))
        (relaxed ?from ?to ?dest)
        (relaxing ?dest ?currentrelaxingcost)
      )
    )
    (exists (?dest ?cursuccessor - node ?mycurrentcost
             ?currentrelaxingcost ?mynewcost - hopcount ?ncost - hopcount)
      (and
        (not (triedrelax ?from ?to))
        (relaxing ?dest ?currentrelaxingcost)
        (successor ?from ?cursuccessor ?dest)
        (cost ?from ?dest ?mycurrentcost)
        (cost ?to ?dest ?ncost)
        (not (relaxed ?from ?to ?dest))
        (plus ?ncost h1 ?mynewcost)
        (cost ?from ?to h1)
        (not (= ?currentrelaxingcost ?mynewcost))
      )
    )
    (triedrelax ?from ?to)
  )
)
It is somewhat easier to read after normalization:
Axiom relaxdone(?from: object, ?to: object)
  Conjunction
    NegatedAtom triedrelax(?from, ?to)
    Atom relaxed(?from, ?to, ?dest)
    Atom relaxing(?dest, ?currentrelaxingcost)
Axiom relaxdone(?from: object, ?to: object)
  Conjunction
    NegatedAtom triedrelax(?from, ?to)
    Atom relaxing(?dest1, ?currentrelaxingcost1)
    Atom successor(?from, ?cursuccessor, ?dest1)
    Atom cost(?from, ?dest1, ?mycurrentcost)
    Atom cost(?to, ?dest1, ?ncost)
    NegatedAtom relaxed(?from, ?to, ?dest1)
    Atom plus(?ncost, h1, ?mynewcost)
    Atom cost(?from, ?to, h1)
    NegatedAtom =(?currentrelaxingcost1, ?mynewcost)
Axiom relaxdone(?from: object, ?to: object)
  Atom triedrelax(?from, ?to)
The intermediate axiom uses 8 different variables that need to be instantiated.
I don't even have the patience to wait for the "Computing model" step to terminate.

msg8260
Author: malte 
Date: 20181212.15:36:36 

There are many opportunities for improvement here.
Firstly, for context on why we multiply out negative preconditions in an
exponential way at all, see issue111, which is in turn based on issue55.
Handling this differently would be one way to address the problem we see here,
as discussed in issue111.
Secondly, generating "negated axioms" is not necessary at all in principle, and
there are currently at least two open bugs related to it. The negated axioms are
potentially very useful for the heuristics that use them (e.g. h^max, h^ff,
h^add), but that doesn't mean we need to create them unconditionally.
Thirdly, the way the negated axioms are generated by converting a CNF to DNF is
an exponential transformation, but I think a linear transformation would be
possible with auxiliary derived variables. I don't think this is important in
this particular example, but I'm sure it can make the difference between
exponential and polynomial runtime in other cases.
Fourthly, it would be nice to avoid the redundant equivalent axioms that you
mention in your example. This won't be an exponential vs. polynomial difference,
but of course it is still a missed optimization opportunity.
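The linear transformation mentioned in the third point can be sketched as follows. This is a hedged illustration, not axiom_rules.py: for a derived predicate d with rules r_1, ..., r_m, introduce one auxiliary predicate per rule that is derivable exactly when that rule's body fails; not-d then needs a single rule conjoining the auxiliaries, so the output is linear in the total number of body literals instead of the product of the rule sizes.

```python
def negate_axiom_linear(name, rules):
    """rules: list of rule bodies; each body is a list of (atom, positive)
    literals. Returns (head, body) pairs encoding the negated axiom."""
    new_axioms = []
    aux_heads = []
    for i, body in enumerate(rules):
        aux = f"not-{name}-rule{i}"  # true iff the body of rule i is violated
        aux_heads.append(aux)
        for atom, positive in body:
            # A disjunction of negated literals becomes one rule per literal.
            new_axioms.append((aux, [(atom, not positive)]))
    # not-d holds iff every rule body fails.
    new_axioms.append((f"not-{name}", [(a, True) for a in aux_heads]))
    return new_axioms

rules = [[("p", True), ("q", False)],   # d <- p and not q
         [("r", True), ("s", True)]]    # d <- r and s
negated = negate_axiom_linear("d", rules)
print(len(negated))  # 4 literal rules + 1 conjunction rule = 5
```

The trade-off is extra derived variables, which a heuristic evaluating not-d would have to handle, but the rule set stays polynomial even when multiplying out would be exponential.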

msg8258
Author: gabi 
Date: 20181212.15:29:25 

In the clean_up_cups domain, one source of the problem is the goal:
(and
  (forall (?c - cup)
    (and
      (imply
        (or (holding ?c) (exists (?l - location) (at ?c ?l)))
        (imply (not (clean ?c)) (at ?c dishwasher))))))
which gets normalized to "NegatedAtom newaxiom@2()" with axioms
Axiom newaxiom@2()
  Conjunction
    NegatedAtom clean(?c)
    NegatedAtom at(?c, dishwasher)
    Atom holding(?c)
Axiom newaxiom@2()
  Conjunction
    NegatedAtom clean(?c)
    NegatedAtom at(?c, dishwasher)
    Atom at(?c, ?l)
So here the body contains a mixture of positive and negative literals.
Here, compute_negative_axioms takes forever. To debug this, we need to take a
closer look at this method, but I fear it is a conceptual problem.

msg8257
Author: gabi 
Date: 20181212.15:18:23 

I am going over the different attached tasks that cause performance issues and
trying to take a few notes on what the problem is for each one. Starting with
crashtestdummy.pddl, which is a blocksworld domain without a "clear"
predicate. Instead, it uses preconditions of the form "(not (exists (?z) (on ?z
?x)))".
There are five such (basically equivalent) preconditions, for which the
normalization introduces five axiom rules:
Axiom newaxiom@0(?x: object)
  Atom on(?z, ?x)
Axiom newaxiom@1(?x: object)
  Atom on(?w, ?x)
Axiom newaxiom@2(?z: object)
  Atom on(?w1, ?z)
Axiom newaxiom@3(?y: object)
  Atom on(?w1, ?y)
With some more intelligence, we could use a single one of them instead.
Each of them gets instantiated to 25 ground rules like
(newaxiom@3 b5)
PRE: Atom on(b2, b5)
EFF: Atom newaxiom@3(b5)
Then the really bad things happen. In axiom_rules.py, we introduce a negated
version for each of them, e.g.
not (newaxiom@3 b4)
PRE: NegatedAtom on(b1, b4)
PRE: NegatedAtom on(b2, b4)
PRE: NegatedAtom on(b3, b4)
PRE: NegatedAtom on(b4, b4)
PRE: NegatedAtom on(b5, b4)
EFF: NegatedAtom newaxiom@3(b4)
The negative atoms in the condition are then multiplied out in the translation
to sas variables, generating the 25600 axiom rules.
Natural question: why do we do this? Can't we just evaluate the "positive" form
of the axioms and use the corresponding "negative" value of the derived variable
in the preconditions?
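The blow-up described above is the classic CNF-to-DNF product. A minimal sketch (the names here are illustrative, not the translator's): to multiply out a conjunction of clauses, you pick one literal from every clause, so the result has prod(|clause_i|) conjuncts.

```python
from itertools import product

def multiply_out(clauses):
    """CNF -> DNF: one conjunct per choice of a literal from every clause."""
    return [list(choice) for choice in product(*clauses)]

# With 5 blocks, just three clauses of five negated "on" literals each
# already yield 5**3 = 125 conjuncts; the full instance multiplies such
# factors across many conditions, which is where the rule count explodes.
clauses = [[f"NegatedAtom on(b{j}, b{i})" for j in range(1, 6)]
           for i in range(1, 4)]
print(len(multiply_out(clauses)))  # 125
```

Each additional clause multiplies the output size, hence the exponential behaviour as the number of blocks grows.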

msg3809
Author: thofmann 
Date: 20141013.17:01:07 

We are having similar issues with the translator which seem to be related to
this bug. I've attached a domain (clean_up_cups_domain.pddl) and a problem
(6cups.pddl) where the issue occurs. The translator uses >16GB of memory when
run with the given problem. I've also attached a profile generated with cProfile.
Note that the example is taken from an application, so it's probably not the
shortest example (although I've tried to clean it up a bit).

msg894
Author: malte 
Date: 20101217.23:45:01 

Not sure if I should classify this as a "bug" since it looks like the code
works, but for Patrik's example it's unusable, and we tend to classify serious
performance issues as bugs too so that they get addressed more quickly than
"feature" or "wish" issues.
Background:
This is from an old (June 2009) bug report by Patrik (sorry, some context is
missing):
=======================================================================
Attached (domain: crashtestdummy, problem: bwnc_10_1). It doesn't
crash anymore because I fixed it too (by replacing "axiom.name" with
"axiom", which produces a totally useless printout like "Removed axiom:
<sas_tasks.SASAxiom instance at 0xb7b3078c>" but at least doesn't crash).
There may be some other problems as well. I tested the blind search
planner (I think: it's option "ob" to downward/search, right?), and it
claims to have exhausted the space after expanding one node (i.e., the
initial state has no applicable actions). But I don't think that's
right. Also attached is an alternative formulation (bwnoclear) of the
domain, which I think is equivalent, and can be fed to my parser. Other
planners I've tried (some hsp* variants, ff) explore more than one node
(ff even finds a solution, for both domain formulations).
=======================================================================
This bug does not exist any more; this was fixed when Gabi implemented proper
support for negative conditions. However, Patrik's example now triggers a
performance issue, which I think comes from the "multiply out conditions" code.
Patrik's original 10-block problem fails to translate because it exhausts 3 GB
of memory. It bails out with a memory error in "multiply_out". This is with both
domain versions. I also created a smaller problem (5 blocks) which does
translate after a while, but it generates 25600 axiom rules, which looks a bit
excessive. :)
So we may need to rethink some of our algorithm choices here.


Date  User  Action  Args
20240917 14:04:00  salome  set  messages:
+ msg11693 
20240719 14:12:36  malte  set  nosy:
+ salome messages:
+ msg11656 
20240708 20:40:22  haz  set  nosy:
+ haz messages:
+ msg11612 
20240201 11:51:28  malte  set  messages:
+ msg11568 
20240201 11:46:15  gabi  set  messages:
+ msg11567 
20190614 14:29:22  malte  set  summary: See metaissue: issue924. 
20181213 12:23:50  jendrik  set  nosy:
+ jendrik 
20181213 10:37:58  malte  set  messages:
+ msg8279 
20181213 10:15:10  malte  set  messages:
+ msg8278 
20181213 10:13:01  malte  set  messages:
+ msg8277 
20181213 00:57:37  patrik  set  messages:
+ msg8275 
20181212 15:51:58  malte  set  messages:
+ msg8266 
20181212 15:39:13  malte  set  messages:
+ msg8263 
20181212 15:38:22  gabi  set  messages:
+ msg8262 
20181212 15:36:36  malte  set  messages:
+ msg8260 
20181212 15:29:25  gabi  set  messages:
+ msg8258 
20181212 15:18:23  gabi  set  messages:
+ msg8257 
20170125 06:16:12  sprabhu  set  files:
+ graphprob.pddl 
20170125 06:15:47  sprabhu  set  files:
+ graphdomain.pddl 
20141013 17:01:40  thofmann  set  files:
+ translate.profile 
20141013 17:01:33  thofmann  set  files:
+ 6cups.pddl 
20141013 17:01:07  thofmann  set  files:
+ clean_up_cups_domain.pddl nosy:
+ thofmann messages:
+ msg3809 
20141004 19:33:27  malte  set  keyword:
+ translator 
20101217 23:45:33  malte  set  files:
+ bwnc_5_1.pddl 
20101217 23:45:27  malte  set  files:
+ bwnc_10_1.pddl 
20101217 23:45:17  malte  set  files:
+ bwnoclear.pddl 
20101217 23:45:04  malte  create  
