Issue454

Title: get rid of negated axiom rules in translator
Priority: bug
Status: resolved
Superseder:
Nosy List: augusto, gabi, haz, jendrik, malte, patrik, salome
Assigned To:
Keywords: translator
Optional summary
See meta-issue: issue924.

We want to simplify the translator output format by only allowing binary derived
variables whose default value is always false, and only allowing axioms setting
a derived variable to true. "Negated axioms" should instead be computed in the
search component when needed (for example in cg/landmarks/relaxation 
heuristics).

~~See msg8204 of issue862 for an overview of how the axiom code currently works.
(May become out of date while we're working on the axiom code further.)~~
(the axiom code was heavily reworked with issue453)

Check for references to "issue454" in the code, specifically axiom_rules.py and
sas_tasks.py. (Best look for references to issue454 in the whole codebase.)

Further TODOs/plans moved from issue862:
- ~~Try moving the computation of axiom layers earlier, having everything
  related to negated axiom rules at the end. Then simplify the layer
  computation code to take into account that it no longer has to deal with
  negated heads, and enable the test that verifies the layering condition
  properly, removing all the hacks that are currently there because of the
  negated rules.~~
  Add assertions that there are no negated heads. Then rerun the experiments.
- ~~Open an issue for the bug with compute_necessary_axiom_literals
  (see msg8204).~~ (this was resolved as part of issue453)
- Make a plan for further steps for the axiom code. See the various notes etc.
  in msg8204.

Created on 2014-08-17.11:33:53 by malte, last changed by salome.

Summary
See meta-issue: issue924.

We want to simplify the translator output format by only allowing binary derived
variables whose default value is always false, and only allowing axioms setting
a derived variable to true. "Negated axioms" should instead be computed in the
search component when needed (for example in cg/landmarks/relaxation 
heuristics).

~~See msg8204 of issue862 for an overview of how the axiom code currently works.
(May become out of date while we're working on the axiom code further.)~~
(the axiom code was heavily reworked with issue453)

Check for references to "issue454" in the code, specifically axiom_rules.py and
sas_tasks.py. (Best look for references to issue454 in the whole codebase.)

Further TODOs/plans moved from issue862:
- ~~Try moving the computation of axiom layers earlier, having everything
  related to negated axiom rules at the end. Then simplify the layer
  computation code to take into account that it no longer has to deal with
  negated heads, and enable the test that verifies the layering condition
  properly, removing all the hacks that are currently there because of the
  negated rules.~~
  Add assertions that there are no negated heads. Then rerun the experiments.
- ~~Open an issue for the bug with compute_necessary_axiom_literals
  (see msg8204).~~ (this was resolved as part of issue453)
- Make a plan for further steps for the axiom code. See the various notes etc.
  in msg8204.
Files
File name Uploaded Type
domain-pr222.pddl gabi, 2024-07-12.14:37:40 application/octet-stream
problem-pr222.pddl gabi, 2024-07-12.14:37:57 application/octet-stream
Messages
msg11652 Author: salome Date: 2024-07-19.12:37:13
After a final code review from Gabi today we merged the issue.
msg11643 Author: salome Date: 2024-07-17.10:09:10
After a discussion with Malte and Gabi I changed the option for the trivial overapproximation of all negated axioms to an enum. I also ran a quick sanity experiment in debug mode to check that the newly introduced assertion never triggers:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-assert-check-eval/issue454-assert-check.html
I ran just one configuration that uses the task transformation and one that doesn't; and only ran the current version. As far as I can tell everything looks good.
msg11636 Author: salome Date: 2024-07-15.15:57:18
I looked into the different expansions with blind and goalcount (in particular, I looked at the first philosopher problem with goalcount). The different expansions stem from the fact that removing axioms in the translator can change the variable order, which in turn can affect the expansion order during search. We're satisfied with this explanation and consider the question resolved.
msg11635 Author: salome Date: 2024-07-15.09:49:38
I did another experiment after Gabi merged the new main branch, changed some minor things in the translator, and I added the three changes mentioned in msg11606 (all of which should not affect performance). I also cleared the revision cache; this seems to have solved the memory issues, thanks for the tip Malte!

Here is the comparison report:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-v6-eval/issue454-base-v6-compare.html
It looks much the same (except for memory). The only thing that slightly worries me is the difference in expansions in philosophers for blind and goalcount, for which I don't think we have an explanation yet.

For completeness here is also the comparison with the simple overapproximation (adding rules nondefault <- T for all relevant derived variables), which also looks the same:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-v6-eval/issue454-v6-simple-compare.html

And the extra domains, where I added the file that Gabi attached here:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-v6-eval/issue454-v6-extra-domains.html
In all v6 configurations, the pr222 file is solved, and Christian's tasks behave the same as in v5.
msg11633 Author: gabi Date: 2024-07-12.14:37:57
Fix file extension.
msg11632 Author: gabi Date: 2024-07-12.14:33:52
I attach a task by Andras Salamon, which he provided in the related pull request 222 on GitHub. Salomé, maybe we can include them in the separate experiments that we already run for Christian's tasks?

https://github.com/aibasel/downward/pull/222
msg11610 Author: malte Date: 2024-07-05.17:53:37
> One thing that immediately stands out in the comparative report is that v5 uses a lot
> more memory. [...]
> Does anyone have any idea what might explain this?

One option is external changes (compiler versions, libraries) etc. that happened over time. Today's compiler could create smaller or larger binaries than yesterday's compiler.

Note that this can happen *even if you rerun all configurations in a single experiment* due to the revision cache that lab uses. So one thing you can try is check if the differences persist in a completely fresh directory where you can rule out that an old compile from the revision cache is used.
msg11607 Author: salome Date: 2024-07-05.13:03:02
Addendum: Christian recently shared some new tasks where the CNF to DNF transformation explodes, and I also included them in the experiment (but excluded them from the reports). Instead I created a report that includes just them:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-v5-eval/issue454-v5-computing-minors.html

As expected all 4 tasks fail with base, while v5 always manages prob_triangle_in_square with the edge-queue domain encoding. blind and goalcount (which do not compute negated axioms) as well as all the simple variants also solve prob_triangle_in_square with the complex isomorphism encoding. If a task has been solved, it took around a second or less.
msg11606 Author: salome Date: 2024-07-05.12:53:03
I added some changes to the code after a code review from Gabi and ran new experiments. Here are the absolute and comparative reports:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-v5-eval/issue454-v5.html
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-v5-eval/issue454-base-v5-compare.html

One thing that immediately stands out in the comparative report is that v5 uses a lot more memory. The increase in a single task is often 37384KB, suggesting that the binary got bigger. The magnitude of the increase surprises me, however. I double-checked the output on one example and indeed the difference is already there in the first output (17444KB for base and 54832KB for v5). I then looked at the binaries on the grid, and the v5 binary is 57568 bigger, but this is bytes (I used ls -la to get the values) while the number in the report is KB. So this doesn't really explain the memory increase. I also could not reproduce the difference locally; the memory on first output was 9944KB for base and 9988KB for v5. Does anyone have any idea what might explain this?

[Whenever I talk about time in what follows, I mean planner_time]
Regarding coverage we are fairly stable: blind, goalcount and cea all gain two tasks, while ff loses 1 and lama-first loses 2. The gains in blind and goalcount are in miconic-fulladl on the same problems (f16-2 and f17-2). This could be because blind and goalcount don't compute negated axioms at all any more, but the v5 version still requires 1720+ seconds for any problem-config combination. The gain for cea comes from 2 problems in psr-large that v5 could solve with 2079 and 2371 seconds respectively. The two losses for lama-first come from airport-adl/p33-airport4halfMUC-p10.pddl, which base could solve in 344 seconds, and psr-large/p35-s189-n45-l2-f30.pddl, which base solved in 581 seconds. This most likely happens because we compute slightly different negated axioms (see msg11573), but even though we actually compute "better" axioms, we end up with more expansions:
https://ai.dmi.unibas.ch/_visualizer/?c=eJx9j0tuwzAMRK9iaF1LDppsco7uDTpmHAK2JJCUPwly90oNCqebLvVmqJl5GPTKhNKOJGrOlalIJOHxdKw7EKxHmKC-EotWuzKf3rj5qMzAIcVuK_d9mIB8gYwjKM2YqXLCTFbQHNYlLczgGsELBS_FvQrdCz40TZOf2z_W7a81cojIWkZcafzx31SjnJ0Dsv1ENnnKY-zl5tr8EzJNebVk1fVh8Qtw7363uR4U3PtSnGF0e4ZVYLveXwNjYP3aYsn8zGBGLiVLg4NtiuWSRMPUTuTbhXL7c_V47hiGgXEADfxSnt-DYYbd
(The plot compares the numbers of expansions between base-lama-first and v5-lama-first)

To add a somewhat positive note, it seems that overall we do get better planner times:
https://ai.dmi.unibas.ch/_visualizer/?c=eJx9kc1ugzAMgF-l4rwSqraXPseOk5ABA5ZCghyHn1Z99yWr1i5U2tGfP8d2fMvQCBO6UpOT7LLLyDmPp_NpX4HDfaXJNLsnm84P8mVSrUZIpBC_KV1qdFuhbROhbbdCZ0HX1htJvCfd6j006eQRvEkDLKkUwFbSMMC-JXZp5xfeFnA_J2aIs49d1rH1Y7XGX27sAGQiZNQgNGGgwh4DWUDCSSovkWWjBmOQS6EBo784usbEoSiKEK7_ymsqj2xHZInnbkn_VPQio7soBZQ3A-XeUNggr3tV4hLc8I4RF7KqsbOZgRv1u5hqQED9XRMn0OrVIxfgfLk-lhwty-c6xp7HACZkR9bECQ55EZXaO7FDOZApZzIuZG73F4auY-xALD8y92_Npe3R
(The plot compares planner time between base and v5 versions)
Looking at only blind and goalcount (which do not compute negated axioms at all in v5) we see that the time save is moderate but there:
https://ai.dmi.unibas.ch/_visualizer/?c=eJx9j8tuwyAQRX_FYt2AoyabfEeXlayxPXVGwoCGwY9E-fdCrSb1pkvOPXC5d4VOmDA2lqKoS6UoxoSn8-nQQsRDa8n11ZNN5418ur02eLCdT0526pOqt0oxWhCaMHcIJ8xkAcnVbZLCVLDgHHIjNGLxl0i3Ehzrus7H9V953cuBfUCWMuuL7M-Nq0iIF2OAdD-STo7yx3V3NQ0u2c3vOIk5Nb2f3Qzcm98lpgcB83cXTmDNq0MLsF5u28jgWT7WUDrfM5iQI3lXfnDUdVG6FMWPzUiumcnFnNwfLwzDwDiAeN6SxzdqrY6V

One major change in the code compared to v4 is that we now have an option, for all heuristics that need default axioms, which allows us to choose whether the axioms should be computed exactly where possible or always just overapproximated with an axiom with an empty body. Enabling this option avoids the potential exponential blowup, but leads to worse heuristic guidance. I also tested this:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-v5-eval/issue454-v5-simple-compare.html
While this is in general very detrimental (we lose up to 150 tasks depending on the configuration), it can sometimes also help (for example in trucks or psr-large some configurations gain coverage instead, but it is minor).
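The choice this option exposes could be sketched roughly as follows. This is an illustrative sketch only, not the actual Fast Downward code: the function name, the rule representation, and the `has_cycle`/`exact_negation` callbacks are all made up for illustration.

```python
def negated_axioms(needed_negatively, has_cycle, exact_negation,
                   overapproximate_all):
    # For every derived variable whose negation is needed, emit a list of
    # rule bodies deriving the negated value. A single empty body means the
    # rule always fires: the trivial overapproximation, which avoids the
    # potential exponential blowup but weakens heuristic guidance.
    result = {}
    for var in needed_negatively:
        if overapproximate_all or has_cycle(var):
            result[var] = [[]]                 # overapproximate: empty body
        else:
            result[var] = exact_negation(var)  # exact (multiplied-out) bodies
    return result

rules = negated_axioms(
    ["d1", "d2"],
    has_cycle=lambda v: v == "d2",            # pretend d2 sits on a cycle
    exact_negation=lambda v: [["x"], ["y"]],  # placeholder exact bodies
    overapproximate_all=False)
# rules == {"d1": [["x"], ["y"]], "d2": [[]]}
```

With `overapproximate_all=True`, every variable would get the empty-body rule, matching the "always just overapproximated" setting described above.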


There are three major things left to do:
 1) Gabi suggested changing variable names, comments etc. to talk about default/non-default values instead of true/false, since we work on binary finite-domain variables.
 2) In cmake, I added the new task transformation in the core library. It should probably instead be its own library and all heuristics that need it should depend on it.
 3) The current code removes the "const" qualifier from the shared pointer the heuristic stores for the task, because we change the task in the constructor if axioms are present. We can avoid this by calling a function in the initializer list that performs the conditional change, but that function currently would need to return a const reference to an Options object, which is difficult (we need to copy and alter the existing one and that copy needs to live somewhere). Instead, the current plan is to wait until issue1082 is merged, at which point the Options object vanishes and we instead just need to return a shared pointer.

I can work on 1) and 2), but for 3) I need to wait unless someone has a better idea.
msg11576 Author: salome Date: 2024-02-08.15:03:41
I looked into the different expansions for psr. It had nothing to do with the transformation to multi-valued variables, but with the fact that axioms are simplified after computing dependencies between derived variables. The simplification can remove cyclic dependencies, which influence how we compute negated axioms. If a derived variable is needed negatively but has cyclic dependencies, we overapproximate the negation with a default true (i.e. we introduce one axiom that sets the variable to false without any conditions). Since the task transformation computes the dependencies after simplification, it can thus compute the negation exactly in more cases now.

For example, in psr-large/p02-s46-n3-l5-f50.pddl we have the following two axioms before simplification:
(unsafe sd2 side2)
PRE: Atom closed(sd2)
EFF: Atom unsafe(sd2, side2)

(unsafe sd2 side2)
PRE: Atom closed(sd2)
PRE: Atom unsafe(sd3, side1)
EFF: Atom unsafe(sd2, side2)

The second one is unnecessary and thus simplified away, but it introduces a dependency between unsafe(sd2, side2) and unsafe(sd3, side1), which eventually leads to a cycle involving those two. (The latter variable doesn't even occur in output.sas anymore since it is irrelevant.)
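The dependency check described above can be sketched like this. This is hypothetical illustration code, not the actual translator implementation; only dependencies between derived variables are tracked, and the back edge closing the cycle is assumed as the message describes.

```python
def depends_on_itself(axioms, var):
    # axioms maps each derived variable to a list of rule bodies; each body
    # is the list of derived variables it mentions. Follow dependencies
    # transitively and report whether `var` can reach itself (a cycle).
    seen = set()
    stack = [dep for body in axioms.get(var, []) for dep in body]
    while stack:
        v = stack.pop()
        if v == var:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(dep for body in axioms.get(v, []) for dep in body)
    return False

# Before simplification, the redundant second rule links unsafe(sd2, side2)
# to unsafe(sd3, side1); with the back edge this closes a cycle, so the
# negation would be overapproximated. After simplification the link is gone.
before = {
    "unsafe-sd2-side2": [[], ["unsafe-sd3-side1"]],
    "unsafe-sd3-side1": [["unsafe-sd2-side2"]],  # assumed back edge
}
after = {"unsafe-sd2-side2": [[]]}
```

Here `depends_on_itself(before, "unsafe-sd2-side2")` is true while `depends_on_itself(after, "unsafe-sd2-side2")` is false, mirroring why the transformation can compute the negation exactly in more cases after simplification.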
msg11573 Author: salome Date: 2024-02-08.09:22:43
I implemented the computation of negated axioms as a task transformation that heuristics apply if they can make use of negated axioms. (These heuristics are all relaxation heuristics, the landmark sum heuristic and the cg and cea heuristics which depend on the domain transition graph.)

Here is the pull request:
https://github.com/aibasel/downward/pull/220

And here the experimental results:
https://ai.dmi.unibas.ch/_experiments/ai/downward/issue454/data/issue454-v3-eval/issue454-v3-issue454-base-issue454-v3-compare.html


In general the results are encouraging in the sense that not much changes. With hmax and cea we even gain 1 task. But as usually happens, the landmark configurations are a bit worse :). lama-first loses 2 tasks and rhw (that is just eager greedy search with landmark sum over the rhw graph) recognizes one less task as unsolvable.

There are slight differences in the number of expansions. For blind and goalcount (which should not be influenced by negated axioms) this happens in philosophers and trucks; I think we know that these domains can have nondeterministic translator output (for trucks I know for sure). The other configurations except for rhw also have different expansions in psr-large and psr-middle; for all but lama-first it is less.
My best guess is that this happens because the translator still computed the negated axioms on the PDDL task; that is, it just encoded the negated conditions as "Negated Literal", and the transformation to multi-valued variables happened later. The task transformation on the other hand needs to negate a multi-valued variable. I do this by replacing "¬v=d" with "\/_{d' \in dom(v), d' != d} v=d'".
For example, assume we have two variables v0 and v1 with dom(v0) = dom(v1) = {0,1,2}, and two axioms for some fact f: a1 with conditions (v0=0 and v1=1) and a2 with conditions (v0=1 and v1=1). The negation of these conditions in the translator is simply (¬v0=0 or ¬v1=1) for a1 and (¬v0=1 or ¬v1=1) for a2. The task transformation on the other hand turns this into (v0=1 or v0=2 or v1=0 or v1=2) for a1 and (v0=0 or v0=2 or v1=0 or v1=2) for a2. When multiplying this out it will ignore combinations that set different values to the same variable, so the multiplied-out version (removing dominated conjunctions) is (v0=2) or (v1=0) or (v1=2).
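The negation-and-multiply-out procedure can be sketched as follows. This is an illustrative sketch only, not the actual Fast Downward task-transformation code; the function names and the fact representation as (variable, value) pairs are made up.

```python
from itertools import product

def negate_condition(condition, domains):
    # Negate a conjunction of facts (var, val): each "not v=d" becomes the
    # disjunction of v=d' for all other values d' in dom(v), so the whole
    # negated condition is one clause (a disjunction of facts).
    return [(var, other) for (var, val) in condition
            for other in sorted(domains[var]) if other != val]

def multiply_out(clauses):
    # Turn the conjunction of clauses into DNF: pick one fact per clause,
    # drop inconsistent picks (two different values for the same variable)
    # and dominated conjunctions (proper supersets of another conjunction).
    conjunctions = set()
    for pick in product(*clauses):
        facts = {}
        if all(facts.setdefault(var, val) == val for var, val in pick):
            conjunctions.add(frozenset(facts.items()))
    return {c for c in conjunctions
            if not any(other < c for other in conjunctions)}

# The example from the message: negate the bodies of a1 and a2.
domains = {"v0": {0, 1, 2}, "v1": {0, 1, 2}}
a1_body = [("v0", 0), ("v1", 1)]
a2_body = [("v0", 1), ("v1", 1)]
clauses = [negate_condition(body, domains) for body in (a1_body, a2_body)]
dnf = multiply_out(clauses)
```

For the two-axiom example this yields the three conjunctions {v0=2}, {v1=0} and {v1=2}; note how mixed picks such as (v0=1 and v1=0) are pruned because they are dominated by the singleton {v1=0}.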
msg11551 Author: gabi Date: 2024-01-31.09:18:44
If for the experiments you need a list of domains with axioms:

- airport-adl
- assembly
- miconic-fulladl 
- openstacks
- openstacks-opt08-adl
- openstacks-sat08-adl 
- optical-telegraphs
- philosophers
- psr-large
- psr-middle
- trucks
msg9648 Author: salome Date: 2020-07-17.17:48:35
While working on issue453, we changed the handling of axioms in the translator to
set the default value of derived variables to false. We can now continue to clean
up the translator output format and move the computation of negated axiom rules
into the search component. I updated the summary accordingly.
msg8378 Author: malte Date: 2018-12-18.12:48:22
I added the remaining TODOs from the resolved issue862 to the summary because
they fit this issue better.
msg8207 Author: malte Date: 2018-12-08.13:25:12
Add a pointer to msg8204 to the summary.
msg8045 Author: malte Date: 2018-11-04.04:57:38
BTW, I'm no longer convinced that changing this would only affect a few parts of
the code. It looks like at least most of our relaxation heuristics (h^FF, h^add,
h^max) make use of the axioms provided by the code, and ripping out the negated
axiom versions can have a strong negative impact on them.

I don't mean this as an excuse to leave this issue unfixed, of course. But it
does mean we need to tread a bit carefully. In particular, we should make sure
not to ruin performance in cases without cyclic dependencies, which includes all
cases where derived variables are currently artificially introduced by the
translator.
msg8044 Author: malte Date: 2018-11-04.04:54:06
Made the summary more comprehensive.
msg8043 Author: malte Date: 2018-11-04.04:28:20
I'm reclassifying this as a bug because the negated axiom rules mean that we can
currently have axiom rules with heads (v=0) and (v=1) for the same variable. (At
least I think that this can happen.)

This is not necessarily a problem due to the way the axioms are actually
constructed, but it does not agree with how we specified the layering condition
in the AIJ paper on the translator and elsewhere. So at the very least, we
should make the semantics and the behaviour of the translator clear.

One reasonable place where this could be documented (in addition to the code
itself) is http://www.fast-downward.org/TranslatorOutputFormat, in particular
sections "Variables section" and "Axiom section" and section "Evaluation axioms"
at the end.
msg8041 Author: malte Date: 2018-11-04.04:10:22
The proposed fix for issue862 interacts with this issue.

I've added code comments to axiom_rules.py which explain the interaction. All
relevant comments mention the keyword "issue454", so searching for this keyword
in axiom_rules.py while working on this issue should be enough to find all of
them. I've added a reminder of this to the summary.
msg4289 Author: malte Date: 2015-06-29.23:01:51
Note to self: when fixing this, search the code (in particular the translator)
for references to issue454. The work-in-progress code for issue533 has a code
comment related to this issue.
msg3320 Author: malte Date: 2014-08-17.11:33:53
We currently generate "negated forms" of axiom rules in certain cases. This can
be very expensive, is only used in a few contexts, and is apparently currently
buggy (see issue453).

This issue is about getting rid of these negated-form axioms. The most difficult
part in this is probably detecting all the places where they are implicitly used
(e.g. in relevance analysis?). We should also evaluate the performance impact on
planner configurations that make use of these negated axiom rules (such as the
CG heuristic and possibly some of the landmark generators) and possibly
reintroduce them locally for planner configurations that use them.
History
Date User Action Args
2024-07-19 12:37:13  salome  set  status: chatting -> resolved
                                  messages: + msg11652
2024-07-17 10:09:10  salome  set  messages: + msg11643
2024-07-15 15:57:18  salome  set  messages: + msg11636
2024-07-15 09:49:38  salome  set  messages: + msg11635
2024-07-12 14:38:03  gabi    set  files: - problem-pr222.txt
2024-07-12 14:38:00  gabi    set  files: - domain-pr222.txt
2024-07-12 14:37:57  gabi    set  files: + problem-pr222.pddl
                                  messages: + msg11633
2024-07-12 14:37:40  gabi    set  files: + domain-pr222.pddl
2024-07-12 14:33:52  gabi    set  files: + problem-pr222.txt
                                  messages: + msg11632
2024-07-12 14:30:28  gabi    set  files: + domain-pr222.txt
2024-07-05 17:53:37  malte   set  messages: + msg11610
2024-07-05 13:03:02  salome  set  messages: + msg11607
2024-07-05 12:53:03  salome  set  messages: + msg11606
2024-06-26 05:49:41  haz     set  nosy: + haz
                                  summary: updated
2024-02-08 15:03:41  salome  set  messages: + msg11576
2024-02-08 09:22:43  salome  set  messages: + msg11573
2024-01-31 09:18:44  gabi    set  messages: + msg11551
2020-07-17 18:56:29  jendrik set  nosy: + jendrik
2020-07-17 17:53:14  salome  set  summary: updated
2020-07-17 17:48:35  salome  set  nosy: + salome
                                  messages: + msg9648
                                  summary: updated
2019-06-14 14:29:48  malte   set  summary: updated
2019-06-06 15:10:40  augusto set  nosy: + augusto
2018-12-18 13:23:03  gabi    set  nosy: + gabi
2018-12-18 12:48:22  malte   set  messages: + msg8378
                                  summary: updated
2018-12-08 13:25:12  malte   set  messages: + msg8207
                                  summary: updated
2018-11-04 04:57:38  malte   set  messages: + msg8045
2018-11-04 04:54:06  malte   set  messages: + msg8044
                                  summary: updated
2018-11-04 04:28:20  malte   set  priority: wish -> bug
                                  messages: + msg8043
2018-11-04 04:10:22  malte   set  messages: + msg8041
                                  summary: updated
2015-06-29 23:01:51  malte   set  messages: + msg4289
2014-10-04 19:50:27  malte   set  keyword: + translator
2014-08-18 04:59:08  patrik  set  nosy: + patrik
2014-08-17 11:33:53  malte   created