Created on 2018-11-01.13:47:34 by gabi, last changed by malte.
msg8379
Author: malte
Date: 2018-12-18.12:50:26

This was merged, but not yet marked as resolved. I marked it as resolved now and
moved the remaining tasks in the summary to issue454, where they fit better. If
you want to keep updated about this, perhaps add yourselves to the nosy list
there. Right now only Patrik Haslum and I are on that nosy list. Another related
issue is issue453, whose nosy list consists of Gabi, Patrik and me.

msg8300
Author: malte
Date: 2018-12-13.19:28:01

Hmmm, I'm sure I looked at the top right before and saw nothing there. Now it
tells me 0 forks. Oh well.
It's strange that it says 0 forks there because when doing the strip I got a
warning that there existed 1 fork of the repository (which is of course relevant
when stripping). But I guess it doesn't matter.

msg8299
Author: jendrik
Date: 2018-12-13.19:25:56

https://bitbucket.org/malte/downward tells me that there are 0 forks of your repo.

msg8298
Author: malte
Date: 2018-12-13.19:20:11

I merged this.
To those who might have pulled everything from my repository, note that I have
stripped the issue862-debug branch, so you might want to strip it, too (revision
aafec239a767).
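For reference, the strip should amount to something like the following (a sketch; this assumes Mercurial's strip extension is enabled, and note that stripping is destructive):

$ hg strip aafec239a767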
Incidentally, bitbucket told me that my repository has been forked once, but I
don't know how to find out what these forks are. Any ideas? Googling and
searching the help have been unsuccessful so far.

msg8287
Author: gabi
Date: 2018-12-13.14:57:49

I had a look at the changes outside axiom_rules.py and have no objections to
merging this.

msg8274
Author: malte
Date: 2018-12-12.20:18:30

I also tested David's original PDDL files (attached to this issue), and we can
solve them now. (However, the original error wasn't perfectly reproducible anyway.)

msg8272
Author: malte
Date: 2018-12-12.18:57:42

The results look good to me. We lose some coverage here and there, but from
looking at the detailed data, this just seems to be random unluckiness with the
grid. The number of expansions doesn't change (which was the main thing here),
and the speed of axiom evaluation also seems to be unaffected based on the total
time scores (which makes sense).
Any objections to merging this? I'm not sure how useful it is to review the
changes at this point, except perhaps the changes that are outside of
axiom_rules.py. We should soon touch axiom_rules again to deal with the negated
axiom issue(s), and at that point few stones will remain unturned in that file,
so reviewing that change will probably require reading the file as a whole anyway.

msg8268
Author: jendrik
Date: 2018-12-12.17:25:08

Here are the results:
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue862-v5-planner-issue862-base-issue862-v5-compare.html

msg8235
Author: malte
Date: 2018-12-11.17:05:04

Thanks, Jendrik! Nothing suspicious so far, so now it would be good to see what
the changes mean experimentally. Also, with future axiom-related issues in mind,
I had a look at the set of domains for which the translator produces axioms and
the set of planner configurations that use axioms.
Relevant domains:
- airport-adl
- assembly
- miconic-fulladl
- openstacks
- openstacks-opt08-adl
- openstacks-sat08-adl
- optical-telegraphs
- philosophers
- psr-large
- psr-middle
- trucks
In this issue, the output only changes for five of these, so these are the only
ones we need to test:
- airport-adl
- assembly
- miconic-fulladl
- psr-large
- psr-middle
The following parts of the search component use the axioms:
- h^max, h^add, h^FF
- h^cg, h^cea
- lm_rhw, lm_zg
- more generally the landmark code and the causal graph code, which in turn
affects things like variable orders (but I'm not aware of anything in our code
that uses variable orders and supports axioms)
To get decent coverage of all of these except lm_zg, I suggest running
experiments with the following (example invocations are sketched below):
- five configurations of lazy greedy search with preferred operators, one for
each of the five heuristics mentioned above (h^max, h^add, h^FF, h^cg, h^cea);
of these, h^max doesn't support preferred operators if I recall correctly
- lama-first
- A* + blind search
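For concreteness, roughly what the driver invocations would look like (a sketch;
the exact option syntax depends on the Fast Downward revision, and the remaining
heuristics are obtained by replacing ff() with add(), cg(), cea() or hmax()):

$ ./fast-downward.py domain.pddl problem.pddl \
      --heuristic "h=ff()" --search "lazy_greedy([h], preferred=[h])"
$ ./fast-downward.py --alias lama-first domain.pddl problem.pddl
$ ./fast-downward.py domain.pddl problem.pddl --search "astar(blind())"

For hmax(), the preferred=[h] part would be dropped, matching the caveat above.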

msg8230
Author: jendrik
Date: 2018-12-11.11:10:28

Here is a report containing all runs and the "error" attribute:
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue862-v5.html

msg8225
Author: jendrik
Date: 2018-12-11.09:11:09

I updated the filtered report to include the "error" and "node" attributes:
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue862-v5-filtered.html

msg8221
Author: jendrik
Date: 2018-12-10.21:12:22

Here is the report that only includes tasks where the SAS file changes:
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue862-v5-filtered.html
The "unexplained errors" section is only added if there are unexplained errors.
The "error" and "node" attribute haven't been parsed since I forgot to add the
exit code and planner parsers. I can re-parse the results tomorrow.

msg8213
Author: malte
Date: 2018-12-10.15:04:55

Hi Jendrik, thanks! Sorry to take so long to reply; I didn't previously have
enough bandwidth to download the result file. :-)
The HTML result file looks great, but I think we need a few more things:
- The "info" section says: "Used nodes: {<unknown: planner parser run?>}".
  Knowing the used nodes is not critical for this experiment, but it would
  be good if translator experiments could get that information, too.
  Not a reason to redo this experiment, but perhaps for the future.
- There is no "unexplained errors" section. Since we recently often got
  unexplained errors from the translator, I wonder if that information is
  missing rather than just empty. (Also because of the previous point.)
- The "error" attribute, or whatever it is called, would be very useful:
  the one that tells us the driver script exit code.
- The csv file was really just a crutch because that's what the previous
  translator experiment I found used. If possible, it would be better
  to show the output.sas hash value attribute in the HTML table report
  (for example as "translator_hash"; see the sketch below). What I will
  need is a quick way to determine whether there are tasks where this
  attribute differs between revisions (including the case where it's
  missing in some revisions), so anything that could help with that would
  also be appreciated (e.g. something that only shows results for
  instances where not all values are identical). But if that is hard to
  do, I can of course scan these manually.
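To illustrate the kind of attribute I mean, it could be computed along these
lines (a minimal sketch; the function name, the attribute name
"translator_hash" and the use of MD5 are placeholders, not an existing lab
feature):

import hashlib

def translator_hash(sas_path="output.sas"):
    # Hash the translator output file so that runs of different revisions
    # can be compared for equality without storing the whole file.
    hasher = hashlib.md5()
    with open(sas_path, "rb") as f:
        for chunk in iter(lambda: f.read(2 ** 16), b""):
            hasher.update(chunk)
    return hasher.hexdigest()

A report could then flag tasks for which not all revisions agree on this value
(or where it is missing in some revisions).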

msg8211
Author: jendrik
Date: 2018-12-09.08:56:14

I ran the requested experiment and here are the results:
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue862-v5.html
https://ai.dmi.unibas.ch/_tmp_files/seipp/different_output_sas.csv
Please tell me if these reports are not what you had in mind.

msg8209
Author: malte
Date: 2018-12-08.13:45:52

I have created two more tags, v4 and v5. Updated overview of the tags:
- issue862-base: the old code (before working on this issue)
- issue862-v1: the old code with added assertions that test the layering
condition; this is somewhat imperfect due to the interactions with issue453 and
issue454, but for cases like David's it should catch all bad layerings
- issue862-v2: the new layering algorithm with the assertions that test the
layering condition enabled. This version has a bug.
- issue862-v3: ditto, but with the assertions disabled. This version has a bug.
- issue862-v4: v2 with the bug fixed and a minor improvement to axiom simplification
- issue862-v5: ditto, but with the assertions disabled
Jendrik, there is already an experiment for this issue (in bitbucket/malte
repository) that compares base, v1, v2 and v3. Can you run it again, but replace
v2 and v3 with v4 and v5?
Also, the report in the experiment is a bit of a hack. What I would really like
to have is a "normal" report which includes the translator attributes (most
importantly translator runtime, translator memory, number of derived variables
and number of axioms) but also includes the custom attribute that computes a
hash over the translator output file, so that we can see in which cases the
translator output is different in the different versions.

msg8208
Author: malte
Date: 2018-12-08.13:37:35

Updated summary: The first item is now done, minus the experiments.

msg8205
Author: malte
Date: 2018-12-08.13:23:27

Created a summary with the TODO items for this issue.

msg8204
Author: malte
Date: 2018-12-08.13:19:06

To understand all this better, I went over the axiom code in the translator
again (axiom_rules.py) and came up with a high-level description of what it
does. I'm including it here because it might be useful for further work on this
issue and other issues related to axioms in the translator. The description is
based on the current revision in the issue branch (77fed8cbac75) rather than the
current default code.
I think this also uncovered an additional bug or two (see description of
"compute_necessary_axiom_literals").
==============================================================================
*Notes for improvement*:
- Compute the axiom layers much earlier. The various transformations
  should not violate the layering property if they are correct, so
  there is no problem with computing the layers early. In particular,
  it would be useful to have the things to do with negative axioms at
  the very end, so that they cannot mess up anything else, and all the
  early processing steps can rely on all axiom heads being positive.

function handle_axioms(operators, axioms, goals):
    axioms_by_atom = get_axioms_by_atom(axioms)
    axiom_literals = compute_necessary_axiom_literals(axioms_by_atom,
                                                      operators, goals)
    axiom_init = get_axiom_init(axioms_by_atom, axiom_literals)
    axioms = simplify_axioms(axioms_by_atom, axiom_literals)
    axioms = compute_negative_axioms(axioms_by_atom, axiom_literals)
    axiom_layers = compute_axiom_layers(axioms, axiom_init)
    return axioms, list(axiom_init), axiom_layers

function get_axioms_by_atom(axioms):
    return dict mapping each axiom head to a list of axioms with that head
function compute_necessary_axiom_literals(axioms_by_atom, operators, goal):
    Computes the set of "necessary" axiom literals, defined as follows.
    1. All derived literals occurring in the goal are necessary.
    2. All derived literals occurring in operator preconditions are
       necessary.
    3. All derived literals occurring in conditions of *positive*
       conditional effects are necessary.
    4. For all derived literals occurring in conditions of *negative*
       conditional effects, their negation is necessary.
    5. For all necessary derived literals l, look up all axioms ax
       where variable(l) occurs in the head.
       - If l is positive, all literals in the body of ax are
         necessary.
       - If l is negative, the negations of all literals in the body
         of ax are necessary.

    Note: Rules 3 and 4 only make sense if we interpret "necessary"
    as "desirable" and only if we have "positive is good" for
    non-derived variables. I don't think we have that because we
    support negative preconditions, effect conditions and goals
    directly. Right? To correct rules 3 and 4, literals occurring in
    effect conditions should be necessary and their negations should
    be necessary, no matter what type of effect it is.

    Note: Rule 5 is only complete if we don't have axioms with
    negative heads. I think this is the case at the point of the code
    where we compute this, but it should perhaps be guarded by an
    assertion.

    Note: The purpose and correctness of this computation would
    perhaps be clearer if we renamed "necessary" to "useful". A
    literal is "useful" if we might ever want it to be true (in order
    to reach a goal, to reach a precondition, to enable or disable a
    conditional effect, or to make an axiom fire that would indirectly
    lead to something useful).

    Note also that if l is not useful, we cannot simply prune all
    axioms that derive l. We might want negation(l), and axioms that
    derive l are then important for correct semantics because they
    can prevent us from having l.

    Side remark: The set of derived variables is defined as the set of
    variables that show up in axiom heads. There is no "external"
    definition. Should that change?
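A stripped-down sketch of the fixpoint that rules 1-5 describe (the
(var, polarity) pair representation of literals is made up for illustration;
the translator uses its own classes):

def necessary_literals(axioms_by_var, seed_literals):
    # axioms_by_var: derived variable -> list of axiom bodies, where each
    # body is a list of (var, polarity) literals; heads are assumed to be
    # positive. seed_literals: the derived literals required by the goal,
    # operator preconditions and effect conditions (rules 1-4).
    necessary = set()
    queue = [lit for lit in seed_literals if lit[0] in axioms_by_var]
    while queue:
        var, polarity = queue.pop()
        if (var, polarity) in necessary:
            continue
        necessary.add((var, polarity))
        # Rule 5: propagate through the bodies of the axioms for var;
        # if var is needed negatively, the body literals flip polarity.
        for body in axioms_by_var[var]:
            for bvar, bpol in body:
                if bvar in axioms_by_var:
                    queue.append((bvar, bpol if polarity else not bpol))
    return necessary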
function get_axiom_init(axioms_by_atom, necessary_literals):
    Returns all derived variables whose NBF value we want to be "True"
    (so the result is a set of derived variables, not a general set of
    derived literals). These are all the derived variables that are
    not necessary in positive form, but are necessary in negative
    form.

    The idea behind this is that we want the "useful" value for these
    derived variables to be derived actively, not through negation by
    failure, so that heuristics can get a better insight into how to
    achieve the useful value. This function is *not* necessary for
    correctness, only to end up with axioms that might be more useful
    for the heuristics.
function simplify_axioms(axioms_by_atom, necessary_literals):
    necessary_atoms = the variables of necessary_literals
    - This function also assumes that all heads are positive, which is
      presumably true at this stage.
    - We process the axioms in groups, where each group consists of
      all axioms with a given head.
    - axioms_by_atom is updated with the new groups, and the new list
      of simplified axioms is returned at the end.

    How each group is processed:
    - Sort and uniquify all axiom bodies. Note that this affects
      the existing bodies, as they are modified in-place. Would
      perhaps be cleaner if this processing step occurred earlier
      and was then maintained throughout.
    - Return all axioms in the group where the body is minimal (i.e.
      not a strict superset of another body). For identical axioms,
      only keep one copy.
    - Note that the dominance test takes O(N^2) time. We hope that
      this is acceptable because we generally don't have too many
      axioms with the same head, but this may potentially become a
      performance bottleneck.
    - *Note*: We could also remove axioms where the head occurs in the
      body, which does actually happen, for example in PSR-Middle #3.
    - *Note*: If we sorted axiom bodies by size, any given rule could
      only dominate rules occurring later (except for the case where
      the bodies are exactly identical), which might help with the
      logic.
function compute_negative_axioms(axioms_by_atom, necessary_literals):
    - Recompute the list of axioms: for each derived variable, include
      the positive (original) version of the axioms if the variable is
      needed positively. Include the negative version of the axioms
      (to be computed, as discussed below) if it is needed negatively.
      If it is needed in both forms, include both versions. If it is
      needed in neither form, include neither version.
    - The negative versions of the axioms for a given derived variable
      are computed as follows:
      - Think of the set of bodies of the rules for the given variable
        as a DNF formula.
      - If it has any rule with an empty body, the DNF formula is
        always true, and hence the negation is always false. In that
        case, return an empty list of axioms.
      - Otherwise, negate the DNF (turning it into a CNF) and turn it
        into a DNF again by multiplying it out. Note that this is
        exponential in the worst case.
      - At the end, simplify the axiom group using the same procedure
        as in simplify_axioms.
    - Note that this procedure is *wrong* in general (if there are any
      dependency cycles involving this derived variable). See issue453.
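A sketch of the negation step, again with the made-up (var, polarity)
representation, plus one easy extra simplification (dropping internally
contradictory bodies) that is not part of the described procedure; it inherits
the issue453 problem just mentioned and the worst-case blow-up:

from itertools import product

def negate_dnf(bodies):
    # bodies: the rule bodies for one derived variable, read as a DNF.
    if any(not body for body in bodies):
        return []  # an empty body makes the DNF a tautology; its
                   # negation is unsatisfiable, so there are no axioms
    # De Morgan: the negation is a CNF with one clause per body.
    clauses = [[(var, not polarity) for var, polarity in body]
               for body in bodies]
    result = []
    for choice in product(*clauses):  # multiply out the CNF into a DNF
        body = set(choice)
        if not any((var, not polarity) in body for var, polarity in body):
            result.append(sorted(body))  # skip contradictory bodies
    # The caller would then simplify the result as in simplify_axioms.
    return result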
function compute_axiom_layers(axioms, axiom_init):
    derived_atoms = all variables occurring in heads of axioms
    depends_on = dictionary mapping each derived atom to the list
        of all derived atoms it depends on (if there are no
        dependencies, this is an empty list; it is important that
        every derived atom occurs as a key)

    *Note*: It looks like we currently include duplicates in
    depends_on. This should not cause problems, but it is not clean
    because we then pass a multigraph to the SCC-finding algorithm,
    which it does not officially support. So this should be fixed.

    weighted_depends_on = set of all triples (u, v, weight)
        where u and v are derived variables such that there exists an
        axiom where u occurs in the head, v occurs in the body,
        weight = 1 if v occurs with its NBF value (i.e.,
            v occurs negatively and its NBF is negative, or
            v occurs positively and its NBF is positive), and
        weight = 0 otherwise
    An entry in weighted_depends_on represents the constraint
        layer(u) >= layer(v) + weight
    *Note*: This should take into account the situation that arises
    when we have both positive and negative copies of an axiom. We can
    detect an "unnecessary additional axiom" if the head occurs with
    its NBF value. Such unnecessary additional axioms should be
    ignored by this computation. (They can only arise for axioms where
    we represent both the positive and the negative form.) For
    efficiency, they could be ignored also by the axiom evaluator in
    the search component. (I don't know how this is currently
    handled.)

    Compute the SCCs of dependencies according to depends_on, in
    topological order. atom_to_scc_id[atom] is the ID of the SCC
    containing atom.

    Compute a weighted digraph of SCC dependencies. For SCCs U and V,
    scc_weighted_depends_on[id(U)][id(V)] exists iff there exists a
    weighted dependency between some u in U and some v in V. Its
    weight equals 0 if all such dependencies have a weight of 0, and
    1 if at least one such dependency has a weight of 1. We ignore
    self-loops of weight 0 to make the graph acyclic. Self-loops of
    weight 1 are an error (a violation of stratifiability) and cause
    an exception.
    The layer of an SCC U is then the length (taking into account the
    edge weights) of the longest path from U to any other SCC in this
    weighted digraph. This can easily be computed bottom-up because
    the graph is acyclic. The layer of a derived variable is then the
    layer of its SCC.

    *Note*: This assigns the smallest possible layer to each variable,
    i.e., we make each layer include as many derived variables as
    possible, leading to large layers and hence a small number of
    layers. For example, if there are no negative dependencies,
    everything will end up in the same layer.

    Alternatively, we could make each layer as small as possible,
    leading to a large number of layers, by putting each SCC into its
    own layer. In this case we wouldn't actually have to compute this
    directed graph at all. (We would still need to test the
    stratifiability condition, but that's it.) For example, let's say
    we find four SCCs: V1 = {v1, v2}, V2 = {v3, v4}, V3 = {v5, v6},
    V4 = {v7, v8}. Let's say that V1 depends on V4 (only positively),
    V2 depends on V4 (negatively) and V3 depends on V4 (only
    positively). Our algorithm would assign layer 0 to V1, V3 and V4
    and layer 1 to V2. Alternatively, we could go bottom-up in
    topological order and assign layer 0 to V4, layer 1 to V1,
    layer 2 to V2, and layer 3 to V3.

msg8203
Author: malte
Date: 2018-12-08.13:08:11

Added Jendrik, who offered to help with the experiments.

msg8196
Author: malte
Date: 2018-12-08.10:13:04

> Note to self: check if pruning axiom rules where the head occurs in the body
> is already enough to resolve the problems we are having here.
No, it is not. It helps for psr-middle instances #3-#7, but for example not for #8.

msg8190
Author: malte
Date: 2018-12-07.17:57:36

> For 64 runs, the new translator code claims that there is a cyclic dependency
> between atoms that means they cannot be stratified. This should not happen for
> correct PDDL specs, so there is probably a bug in my new code. All of these are
> in psr-middle or psr-large.
This seems to be related to the negated axioms thing (see issue453 and issue454)
and looks difficult to resolve cleanly without dealing with these issues at
least partially. I'll try to think of a way that is not too hacky and then
hopefully work on these two issues soon, so that we can end up with a somewhat
clean solution in the not-too-distant future.
Note to self: check if pruning axiom rules where the head occurs in the body is
already enough to resolve the problems we are having here.

msg8084
Author: malte
Date: 2018-11-15.00:42:54

I've run an experiment to check for which IPC tasks the translator doesn't
behave the same for all four tags (base, v1, v2, v3) and/or the translator fails
to terminate for at least one tag. Sorted output file attached
(different_output_sas.csv).
The analysis is not very informative beyond telling us where we need to look
more closely. The affected domains and numbers of affected tasks by domain are:
- agricola-sat18-strips (3 tasks)
- airport-adl (50 tasks)
- assembly (30 tasks)
- caldera-sat18-adl (6 tasks)
- flashfill-sat18-adl (6 tasks)
- miconic-fulladl (134 tasks)
- organic-synthesis-opt18-strips (13 tasks)
- organic-synthesis-sat18-strips (17 tasks)
- organic-synthesis-split-sat18-strips (1 task)
- psr-large (50 tasks)
- psr-middle (50 tasks)
- trucks-strips (5 tasks)
Of the IPC 2018 domains, in many cases it's probably the translator running out
of time or memory that we're seeing here rather than a genuine difference in
behaviour, but I have yet to check that. With trucks-strips, we know that the
translator behaves nondeterministically because of the invariant synthesis timeout.
The remaining five domains (airport-adl, assembly, miconic-fulladl, psr-large,
psr-middle) are the most suspicious ones.
For 64 runs, the new translator code claims that there is a cyclic dependency
between atoms that means they cannot be stratified. This should not happen for
correct PDDL specs, so there is probably a bug in my new code. All of these are
in psr-middle or psr-large.

msg8042
Author: malte
Date: 2018-11-04.04:24:06

Potential fix pushed to my repository:
https://bitbucket.org/malte/downward/pull-requests/8/issue862/diff
Interestingly, for David's example, the correct minimal layering appears to be
to assign everything to layer 0. I suspect that (part of) the problem is that
our old layering algorithms treated non-derived variables in bodies as if they
were in layer 0, which would needlessly push derived variables to layer 1 when
it looked like a "negated" version of a non-derived variable appeared in the
body of an axiom. (I don't think this was the only issue, but this would explain
why the old layering algorithm would move things to layer 1 even though
everything can actually be placed in a single layer.)
Gabi, I've added you as a reviewer for the pull request, but that doesn't mean
you necessarily have to review it. Let me know if you want to have a look. If
you do have a look, the changes to variable_order.py and sccs.py (new file) may
be easier to review commit-by-commit, as there is a large diff caused by moving
something from variable_order.py to sccs.py, and apart from this one change all
changes to these two files are small.
The fix interacts with the (unfixed) issues issue453 and issue454 in some way,
so the code may lead to additional breakage at the moment, but only in problem
cases that should go away once issue453 and issue454 are addressed. More
specifically, the problem cases are ones where both "v=0" and "v=1" occur in
heads of axioms. We officially don't allow this, but due to our "negated
axioms", I think it may actually happen. I've made notes of this at issue453 and
issue454.
I've pushed two branches, "issue862" and "issue862-debug". The latter has
additional debug output and debug scripts which I don't want to push to the
master repository later on, so I intend to strip them from my bitbucket
repository at some point. But I thought they might be useful to keep at least
for a while until this issue is addressed.
I have created four tags:
- issue862-base: the old code (before working on this issue)
- issue862-v1: the old code with added assertions that test the layering
condition; this is somewhat imperfect due to the interactions with issue453 and
issue454, but for cases like David's it should catch all bad layerings
- issue862-v2: the new layering algorithm with the assertions that test the
layering condition enabled
- issue862-v3: ditto, but with the assertions disabled
I tried to phrase the assertions leniently enough to be correct even before
addressing issue453 and issue454. So they should only fire in cases that are
definitely currently broken. Because of the interaction with issue453 and
issue454, there may currently be cases that are broken which the assertions
overlook.
The next step would be to run the translator on everything and see where the
assertions fire and where the four tags behave differently. I'd like to set up
a lab experiment that allows us to find out whether or not output.sas is
identical in two revisions, but I don't know how to do that. I'll ask the
mailing list.

msg8039
Author: malte
Date: 2018-11-04.00:18:49

I have added more debug code (pushed to the issue862 branch in my bitbucket
repository), and it confirms that the bug is in compute_axiom_layers/find_level.
I have failed to understand the idea behind the algorithm implemented in
find_level, so I suggest reimplementing compute_axiom_layers with a new algorithm.
Here are the basic ideas:
For derived variables u and v, u "depends" on v if there is an axiom with u in
the head and v in the body (with any polarity).
For derived variables u and v, u "NBF-depends" on v if there is an axiom with u
in the head and v in the body with its "negation by failure" value (i.e., the
value that v takes on *unless* a different value is derived). So "NBF-depends"
is a special case of "depends".
We need to satisfy the following constraints:
    layer(u) >= layer(v) whenever u depends on v
    layer(u) >  layer(v) whenever u NBF-depends on v
This is possible iff there is no dependency cycle that includes at least one
NBF-dependency. The minimum possible layers can then be computed as follows:
1. Construct the directed graph G of dependencies.
2. Compute the strongly connected components (SCCs) of G. If there exists any
NBF-dependency between two variables in the same SCC, the axioms are not
stratifiable. Otherwise they are stratifiable.
3. All variables in the same SCC must receive the same layer, so we treat them
as a group from now on, and it remains to assign a layer to each SCC.
4. Compute the SCC dependency graph, i.e., the graph G' whose nodes are the SCCs
of G and which has an arc from U to V if there are variables u in U and v in V
such that u depends on v. We make the SCC dependency graph weighted: the weight
of an edge from U to V is 1 if there is an NBF-dependency between U and V and 0
otherwise.
5. The minimum layer that can be assigned to U is the cost of a longest path
starting at U in G', taking into account the edge weights. ("Longest path" means
the longest path from U to anywhere.) G' is a DAG, which means that we can
efficiently compute longest paths in linear time through a bottom-up
computation, or top-down with memoization.
Does this sound reasonable? Next, I'll try to implement this. We already compute
SCCs elsewhere in the translator, so hopefully that code can be reused.
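A sketch of steps 2-5 in Python (the SCC computation itself is taken as given,
since as just mentioned we already have code for it; the input representation
here is made up for illustration):

from collections import defaultdict

def compute_layers(sccs, deps, nbf_deps):
    # sccs: the SCCs of the dependency graph, each a list of derived
    # variables. deps: u -> iterable of v with "u depends on v".
    # nbf_deps: set of (u, v) pairs with "u NBF-depends on v".
    scc_of = {v: i for i, scc in enumerate(sccs) for v in scc}
    for u, v in nbf_deps:
        if scc_of[u] == scc_of[v]:          # step 2: stratifiability test
            raise ValueError("axioms are not stratifiable")
    succ = defaultdict(dict)                # step 4: weighted SCC digraph
    for u in deps:
        for v in deps[u]:
            su, sv = scc_of[u], scc_of[v]
            if su != sv:
                weight = 1 if (u, v) in nbf_deps else 0
                succ[su][sv] = max(succ[su].get(sv, 0), weight)
    memo = {}
    def layer(s):                           # step 5: longest path from s
        if s not in memo:
            memo[s] = max((w + layer(t) for t, w in succ[s].items()),
                          default=0)
        return memo[s]
    return {v: layer(scc_of[v]) for v in scc_of}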

msg8038
Author: malte
Date: 2018-11-03.22:01:50

I'll have a look at this.

msg8037
Author: malte
Date: 2018-11-03.06:14:20

I've written a function that verifies that the layering condition is satisfied
at the end of handle_axioms. It is perhaps a bit stricter than necessary in
degenerate cases (e.g. it complains about derived variables that never occur in
the head of an axiom), so it may generate false positives, i.e., complain
that something is wrong when nothing is wrong. But I think it will be useful
for debugging regardless.
The diff for this change is attached (test-layering.diff). Apart from adding the
test, it also exits the translator immediately after running the test so that
this can be tested more quickly.
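The attached diff is the authoritative version; in spirit, the check looks
something like this (a sketch with invented attribute names):

def verify_layering(axioms, layer_of, nbf_value):
    # layer_of: derived variable -> assigned layer.
    # nbf_value: derived variable -> its negation-by-failure value.
    for axiom in axioms:
        head_layer = layer_of[axiom.head.var]
        for literal in axiom.body:
            v = literal.var
            if v not in layer_of:
                continue                    # body literal is not derived
            if literal.value == nbf_value[v]:
                # Literal uses v's NBF value: strict layering required.
                assert head_layer > layer_of[v], (head_layer, layer_of[v])
            else:
                assert head_layer >= layer_of[v], (head_layer, layer_of[v])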
I see violations of the layering condition, but only very infrequently. In 1000
runs on the example task provided by David, I only got 5 problematic runs.
Tested like this:
$ for x in {1..1000}; do ./translate.py ~/tmp/domain_2.pddl ~/tmp/p07.opt08.pddl 2>&1 | tee $x.log; done
$ grep Assertion *.log
212.log:AssertionError: (1, 0)
658.log:AssertionError: (1, 0)
694.log:AssertionError: (1, 0)
91.log:AssertionError: (1, 0)
946.log:AssertionError: (1, 0)
Of course this makes things somewhat challenging to debug. :-) With such a
transient error I'm not sure how easy it is to figure out where it comes from. I
suppose it would help to find a way to make it more reproducible, or to figure
out why things are not reproducible to start with.
Fortunately, axiom_rules.py itself looks simple enough, so if the bug is in that
file, we should be able to find it. But I suppose the bug could also be
somewhere else, like in the input to handle_axioms (unlikely?) or in the
underlying data structure for axioms, variables etc. The non-reproducible nature
of the problem suggests that it might have something to do with hash functions,
object addresses or similar things.

msg8036
Author: malte
Date: 2018-11-03.05:00:33

Added the "translator" keyword.

msg8033
Author: gabi
Date: 2018-11-01.13:51:57

I had a brief look into this and indeed there seems to be a problem in the
computation of the topological sorting.
I could not reproduce the issue for the same variables, but for other ones.
The computation is done in compute_axiom_layers in axiom_rules.py.
From what I have seen, the computation of the depends_on dictionary is fine, so
the problem seems to be in find_level(...).

msg8032
Author: gabi
Date: 2018-11-01.13:47:34

This was reported by David Speck via the mailing list.
The problem that occurs in some runs of the translator is that some rules have
variables in the body that are part of a higher level than the rule itself. I
have attached the file output.sas, which is sometimes created.
In line 55138 of the attached file there is the following rule:
begin_rule
2
112 0
62 0
111 1 0
end_rule
which corresponds to ~var111 <= ~var112 & ~var62. The levels of the variables are:
var111 (level 0)
var112 (level 1)
var62 (level 0)
Here the variable in the body (var112) has a higher level than the variable in
the head (var111). I think this violates the stratification rule and hence the
layering property?
In some runs of the translator var111 is assigned to level 1 which results in a
valid stratification. The same problem occurs with similar variables.

Date | User | Action | Args
2018-12-18 12:50:27 | malte | set | status: chatting -> resolved; messages: + msg8379; summary cleared (remaining TODOs moved to issue454)
2018-12-13 19:28:01 | malte | set | messages: + msg8300
2018-12-13 19:25:57 | jendrik | set | messages: + msg8299
2018-12-13 19:20:11 | malte | set | messages: + msg8298
2018-12-13 14:57:49 | gabi | set | messages: + msg8287
2018-12-12 20:18:30 | malte | set | messages: + msg8274
2018-12-12 18:57:42 | malte | set | messages: + msg8272
2018-12-12 17:25:08 | jendrik | set | messages: + msg8268
2018-12-11 17:05:04 | malte | set | messages: + msg8235
2018-12-11 11:10:28 | jendrik | set | messages: + msg8230
2018-12-11 09:11:09 | jendrik | set | messages: + msg8225
2018-12-10 21:12:22 | jendrik | set | messages: + msg8221
2018-12-10 15:04:55 | malte | set | messages: + msg8213
2018-12-09 08:56:14 | jendrik | set | messages: + msg8211; summary rewrapped (content unchanged)
2018-12-08 13:45:52 | malte | set | messages: + msg8209
2018-12-08 13:37:35 | malte | set | messages: + msg8208; summary: first TODO item struck through and marked "(Done in v4.)"
2018-12-08 13:23:27 | malte | set | messages: + msg8205; summary created:
    TODOs:
    - In the computation of axiom layers, ignore rules whose head is the NBF
      value of the derived variable. Then rerun the experiments.
    - Try moving the computation of axiom layers earlier, having everything
      related to negated axiom rules at the end. Then simplify the layer
      computation code to take into account that it no longer has to deal
      with negated heads, and enable the test that verifies the layering
      condition properly, removing all the hacks that are currently there
      because of the negated rules. Add assertions that there are no negated
      heads. Then rerun the experiments.
    - Open an issue for the bug with compute_necessary_axiom_literals
      (see msg8204).
    - Make a plan for further steps for the axiom code. See the various
      notes etc. in msg8204.
2018-12-08 13:19:06 | malte | set | messages: + msg8204
2018-12-08 13:08:11 | malte | set | nosy: + jendrik; messages: + msg8203
2018-12-08 10:13:04 | malte | set | messages: + msg8196
2018-12-07 17:57:36 | malte | set | messages: + msg8190
2018-11-15 00:42:54 | malte | set | files: + different_output_sas.csv; messages: + msg8084
2018-11-04 04:24:06 | malte | set | messages: + msg8042
2018-11-04 00:18:49 | malte | set | messages: + msg8039
2018-11-03 22:01:50 | malte | set | assignedto: malte; messages: + msg8038
2018-11-03 06:14:20 | malte | set | files: + test-layering.diff; messages: + msg8037
2018-11-03 05:00:33 | malte | set | messages: + msg8036; keyword: + translator
2018-11-02 22:36:06 | malte | set | nosy: + malte
2018-11-01 13:51:57 | gabi | set | status: unread -> chatting; messages: + msg8033
2018-11-01 13:48:29 | gabi | set | files: + p07.opt08.pddl
2018-11-01 13:48:06 | gabi | set | files: + domain_2.pddl
2018-11-01 13:47:34 | gabi | create |