Issue348

Title:       Rethink how and when to unpack states
Priority:    wish
Status:      in-progress
Superseder:  (none)
Nosy List:   andrew.coles, cedric, florian, gabi, guillem, jendrik, johannes, malte
Assigned To: florian
Keywords:    (none)

Created on 2012-08-30.13:28:12 by florian, last changed by jendrik.

Summary
TODOs for prototype implementation:
1. ~~Return EvaluationContext instead of registered state in search engines.~~
2. ~~Check for additional copies added by the change so far.~~
3. Unpack state data on demand.


TODOs for final implementation:
1. Check that goalcount heuristic benefits from the changes in this issue 
(converting GlobalStates to States made it slower, see issue554).
Files
File name                      Uploaded                      Type
branch.callgrind.out           florian, 2019-06-05.10:22:07  application/octet-stream
callgrind.airport.p09.default  florian, 2014-10-08.17:34:34  application/octet-stream
callgrind.airport.p09.patched  florian, 2014-10-08.17:34:50  application/octet-stream
default.callgrind.out          florian, 2019-06-05.10:22:20  application/octet-stream
Messages
msg8920 (view) Author: jendrik Date: 2019-06-19.09:18:50
Here are the results.

base (05d98c9): start of the branch
v13 (2f7831a): "rename interface for unpacked states"
v14 (cba70832): "keep unpacked and packed data synchronized instead of unpacking
at the end of successor generation"
v19 (540e85c): branch tip ("Allow to generate unregistered successor states on
tasks with axioms")

v13 vs. 14:
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v14-blind-issue348-v13-issue348-v14-compare.html
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v14-blind-issue348-v13-issue348-v14-total_time-blind-strips.png
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v14-blind-issue348-v13-issue348-v14-total_time-blind-adl.png

base vs. v19:
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v19-blind-issue348-base-issue348-v19-compare.html
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v19-blind-issue348-base-issue348-v19-total_time-blind-strips.png
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v19-blind-issue348-base-issue348-v19-total_time-blind-adl.png
msg8919 (view) Author: jendrik Date: 2019-06-18.15:10:28
Experiments are running (see msg8911).
msg8915 (view) Author: florian Date: 2019-06-17.11:14:54
The pull request at
https://bitbucket.org/FlorianPommerening/downward-issues/pull-requests/59
is the one that we made for review. It contains the changes from the prototype
branches that we want to keep. The other pull requests were for the prototype
branches and we only used them while working on them to keep track of the
changes. There is also pull request 60, which was just there to check whether we missed
anything important in the pull request above (see msg8904).
msg8914 (view) Author: malte Date: 2019-06-17.10:59:20
Which pull request have you been reviewing? There seem to be multiple open ones.
I'm not entirely sure which one the comments refer to.

> * Check how "lightweight" classes such as StateHandle are passed around as
> function parameters. Didn't see any reference to that in the coding guidelines,
> but StateHandle is ATM being passed in most (all?) of the new code by value, not
> by const ref.

I don't think we have a written-down policy for this. In principle, very small
objects can be passed by value more efficiently than by const reference, partly
because there is less to pass, but more importantly because indirection can be
reduced and the compiler can make more optimization assumptions. For example, we
wouldn't want to pass an int by const reference. But it is not easy to come up
with general rules that are easy to understand and avoid code churn.
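
To make this concrete, here is a minimal hypothetical example (these are not the
classes from the pull request):

    struct SmallHandle {
        const void *registry;  // one pointer, 8 bytes on a 64-bit build
        int id;                // 4 bytes
    };

    // By value: the whole struct fits into registers, no indirection at the call site.
    int evaluate_by_value(SmallHandle handle);

    // By const reference: the callee receives an address and has to dereference it,
    // and the compiler has to be more conservative about aliasing.
    int evaluate_by_ref(const SmallHandle &handle);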

> * The SearchNode class now holds a unique_ptr to the unpacked State (which acts
> as a cache for the calls to SearchNode::get_state). We should perhaps evaluate
> the memory impact of this (I would expect 8 bytes times the number of nodes,
> which could be manageable, but still)

I didn't see such a change (which is one reason why I'm asking which pull
request you are referring to), but for what it's worth: SearchNode is a
short-lived temporary class. I don't think we have more than five or so
SearchNode instances alive at the same time. An overhead of 8 bytes per state is
not something we would introduce without a very strong need.
msg8913 (view) Author: guillem Date: 2019-06-17.10:37:44
Cedric and I went through the proposed changes in the prototype and wrote down a few notes that might (or might not :-)) be 
of some use:

* The State class now contains some redundancy: StateHandle contains a pointer to the registry, and the AbstractTask pointer 
contains that as well.
* Florian was not too happy about the introduction of the StateHandle class - perhaps this could be avoided.
* The State class seems to have too many constructors, some could probably be refactored in terms of others.
* The comments about design decisions related to the State class in task_proxy.h should be revised / updated.
* The modifications in StateRegistry::get_successor_state might be introducing some performance overhead; this should be
checked. In particular, both the parallel update of the packed and unpacked state and the way axioms are evaluated.
* Check whether the class EvaluatorCache is still needed - it seems like a very thin wrapper around EvaluationResults that
provides very little extra functionality.
* Check how "lightweight" classes such as StateHandle are passed around as function parameters. Didn't see any reference
to that in the coding guidelines, but StateHandle is ATM being passed in most (all?) of the new code by value, not
by const ref.
* The SearchNode class now holds a unique_ptr to the unpacked State (which acts as a cache for the calls to 
SearchNode::get_state). We should perhaps evaluate the memory impact of this (I would expect 8 bytes times the number
of nodes, which could be manageable, but still)
msg8911 (view) Author: florian Date: 2019-06-14.21:55:35
The first two points are about the performance and surprised me a lot because
they seem to directly contradict our previous experiments. I ran blind search on
gripper/prob07 locally and it took 27 seconds, roughly 10% slower than on the
default branch.

Jendrik, could you run an experiment comparing revision cba70832 to its parent
(2f7831a) and create relative scatter plots for tasks with axioms and tasks
without axioms separately? This way we could have a look at the change for
axioms in isolation.

Could you also run an experiment for the tip of the issue (540e85c) vs. the
start of the branch (05d98c9) to see the overall impact?

I'll start another experimental branch off of the issue branch to try out the
unpacking on demand.
msg8910 (view) Author: florian Date: 2019-06-14.21:50:27
I went through the changes again and tried to collect the rough edges and open
questions. 
I came up with the following list:

1) Tasks with axioms now do an additional copy of the unpacked state data.
   msg7500 concluded that unpacking the state for the axiom evaluation was not
   worth it. The new code doesn't unpack but it copies the full state data,
   evaluates the axioms on it, then packs it again. From my understanding of
   packing and unpacking, this should take longer than unpacking. I don't see
   how we save time with this change.

2) msg8880 showed a performance drop when unpacking every generated state but we
   currently do that. Why is this not a problem anymore?

3) Successor state creation is implemented twice, once for packed and once for
   unpacked data. Can we avoid this duplication?

4) State::operator[] returns a FactProxy where GlobalState::operator[] returned
   an int. Do we want to change this?

5) There are now two ways to get to the ID of a state: s.get_id() and
   s.get_handle().get_id(). This smells of bad design but using the second one
   everywhere also seems bad.

6) We use unique_ptr<T> for data that is created on demand (SearchNode::state and
   State::values). Should we use optional<T> instead? (A small sketch of the two
   alternatives follows below.)
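
Regarding point 6, a minimal sketch of the two alternatives (member names made up,
this is not the code in the branch; std::optional assumes C++17):

    #include <memory>
    #include <optional>
    #include <vector>

    struct CacheWithUniquePtr {
        // 8 bytes while empty; the vector object lives on the heap once created.
        // A unique_ptr member also makes the owning class move-only unless copy
        // operations are written by hand.
        std::unique_ptr<std::vector<int>> values;
    };

    struct CacheWithOptional {
        // Stores the vector object inline (roughly three pointers plus a flag),
        // avoids the extra heap allocation for the wrapper, and keeps the default
        // copy behaviour of the owning class.
        std::optional<std::vector<int>> values;
    };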
msg8906 (view) Author: jendrik Date: 2019-06-14.15:07:55
I reviewed the individual commits and only had a few minor comments. I think
unpacking the state data on demand could be deferred to a follow-up issue.
msg8904 (view) Author: florian Date: 2019-06-14.13:58:46
I turned the prototype branch into a pull request for this issue by splitting it
into 20 commits that are hopefully easier to review. I left out some stuff that
I think we don't need any more.


Here is the pull request from the issue branch to default. I recommend looking
at the individual commits instead of the full diff.
https://bitbucket.org/FlorianPommerening/downward-issues/pull-requests/59

Here is a pull request from the prototype to the issue branch. It shows the
left-overs, i.e., the parts I didn't include in the new issue branch. Guillem
and Cedric are currently looking over it to see if I missed anything important.
https://bitbucket.org/FlorianPommerening/downward-issues/pull-requests/60


I think there are still quite a few open points to address before we can
consider merging. The main things for me are:

* We currently have two functions that compute successor states, with very
similar code. We should try to get rid of one of them.
* The StateHandle class is not that nice, especially if we keep getters for both
handle and ID in the state. 
* I'm not convinced that copying the full state, then evaluating the axioms on
the unpacked data, and packing everything again is worth it (over evaluating
axioms on packed data and then unpacking). We should test that.


We might be able to remove the StateHandle again if we switch the state class to
contain only the handle as required data and treat the buffer lookup and the
state unpacking as optional cached data.
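
A rough sketch of that idea (hypothetical member layout, not the code in the pull
request; it reuses the StateHandle and PackedStateEntry names from the branch):

    class State {
        StateHandle handle;                                // required data
        // Caches, filled in on first use:
        mutable const PackedStateEntry *buffer = nullptr;  // packed buffer lookup
        mutable std::unique_ptr<std::vector<int>> values;  // unpacked state data
    public:
        explicit State(StateHandle handle) : handle(handle) {}
    };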
msg8891 (view) Author: jendrik Date: 2019-06-12.18:18:38
For completeness, here are some additional results:

https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-all-unpacked-exp1.html

v1: store state data packed in registry and unpacked in State
v2: remove copy and assignment constructors for State
v3: store state in SearchNode to avoid unpacking it again
v4: compute successors in registry
v5: set effects in packed and unpacked state data simultaneously

total search_time scores:
v1: 568.65
v2: 568.38
v3: 570.45
v4: 571.56
v5: 577.54
msg8888 (view) Author: jendrik Date: 2019-06-12.17:25:30
I'm done making the changes we discussed offline. The new pull request is at
https://bitbucket.org/FlorianPommerening/downward-issues/pull-requests/57 .
msg8880 (view) Author: jendrik Date: 2019-06-12.13:53:23
We suspect that the problem slowing down blind search is that all generated
states have to be unpacked. The following experiment confirms this hypothesis:

https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-base-exp1.html

v1: default
v2: make dummy copy of each evaluated State
v3: add dummy unpacking for each expanded GlobalState
v4: add dummy unpacking for each generated GlobalState

total search_time scores:
v1: 578.82
v2: 578.01
v3: 576.58
v4: 568.57
msg8822 (view) Author: jendrik Date: 2019-06-05.21:26:08
I ran an experiment evaluating the changes we made today, but the changes have
no noticeable effect:

https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v3-opt.html
msg8809 (view) Author: jendrik Date: 2019-06-05.12:37:42
Update summary.
msg8806 (view) Author: florian Date: 2019-06-05.10:22:07
I'm attaching two profiles of gripper/prob06 in the default branch and in our
prototype. It looks like the main place where we lose time is in
get_successor_state. In the default branch, this was handled by the state
registry and it created the successor state in the packed buffer. The packed
buffer of the predecessor was copied and then modified. In the new code, we move
the successor generation out of the registry and now copy the unpacked state,
modify the copy, then pack it (copies it again) and register it (copies the
packed buffer again). So instead of one copy we now have three.
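
As a rough sketch (invented helper types and names, not the actual interfaces), the
difference between the two code paths looks roughly like this:

    #include <vector>

    using PackedBuffer = std::vector<unsigned int>;  // stand-in for the packed data
    using UnpackedValues = std::vector<int>;         // stand-in for unpacked data

    // Default branch: copy the parent's packed buffer once and modify it in place.
    PackedBuffer successor_default(const PackedBuffer &parent_packed) {
        PackedBuffer succ = parent_packed;               // copy 1 (the only copy)
        // ... apply effects and evaluate axioms directly on the packed buffer ...
        return succ;
    }

    // Prototype: copy the unpacked values, pack them, then copy the packed buffer
    // into the registry's pool.
    void successor_prototype(const UnpackedValues &parent_values,
                             std::vector<PackedBuffer> &registry_pool) {
        UnpackedValues succ = parent_values;             // copy 1: unpacked data
        // ... apply effects and evaluate axioms on the unpacked data ...
        PackedBuffer packed(succ.begin(), succ.end());   // copy 2: pack the result
        registry_pool.push_back(packed);                 // copy 3: register it
    }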
msg8805 (view) Author: jendrik Date: 2019-06-04.23:40:37
We made some progress on the prototype and ran some experiments comparing the
prototype to the default branch. Here are the results:

https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v1-opt-issue348-v1-base-issue348-v1-compare.html
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue348-v1-sat-issue348-v1-base-issue348-v1-compare.html

The search time score decreases for most configurations, so we should profile
the runtime to see where the slowdown comes from.
msg8791 (view) Author: cedric Date: 2019-06-03.17:46:14
The current plan is to store the state in the state registry in its packed form
and unpack it everywhere else. This means that we will merge the functionality
of the classes State and GlobalState into one class (called State). State
objects can be registered or unregistered but they are always unpacked. This has
some wide-ranging implications because copying a state is now an expensive
operation. We thus have to think carefully about how to pass states to where
they are used. The current plan is to store them inside an EvaluationContext and
move them where possible.
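
A small sketch of what the calling code might look like under this plan (the exact
signatures are assumptions, not the final interface):

    // Copying an unpacked State means copying its vector of values, so the search
    // hands the state over to the EvaluationContext instead of copying it.
    State successor = state_registry.get_successor_state(parent, op);
    EvaluationContext eval_context(std::move(successor), node.get_g(), false, &statistics);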

We started working on a separate branch to test the effects of this change. Once
we are happy with it, we can decide which parts of it we want to use. 
https://bitbucket.org/FlorianPommerening/downward-issues/pull-requests/51
msg7500 (view) Author: florian Date: 2018-09-17.22:00:21
In issue794 we tested if it's worth unpacking the state just for the axiom
evaluation. The answer apparently is "no" (msg7498, msg7499). While the axiom
evaluator is much faster on unpacked data, the unpacking and packing offsets
this benefit. Since we already unpack the state in most heuristics, it would
probably be worth it if we can unpack every state when it is taken out of
storage and pack it when it is stored.
msg4337 (view) Author: florian Date: 2015-07-07.22:20:58
@Gabi: I'm adding you to the nosy list because you are working on issue545 and
this might be relevant. Feel free to undo this.

This issue has been lying around for a while again. In the meantime the
situation has changed once again. We currently have two state classes:

There is GlobalState which has an id, is used in the search, by StateRegistry,
and PerStateInformation but does not know anything about the new task interface.
GlobalState objects are immutable and hold a pointer to their data (stored in a
StateRegistry). Each access to variable values is packed/unpacked on-the-fly
individually. In the long run, we want the global task to disappear, which means
we also have to get rid of GlobalStates.

Then there is the class State which contains unpacked state data. State objects
know about the task interface and can generate FactProxy objects for it. Access
to variable values is a direct array access, but copying States is expensive.
States can currently be created from GlobalStates by the intermediary helper
method convert_global_state. States are currently used within heuristics (those
that already are updated to the new task interface). Operator application is
implemented for them, but does not support axioms.

The problem of packing/unpacking currently exists for both state classes:
GlobalState is always packed, which makes evaluating axioms slow when generating
successors in the search algorithms. State is always unpacked, which is a problem
for heuristics like the blind heuristic or PDBs that only access very few facts.
For example, if we currently generate a successor and then evaluate a pdb
heuristic for it, the following happens:

1. StateRegistry::get_successor_state reserves space for packed state data and
copies the current packed state data there.
2. It then loops through the effects of the operator and checks for each one, if
it fires. To do that, it unpacks each value individually. The effects are then
set in the new packed state individually as well.
3. The AxiomEvaluator is called on the new packed state data and again accesses
each value individually (read and write access).
4. A GlobalState object is created that points to the new packed state data.
5. A State object is created by accessing/unpacking each value individually and
storing the result in an int array.
6. The PDB accesses the values for variables in its pattern.

This is inefficient mostly in step 3 (evaluating axioms would be faster on
unpacked states) and steps 5/6 (the PDB only needs a few values and should not
unpack the state).
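
To make the step 5/6 point concrete, here is a hypothetical sketch of a PDB lookup
that only reads the variables in its pattern and therefore would not need a fully
unpacked state (all names invented for illustration):

    #include <cstddef>
    #include <functional>
    #include <vector>

    // Computes the abstract state index from the pattern variables only, so only
    // |pattern| values have to be extracted from the (packed) state.
    int lookup_pdb_value(const std::vector<int> &pattern,
                         const std::vector<std::size_t> &hash_multipliers,
                         const std::vector<int> &distances,
                         const std::function<int(int)> &get_value) {
        std::size_t index = 0;
        for (std::size_t i = 0; i < pattern.size(); ++i)
            index += hash_multipliers[i] * get_value(pattern[i]);
        return distances[index];
    }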

Malte mentioned in msg4326:
> One possibility is to replace the current concrete State class with
> an abstract interface with unpacking and non-unpacking implementations.

Gabi currently also works on issue545 which will change the state class(es) once
again.
msg3741 (view) Author: malte Date: 2014-10-09.10:48:52
Sure, sounds good.
msg3728 (view) Author: florian Date: 2014-10-08.20:54:46
What do you suggest as a next step? You pointed out a problem in the code review
which accounts for some of the lost runtime, but it cannot account for the whole
difference. Should we have a more detailed look at the profile together?
msg3721 (view) Author: malte Date: 2014-10-08.17:40:59
It makes a certain amount of sense that this causes overhead, but it's
surprising that it's so large and that it varies so much from task to task
(little difference for some tasks, more than five times the runtime for others).
Perhaps there's something special about these airport tasks, such as a high rate
of duplicates.
msg3720 (view) Author: florian Date: 2014-10-08.17:34:34
I ran callgrind on airport/p09 but did not have a close look at the results. On
first glance it looks like pack/unpack just takes too much time. With the blind
heuristic, we unpack every value even though we do not look at it during the
heuristic calculation.
msg3717 (view) Author: florian Date: 2014-10-08.13:04:42
It might take me a while to get to this but I'll look at the profile for the
Airport tasks.

The code is in my bitbucket repository. Merging in the new default should have
updated the existing pull request, so it now shows the diff of the versions we
compared:
https://bitbucket.org/flogo/downward-issues-issue348/pull-request/1/always-unpack-state/diff
msg3713 (view) Author: malte Date: 2014-10-08.12:02:03
The slowdown in, for example, some Airport tasks with blind search is quite
large. Perhaps you can profile one of these? I'd be interested in having a look
at the code, ideally a comparison of the two versions compared in the
experiment. Can you set up a code review for this delta?
msg3700 (view) Author: florian Date: 2014-10-07.21:25:08
Results for the optimal suite are in (40MB):

http://ai.cs.unibas.ch/_tmp_files/pommeren/issue348-v3-base-issue348-v3-compare.html

Coverage and memory look good, but everything seems to run a little slower, and
in some cases a lot slower. In the last reports we also compared to the code
before issue214 (packed states), this is not done here.

What do you think would be a good next step (more reports, experiments for sat,
profiling, ...)?
msg3670 (view) Author: florian Date: 2014-10-06.12:30:49
I will merge in the new code and re-run the experiments.
msg3638 (view) Author: malte Date: 2014-10-04.19:49:35
It would be good to make progress on this issue -- I think it's an important one.

Can I do anything to help get this one moving again? Also, perhaps we should run
some new experiments that give before/after numbers against the newest code
version? The original experiments are now 4-5 months old; we've had many changes
since then.
msg3407 (view) Author: johannes Date: 2014-09-03.13:23:02
This issue has implications for my numeric fast downward extension too. I
currently tag variables in preprocess with "constant" (contains the value of a
primitive numeric expression such as road length) "derived" (result of a numeric
axiom) "instrumentation" (variable never occurs in preconditions -> only
relevant to compute metrics) and "state-relevant".

The state-relevant variables have to be packed, while states that only differ in
the instrumentation variables can be considered the same state. Constants can be
stored globally once (they never change) and the handling of derived variables
has multiple possible solutions. Following Florian, the current solution would be
to evaluate all axioms when unpacking the state and not store them in the packed
buffer (and also do not evaluate them on the fly).
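
For reference, the four categories could be written down as something like this (a
hypothetical sketch, not code from the extension):

    enum class VariableCategory {
        CONSTANT,         // value of a primitive numeric expression, never changes
        DERIVED,          // result of a numeric axiom
        INSTRUMENTATION,  // never occurs in preconditions, only used for the metric
        STATE_RELEVANT    // has to be packed into the state and used for duplicate checks
    };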
msg3188 (view) Author: florian Date: 2014-05-23.16:55:50
We still have an open issue in the code, maybe you could have a look. It's the
TODO in state_registry.cc: the question is whether we can avoid copying the
state data by reserving uninitialized memory in the pool and then packing the state
into this memory.
(https://bitbucket.org/flogo/downward-issues-issue348/pull-request/1)

Apart from that, we wanted to have a closer look (i.e. profile) at iPDB. It
seems to be the config where the state packing has the largest negative effect.
msg3187 (view) Author: malte Date: 2014-05-23.15:55:06
Thanks! What's the next step?
msg3185 (view) Author: florian Date: 2014-05-23.15:22:07
I ran full experiments after Jendrik's review for optimal domains:

http://ai.cs.unibas.ch/_tmp_files/pommeren/issue348-v2-opt.html

The full results for satisficing domains are for the code version before the
review, but this should be fine, because we mostly changed minor things in the
review:

http://ai.cs.unibas.ch/_tmp_files/pommeren/issue348-v1full-sat.html

Both reports show a lot of failed merge and shrink configs because the
configuration file in lab was out of date.

Finally, I tried to create a report that only shows runs with a significant
change in total time. It compares the code before issue214 to the code after
this issue. It excludes a run if it wasn't solved in both configs, or it was
solved in both configs in under 0.5 seconds. The remaining runs are only
included if their runtime difference is at least 20%.

http://ai.cs.unibas.ch/_tmp_files/pommeren/notablechangesbyrevisionreport.html#total_time
msg3180 (view) Author: florian Date: 2014-05-21.17:53:47
Argh, yes! The reports had the same name and the second one replaced the first.
Looked ok in my browser because I checked the first before uploading the second.
Let me try again:

http://ai.cs.unibas.ch/_tmp_files/pommeren/issue348-v1-opt.html
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue348-v1-sat.html
msg3179 (view) Author: malte Date: 2014-05-21.17:49:49
Numbers look good! Did you want to give two different experiment links?
msg3178 (view) Author: florian Date: 2014-05-21.17:42:44
I did a first round of experiments for unpacking in every state and invited
Jendrik to the following pull request:

https://bitbucket.org/flogo/downward-issues-issue348/pull-request/1

If anyone else wants to have a look, let me know.

The experiments only test the domains and configs Malte suggested in msg3049. We
should run a larger set of benchmarks before the final merge.

http://ai.cs.unibas.ch/_tmp_files/pommeren/issue348-base-issue348-v1-compare.html
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue348-base-issue348-v1-compare.html
msg3092 (view) Author: florian Date: 2014-03-28.12:00:38
Some notes from our last meeting:

As a first step, we want to completely unpack the state everywhere outside the
state registry, i.e., only pack it for storage.

As I said offline, I'll finish some other things first and will then work on
this issue if it is still open. I'll leave the issue unassigned until I actually
have time to work on it in case anyone else wants to work on it in the meantime.
msg3082 (view) Author: malte Date: 2014-03-24.13:32:26
Any volunteers to lead this? I'm happy to give advice regarding the design etc.
msg3065 (view) Author: malte Date: 2014-03-15.16:31:52
Andrew reports (in msg3061 and msg3063 of issue214) a nice speed improvement for
blind search in the tidybot domain by inlining the get/set methods for the
packed states. See msg3061 for details.

We should consider the question of inlining or not inlining in the context of
this issue, since

1) inlining speeds up access to packed states, making unpacking less necessary
2) inlining might be less beneficial if we unpack states in one go, since there
will be fewer calls to get/set.

My suspicion regarding 1) is that we will want to unpack even if we inline calls
to get/set. Regarding 2), we should measure and look at particularly bad
domains/configurations.
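
Roughly, the inlining question is whether the shift-and-mask code is visible in the
header so the compiler can inline it at every call site. A hypothetical sketch (not
the actual packer code):

    // If get_packed_value is defined in a header, each variable access compiles down
    // to a few shift/mask instructions; if it is defined in a .cc file (and link-time
    // optimization is off), every access also pays a function call.
    struct PackedVarInfo {
        int bin_index;       // which word of the buffer holds the variable
        unsigned int shift;  // bit position inside that word
        unsigned int mask;   // mask for the variable's value range
    };

    inline int get_packed_value(const unsigned int *buffer, const PackedVarInfo &info) {
        return static_cast<int>((buffer[info.bin_index] >> info.shift) & info.mask);
    }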
msg3049 (view) Author: malte Date: 2014-03-14.01:33:48
For these and the related issues on the state space, I suggest we identify a set
of domains and configurations that are particularly worth watching, and limit
our initial experiments to this smaller subset. (Or, if you prefer to run all
experiments on all benchmarks and configs, that's also fine, but I'd like to
additionally have result tables that only talk about the "interesting subset".)

I find it very hard to find problem cases in the 100 MB tables, since the
summaries don't nearly tell the whole story. For example, we know that for this
issue we mainly need to watch total time, and the total time summaries are
rather useless with a large set of configs because the set of commonly solved
instances becomes tiny. I would never have found h^CG + PSR-Large as a problem
case if I hadn't been looking for it specifically in the detailed tables.

Here are my candidates for configs to watch:
- A* + blind
- A* + h^2
- A* + ipdb
- eager_greedy + h^CG (presumably with preferred ops? whatever the config in our
satisficing experiment for issue214 was)
- LM-Cut
- BJOLP

(The last two are not really problem cases, but it's good to have something not
expected to be problematic to compare against.)

Here are my candidates for domains:
- tidybot (e.g. the opt variant)
- pegsol (e.g. opt-11, but I expect pegsol-2008 should work equally well)
- psr-large

There may be other critical cases I am missing. Ideally, I'd be interested in a
list of cases (config, instance) where total time is at least 0.5 seconds for
the old code and increases by at least 20% in the new code in the experiments of
issue214. Can someone prepare this, or if not, can someone make the properties
files for the two main experiments (opt and sat) from this issue available?
(Actually, it'd be nice to have the properties files either way.)
msg3043 (view) Author: jendrik Date: 2014-03-13.17:02:57
Rephrased title to make this issue more general. Originally this was only about the axiom evaluator.

Quoting Malte from issue214:

"""
There is one outstanding point ...: the
question if, when and where to unpack the state information. We already have
issue348 related to the axiom evaluator that discusses this point, but this
should not be limited to the axiom evaluator, as our results for heuristics that
access state information many times show.

I suggest we [...] deal with it soon, either straight
away or after issue420, which should be quick. We can take the information from
msg2994 and msg3004 to identify the most interesting configurations and
benchmarks for this, and msg3001 has some additional considerations regarding
IDA* etc. that might help guide the code design for the new issue.
"""
msg3041 (view) Author: jendrik Date: 2014-03-13.01:10:08
Other related things we should take care of:

1) The initial state representation contains unevaluated axiom values.

2) Update the comment above the declaration of g_initial_state_data in globals.h 
once 1) is done.
msg3033 (view) Author: florian Date: 2014-03-06.16:10:08
After we finished issue214 and issue420, we should try to refactor the axiom
evaluator. States should be unpacked before axioms are evaluated. This would
mean that the axiom evaluator works on unpacked state data and we could move the
typedef for PackedStateEntry into the state class and make it private. We should
then also make sure that axioms are evaluated on the global variable holding the
initial state data. Currently this is done only when the initial state object is
created by the state registry, which is confusing because the global variable
name is not what it seems.
msg2327 (view) Author: florian Date: 2012-08-30.13:28:12
From Malte's comment in Rietveld:
> The way that the axiom evaluator accesses the state at the moment will
probably need some additional thought at some point.

At the moment the evaluator accesses the State class directly and changes the
state variables of it. In the patch for issue344 this is changed to a solution
where the evaluation works directly on the state_var_t array and is called in
the constructor for successor states and in the named constructor for the
initial state.
History
Date                 User          Action
2019-06-19 09:18:50  jendrik       set  messages: + msg8920
2019-06-18 15:10:28  jendrik       set  messages: + msg8919
2019-06-17 11:14:54  florian       set  messages: + msg8915
2019-06-17 10:59:20  malte         set  messages: + msg8914
2019-06-17 10:37:44  guillem       set  nosy: + guillem; messages: + msg8913
2019-06-14 21:55:35  florian       set  messages: + msg8911
2019-06-14 21:50:27  florian       set  messages: + msg8910
2019-06-14 15:07:55  jendrik       set  messages: + msg8906
2019-06-14 13:58:46  florian       set  messages: + msg8904
2019-06-12 18:18:38  jendrik       set  messages: + msg8891
2019-06-12 17:25:30  jendrik       set  status: chatting -> in-progress; messages: + msg8888
2019-06-12 13:53:23  jendrik       set  messages: + msg8880
2019-06-05 21:26:08  jendrik       set  messages: + msg8822
2019-06-05 12:37:42  jendrik       set  messages: + msg8809;
                                        summary: TODOs for prototype implementation: 1. Return EvaluationContext
                                        instead of registered state in search engines. 2. Check for additional copies
                                        added by the change so far. TODOs for final implementation: 1. Check that
                                        goalcount heuristic benefits from the changes in this issue (converting
                                        GlobalStates to States made it slower, see issue554).
                                        -> TODOs for prototype implementation: 1. ~~Return EvaluationContext instead
                                        of registered state in search engines.~~ 2. ~~Check for additional copies
                                        added by the change so far.~~ 3. Unpack state data on demand. TODOs for final
                                        implementation: 1. Check that goalcount heuristic benefits from the changes in
                                        this issue (converting GlobalStates to States made it slower, see issue554).
2019-06-05 10:22:20  florian       set  files: + default.callgrind.out
2019-06-05 10:22:07  florian       set  files: + branch.callgrind.out; messages: + msg8806
2019-06-04 23:40:37  jendrik       set  messages: + msg8805
2019-06-03 17:46:14  cedric        set  messages: + msg8791;
                                        summary: TODOs (incomplete): 1. Check that goalcount heuristic benefits from
                                        the changes in this issue (converting GlobalStates to States made it slower,
                                        see issue554).
                                        -> TODOs for prototype implementation: 1. Return EvaluationContext instead of
                                        registered state in search engines. 2. Check for additional copies added by
                                        the change so far. TODOs for final implementation: 1. Check that goalcount
                                        heuristic benefits from the changes in this issue (converting GlobalStates to
                                        States made it slower, see issue554).
2019-06-03 14:05:41  cedric        set  nosy: + cedric
2018-09-17 22:00:21  florian       set  messages: + msg7500
2015-10-29 14:02:12  jendrik       set  summary: TODOs (incomplete): 1. Check that goalcount heuristic benefits from
                                        the changes in this issue (converting GlobalStates to States made it slower,
                                        see issue554).
2015-07-07 22:20:58  florian       set  nosy: + gabi; messages: + msg4337
2014-10-09 10:48:52  malte         set  messages: + msg3741
2014-10-08 20:54:46  florian       set  messages: + msg3728
2014-10-08 17:40:59  malte         set  messages: + msg3721
2014-10-08 17:34:50  florian       set  files: + callgrind.airport.p09.patched
2014-10-08 17:34:34  florian       set  files: + callgrind.airport.p09.default; messages: + msg3720
2014-10-08 13:04:42  florian       set  messages: + msg3717
2014-10-08 12:02:03  malte         set  messages: + msg3713
2014-10-07 21:25:09  florian       set  messages: + msg3700
2014-10-06 12:30:49  florian       set  messages: + msg3670
2014-10-04 19:49:35  malte         set  messages: + msg3638
2014-09-03 13:23:02  johannes      set  nosy: + johannes; messages: + msg3407
2014-05-23 16:55:50  florian       set  messages: + msg3188
2014-05-23 15:55:06  malte         set  messages: + msg3187
2014-05-23 15:22:07  florian       set  messages: + msg3185
2014-05-21 17:53:47  florian       set  messages: + msg3180
2014-05-21 17:49:49  malte         set  messages: + msg3179
2014-05-21 17:42:44  florian       set  messages: + msg3178
2014-05-21 14:46:09  florian       set  assignedto: florian
2014-03-28 12:00:38  florian       set  messages: + msg3092
2014-03-24 13:32:26  malte         set  messages: + msg3082
2014-03-15 16:31:52  malte         set  messages: + msg3065
2014-03-15 16:27:05  andrew.coles  set  nosy: + andrew.coles
2014-03-14 01:33:48  malte         set  messages: + msg3049
2014-03-13 17:02:57  jendrik       set  messages: + msg3043;
                                        title: Rethink the way that the axiom evaluator accesses the state
                                        -> Rethink how and when to unpack states
2014-03-13 01:10:08  jendrik       set  nosy: + jendrik; messages: + msg3041
2014-03-06 16:10:08  florian       set  status: unread -> chatting; messages: + msg3033
2012-08-30 13:28:12  florian       create