msg3831 (view) |
Author: silvan |
Date: 2014-10-15.22:31:33 |
|
I am totally happy with merging, which I just did.
|
msg3830 (view) |
Author: malte |
Date: 2014-10-15.18:22:35 |
|
From the other data (especially on time), it looks like we're just unlucky, so
I'm not concerned about performance.
Regarding code review vs. continuing with other things, how about we just merge
this? I think the change is an improvement, and even if we later see things we
can improve further, I don't think this has to hold us up now. In other words,
we don't have to review this change further now if we keep open the option of
reviewing the whole code (or the most interesting parts of it) later.
|
msg3828 (view) |
Author: silvan |
Date: 2014-10-15.17:53:45 |
|
Here they are:
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-14-issue477-v9-cggl-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-14-issue477-v9-dfp-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-14-issue477-v9-rl-comp.html
The two coverage losses with cggl-g-inf on tidybot are both due to the v8
version solving the instances in more than 1790 seconds and the v9 version not
solving them. I don't know whether that could be due to the high number of
labels and the need to store a pair<int, vector<int> > rather than only the
vector for every label, but I (hopefully) did not change any functionality
besides renaming methods and attributes and placing different assertions at
different places.
I know you want to have a look at the code before merging and I know you said
you probably cannot do so very soon, so I wonder how I should continue with
further refactoring issues: branch away from this issue branch, or do a
preliminary merge into default in order to branch from there?
|
msg3826 (view) |
Author: silvan |
Date: 2014-10-14.15:52:34 |
|
Okay, I'll run the experiment and let you know about the results.
|
msg3825 (view) |
Author: malte |
Date: 2014-10-14.15:50:48 |
|
The experiments for v8 looked good.
The latest changes are substantial, so I suggest another experiment comparing v8
and v9. If you don't want to wait, I'll have a look at the code after the
experiment -- I'll be busy for a while. It would of course also make sense to
look at the code before the experiment, but then I guess it would take longer.
Let me know your preference.
I don't think we need a separate comparison of the baseline and v9, but of
course feel free to add one if you'd like to have it for future reference.
|
msg3824 (view) |
Author: silvan |
Date: 2014-10-14.15:47:02 |
|
I made another round of refactoring and documentation update. Should I run
another experiment? What do you think of the experiments posted below?
|
msg3820 (view) |
Author: silvan |
Date: 2014-10-14.10:23:29 |
|
I refactored normalize() into apply_label_reduction(label_mapping) and
normalize_transitions(). The first method, for transition systems in which only
locally equivalent labels are mapped, simply moves the transitions to the new
label and deletes the old ones. For the other case, it performs the combination
of labels as was previously done in normalize(). The second method is set back
to the form it had before the new label reduction.
Comparison against default:
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-13-issue477-v8-cggl-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-13-issue477-v8-dfp-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-13-issue477-v8-rl-comp.html
Not a lot changed, maybe the comparison is slightly better than before. But we
decided to live with the memory increase anyway.
Comparison against the previous version of this issue:
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-13-issue477-compare-v6-v8-cggl-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-13-issue477-compare-v6-v8-dfp-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-13-issue477-compare-v6-v8-rl-comp.html
This comparison definitely looks like an improvement.
Malte, if you find some time, could you please have a look at the code? From my
side, everything should be ready for merge.
|
msg3765 (view) |
Author: silvan |
Date: 2014-10-10.13:20:16 |
|
Yes, I'll add this.
|
msg3764 (view) |
Author: malte |
Date: 2014-10-10.13:16:37 |
|
Perhaps we should add an assertion that only labels of the same cost are
combined and explain what would need to be changed if we generalized this?
|
msg3763 (view) |
Author: silvan |
Date: 2014-10-10.13:13:17 |
|
Right, I missed the same-cost thing, so at least there is no problem there.
Then everything as it is now is fine (distances are computed at construction
time and recomputed if required after shrinking).
|
msg3762 (view) |
Author: malte |
Date: 2014-10-10.12:43:03 |
|
I think it would be best to keep distance data valid. Label reduction (the way
we use it currently) always keeps distances the same because combining same-cost
labels leaves distances unchanged. So could we do something like recomputing the
distances when they might have changed, but not otherwise?
|
msg3761 (view) |
Author: silvan |
Date: 2014-10-10.11:41:54 |
|
I just realized that there is still an open question regarding when to compute
distances and when not. So far, compute_distances_and_prune does nothing if
distances have been computed before and not been reset, which only happens in
apply_abstraction if the abstraction is not f-preserving. That means that
currently, even if we immediately normalize after every single label reduction,
we do not recompute distances, because distances are computed at construction
time of transition systems. I actually wonder whether the distance information
we obtain by computing it at construction time and not recomputing it after
label reduction is correct for the transition system for which we computed the
combinable relation when performing label reduction. The experiment data says
that the resulting heuristics are always the same, but this could be luck, e.g.
due to shrinking triggering a recomputation, or because the behaviour of the
shrinking doesn't change.
We could fix this by always recomputing distances after every single label
reduction, which would be in the spirit of this issue, but I believe that this
would be too expensive, especially considering that we do not need distance
information during label reduction. What do you think we could and should do here?
|
msg3754 (view) |
Author: silvan |
Date: 2014-10-09.23:31:13 |
|
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-09-issue477-v6-cggl-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-09-issue477-v6-dfp-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-09-issue477-v6-rl-comp.html
The version that cuts the capacity down to the actual size after normalizing
only slightly changed memory usage and did not change coverage. So I think I'll
polish the code and merge this (tomorrow) to have the default branch updated
and be able to start new branches.
|
msg3748 (view) |
Author: malte |
Date: 2014-10-09.14:20:27 |
|
I have no objections to merge this now -- we should just make sure we revisit
memory management later.
|
msg3747 (view) |
Author: silvan |
Date: 2014-10-09.14:18:59 |
|
We could of course also not merge this branch yet and make the other issues
side branches of this issue.
|
msg3746 (view) |
Author: silvan |
Date: 2014-10-09.14:16:43 |
|
The vectors' excess capacity is not the only reason for the memory increase. I
tried adding calls to normalize here and there at random, and I found that
adding a single normalize in the merge-and-shrink main loop can increase peak
memory from roughly 24 MB to 29 MB on a toy problem (depot1).
I can still run an experiment with swapping the vectors to get their size
"right" (and this sometimes improves over the old version, which also did not
always have vector.size() equal to vector.capacity()), but this seems somewhat
expensive in some local experiments (concerning runtime). I'd try to only copy
and swap back a vector if its capacity doesn't equal its size. But other than
that, I'd really like to continue working on the other plans, where we require
having valid transition systems.
|
msg3740 (view) |
Author: malte |
Date: 2014-10-09.10:24:52 |
|
The observation on capacities is interesting. As a stop-gap measure, maybe we
want to try adding some code to normalize() at the end to cut the transition
vectors down to the actually required size (by building new vectors of the right
size and swapping them). This might help with memory and hopefully wouldn't take
a huge amount of time. It wouldn't be a perfect solution, but it might solve the
immediate performance problem. Later on, I think we should look at better
representations and better memory management more generally.
|
msg3733 (view) |
Author: silvan |
Date: 2014-10-08.22:56:19 |
|
I had a look at one of the shrink-fh configurations. The only reason for the
observed differences I could find is that when using the old version, after
normalization, all entries in transitions_by_label (i.e. the vector<Transition>
for every label) have capacities equal to their sizes, whereas with the new
version, the capacity is always considerably higher, usually rounded up to the
next power of 2. I really don't understand this, as I didn't change the
normalize method, only the place where it is called.
Do you have any idea why the differences are so large? If you want to have a
look at the code, that could maybe be helpful (although the change is really
small), but please ignore the "exit_with" calls in place of assertions, which
are still present :-)
You find the code here
https://bitbucket.org/SilvanS/fd-dev/pull-request/1/m-s-refactoring-transition-systems-should/diff
(or use hg meld -r issue432-v4:issue477-v5)
|
msg3732 (view) |
Author: silvan |
Date: 2014-10-08.21:50:53 |
|
The most recent experiments no longer show a difference in h-values or
expansions until last jump, but performance still goes down (due to an increase
in memory, although there is a decrease in time, which really astonishes me).
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-08-issue477-v5-cggl-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-08-issue477-v5-dfp-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-08-issue477-v5-rl-comp.html
I'll try to have another round of debugging to find possible sources of the
memory increase.
|
msg3725 (view) |
Author: malte |
Date: 2014-10-08.19:52:53 |
|
If only that configuration is affected, it may have something to do with the g
values, which are only really used there. Not sure. Given that the initial_h
values changed, there must be a change of behaviour somewhere, not just a
refactoring change. Let me know if you want me to go over the code or do
anything else on this issue; otherwise I'll wait for the next update.
|
msg3705 (view) |
Author: silvan |
Date: 2014-10-08.10:46:03 |
|
I have some new results with the fixed computation of is_normalized() and some
other minor changes:
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-06-issue477-compare-base-v4-cggl-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-06-issue477-compare-base-v4-dfp-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-10-06-issue477-compare-base-v4-rl-comp.html
Interestingly, performance drops for the shrinkfh configuration. I'll have a
look into this.
Btw, note that the code in the bitbucket repo is ahead of these results (there
is an even newer version of experiments running).
|
msg3630 (view) |
Author: malte |
Date: 2014-10-04.19:15:25 |
|
I had a look at the code on bitbucket and added some comments. So far, I mainly
looked at the diff, but after the next round of changes, it may be a good idea
to go over the complete code in merge_and_shrink_heuristic.{h,cc},
transition_system.{h,cc} and the label-related code to get a more global view.
|
msg3604 (view) |
Author: silvan |
Date: 2014-09-30.12:11:00 |
|
You should also have access to my repo on bitbucket now, in case you wish to
look at the (small) diff to the default branch.
|
msg3603 (view) |
Author: silvan |
Date: 2014-09-30.11:55:06 |
|
I have been experimenting with this for a while now, but unfortunately, we lose
some coverage:
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-09-29-issue477-compare-base-v2-cggl-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-09-29-issue477-compare-base-v2-dfp-comp.html
http://ai.cs.unibas.ch/_tmp_files/sieverss/2014-09-29-issue477-compare-base-v2-rl-comp.html
The reasons seem to be both memory and time increases. The runtime increase
probably stems from additional compute_distances() calls, but regarding the
memory, I really don't understand why it constantly increases. I investigated
this, and in several local experiments, not normalizing immediately after
"apply_abstraction" seems to have an impact on memory usage (normalizing
immediately after label reduction, for example, did not change the memory usage
of the transition system). I do not, however, understand *why* normalizing
immediately should consume more memory afterwards...
|
msg3533 (view) |
Author: malte |
Date: 2014-09-24.18:05:27 |
|
I suggest we initially leave the "maybe" part out because I'd be interested in
the performance impact without this part. Perhaps best to leave that one for a
separate issue.
|
msg3528 (view) |
Author: silvan |
Date: 2014-09-24.13:31:35 |
|
This is part of the meta issue432. We want transition systems to always be
valid, which means:
- Transitions are sorted (by labels, by states) and there are no
  duplicates.
- All labels are incorporated.
- Distances are computed and stored.
- Maybe: locally equivalent labels are computed and stored (we could then
  store the transitions for locally equivalent labels only once; we could
  even consider dropping '(ir)relevant labels' and making self loops
  explicit in this case).
As a consequence, immediately after shrinking, after label reduction and after
construction, transition systems must be normalized. The goal is to have only
three non-const public methods: apply_abstraction, apply_label_mapping and
release_memory.
|
|
Date |
User |
Action |
Args |
2014-10-15 22:31:33 | silvan | set | status: chatting -> resolved messages:
+ msg3831 |
2014-10-15 18:22:35 | malte | set | messages:
+ msg3830 |
2014-10-15 17:53:46 | silvan | set | messages:
+ msg3828 |
2014-10-14 15:52:34 | silvan | set | messages:
+ msg3826 |
2014-10-14 15:50:48 | malte | set | messages:
+ msg3825 |
2014-10-14 15:47:02 | silvan | set | messages:
+ msg3824 |
2014-10-14 10:23:29 | silvan | set | messages:
+ msg3820 |
2014-10-10 13:20:16 | silvan | set | messages:
+ msg3765 |
2014-10-10 13:16:37 | malte | set | messages:
+ msg3764 |
2014-10-10 13:13:17 | silvan | set | messages:
+ msg3763 |
2014-10-10 12:43:03 | malte | set | messages:
+ msg3762 |
2014-10-10 11:41:54 | silvan | set | messages:
+ msg3761 |
2014-10-09 23:31:13 | silvan | set | messages:
+ msg3754 |
2014-10-09 14:20:27 | malte | set | messages:
+ msg3748 |
2014-10-09 14:18:59 | silvan | set | messages:
+ msg3747 |
2014-10-09 14:16:43 | silvan | set | messages:
+ msg3746 |
2014-10-09 10:24:52 | malte | set | messages:
+ msg3740 |
2014-10-08 22:56:19 | silvan | set | messages:
+ msg3733 |
2014-10-08 21:50:53 | silvan | set | messages:
+ msg3732 |
2014-10-08 19:52:53 | malte | set | messages:
+ msg3725 |
2014-10-08 10:46:03 | silvan | set | messages:
+ msg3705 |
2014-10-04 19:15:25 | malte | set | messages:
+ msg3630 |
2014-09-30 12:11:00 | silvan | set | messages:
+ msg3604 |
2014-09-30 11:55:06 | silvan | set | messages:
+ msg3603 |
2014-09-24 18:05:27 | malte | set | messages:
+ msg3533 |
2014-09-24 13:31:35 | silvan | create | |