Issue778

Title: stubborn sets: compute operator relations on demand
Priority: wish
Status: resolved
Superseder: (none)
Nosy List: florian, jendrik, malte, silvan
Assigned To: jendrik
Keywords: (none)
Optional summary: (none)

Created on 2018-04-16.09:22:30 by jendrik, last changed by jendrik.

Messages
msg7074 (view) Author: jendrik Date: 2018-04-24.17:45:37
I went ahead and merged the changes from this issue. To make the stubborn sets
computation more efficient, I created issue781.
msg7072 (view) Author: malte Date: 2018-04-24.11:36:08
Thanks! We see some slowdown and some reduction in coverage for blind search and
expansion core. In some cases it takes substantially longer, though looking at
individual instances is always a bit problematic because of noise. The results
for blind search are not great, but they are also not terrible.

I left one small comment on bitbucket.

I am somewhat in two minds about this change.

On the one hand, it is a nice performance improvement, and it is not all that
complicated.

On the other hand, it does make things somewhat more complicated, and I don't
think it addresses the problem that we should be addressing here. The main
problem with the disabling/interference data structures is that they can be so
large, O(N^2) in the number of operators. Computing the information only on
demand can help a bit with that, but it helps only on problems where a large
portion of the operators never becomes applicable. I think it would be better
to look into
avoiding the O(N^2) data structure altogether. In other words, I think this is a
case where we should be thinking about the algorithm rather than polishing the
implementation details.
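
To make the on-demand idea concrete, here is a minimal sketch of lazily
computing and caching the per-operator disabling lists. This is not the code
from the pull request; the class, struct and member names are invented for
illustration.

#include <vector>

// Minimal illustration (invented names, not the actual Fast Downward code):
// the list of operators disabled by an operator is computed only the first
// time that operator is queried, e.g. when it first becomes applicable, and
// cached afterwards.
struct FactPair {
    int var;
    int value;
};

struct OperatorInfo {
    std::vector<FactPair> preconditions;
    std::vector<FactPair> effects;
};

class LazyDisablingRelation {
    const std::vector<OperatorInfo> &operators;
    std::vector<std::vector<int>> ops_disabled_by;  // one cached entry per operator
    std::vector<bool> computed;

    // op1 disables op2 if some effect of op1 contradicts a precondition of op2.
    static bool disables(const OperatorInfo &op1, const OperatorInfo &op2) {
        for (const FactPair &eff : op1.effects)
            for (const FactPair &pre : op2.preconditions)
                if (eff.var == pre.var && eff.value != pre.value)
                    return true;
        return false;
    }

public:
    explicit LazyDisablingRelation(const std::vector<OperatorInfo> &ops)
        : operators(ops),
          ops_disabled_by(ops.size()),
          computed(ops.size(), false) {
    }

    // O(N) work per operator, done at most once and only for operators that
    // are actually queried during the search.
    const std::vector<int> &get_ops_disabled_by(int op) {
        if (!computed[op]) {
            for (int other = 0; other < static_cast<int>(operators.size()); ++other) {
                if (other != op && disables(operators[op], operators[other]))
                    ops_disabled_by[op].push_back(other);
            }
            computed[op] = true;
        }
        return ops_disabled_by[op];
    }
};

Note that the cache in such a scheme can still grow to O(N^2) entries if most
operators eventually become applicable, which is exactly the worst case
described above.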

I think that by thinking about this a bit more deeply, we might be able to come
up with a better idea at the algorithm level, perhaps ending up with a much more
substantial improvement. I feel somewhat similarly about issue773: I'm not sure
if that change would have been necessary if we had addressed things more
directly and made the stubborn set computations more efficient.

If someone is interested in looking into this with me, I suggest we set aside
half an hour to look at what is actually computed and whether the same
information could not be computed more intelligently. I am 80% confident that we
can make the stubborn sets substantially faster and also get rid of the
worst-case O(|Actions|^2) data structures. But I suppose this does not need to
hold up the merging of this issue.
msg7070 (view) Author: jendrik Date: 2018-04-24.09:09:55
Here are the new results: http://ai.cs.unibas.ch/_tmp_files/seipp/issue778-v1-no-min-ratio-issue778-base-issue778-v1-compare.html
msg7069 (view) Author: jendrik Date: 2018-04-23.23:35:56
Yes, the only change is that the computation is now incremental.

I noticed that the old experiment contains a bug: instead of testing 
stubborn_sets_simple() and stubborn_sets_ec(), only stubborn_sets_simple() is 
evaluated.

A new experiment with the requested configurations is running.
msg7068 (view) Author: malte Date: 2018-04-23.11:54:30
Thanks! Can you also run experiments with the default versions of the pruning
methods, i.e., without the min pruning ratio?

And can you add experiments with blind search instead of LM-Cut? Because LM-Cut
is so slow, it tends to exercise the runtime and memory efficiency of the
partial-order reduction algorithms less.
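
For concreteness, the requested configurations would look roughly as follows on
the Fast Downward command line (a sketch, not taken from the experiment scripts;
the "default versions" here simply omit any min pruning ratio parameter):

# default pruning methods (no min pruning ratio), with blind search and LM-Cut
./fast-downward.py output.sas --search "astar(blind(), pruning=stubborn_sets_simple())"
./fast-downward.py output.sas --search "astar(blind(), pruning=stubborn_sets_ec())"
./fast-downward.py output.sas --search "astar(lmcut(), pruning=stubborn_sets_simple())"
./fast-downward.py output.sas --search "astar(lmcut(), pruning=stubborn_sets_ec())"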

Looking at the code again, it looks like my previous comment was wrong: it was
already previously the case that the disabling relation was stored twice, and
nothing has changed there other than making the computation incremental. Is that
correct?

Then the only concern could be that the computation of these relations might now
be slower because we could previously compute both relations together more
efficiently, but this does not seem to be an issue in the experiments. Still, I'd
like to see results for blind search and for the default version of the pruning
strategies before we merge.
msg7067 (view) Author: jendrik Date: 2018-04-23.11:46:29
Here are the results: http://ai.cs.unibas.ch/_tmp_files/seipp/issue778-v1-opt-issue778-base-issue778-v1-compare.html

Total coverage increases by 3 and 2 for simple stubborn sets and expansion core stubborn sets, respectively. The sums of 
time and memory scores also increase by ~4 and ~25, respectively, for both configurations.
msg7058 (view) Author: malte Date: 2018-04-16.12:09:17
I have had a first look at the code. I can't easily follow the logic change, but
if the experiments show no difference in expansions, I can be convinced that
it's correct. I'm not convinced that it will help a great deal in terms of
memory, but perhaps we'll see that in the experiments. In the worst case, it
seems we can double the memory usage because the disabling relation is now
stored twice.
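
For illustration only (the names below are invented, not the data structures in
the actual code), "stored twice" here means keeping the relation as adjacency
lists in both directions, so each disabling pair occupies two entries:

#include <vector>

// Hypothetical illustration: keeping the disabling relation in both
// directions means every pair "o1 disables o2" is stored once in each list,
// so the memory for the relation can double in the worst case.
struct BidirectionalDisablingRelation {
    std::vector<std::vector<int>> disables;     // disables[o1]: operators that o1 can disable
    std::vector<std::vector<int>> disabled_by;  // disabled_by[o2]: operators that can disable o2

    explicit BidirectionalDisablingRelation(int num_operators)
        : disables(num_operators), disabled_by(num_operators) {
    }

    void add(int o1, int o2) {
        disables[o1].push_back(o2);
        disabled_by[o2].push_back(o1);
    }
};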
msg7055 (view) Author: jendrik Date: 2018-04-16.09:51:41
I made a pull request at https://bitbucket.org/jendrikseipp/downward/pull-requests/83 .

Experiments are running.
History
Date                 User     Action  Args
2018-04-24 17:45:37  jendrik  set     status: reviewing -> resolved; messages: + msg7074
2018-04-24 11:36:08  malte    set     messages: + msg7072
2018-04-24 09:09:55  jendrik  set     messages: + msg7070
2018-04-23 23:35:56  jendrik  set     messages: + msg7069
2018-04-23 11:54:31  malte    set     messages: + msg7068
2018-04-23 11:46:29  jendrik  set     messages: + msg7067
2018-04-16 17:04:16  silvan   set     nosy: + silvan
2018-04-16 12:09:17  malte    set     messages: + msg7058
2018-04-16 12:03:36  florian  set     nosy: + florian
2018-04-16 09:51:41  jendrik  set     status: in-progress -> reviewing; messages: + msg7055
2018-04-16 09:22:30  jendrik  create