Thanks! We see some slowdown and some reduction in coverage for blind search and
expansion core. In some cases it takes substantially longer, though looking at
individual instances is always a bit problematic because of noise. The results
for blind search are not great, but they are also not terrible.
I left one small comment on bitbucket.
I am somewhat in two minds about this change.
On the one hand, it is a nice performance improvement, and it is not all that
complicated.
On the other hand, it does make things somewhat more complicated, and I don't
think it addresses the problem that we should be addressing here. The main
problem with the disabling/interference data structures is that they can be so
large, O(N^2) in the number of operators. Only computing the information on
demand can help a bit with that, but only on problems where a large portion of
operators never becomes applicable. I think it would be better to look into
avoiding the O(N^2) data structure altogether. In other words, I think this is a
case where we should be thinking about the algorithm rather than polishing the
implementation details.
I think that by thinking about this a bit more deeply, we might be able to come
up with a better idea at the algorithm level, perhaps ending up with a much more
substantial improvement. I feel somewhat similarly about issue773: I'm not sure
if that change would have been necessary if we had addressed things more
directly and made the stubborn set computations more efficient.
If someone is interested in looking into this with me, I suggest we set aside
half an hour to look at what is actually computed and whether the same
information could not be computed more intelligently. I am 80% confident that we
can make the stubborn sets substantially faster and also get rid of the
worst-case O(|Actions|^2) data structures. But I suppose this does not need to
hold up the merging of this issue.