In issue668, we added sbMIASM (previously called dyn-MIASM) to Fast Downward. We
want to discuss alternatives to imitating the shrink behavior of the
merge-and-shrink heuristic when computing merge candidates (i.e., temporary
product systems). In particular, it is unclear whether non-perfect shrinking is
desirable at this point in the computation.
Quoting a discussion from a code review for issue668:
Malte:
Why should it [the shrink strategy] match? I would find it more natural to
always use exact bisimulation-based shrinking, and in fact it's not clear to me
that we want a choosable shrink strategy here at all. If we use a shrink
strategy that shrinks to a fixed size, doesn't this lose relevant information?
My understanding is that if, e.g., merging A and B and then applying only
perfect shrinking and irrelevance pruning takes us to 1100 states, while
merging A and C would take us to 3000 states, then we'd prefer to merge A and
B. But if we use
would take us to 3000 states, then we'd prefer to merge A and B. But if we use
an overall shrink strategy that shrinks everything to 1000 states, we
wouldn't be able to distinguish them, right?
Silvan:
That's right. The idea was to look at the merged transition systems obtained
from shrinking and merging in the same way as the regular merge-and-shrink
computation would, in order to "predict" the amount of pruning for these
merges. Using only perfect shrinking would very likely result in many merges
being too large to be computed, because the sizes would explode.
Malte:
OK, I guess the details depend on whether we consider shrinking before or after
merging; I think they play slightly different roles here. I was mainly thinking
of the shrinking that happens after merging.
See also 3) in msg6218 of issue668.
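
To make the tradeoff in this discussion concrete, here is a small Python
sketch. It is illustrative only, not Fast Downward code (the planner itself is
written in C++), and the toy transition systems and the helper
coarsest_bisimulation are made up for this example. The sketch computes the
size of the coarsest goal-respecting bisimulation quotient by naive
signature-based partition refinement and compares two candidate scores: the
size after perfect shrinking alone, and the same size capped at a fixed limit,
which plays the role of the 1000-state limit in Malte's example.

    from typing import FrozenSet, List, Tuple

    Transition = Tuple[int, str, int]  # (source state, label, target state)

    def coarsest_bisimulation(num_states: int,
                              transitions: List[Transition],
                              goal_states: FrozenSet[int]) -> int:
        """Size of the coarsest goal-respecting bisimulation quotient,
        computed by naive signature-based partition refinement."""
        # Initial partition: goal states vs. non-goal states.
        block_of = [1 if s in goal_states else 0 for s in range(num_states)]
        while True:
            signatures = {}
            new_block_of = []
            for s in range(num_states):
                # A state's signature: its current block plus the set of
                # (label, successor block) pairs of its outgoing transitions.
                sig = (block_of[s],
                       frozenset((label, block_of[dst])
                                 for src, label, dst in transitions
                                 if src == s))
                new_block_of.append(signatures.setdefault(sig, len(signatures)))
            if new_block_of == block_of:
                return len(signatures)  # partition is stable
            block_of = new_block_of

    if __name__ == "__main__":
        # Two hypothetical merge candidates over four states; state 3 is the
        # goal. In "A x B", states 0 and 1 are bisimilar, so perfect shrinking
        # reduces it further than "A x C", where all states are distinguishable.
        candidates = {
            "A x B": [(0, "a", 2), (1, "a", 2), (2, "b", 3)],
            "A x C": [(0, "a", 2), (1, "b", 2), (2, "b", 3)],
        }
        SIZE_LIMIT = 3  # toy stand-in for the 1000-state limit above
        for name, transitions in candidates.items():
            perfect = coarsest_bisimulation(4, transitions, frozenset({3}))
            capped = min(perfect, SIZE_LIMIT)
            print(f"{name}: perfect-shrink size = {perfect}, "
                  f"capped score = {capped}")

The sketch prints a tie (3 vs. 3) for the capped scores but a separation
(3 vs. 4) for the perfect-shrinking sizes. That is the 1100-versus-3000
situation in miniature: a fixed size limit saturates the score and hides
exactly the difference between merge candidates that the scoring function is
supposed to measure.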