The following function in the M&S code is wrong:
static void breadth_first_search(const vector<vector<int> > &graph,
                                 const vector<vector<int> > &cost_graph,
                                 deque<int> &queue,
                                 vector<int> &distances) {
    while (!queue.empty()) {
        int state = queue.front();
        queue.pop_front();
        for (int i = 0; i < graph[state].size(); i++) {
            int successor = graph[state][i];
            int cost = cost_graph[state][i];
            if (distances[successor] > distances[state] + cost) {
                distances[successor] = distances[state] + cost;
                queue.push_back(successor);
            }
        }
    }
}
To support general costs, this should not be a breadth-first search
(using a deque as a queue) but a uniform cost search (using e.g. an
AdaptiveQueue, ideally one that is owned by the M&S heuristic instance
so that it is not reinitialized all the time -- cf. what other
heuristics do there).
The problem with the current code is that it can result in costs that
are too large, and hence in inadmissible heuristics.
Once this is fixed, we should do a before/after comparison on the IPC 2011
benchmarks and inform the organizers if any of our plans found there are not
actually optimal.