There are a number of ways in which we could make the search code more
cost-sensitive without paying an inordinate performance cost. Some of them
might be good unconditionally; others might be usefully enabled by options.
1. When finding a cheaper path to a closed node, update the path, even
when reopening is set to false. This means that g values are no longer
consistent with the paths (g values are now upper bounds on the true costs
of paths that would be extracted from the search nodes), but I can't see how
this could hurt. (A sketch of this follows the list.)
2. When doing cost-insensitive search (cost_type=0), still update the paths
based on real costs. Maybe even do *reopening* based on real costs when
given a suitable option, but avoid expensive heuristic reevaluations etc.
in this case when we know that the unit-cost g and h values have not changed.
(Actually, what should the unit-cost g value be when we find a longer, but
cheaper path to a previously considered node? Should it be updated to the
larger value for the longer, cheaper path? I think not -- it should remain
the cost of the shortest path. That would mean that g and real_g no longer
describe the same path, but I don't see how that would hurt.)
With reopening like this, an iterated search based on unit-cost g would
eventually find a cost-optimal solution; without it, it wouldn't. (See the
second sketch after the list.)
3. For settings that do not do full reopening, implement a post-processing
option that goes through the generated part of the search space and performs
a Dijkstra search on it. This should be pretty fast compared to the effort
of generating that graph in the first place since it doesn't need
heuristics, but would find an optimal path *among those that don't leave
this space*. Is this like Hootan Nakhost's post-processing technique with a
radius of 0? (A Dijkstra sketch follows below.)
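
The following is a minimal C++ sketch of point 1. SearchNode, NodeStatus
and the integer-id node store are made-up names for illustration, not the
real search classes: a cheaper path to a closed node rewires parent and
real_g, but the node is not put back on the open list, so g becomes an
upper bound on the cost of the extracted path.

    #include <cassert>
    #include <vector>

    enum class NodeStatus { NEW, OPEN, CLOSED };

    struct SearchNode {
        NodeStatus status = NodeStatus::NEW;
        int g = 0;        // g value used for open-list ordering (may go stale)
        int real_g = 0;   // best known real cost of a path to this node
        int parent = -1;  // predecessor along the best known path
    };

    // Called when expanding parent_id generates succ_id via an operator
    // of real cost op_cost. Returns true if the stored path was improved.
    bool update_on_cheaper_path(std::vector<SearchNode> &nodes,
                                int parent_id, int succ_id, int op_cost) {
        SearchNode &succ = nodes[succ_id];
        int new_real_g = nodes[parent_id].real_g + op_cost;
        if (succ.status == NodeStatus::CLOSED && new_real_g < succ.real_g) {
            // Update the path even though we do *not* reopen the node:
            // extracted plans get cheaper, the open list stays untouched,
            // and g is now only an upper bound on the extracted path's cost.
            succ.real_g = new_real_g;
            succ.parent = parent_id;
            return true;
        }
        return false;
    }

    int main() {
        std::vector<SearchNode> nodes(3);
        nodes[0] = {NodeStatus::CLOSED, 0, 0, -1};  // initial node
        nodes[1] = {NodeStatus::CLOSED, 1, 5, 0};   // reached via a cost-5 op
        nodes[2] = {NodeStatus::CLOSED, 1, 1, 0};   // reached via a cost-1 op
        // Expanding node 2 finds node 1 again via a cost-1 operator:
        // real cost 2 < 5, so the path is rewired without reopening.
        assert(update_on_cheaper_path(nodes, 2, 1, 1));
        assert(nodes[1].real_g == 2 && nodes[1].parent == 2);
        return 0;
    }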
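
A sketch of point 2 under the same made-up names, with a hypothetical
reopen_on_real_cost option: g keeps the shortest-path length, real_g tracks
the cheapest known path, and reinsertion into the open list reuses the
cached h, so no heuristic reevaluation happens.

    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    enum class NodeStatus { NEW, OPEN, CLOSED };

    struct SearchNode {
        NodeStatus status = NodeStatus::NEW;
        int g = 0;        // unit-cost g: length of the *shortest* known path
        int real_g = 0;   // real cost of the *cheapest* known path
        int h = 0;        // cached heuristic value
        int parent = -1;
    };

    // Min-heap of (unit-cost f, node id) pairs.
    using OpenList =
        std::priority_queue<std::pair<int, int>,
                            std::vector<std::pair<int, int>>, std::greater<>>;

    // Handles only the interesting case: a cheaper path to a CLOSED node.
    void relax_closed(std::vector<SearchNode> &nodes, OpenList &open,
                      int parent_id, int succ_id, int op_cost,
                      bool reopen_on_real_cost) {
        SearchNode &succ = nodes[succ_id];
        int new_real_g = nodes[parent_id].real_g + op_cost;
        if (succ.status == NodeStatus::CLOSED && new_real_g < succ.real_g) {
            // Cheaper (possibly longer) path: update real cost and parent.
            // g keeps the shortest-path length, so g and real_g may now
            // describe different paths to this node.
            succ.real_g = new_real_g;
            succ.parent = parent_id;
            if (reopen_on_real_cost) {
                // Reinsert with the unchanged unit-cost f = g + h; h is
                // cached, so there is no heuristic reevaluation here.
                succ.status = NodeStatus::OPEN;
                open.emplace(succ.g + succ.h, succ_id);
            }
        }
    }

    int main() {
        std::vector<SearchNode> nodes(3);
        OpenList open;
        nodes[0] = {NodeStatus::CLOSED, 0, 0, 2, -1};  // initial node
        nodes[1] = {NodeStatus::CLOSED, 1, 5, 1, 0};   // short but expensive
        nodes[2] = {NodeStatus::CLOSED, 1, 1, 1, 0};
        relax_closed(nodes, open, 2, 1, 1, /*reopen_on_real_cost=*/true);
        // Node 1 now has real_g == 2 and sits on the open list again with
        // its old unit-cost f; its g is still 1 (shortest-path length).
        return 0;
    }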
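
And a sketch of the post-processing idea from point 3: a plain Dijkstra
over an illustrative adjacency-list snapshot of the generated search space
(the graph and costs here are toy data). No heuristic evaluations are
involved, and it finds the cheapest path among those that stay inside the
subgraph.

    #include <cstdint>
    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Edge {
        int target;
        int cost;
    };

    // Returns cheapest costs from source within the generated subgraph.
    std::vector<int64_t> dijkstra(const std::vector<std::vector<Edge>> &graph,
                                  int source) {
        const int64_t INF = std::numeric_limits<int64_t>::max();
        std::vector<int64_t> dist(graph.size(), INF);
        using Entry = std::pair<int64_t, int>;  // (distance, node)
        std::priority_queue<Entry, std::vector<Entry>, std::greater<>> open;
        dist[source] = 0;
        open.emplace(0, source);
        while (!open.empty()) {
            auto [d, u] = open.top();
            open.pop();
            if (d > dist[u])
                continue;  // stale queue entry
            for (const Edge &e : graph[u]) {
                int64_t nd = d + e.cost;
                if (nd < dist[e.target]) {
                    dist[e.target] = nd;
                    open.emplace(nd, e.target);
                }
            }
        }
        return dist;
    }

    int main() {
        // Toy "generated search space": the search found the plan
        // 0 -> 1 -> 3, but the subgraph also contains the cheaper
        // detour 0 -> 2 -> 3.
        std::vector<std::vector<Edge>> graph{
            {{1, 5}, {2, 1}}, {{3, 1}}, {{3, 1}}, {}};
        auto dist = dijkstra(graph, 0);
        return dist[3] == 2 ? 0 : 1;  // cheapest cost within the subgraph
    }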