Heuristics like the operator-counting heuristic and some landmark heuristics compute fractional heuristic values. For admissible heuristics, we would like to round these values up to the nearest integer. However, because doubles are inherently inexact, a computed value can differ from its true mathematical value by a small amount. Rounding up a value like 1.00000000000001 to 2 could thus make the heuristic inadmissible. Currently, we deal with this by subtracting an epsilon of 0.01 before rounding up (the exact value may differ between heuristics).
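A minimal sketch of this epsilon-based rounding (the constant 0.01 and the helper name are illustrative, not the actual Fast Downward code):

```python
import math

EPSILON = 0.01  # illustrative; the exact value may differ between heuristics

def round_up_admissible(value: float) -> int:
    # Subtract a small epsilon before rounding up, so that a value like
    # 1.00000000000001 (really 1.0 plus floating-point noise) rounds to 1
    # instead of 2, which would make an admissible heuristic inadmissible.
    return math.ceil(value - EPSILON)

print(round_up_admissible(1.00000000000001))  # -> 1
print(round_up_admissible(1.5))               # -> 2
```

The downside, discussed below, is that any genuinely fractional value within epsilon of an integer is rounded down instead of up.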
However, in issue 1134, we saw heuristic values like 18.01 (see [1]) that should be rounded up but are not, because they fall within the subtracted epsilon. For LP-based heuristics, we may be able to get information from the LP solver about an interval that is guaranteed to contain the true optimal objective value. For example, the optimality tolerance [2] sounds like a candidate. We have to find out whether it really bounds the distance to the true optimal value.
[1] Call to reproduce the value from issue1134:
./fast-downward.py $DOWNWARD_BENCHMARKS/airport/p11-airport3-p1.pddl \
--search "astar(operatorcounting([delete_relaxation_rr_constraints(\
acyclicity_type=time_labels, use_integer_vars=false)]), bound=0)"
[2] https://www.ibm.com/docs/en/icos/12.8.0.0?topic=parameters-optimality-tolerance
and "opttol" in SoPlex: https://soplex.zib.de/doc/html/PARSLIST.php
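If the optimality tolerance does provide such a guarantee, the rounding could look roughly like this. This is a sketch under the unverified assumption that the true optimum lies within a relative distance opt_tol of the reported objective; the function name and the bound formula are hypothetical, not anything the solvers document:

```python
import math

def safe_ceil(lp_objective: float, opt_tol: float = 1e-6) -> int:
    """Round an LP objective up to an integer heuristic value.

    Assumption (to be verified against the solver documentation): for a
    nonnegative objective, the true optimum z* satisfies
    z* >= lp_objective * (1 - opt_tol).
    """
    assert lp_objective >= 0, "heuristic values are nonnegative"
    lower_bound = lp_objective * (1 - opt_tol)
    return math.ceil(lower_bound)

# Floating-point noise just above an integer is absorbed by the tolerance:
print(safe_ceil(1.00000000000001))  # -> 1
# A genuinely fractional value such as 18.01 is still rounded up:
print(safe_ceil(18.01))             # -> 19
```

Unlike the fixed epsilon of 0.01, a relative tolerance on this scale would distinguish numerical noise from values like 18.01, but only if the solver actually guarantees such a bound.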