Issue838

Title: reduce memory usage of causal graph heuristic cache in 64-bit builds
Priority: wish
Status: resolved
Superseder:
Nosy List: cedric, jendrik, malte, silvan
Assigned To: jendrik
Keywords:
Optional summary:

Created on 2018-09-20.13:16:35 by jendrik, last changed by jendrik.

Messages
msg8789 (view) Author: jendrik Date: 2019-06-03.14:28:07
Merged.
msg8753 (view) Author: malte Date: 2019-04-09.10:45:04
Looks good to me, but it makes sense to have this configurable, for the same
reason that we want to have things like abstraction sizes in iPDB, M&S and CEGAR
configurable. People who run the planner with different memory limits might also
want to choose different cache sizes. Pull request looks good to merge.
msg8752 (view) Author: jendrik Date: 2019-04-09.08:35:32
I added an option for configuring the size of the causal graph cache and ran an
experiment evaluating different cache sizes: 
https://ai.dmi.unibas.ch/_tmp_files/seipp/issue838-v2-cache-size.html

The results suggest that the previous cache size of 1M entries per variable
offers a good compromise between speed and memory. Therefore, I see no reason to
change this value. If you like, we can keep the max_cache_size option, though.
Here is the pull request:
https://bitbucket.org/jendrikseipp/downward/pull-requests/132
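
For illustration, the option would be passed in the same style as the other
heuristic calls in this issue; assuming max_cache_size is exposed as a parameter
of cg() (the exact parameter name and default in the merged code may differ), a
call using the current default of 1M entries per variable could look like:

--heuristic "h=cg(max_cache_size=1000000)" --search "lazy_greedy([h], preferred=[h])"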
msg7779 (view) Author: malte Date: 2018-09-23.19:16:34
I wouldn't discard it that quickly.

It does lead to solving two more tasks in the 32-bit configuration, and the
starting point here was to bring the 64-bit configuration up to the level of the
32-bit configuration, so we would hope to get a similar improvement there
eventually.

Using a few more megabytes is usually not a big deal, but saving a lot of time
can be a big deal. For example, if we placed a limit of 300 seconds on total
time, the coverage difference of the 32-bit configuration would be +7 if I
looked at the data correctly. For people who care about time, or for example
when using the CG heuristic in a portfolio, what happens with lower runtime
bounds matters.

Time and memory are resources of different kinds: with memory, we don't care so
much how much of it we use as long as we don't exhaust it. With time, faster is
always better. The cache uses an easily calculable and configurable amount of
memory. It may look bad on the memory score because that is a logarithmic scale,
and the cache can make the difference between "uses very little memory" and "uses
little memory", which is a big deal on that kind of scale. But it is not so
large that it will commonly make the difference between "barely fits within the
memory limit" and "exhausts the memory limit".

Also, if we think it uses too much memory, we can reduce its size. I'm not sure
the cache size has ever been tweaked in 15 years, so we might even get
substantially better performance by making it smaller or larger.

Finally, the cache code is not very complicated and has been trivial to maintain.

The main concern I have with the cache is that, going through the runtime
results, there also seem to be quite a few cases where it slows things down. If
we want to consider getting rid of it, I would like to see how often it actually
reduces runtime and by how much.

A good way to see that would be a cactus plot of problems solved over time using
the "total_time" attribute. It would also be interesting to see a similar plot
for the memory attribute (for solved tasks only), i.e., number of problems
solved over memory used. If these pictures don't look good, I wouldn't mind
removing the cache, although to be diligent we should perhaps also experiment a
little bit with the cache size, which was probably set with computers around
2004 in mind.
msg7778 (view) Author: jendrik Date: 2018-09-23.18:48:21
Since keeping the cache means maintaining more code and using more memory, and it
doesn't lead to solving more tasks, we could think about removing it. Do you
agree?
msg7777 (view) Author: malte Date: 2018-09-23.18:11:46
Looks good! The results make sense, including how often the planner runs out of
memory vs. out of time with and without the cache. Everything looks reasonable.
msg7776 (view) Author: jendrik Date: 2018-09-23.18:05:29
I ran an experiment testing the usefulness of the cache:

https://ai.dmi.unibas.ch/_tmp_files/seipp/v1-use-cache.html

Both use_cache=true and use_cache=false lead to roughly the same number of solved 
tasks in 32-bit and 64-bit mode (1890-1892 solved tasks). On a per-domain basis 
there is also not much difference between using and not using the cache when 
looking at the number of solved tasks. Using the cache leads to a higher 
total_time score (+8.07 in 32-bit mode and +8.27 in 64-bit mode), but to a lower 
memory score (-39.14 in 32-bit mode and -48.33 in 64-bit mode).
msg7765 (view) Author: malte Date: 2018-09-22.08:38:36
Yes, it would be good to test this with and without the cache. This is from back
in 2003 when there was no support for options in the planner at all.
msg7763 (view) Author: jendrik Date: 2018-09-21.23:43:20
I think the first step here should be acting on a TODO in the code (cg_heuristic.cc): 

// TODO: Turn this into an option and check its impact.
#define USE_CACHE true
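
A minimal sketch (my own illustration, not the actual Fast Downward option or
class code) of what replacing the compile-time switch with a runtime flag could
look like: the heuristic only allocates its cache when the flag is set.

#include <memory>
#include <vector>

// Placeholder standing in for the real CGCache from cg_cache.h.
struct CGCache {
    std::vector<std::vector<int>> entries;
};

class CGHeuristic {
    std::unique_ptr<CGCache> cache;  // stays null when caching is disabled
public:
    explicit CGHeuristic(bool use_cache) {
        if (use_cache)
            cache = std::make_unique<CGCache>();
    }
    bool uses_cache() const { return cache != nullptr; }
};

int main() {
    CGHeuristic with_cache(true);
    CGHeuristic without_cache(false);
    return (with_cache.uses_cache() && !without_cache.uses_cache()) ? 0 : 1;
}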
msg7681 (view) Author: malte Date: 2018-09-20.21:51:38
Confusingly, it is neither the domain transition graph nor the causal graph that
is cached. :-)

What is cached can be thought of as a causal graph heuristic analog of h^add
values for propositions and their best achievers. Not sure how much that
explanation helps.
msg7678 (view) Author: silvan Date: 2018-09-20.21:24:49
(From "The culprit for the extra 
memory usage is exclusively the helpful_transition_cache 
(std::vector<std::vector<domain_transition_graph::ValueTransitionLabel *>> 
helpful_transition_cache;) in file cg_cache.h." I understood that this should be
the domain transition graph rather than the causal graph? Anyway, I guess it is
used by the CG heuristic (cache) as mentioned in the title, so this is fine.)
Sorry for the noise.
msg7643 (view) Author: jendrik Date: 2018-09-20.14:39:56
Attaching the massif profiles.
msg7637 (view) Author: malte Date: 2018-09-20.13:23:11
Thanks, Jendrik! That isn't the causal graph cache, but rather a cache of the
causal graph *heuristic*. The file is named poorly.
msg7636 (view) Author: jendrik Date: 2018-09-20.13:19:05
Repeating msg7635 from issue213:

I made some local measurements to analyze the increase in memory usage of the configuration 
using the causal graph heuristic:

peak memory for airport:p19, dynamically linked build, revision issue213-v7 (e1efb55725b3):

--heuristic "h=goalcount()" --search "lazy_greedy([h], preferred=[h], bound=0)":
32-bit:  7724 KB
64-bit: 21220 KB

--heuristic "h=goalcount()" --search "lazy_greedy([h], preferred=[h])":
32-bit: 13316 KB
64-bit: 26740 KB


--heuristic "h=cg()" --search "lazy_greedy([h], preferred=[h], bound=0)":
32-bit: 51156 KB
64-bit: 85228 KB

--heuristic "h=cg()" --search "lazy_greedy([h], preferred=[h])":
32-bit: 58552 KB
64-bit: 92540 KB


The data suggests that the memory overhead of the cg configuration stems from the 
initialization of the heuristic, not the search. In 32-bit mode the heuristic needs about 
51156-7724=43432 KB. In 64-bit mode it needs about 85228-21220=64008 KB. This is 
corroborated by the massif profiles for the cg heuristic (attached): the 32-bit version 
needs about 20 MB less memory than the 64-bit version at the peak. The culprit for the extra 
memory usage is exclusively the helpful_transition_cache 
(std::vector<std::vector<domain_transition_graph::ValueTransitionLabel *>> 
helpful_transition_cache;) in file cg_cache.h.
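
As a side note (my own back-of-the-envelope illustration, not taken from the
profiles): the entries of helpful_transition_cache are raw pointers, which grow
from 4 to 8 bytes when moving from a 32-bit to a 64-bit build, which fits the
observation that this cache accounts for most of the extra 64-bit memory. A
standalone sketch that shows the size difference:

#include <cstdio>

// Stand-in for domain_transition_graph::ValueTransitionLabel; only the size of
// a pointer to it matters here.
struct ValueTransitionLabel;

int main() {
    // Typically 4 bytes in a 32-bit build and 8 bytes in a 64-bit build, so a
    // cache of N pointer entries roughly doubles in size on 64-bit.
    std::printf("sizeof(ValueTransitionLabel *) = %zu bytes\n",
                sizeof(ValueTransitionLabel *));
    return 0;
}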
History
Date                 User     Action  Args
2019-06-03 14:28:07  jendrik  set     status: reviewing -> resolved; messages: + msg8789
2019-04-09 10:45:04  malte    set     messages: + msg8753
2019-04-09 08:35:32  jendrik  set     status: in-progress -> reviewing; messages: + msg8752; summary: part of issue213 ->
2018-09-23 19:16:34  malte    set     messages: + msg7779
2018-09-23 18:48:21  jendrik  set     messages: + msg7778
2018-09-23 18:11:46  malte    set     messages: + msg7777
2018-09-23 18:05:29  jendrik  set     messages: + msg7776
2018-09-23 11:21:18  jendrik  set     status: chatting -> in-progress; assignedto: jendrik
2018-09-22 08:38:36  malte    set     messages: + msg7765
2018-09-21 23:43:20  jendrik  set     messages: + msg7763
2018-09-20 21:51:38  malte    set     messages: + msg7681
2018-09-20 21:24:49  silvan   set     nosy: + silvan; messages: + msg7678
2018-09-20 14:39:56  jendrik  set     messages: + msg7643
2018-09-20 14:37:19  cedric   set     nosy: + cedric
2018-09-20 13:23:11  malte    set     messages: + msg7637; title: reduce memory usage of causal graph cache in 64-bit builds -> reduce memory usage of causal graph heuristic cache in 64-bit builds
2018-09-20 13:19:05  jendrik  set     status: unread -> chatting; messages: + msg7636
2018-09-20 13:16:35  jendrik  create