Message1220

Author malte
Recipients erez, malte, silvia
Date 2011-01-23.11:29:53
Content
It's true that the LM graph storage is not particularly efficient, but that
cannot explain memory usage in the gigabytes. In fact, I'd be surprised if the
landmark graph contributed even a single megabyte here. We should really
optimize where it matters first, and that means reducing *per-state* memory
cost, not per-problem memory cost. So we can safely ignore 1) and 2),
although 3) is indeed an issue.
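
To put the two cost categories in perspective, here is a back-of-the-envelope
comparison (the numbers are invented purely for illustration, not measured):

    10,000,000 stored states   x ~100 bytes/state ~= 1 GB     (per-state cost)
        10,000 LM nodes/edges  x  ~50 bytes each  ~= 500 KB   (per-problem cost)

Per-state cost scales with the number of states generated during search, while
per-problem cost is paid once, so even large constant-factor savings on the
latter barely register.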

Also, going from vector<int> to vector<state_t> etc. on vectors that usually
have only one or two elements, as in point 1), saves next to nothing (or even
exactly nothing due to memory alignment); the vector and dynamic-memory
management overhead dwarfs everything else here. Feel free to try out the
optimizations in 1) and 2), though, since seeing is believing :-).
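
To make the overhead argument concrete, here is a minimal standalone C++
sketch (state_t is just a hypothetical 16-bit typedef here, not the actual
type from the code) comparing the fixed costs that dominate tiny vectors:

    // Illustration only: why shrinking the element type of tiny vectors
    // barely helps -- the vector object itself and the per-allocation heap
    // overhead dominate the payload.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    typedef std::int16_t state_t;  // hypothetical narrower element type

    int main() {
        // The vector object alone is typically three pointers
        // (24 bytes on a 64-bit system), regardless of element type.
        std::cout << "sizeof(std::vector<int>):     "
                  << sizeof(std::vector<int>) << "\n";
        std::cout << "sizeof(std::vector<state_t>): "
                  << sizeof(std::vector<state_t>) << "\n";

        // For a two-element vector the payload shrinks from 8 to 4 bytes,
        // but the heap block is usually rounded up to the allocator's
        // minimum chunk size (often 16 or 32 bytes), so the net saving
        // can be exactly zero.
        std::vector<int> a(2);
        std::vector<state_t> b(2);
        std::cout << "payload of vector<int>(2):     "
                  << a.size() * sizeof(int) << " bytes\n";
        std::cout << "payload of vector<state_t>(2): "
                  << b.size() * sizeof(state_t) << " bytes\n";
        return 0;
    }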

Note that so far we have no evidence that this memory explosion only happens for
the landmark configurations; it might happen everywhere. The landmark
configurations are simply the only ones I have had the chance to look at so far.