Currently, discovering the impact of an implementation change on memory usage
is a bit like stumbling in the dark -- if a change is bad enough to push the
algorithm over the memory limit, we see that something went wrong, but
otherwise we get no information at all.
It'd be good to measure memory usage more directly. Sebastian has a script
called "memtime" that he uses for measuring memory usage. We could probably
adapt it without too much work, and then devise a "memory usage" score
analogous to the coverage, runtime and guidance scores we have at the moment.
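As a rough stand-in until we look at the actual memtime script (whose interface I don't know), peak memory of a benchmark run could be measured from a wrapper process with nothing but the standard library, e.g. via `resource.getrusage` after running the benchmark as a child. The command below is just a placeholder workload; note that `ru_maxrss` is reported in kibibytes on Linux but in bytes on macOS.

```python
import resource
import subprocess

def peak_child_rss(cmd):
    """Run cmd to completion and return the peak resident set size
    of waited-for child processes (KiB on Linux, bytes on macOS)."""
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

if __name__ == "__main__":
    # Placeholder workload: a child that allocates ~50 MiB.
    print(peak_child_rss(["python3", "-c", "x = bytearray(50 * 1024 * 1024)"]))
```

A "memory usage" score could then be computed from this number the same way the runtime score is computed from wall-clock time.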