| msg1725 (view) |
Author: malte |
Date: 2011-09-01.13:31:31 |
|
Hmmm, maybe that only happens when there's nothing at the front (nothing like my
"Jendrik wrote:" line)? Because it looks like nothing was stripped from my mail. In
any case, closing again.
|
| msg1724 (view) |
Author: malte |
Date: 2011-09-01.13:30:03 |
|
On 01.09.2011 13:02, Jendrik wrote:
> When I post by mail the first part that is indented by ">" is always left out in
> the message.
Testing this... Shouldn't be like that. There's a setting for that, but
I didn't enable it.
|
| msg1723 (view) |
Author: jendrik |
Date: 2011-09-01.13:02:04 |
|
When I post by mail the first part that is indented by ">" is always left out in
the message.
In msg1714 I wanted to say that this wish has been implemented. In the
domain-wise report there are now some descriptions below the tables concerned, and
the aggregation functions are now printed in full (AVERAGE) instead of AVG. So I'll
mark this one as resolved.
|
| msg1719 (view) |
Author: malte |
Date: 2011-09-01.12:35:11 |
|
Thanks! That leaves the "make the reports clearer" part (msg1688). From my
interactions with many people who've worked with the reports now, I think this
is needed, but it doesn't have to be a very high priority.
Since this one has a ton of unrelated information, I suggest opening a separate
wish issue for that and marking this one as resolved. Jendrik, can you do that?
|
| msg1716 (view) |
Author: erez |
Date: 2011-09-01.11:36:18 |
|
I've been ignoring this issue until now.
I'll open a new one for the landmark assertion problems, and assign this one back
to Malte.
|
| msg1714 (view) |
Author: jendrik |
Date: 2011-09-01.02:50:02 |
|
This has been implemented and pushed to the main repo.
>
> Also, I think it doesn't make much sense to use a
> geometric mean to aggregate within a domain and then a sum to aggregate across
> domains -- if that's what is currently done.)
This has been changed now. Geometric mean is used in both cases.
|
| msg1713 (view) |
Author: jendrik |
Date: 2011-09-01.02:48:25 |
|
I can't help much with points 1 and 2, I guess, so I'll remove myself from the
assignedto field.
|
| msg1688 (view) |
Author: malte |
Date: 2011-08-26.12:11:17 |
|
> (How) do you want to change that?
The behaviour doesn't need to be changed, just explained more.
> > It'd be very nice if each section contained a short explanation
> > of what the values mean.
> You find this at the top of the report:
> If in a group of configs not all configs have a value for an attribute,
> the concerning runs are not evaluated. However, for the attributes
> search_error, validate_error, coverage we include all runs unconditionally.
That doesn't really help: when I'm looking at something like the
score_total_time or total_time section, that's many pages away. The reports are
generally too long to expect that someone reads everything from top to bottom,
so the report should be friendly to people who jump directly to the parts that
interest them.
Also, the current text at the top doesn't really explain the difference between
things like "score_total_time" and "total_time", since the user would have to
know what precisely it means to "have a value for an attribute", which I would
see as an implementation detail.
I would suggest that the text at the top should go away and be replaced with
some short text accompanying each individual table, ideally at the end, since
that's a bit more prominent than the top for long tables (since it's close to the
summary data). For total_time in a domain summary file, it could be something
like: "Only instances solved by all configurations are considered. Each table
entry gives the geometric mean (true?) of runtimes for that domain. The last row
gives the sum across all domains (true?)."
(Or whatever is the case -- I don't actually know, which is a sign that it
should be made clear. Also, I think it doesn't make much sense to use a
geometric mean to aggregate within a domain and then a sum to aggregate across
domains -- if that's what is currently done.)
|
| msg1687 (view) |
Author: jendrik |
Date: 2011-08-26.12:00:05 |
|
That's correct. It is due to the fact that we give an unsolved run a
score_total_time value of 0. So all instances are considered for this
attribute. Unsolved tasks, however, get total_time = None, so some
instances are filtered out in the domain reports. (How) do you want to
change that?
> It'd be very nice if each section contained a short explanation
> of what the values mean.
You find this at the top of the report:
If in a group of configs not all configs have a value for an attribute,
the concerning runs are not evaluated. However, for the attributes
search_error, validate_error, coverage we include all runs unconditionally.
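A rough sketch of the convention described above (hypothetical helper functions,
not the actual report code; 'score' stands for whatever time score a solved run earns):

def total_time_attr(run):
    # Unsolved runs get None, so the domain reports filter those
    # instances out when aggregating total_time.
    return run['total_time'] if run['solved'] else None

def score_total_time_attr(run, score):
    # Unsolved runs get a score of 0 rather than None, so every
    # instance is counted for score_total_time.
    return score if run['solved'] else 0.0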
|
| msg1686 (view) |
Author: malte |
Date: 2011-08-26.11:34:11 |
|
> massively coverage than the ones without.
massively *worse* coverage
> 3) I think that two significant digits is now enough for the score_...
is *not* enough
|
| msg1685 (view) |
Author: malte |
Date: 2011-08-26.11:25:24 |
|
OK, it looks like we shouldn't enable assertions -- in some
configurations they clearly cost a lot. Using -fomit-frame-pointer has
a clear (even if small) benefit, as can be seen most clearly from the
total_time values, and there is no real drawback, so it should remain
in.
Some more observations from the experimental results:
1) With BJOLP and SelMax, the configurations with assertions have
massively coverage than the ones without.
In SelMax, it looks like this is caused by the fact that we have 304 assertion
failures (152 each in the configs with and without the
-fomit-frame-pointer option). Erez, can you look into this? There are
two different assertions that commonly fail. Check
downward@turtur:~/jendrik/downward/new-scripts/exp-js-102/FAILURES to
see more about this. (Runs #3045 and #4485 are about LAMA-2011;
everything else is about SelMax.)
2) In BJOLP, there seem to be no assertion failures, but there is
still strange stuff going on, with coverage in PegSol going down from
27 to 2 when disabling assertions, and coverage in some other domains
going down to 0. Erez, can you also look into this?
3) I think that two significant digits is now enough for the score_...
data. There are many cases where total_time shows clear differences,
but score_total_time is identical. I suggest that we scale this value
by a factor of 100, like a percentage, as we do in our papers (e.g. 87
instead of 0.87) and keep two digits after the point (e.g. 86.89
instead of 87).
4) Given the minuscule difference in score_total_time, the sometimes
large differences in total_time are surprising (and in some cases vice
versa). I assume that score_total_time in the domain summaries covers
all instances, but total_time et al. only cover the commonly solved
ones? It'd be very nice if each section contained a short explanation
of what the values mean.
Once these points are resolved or moved to separate issues, this one
can be closed.
Thanks, Jendrik!
|
| msg1684 (view) |
Author: jendrik |
Date: 2011-08-25.23:50:50 |
|
Here are the results.
|
| msg1669 (view) |
Author: jendrik |
Date: 2011-08-16.18:10:03 |
|
Yes we can :) I just submitted an experiment.
|
| msg1638 (view) |
Author: malte |
Date: 2011-08-15.08:29:45 |
|
Interesting. -O1 seems to be just as good as -Os in terms of coverage if I
counted correctly, with -O2 and -O3 slightly worse. But the differences are very
small, and score_total_time doesn't give a very clear picture either.
Taking into account that our experiments are also usually a bit noisy, I'd be
inclined only to draw the conclusion that -O0 is worse than the others and trust
the gcc people that they know what they're doing, i.e. leave the option at -O3.
Can we also generate new data for -DNDEBUG and -fomit-frame-pointer (given -O3)?
|
| msg1636 (view) |
Author: jendrik |
Date: 2011-08-14.23:23:44 |
|
Here are the results, grouped by configuration. It seems that, at least for
coverage, -Os is best.
|
| msg1428 (view) |
Author: malte |
Date: 2011-07-26.20:34:05 |
|
Looks good and sufficient.
|
| msg1427 (view) |
Author: jendrik |
Date: 2011-07-26.20:13:50 |
|
I am preparing a new experiment for this. Which configs would you like to see
included? So far I have added the following:
seq-sat-lama-2011-unit
seq-sat-lama-2011-nonunit
seq-opt-fd-autotune
seq-opt-selmax
seq-opt-bjolp
seq-opt-lmcut
|
| msg1146 (view) |
Author: malte |
Date: 2011-01-11.20:08:53 |
|
I think IPC-style reports on coverage and runtime should be sufficient.
The more interesting question is which configurations to use, since the
different settings can have widely different effects in different configs. For
example, some heuristic might have much more expensive assertions than others,
or benefit from -O3 more than others.
Maybe best to leave this until after the IPC since we're still optimizing some
code, and then look at the configs we submitted to the IPC. These should be a
nice representative sample of the configs we care about.
|
| msg1145 (view) |
Author: jendrik |
Date: 2011-01-11.19:53:45 |
|
What kind of reports and which attributes do you need here?
|
| msg816 (view) |
Author: jendrik |
Date: 2010-12-10.18:45:03 |
|
I think you're right. There was this little error in the experiment
script. I fixed it and got:
11f4db589eba-nopoint_assert/src/search/Makefile:CCOPT_RELEASE = -O3 -fomit-frame-pointer
11f4db589eba-nopoint_noassert/src/search/Makefile:CCOPT_RELEASE = -O3 -DNDEBUG -fomit-frame-pointer
11f4db589eba-point_assert/src/search/Makefile:CCOPT_RELEASE = -O3
11f4db589eba-point_noassert/src/search/Makefile:CCOPT_RELEASE = -O3 -DNDEBUG
this should be correct, right? nopoint means "omit frame-pointer".
I'll run a new experiment soon.
|
| msg812 (view) |
Author: malte |
Date: 2010-12-10.17:46:10 |
|
> Never mind, I found it in issue102.py. The names are precisely opposite to what
> I would have thought. "nopoint" means that the frame pointer is used, and
> "noassert" means that assertions are used. :-)
Never mind that comment -- I was mistaking some of the Python code for option
settings, where in truth I now think they are replacements. So the names were as
I would have expected.
> Shouldn't the first and third line be different?
That looks like an actual problem, though. I think the reason is that this:
> for orig, replacement in replacements:
>     new_make = makefile.replace(orig, replacement)
should be:
for orig, replacement in replacements:
    new_make = new_make.replace(orig, replacement)
or else the first replacement gets clobbered if there are two replacements.
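For illustration, a minimal sketch of the corrected pattern (variable names follow
the snippet above; the surrounding function is hypothetical):

def apply_replacements(makefile, replacements):
    # Start from the original makefile text and apply all
    # (orig, replacement) pairs cumulatively, so that no replacement
    # clobbers an earlier one.
    new_make = makefile
    for orig, replacement in replacements:
        new_make = new_make.replace(orig, replacement)
    return new_make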
|
| msg810 (view) |
Author: malte |
Date: 2010-12-10.17:41:29 |
|
> For the assertion options, can you briefly say which name corresponds
> to which setting?
Never mind, I found it in issue102.py. The names are precisely opposite to what
I would have thought. "nopoint" means that the frame pointer is used, and
"noassert" means that assertions are used. :-)
But hmm... there seems to be something wrong with the experiments:
8d7338539166-nopoint_assert/src/search/Makefile:CCOPT_RELEASE = -O3 -fomit-frame-pointer
8d7338539166-nopoint_noassert/src/search/Makefile:CCOPT_RELEASE = -O3 -DNDEBUG -fomit-frame-pointer
8d7338539166-point_assert/src/search/Makefile:CCOPT_RELEASE = -O3 -fomit-frame-pointer
8d7338539166-point_noassert/src/search/Makefile:CCOPT_RELEASE = -O3 -DNDEBUG
Shouldn't the first and third line be different?
|
| msg809 (view) |
Author: malte |
Date: 2010-12-10.17:25:36 |
|
For what it's worth: I looked through all usages of NDEBUG and assert in the
code, and there seems to be nothing that is prohibitively expensive anywhere.
So if we're happy enough with A* + LM-cut results, I think we can enable
assertions by default.
|
| msg808 (view) |
Author: malte |
Date: 2010-12-10.16:46:45 |
|
For the assertion options, can you briefly say which name corresponds to which
setting?
|
| msg736 (view) |
Author: jendrik |
Date: 2010-11-16.18:33:31 |
|
I am also attaching the results for the assertion options.
|
| msg735 (view) |
Author: jendrik |
Date: 2010-11-16.18:33:04 |
|
It took longer than half an hour, but I ran some tests for the optimization
options again (memory measurement included).
|
| msg689 (view) |
Author: malte |
Date: 2010-11-03.18:53:48 |
|
Short summary: -O0 is (as expected) much worse than any of the optimizing
options, and the optimizing options are surprisingly close to each other, with
each of -O[123s] winning for at least one of the six configurations in terms of
score_total_time. It looks like there's no reason to move away from the current -O3.
Jendrik, how difficult would it be to repeat the experiments now that we also
dump the memory usage and give a report on that, too? (If it's more than half an
hour's work, we shouldn't do it.)
While we're looking at the compiler options: currently, the difference between
our release and debug target (apart from static linking) is that the release
target adds the "-DNDEBUG" and "-fomit-frame-pointer" options. Can you test the
impact of these, too? A report similar to issue102-reports-first25-mintime.tar
on the same tasks with the following options would be great:
* the current "make release" (= "make") options
* the current "make release" (= "make") options without -DNDEBUG
* the current "make release" (= "make") options without -fomit-frame-pointer
* the current "make release" (= "make") options without either -DNDEBUG and
-fomit-frame-pointer
|
| msg468 (view) |
Author: jendrik |
Date: 2010-08-18.16:30:31 |
|
OK, the times are now max(original_time, 0.1).
The geometric mean is used for the mean value inside a domain, and the last row is
the geometric mean over all problems for the configuration.
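A minimal sketch of this aggregation (hypothetical helper, not the actual report code):

import math

MIN_TIME = 0.1  # runtimes below this are mostly noise

def geometric_mean(times):
    # Clamp each runtime to at least MIN_TIME, then take the n-th root
    # of the product (computed via logs for numerical stability).
    clamped = [max(t, MIN_TIME) for t in times]
    return math.exp(sum(math.log(t) for t in clamped) / len(clamped))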
|
| msg466 (view) |
Author: malte |
Date: 2010-08-18.15:13:54 |
|
> Where did you want me to use the geometric mean? To get a mean value of the
> results IN one domain or a mean value of the summed/averaged results BETWEEN
> the domains (or both?)?
Wherever averages over things that would be plotted against a log scale are used
(most importantly for all averages over runtimes or expanded/evaluated/generated
nodes). Also for averages over all kinds of ratios that are not bounded (e.g.
speedup ratios), but I don't think we measure anything like that at the moment.
> At the moment I use the geometric mean for both in and between the domains
> for the attributes "search_time" and "total_time", but that gives 0 most of
> the time since the runtimes are often 0.0s.
If there are zeros involved, the geometric mean should be undefined, not zero,
but of course that's not better. :-)
I agree with Erez: the best solution for this is to use a certain minimum value
here, since times around 0 seconds are very noisy anyway. If we stay with the
current timer, I'd suggest a minimum value of at least 0.1 sec; with a more
accurate timer, we should use at least 0.01 sec.
|
| msg463 (view) |
Author: erez |
Date: 2010-08-18.08:29:08 |
|
Regarding the problem with 0 in the geometric mean, I suggest you add 0.01 to all
times. This would eliminate the problem, and 0.01 is the measurement error anyway.
Also, we can switch to using the ExactTimer class (which has the same interface
as Timer), but gives up to nano-second accuracy (depending on hardware of
course).
|
| msg462 (view) |
Author: jendrik |
Date: 2010-08-18.00:41:18 |
|
It took longer than expected, but I managed to only compare the problems
commonly solved by all configs. By default this is now done for the attributes
'expanded', 'generated', 'plan_length', 'search_time' and 'total_time'. I added
the attribute "solved" to the reports.
Where did you want me to use the geometric mean? To get a mean value of the
results IN one domain or a mean value of the summed/averaged results BETWEEN the
domains (or both?)?
At the moment I use the geometric mean for both in and between the domains for
the attributes "search_time" and "total_time", but that gives 0 most of the time
since the runtimes are often 0.0s.
The experiment was conducted for the first 25 problems of each STRIPS domain.
You find the results in "issue102-reports-first25.tar".
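A rough sketch of the common-solved filtering (hypothetical data layout, not the
actual script code):

def commonly_solved_problems(runs, configs):
    # runs maps (config, problem) to a run dict with a 'solved' flag.
    # Keep only the problems that every config solved.
    problems = set(problem for (_, problem) in runs)
    return [p for p in problems
            if all(runs[(c, p)]['solved'] for c in configs)]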
|
| msg460 (view) |
Author: malte |
Date: 2010-08-15.12:25:00 |
|
Looks much more reasonable indeed. :-)
Can I ask for a slightly different report? "Total time" and "Total search time"
summed over all solved instances are not meaningful because you get punished for
solving more instances. Instead, those two reports (and only those) should be
over the set of commonly solved instances within the five configurations that
are compared in that report (-O[0123s]).
Also, I'd much prefer the geometric mean (product of all runtimes to the power of
(1/number of instances)) for this, since it's more appropriate for things that
grow exponentially and less prone to outliers.
Finally, can you add a fifth report that just gives the number of tasks solved
in each domain?
|
| msg459 (view) |
Author: jendrik |
Date: 2010-08-13.13:14:26 |
|
This time I have taken into account only the first 10 problems of each STRIPS
domain. The results seem a lot more reasonable.
In the reports all instances are reported, not only the ones that have been
solved by all configurations.
|
| msg458 (view) |
Author: jendrik |
Date: 2010-08-13.02:29:23 |
|
Oh, oh... It's good we have someone here who didn't trust the result... The
planners were indeed not properly recompiled everytime. I have started a new
experiment and will post the results here once available.
This time the planners do not have the same size ;)
21904 downward-O0
16960 downward-O1
17624 downward-O2
18400 downward-O3
16256 downward-Os
|
| msg457 (view) |
Author: malte |
Date: 2010-08-12.18:14:01 |
|
I can't believe those results without seeing the details. :-)
Are you sure that the whole code was actually recompiled cleanly with the
different options? And that the computation of overall scores is correct? I get
very different results on my local machine. Of course it is possible that the
machines of the gkigrid have different properties, but such large differences in
behaviour would be highly unusual. Whenever I tried -O0 on any machine in any
context, it tended to suck. :-)
For example, if I test astar(blind()) and astar(lmcut()) on my own machine, I
get the following total runtimes:
logistics00:
--search "astar(blind())"
           -O0      -O1      -O2      -O3      -Os
#6-0      7.80s    3.41s    3.10s    3.05s    3.51s
#6-1      0.38s    0.16s    0.14s    0.13s    0.16s
#6-2      7.74s    3.34s    3.00s    2.97s    3.43s
#6-9      6.78s    2.95s    2.68s    2.65s    3.03s
--search "astar(lmcut())"
           -O0      -O1      -O2      -O3      -Os
#7-0     13.43s    2.91s    3.03s    2.83s    3.35s
#7-1    214.39s   46.16s   48.97s   45.45s   54.34s
#8-0      5.95s    1.28s    1.35s    1.26s    1.48s
#8-1     93.77s   19.97s   20.82s   19.41s   23.48s
#9-0     34.19s    7.37s    7.67s    7.07s    8.41s
#9-1      2.16s    0.48s    0.48s    0.45s    0.53s
#10-0   769.68s  173.95s  182.55s  168.52s  200.71s
#10-1   660.90s  149.39s  156.03s  145.53s  166.41s
Can you have a look at the detailed Logistics-2000 results for these
configurations from your experiment for comparison (and if the data is not
conclusive, send them around)?
Can you do an "ls -al" on the executables to see how they differ? (For example,
the -Os one should be the smallest in terms of file size.)
|
| msg456 (view) |
Author: erez |
Date: 2010-08-12.13:08:20 |
|
Wow, this is really surprising.
I went over the results and looked for numbers that were an order of magnitude
worse than the others, and here's what I found:
LAMA: freecell -Os
FF: trucks-strips -O1 -O2 -O3
I guess this means that the best option is to use -O0, since that's the only
configuration that never loses by a lot.
|
| msg455 (view) |
Author: jendrik |
Date: 2010-08-12.12:42:44 |
|
The first experiment has finished and I have made some reports (attached). Do
you see a "best" optimization parameter?
The experiment used the first 15 problems of every strips domain.
|
| msg446 (view) |
Author: malte |
Date: 2010-08-09.20:22:34 |
|
> I'll start a page with configuration examples on the wiki that explains the call
> syntax for those.
I've added some more examples to the existing call syntax example page instead:
http://alfons.informatik.uni-freiburg.de/downward/PlannerUsage#Examples
|
| msg442 (view) |
Author: erez |
Date: 2010-08-09.14:33:08 |
|
It might also be a good idea to check with "LAMA".
The configuration would be something like:
--heuristic "hff=ff()" --heuristic "hlm=lmcount()" --search
"iterated(lazy_wastar(hff,hlm,preferred=(hff,hlm),w=10),
lazy_wastar(hff,hlm,preferred=(hff,hlm),w=5),lazy_wastar(hff,hlm,preferred=(hff,
hlm),w=3),
lazy_wastar(hff,hlm,preferred=(hff,hlm),w=2),lazy_wastar(hff,hlm,preferred=(hff,
hlm),w=1),repeat_last=true)"
who loves unlimited string length?
|
| msg441 (view) |
Author: malte |
Date: 2010-08-09.14:29:30 |
|
PS: I think it's not necessary to use giant-sized problems in this test. In
particular, I'd only pick such examples that can be solved with -O3 within five
minutes (search component only).
|
| msg440 (view) |
Author: malte |
Date: 2010-08-09.14:28:33 |
|
Other compiler options/disabling assertions:
If you check the current Makefile, you'll see the line
CCOPT_RELEASE = -O3 -DNDEBUG -fomit-frame-pointer
Here, "-DNDEBUG" disables assertions (and some other debug code), and
-fomit-frame-pointer is the only "other option" we currently use. In the future,
we might also think of using some of the -mtune or -march options, but since
those are platform-specific, it may be best to steer away from them for now.
(Although I would be interested if the proper -march option is worth it on the
gkigrid.)
Planner configurations:
I'd be most interested in:
* A* with blind heuristic
* A* with merge-and-shrink (default parameters)
* A* with LM-cut
* lazy greedy BFS with h^cea
* lazy greedy BFS with h^FF
I'll start a page with configuration examples on the wiki that explains the call
syntax for those.
|
| msg436 (view) |
Author: jendrik |
Date: 2010-08-09.04:41:57 |
|
I have started working on this. The code can be found in new-scripts/issue102.py.
Some questions:
* Which planner configurations should be tested? How do I write them in the new
syntax?
* Which other compiler options do you mean?
* How do you turn assertions on and off?
|
| msg392 (view) |
Author: malte |
Date: 2010-08-04.05:24:57 |
|
It would be good to get at least a little clue about the influence of the
various gcc options to control optimization. My limited experience with the
planner tells me that -O3 is much better than no optimization (-O0), but there
is more we should know. With other search code, I remember cases where -O2 was
actually faster than -O3, so it'd definitely be good to have some data for -O0
vs. -O2 vs. -O3 (and maybe throw in -O1 and -Os for good measure).
Also, there are a bunch of additional compiler options we use (some of them set
differently in debug and release builds) whose performance impact should be tested.
Also, I'd like to know how expensive it would be to enable assertions in (at
least some) release builds since some of them are definitely very helpful.
Maybe the comparative experiments code that Jendrik has been working on could be
used for this purpose?
|
| Date | User | Action | Args |
| 2011-09-01 13:31:31 | malte | set | status: chatting -> resolved; messages: + msg1725 |
| 2011-09-01 13:30:03 | malte | set | status: resolved -> chatting; messages: + msg1724 |
| 2011-09-01 13:02:04 | jendrik | set | status: in-progress -> resolved; messages: + msg1723 |
| 2011-09-01 12:35:12 | malte | set | assignedto: malte -> jendrik; messages: + msg1719 |
| 2011-09-01 11:36:19 | erez | set | assignedto: erez -> malte; messages: + msg1716 |
| 2011-09-01 10:50:35 | malte | set | assignedto: erez |
| 2011-09-01 02:50:03 | jendrik | set | messages: + msg1714 |
| 2011-09-01 02:48:25 | jendrik | set | assignedto: jendrik -> (no value); messages: + msg1713 |
| 2011-08-26 12:11:17 | malte | set | messages: + msg1688 |
| 2011-08-26 12:00:05 | jendrik | set | files: + unnamed; messages: + msg1687 |
| 2011-08-26 11:34:11 | malte | set | messages: + msg1686 |
| 2011-08-26 11:25:25 | malte | set | messages: + msg1685 |
| 2011-08-25 23:50:50 | jendrik | set | files: + issue102-asserts-framepointer.tar.gz; messages: + msg1684 |
| 2011-08-16 18:10:03 | jendrik | set | messages: + msg1669 |
| 2011-08-15 08:29:46 | malte | set | messages: + msg1638 |
| 2011-08-14 23:23:44 | jendrik | set | files: + issue102-configs.tar.gz; messages: + msg1636 |
| 2011-07-26 20:34:05 | malte | set | messages: + msg1428 |
| 2011-07-26 20:13:51 | jendrik | set | messages: + msg1427 |
| 2011-01-11 20:08:54 | malte | set | messages: + msg1146 |
| 2011-01-11 19:53:46 | jendrik | set | messages: + msg1145 |
| 2010-12-10 18:45:03 | jendrik | set | messages: + msg816 |
| 2010-12-10 17:46:10 | malte | set | messages: + msg812 |
| 2010-12-10 17:41:29 | malte | set | messages: + msg810 |
| 2010-12-10 17:25:36 | malte | set | messages: + msg809 |
| 2010-12-10 16:46:45 | malte | set | messages: + msg808 |
| 2010-11-16 18:33:31 | jendrik | set | files: + issue102assertouSTRIPSeval-d-abs.html; messages: + msg736 |
| 2010-11-16 18:33:04 | jendrik | set | files: + issue102optouSTRIPSeval-d-abs.html; messages: + msg735 |
| 2010-11-03 18:53:48 | malte | set | messages: + msg689 |
| 2010-08-18 16:30:31 | jendrik | set | files: + issue102-reports-first25-mintime.tar; messages: + msg468 |
| 2010-08-18 15:13:54 | malte | set | messages: + msg466 |
| 2010-08-18 08:29:08 | erez | set | messages: + msg463 |
| 2010-08-18 00:41:18 | jendrik | set | files: + issue102-reports-first25.tar; messages: + msg462 |
| 2010-08-15 12:25:00 | malte | set | messages: + msg460 |
| 2010-08-13 13:14:26 | jendrik | set | files: + issue102-reports-new.tar; messages: + msg459 |
| 2010-08-13 02:29:23 | jendrik | set | messages: + msg458 |
| 2010-08-12 18:14:01 | malte | set | messages: + msg457 |
| 2010-08-12 13:08:20 | erez | set | messages: + msg456 |
| 2010-08-12 12:42:44 | jendrik | set | files: + issue102-reports.tar; messages: + msg455 |
| 2010-08-09 20:22:34 | malte | set | messages: + msg446 |
| 2010-08-09 14:33:08 | erez | set | messages: + msg442 |
| 2010-08-09 14:29:30 | malte | set | messages: + msg441 |
| 2010-08-09 14:28:33 | malte | set | messages: + msg440 |
| 2010-08-09 04:41:57 | jendrik | set | status: chatting -> in-progress; assignedto: jendrik; messages: + msg436 |
| 2010-08-04 05:24:57 | malte | create | |