msg5421 (view) |
Author: florian |
Date: 2016-06-03.13:07:21 |
|
Great. Merged and pushed. Thanks.
|
msg5420 (view) |
Author: silvan |
Date: 2016-06-03.12:08:57 |
|
As I said to Florian already: Performance seems ok to me.
|
msg5418 (view) |
Author: malte |
Date: 2016-06-02.14:01:28 |
|
Looks fine to me.
|
msg5416 (view) |
Author: florian |
Date: 2016-06-02.10:20:12 |
|
I ran the experiments for the satisficing configs.
Again, memory is almost unaffected:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_sat_memory_lazy-greedy-ff.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_sat_memory_lama-first.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_sat_memory_eager_greedy_cg.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_sat_memory_eager_greedy_cea.png
Time is even noisier than for the optimal configs, but I don't see a clear trend:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_sat_total_time_lazy-greedy-ff.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_sat_total_time_lama-first.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_sat_total_time_eager_greedy_cg.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_sat_total_time_eager_greedy_cea.png
Full Report:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v5-sat-issue627-v3-base-issue627-v5-compare.html
For easier reference, here are the links to reports for the optimal configs
(from msg5379):
Full report:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v5-issue627-v3-base-issue627-v5-compare.html
Memory effects look like random noise (< 1%) for regular configs:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-lmcut.png
Total time is slower, usually by less than 10%, but with a few cases as high as
30% in blind search:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-lmcut.png
We still get the positive effect for CEGAR for both time and memory (positive
and negative effects for cegar-original):
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-cegar-lm-goals.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-cegar-lm-goals.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-cegar-original.png
|
msg5415 (view) |
Author: malte |
Date: 2016-06-01.18:52:50 |
|
Hmmm, sorry to be a bother, but given that there is a performance penalty across
the board, can we add a quick evaluation on the standard satisficing benchmark
set and some standard satisficing configs?
At a minimum, I'd like to see lazy_greedy with the FF heuristic and preferred
operators, and the lama-first alias. Perhaps it would be useful to document
somewhere the minimal set of configs and corresponding benchmarks that we
should always test against when changing things that affect the performance of
core data structures etc.
In more positive news, I'm not bothered by the numbers in the report Florian
linked: if the results are not worse in the satisficing case, I think this can
be merged.
|
msg5414 (view) |
Author: malte |
Date: 2016-06-01.18:47:40 |
|
BTW, Silvan recently said that issues with global impact should be
shared/discussed with everyone before merging. I think this issue is an example
of this, and I think his comment is fair enough: people decide whether or not to
follow an issue based on the original title and description, and from the title
"Do not derive CartesianHeuristic from Heuristic", it is not clear that this
will have a significant global performance impact.
|
msg5413 (view) |
Author: malte |
Date: 2016-06-01.18:44:49 |
|
I want a report that allows evaluating the size of "the performance penalty"
that you mentioned in msg5410. You know better than me which revisions need to
be compared for this. :-)
|
msg5412 (view) |
Author: florian |
Date: 2016-06-01.18:29:37 |
|
The report in msg5379 compares the latest version (v5) to the last merged
revision from default (v3-base):
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v5-issue627-v3-base-issue627-v5-compare.html
Did you want a report comparing different revisions?
|
msg5411 (view) |
Author: malte |
Date: 2016-06-01.18:25:45 |
|
As far as I understand, we only have the PNG plots for the latest version. Can
you also link an HTML evaluation?
|
msg5410 (view) |
Author: florian |
Date: 2016-06-01.18:23:29 |
|
Should we merge this and accept the performance penalty? Malte, are you happy
with the in-place modification of the values?
|
msg5379 (view) |
Author: florian |
Date: 2016-05-20.10:17:55 |
|
We changed the template method so that it updates the values in place. This
avoids an unnecessary copy that caused the CEGAR code to become twice as
slow, and it also reduces code complexity a bit.
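As a sketch (the class and method names here are assumptions, not the exact
code, and the doubling subclass is invented for illustration), the in-place
variant passes the vector by reference and mutates it, so the common case
"the ancestor is this task" copies nothing at all:

```cpp
#include <cassert>
#include <utility>
#include <vector>

using namespace std;

// Hypothetical sketch of the in-place interface: instead of returning a
// fresh vector (and therefore copying even when no conversion is needed),
// the values are updated in place.
class Task {
public:
    explicit Task(const Task *parent = nullptr) : parent(parent) {}
    virtual ~Task() = default;

    void convert_ancestor_state_values(vector<int> &values,
                                       const Task *ancestor_task) const {
        if (ancestor_task == this)
            return;  // Nothing to do, and crucially: nothing is copied.
        assert(parent);  // A root task must be its own ancestor here.
        parent->convert_ancestor_state_values(values, ancestor_task);
        convert_state_values_from_parent(values);
    }

protected:
    // Hook for subclasses; the default is the identity conversion.
    virtual void convert_state_values_from_parent(vector<int> &) const {}

private:
    const Task *parent;
};

// Invented example subclass: doubles each value of the parent state.
class DoublingTask : public Task {
public:
    using Task::Task;

protected:
    void convert_state_values_from_parent(vector<int> &values) const override {
        for (int &value : values)
            value *= 2;
    }
};
```

Converting root values through a DoublingTask doubles them once in place;
converting values that already belong to the queried task leaves them
untouched without allocating anything.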
Here are the reports comparing v3-base (the latest revision on default) to v5
(the latest revision on the branch). This shows what we would get if we merged
now.
Memory effects look like random noise (< 1%) for regular configs:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-lmcut.png
Total time is slower, usually by less than 10%, but with a few cases as high as
30% in blind search:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-lmcut.png
We still get the positive effect for CEGAR for both time and memory (positive
and negative effects for cegar-original):
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-cegar-lm-goals.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_memory_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-cegar-lm-goals.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v5_total_time_astar-cegar-original.png
Full report:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v5-issue627-v3-base-issue627-v5-compare.html
I also compared v3 (the version without the template method) to v5 (the new
template method). Memory is almost unaffected by this. Time is noisy, usually
within ±15%, with what looks like a slight negative trend:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_v3_v5_total_time_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_v3_v5_total_time_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_v3_v5_total_time_astar-lmcut.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_v3_v5_total_time_astar-cegar-lm-goals.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_v3_v5_total_time_astar-cegar-original.png
|
msg5369 (view) |
Author: florian |
Date: 2016-05-13.14:07:16 |
|
Compared to the version without the template method, memory was mostly
unaffected. Time is much noisier, but for most configs the average stayed more
or less the same and most changes are within ±7%. See A*/blind for an example:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_v3_v4_total_time_astar-blind.png
Unfortunately, the one config that uses state translation a lot became up to
two times slower:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_v3_v4_total_time_astar-cegar-lm-goals.png
I guess this means I introduced an unnecessary copy somewhere.
|
msg5364 (view) |
Author: malte |
Date: 2016-05-12.16:31:46 |
|
Can you summarize the result for us?
|
msg5363 (view) |
Author: florian |
Date: 2016-05-12.16:30:52 |
|
And here are the reports with the template method implementation:
Memory:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_memory_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_memory_astar-lmcut.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_memory_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_memory_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_memory_astar-cegar-lm-goals.png
Total Time
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_total_time_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_total_time_astar-lmcut.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_total_time_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_total_time_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v4_total_time_astar-cegar-lm-goals.png
Full Report
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v4-issue627-v3-base-issue627-v4-compare.html
|
msg5351 (view) |
Author: florian |
Date: 2016-05-12.11:02:10 |
|
Reports for the repeated experiment with the correct baseline and
settings for CEGAR that make it more predictable. This does not yet
include the template method we just discussed (experiments for that are
still running).
Memory is now affected almost exclusively in the CEGAR configs, as we hoped:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_memory_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_memory_astar-lmcut.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_memory_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_memory_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_memory_astar-cegar-lm-goals.png
Total Time
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_total_time_astar-blind.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_total_time_astar-lmcut.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_total_time_astar-ipdb.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_total_time_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v3_total_time_astar-cegar-lm-goals.png
Full Report
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v3-issue627-v3-base-issue627-v3-compare.html
|
msg5343 (view) |
Author: malte |
Date: 2016-05-11.18:59:50 |
|
The second. (Only in that one is one of the virtual methods introduced by
DelegatingTask.)
|
msg5342 (view) |
Author: florian |
Date: 2016-05-11.18:28:38 |
|
Sorry, I'm still not sure what you mean.
Did you mean this:
    vector<int> AbstractTask::convert_state_values(...) const {
        if (ancestor_task == this)
            return ancestor_state_values;
        return do_conversion(ancestor_state_values);
    }

    vector<int> DelegatingTask::do_conversion(...) const {
        vector<int> parent_state_values =
            parent->convert_state_values(ancestor_state_values);
        return do_conversion_from_parent(parent_state_values);
    }

    vector<int> RootTask::do_conversion(...) const {
        ABORT("...");
    }

or this:

    // Pure virtual in AbstractTask:
    virtual vector<int> convert_state_values(...) const = 0;

    vector<int> DelegatingTask::convert_state_values(...) const {
        if (ancestor_task == this)
            return ancestor_state_values;
        vector<int> parent_state_values =
            parent->convert_state_values(ancestor_state_values);
        return do_conversion_from_parent(parent_state_values);
    }

    vector<int> RootTask::convert_state_values(...) const {
        if (ancestor_task == this)
            return ancestor_state_values;
        ABORT("...");
    }
Both have two virtual methods, one of which is introduced by DelegatingTask.
The second one violates DRY (but only once, for three lines in RootTask); the
first one has three involved methods and a more complex structure. I'm not sure
which one is worse.
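For concreteness, the second option can be turned into a compilable toy
example. The class and method names follow the sketches above; ShiftTask and
its shifting logic are invented purely for illustration:

```cpp
#include <cassert>
#include <utility>
#include <vector>

using namespace std;

// Second variant: the conversion entry point is pure virtual in
// AbstractTask, and DelegatingTask introduces do_conversion_from_parent().
class AbstractTask {
public:
    virtual ~AbstractTask() = default;
    virtual vector<int> convert_state_values(
        const AbstractTask *ancestor_task,
        vector<int> ancestor_state_values) const = 0;
};

class RootTask : public AbstractTask {
public:
    vector<int> convert_state_values(
        const AbstractTask *ancestor_task,
        vector<int> ancestor_state_values) const override {
        if (ancestor_task == this)
            return ancestor_state_values;
        assert(false);  // The ancestor is not on the delegation chain.
        return {};
    }
};

class DelegatingTask : public AbstractTask {
    const AbstractTask *parent;
public:
    explicit DelegatingTask(const AbstractTask *parent) : parent(parent) {}

    vector<int> convert_state_values(
        const AbstractTask *ancestor_task,
        vector<int> ancestor_state_values) const override {
        if (ancestor_task == this)
            return ancestor_state_values;
        vector<int> parent_state_values = parent->convert_state_values(
            ancestor_task, move(ancestor_state_values));
        return do_conversion_from_parent(move(parent_state_values));
    }

protected:
    // The one virtual method introduced by DelegatingTask.
    virtual vector<int> do_conversion_from_parent(
        vector<int> parent_state_values) const = 0;
};

// Invented example task: adds 1 to every value of the parent state.
class ShiftTask : public DelegatingTask {
public:
    using DelegatingTask::DelegatingTask;

protected:
    vector<int> do_conversion_from_parent(
        vector<int> values) const override {
        for (int &value : values)
            ++value;
        return values;
    }
};
```

With a chain RootTask -> ShiftTask -> ShiftTask, converting root values
applies the shift twice, and converting values that already belong to the
queried task returns them unchanged.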
|
msg5341 (view) |
Author: malte |
Date: 2016-05-11.18:04:59 |
|
> As an alternative, we could only put part A into the template method, repeating
> part B in every delegating task.
That's not a good alternative for DRY. We expect one root task and many
non-root tasks.
Having two virtual methods, one of them introduced by DelegatingTask, seems to
be the right thing to me.
|
msg5340 (view) |
Author: florian |
Date: 2016-05-11.18:02:03 |
|
As we discussed, I wanted to experiment with the template method pattern to
avoid boilerplate code. Malte already wrote a stub for this:

    vector<int> AbstractTask::convert_state_values(...) const {
        // Part A
        if (ancestor_task == this)
            return ancestor_state_values;
        // Part B
        vector<int> parent_state_values =
            parent->convert_state_values(ancestor_state_values);
        return do_conversion_from_parent(parent_state_values);
    }
Part A is common to all tasks, but part B only occurs in delegating tasks
(AbstractTask and RootTask have no member "parent").
Should we put the template method in DelegatingTask instead of AbstractTask?
This would still duplicate part A once (for RootTask). The disadvantage would
be that convert_state_values would still have to be virtual so that it can have
a different implementation for RootTask.
As an alternative, we could put only part A into the template method, repeating
part B in every delegating task.
I guess we could also use a template method in AbstractTask that calls another
template method, but this seems too complicated.
|
msg5335 (view) |
Author: florian |
Date: 2016-05-11.13:49:36 |
|
This may be caused by the wrong base revision. I'll re-run the experiment with
the correct one to check this. Jendrik also said to use max_states=10000 to make
the CEGAR configs less noisy.
|
msg5333 (view) |
Author: malte |
Date: 2016-05-11.13:43:00 |
|
Why does LM-Cut become faster? Random noise? Unrelated differences in the code?
We should also test non-optimal configurations. If the numbers (for the right
revisions) are similar to the way they are for the optimal ones, I'm not worried
about merging this. I hope we can repair the performance impact later.
|
msg5332 (view) |
Author: florian |
Date: 2016-05-11.12:33:54 |
|
Thanks. I ran an experiment after the change we made in the meeting. I noticed
too late that the base revision is still the one where the branch started, not
the latest commit on the default branch that was merged into the branch. So if
there were significant changes on the default branch in the meantime, they
might show up in the experiment. I suggest I run a larger experiment before we
merge, but for now, this should be sufficient (we wanted to experiment with a
different version before merging).
Memory seems to be affected only by a constant (maybe the code size changed).
Most memory plots look like this:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v2_memory_astar-blind.png
The memory plots for CEGAR are the exception to this but they look good as well:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v2_memory_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v2_memory_astar-cegar-lm-goals.png
Total time is affected negatively, probably because of the additional overhead
when translating states from the search to states for the heuristic. With blind
search, this overhead is below 20% and in most cases below 10%:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v2_total_time_astar-blind.png
For LM-cut and iPDB, we get the same effect.
Again CEGAR is the exception:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v2_total_time_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v2_total_time_astar-cegar-lm-goals.png
Full report:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v2-issue627-base-issue627-v2-compare.html
|
msg5280 (view) |
Author: malte |
Date: 2016-04-25.18:07:46 |
|
I made some comments.
|
msg5278 (view) |
Author: florian |
Date: 2016-04-25.15:59:16 |
|
I updated the pull request (merging in the current default branch):
https://bitbucket.org/flogo/downward-issues/pull-requests/18/issue627/diff
Malte, do you have time to look at the non-CEGAR parts of this change?
|
msg5175 (view) |
Author: florian |
Date: 2016-02-09.14:41:46 |
|
Profiling showed that the additional benefit we get in the other config comes
from not using the evaluation context. I'm not quite sure why this has so much
more effect on the config with subtasks than on the config without subtasks.
The heuristic lookup within the evaluation context plays some role, but in the
profile it was responsible for only 15% of the time.
Jendrik and I thought about where copies are created when transforming State
objects from one task to another. We are not quite sure if the current code
avoids all unnecessary copies, but it seems to work OK. We think that we can
leave optimizing this for a later issue.
Jendrik, you also wanted to see the full report of the noise experiment:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v1-noise-issue627-base-4ed2abfab4ba-compare.html
|
msg5173 (view) |
Author: florian |
Date: 2016-02-09.14:25:03 |
|
It seems like the effect we saw in the "original" config is just regular noise.
I ran an experiment comparing two identical revisions that showed a similar
distribution. The config names are different, because we needed different
revisions for lab, but the source code is identical in both revisions (the
commits are before and after starting the branch).
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_total_time_astar-cegar-original-10000.png
|
msg5170 (view) |
Author: florian |
Date: 2016-02-09.01:49:15 |
|
Similar results for the variant with a limit on abstract states.
Report
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v1-limit-issue627-base-issue627-v1-compare.html
Original
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v1_memory_astar-cegar-original-10000.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v1_total_time_astar-cegar-original-10000.png
LM-Goals
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v1_memory_astar-cegar-lm-goals-10000.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v1_total_time_astar-cegar-lm-goals-10000.png
|
msg5146 (view) |
Author: jendrik |
Date: 2016-01-22.17:33:46 |
|
As discussed offline, I'd suggest to rerun the experiment for the following, more
deterministic, configurations:
"astar(cegar(subtasks=[original()],max_states=10000,max_time=infinity))"
"astar(cegar(subtasks=[landmarks(),goals()],max_states=10000,max_time=infinity))"
|
msg5134 (view) |
Author: florian |
Date: 2016-01-22.08:34:30 |
|
Running the experiments gave confusing results:
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627-v1-issue627-base-issue627-v1-compare.html
For cegar-original things varied a lot more than I thought they would. For
cegar-lm-goals time and memory got better leading to a coverage increase of 22.
Also, the heuristic accuracy (expansions_until_last_jump) seems to be affected,
and I have no idea why. This happens in some domains for cegar-lm-goals and in
almost all domains for cegar-original, and seems to be worse more often than
it is better.
Original
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v1_memory_astar-cegar-original.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v1_total_time_astar-cegar-original.png
LM-Goals
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v1_memory_astar-cegar-lm-goals.png
http://ai.cs.unibas.ch/_tmp_files/pommeren/issue627_base_v1_total_time_astar-cegar-lm-goals.png
|
msg5118 (view) |
Author: florian |
Date: 2016-01-19.23:00:45 |
|
This turned out to be more complicated than we originally thought. The local
task within AdditiveCartesianHeuristic has to be translated to a local task of
the heuristic function.
We didn't have a method for this yet, so I tried to add one. I tried to avoid
copies where possible, but I'm not quite sure if it works this way. Jendrik, can
you have another look, please?
|
msg5114 (view) |
Author: jendrik |
Date: 2016-01-19.11:31:24 |
|
Let's just call it "refinement_hierarchies" for now.
|
msg5113 (view) |
Author: florian |
Date: 2016-01-19.11:26:27 |
|
Thanks. I'll run the experiments. There is still the issue of finding a good
name for the vector<RefinementHierarchy>. I don't think "heuristics" is a good
name for it.
|
msg5112 (view) |
Author: jendrik |
Date: 2016-01-19.11:15:10 |
|
I left two comments on bitbucket, the code looks good. I think
CartesianHeuristic can be removed altogether.
It would be good to know how this influences the evaluation speed. Could you
please run an experiment with the configs "astar(cegar(subtasks=[original()]))"
and "astar(cegar(subtasks=[landmarks(),goals()]))"?
|
msg5105 (view) |
Author: florian |
Date: 2016-01-18.19:54:54 |
|
I started working on this:
https://bitbucket.org/flogo/downward-issues/pull-requests/18/issue627/diff
Jendrik, can you have a look and maybe suggest a good new name for the thing
that is no longer a heuristic now? Would you prefer to remove the wrapper
completely and use RefinementHierarchy directly?
|
msg5099 (view) |
Author: florian |
Date: 2016-01-16.10:58:35 |
|
CartesianHeuristic is used in AdditiveCartesianHeuristic as a component. It
doesn't have its own plugin, so it cannot be used as a stand-alone heuristic.
Having nested Heuristic classes means that we need to create EvaluationContexts
and use global states to evaluate them. This is currently problematic for
issue416, where we try to get rid of g_initial_state among other things.
I propose to change the class to just do the computation (like PDBs do for PDB
heuristics) and take a State object instead of a GlobalState. All that would
remain is a wrapper around RefinementHierarchy, so we could even use this class
directly.
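As a rough sketch of this proposal (the class name, constructor, and lookup
logic here are all hypothetical; the real code would wrap a
RefinementHierarchy), the class would stop being a Heuristic and simply map
state values to a heuristic value, much like a PDB does:

```cpp
#include <cassert>
#include <utility>
#include <vector>

using namespace std;

// Hypothetical sketch: a plain value function instead of a nested
// Heuristic, so evaluating it needs no EvaluationContext or GlobalState.
// The table lookup below stands in for the real hierarchy traversal.
class CartesianValueFunction {
    vector<int> goal_distances;  // invented stand-in data
public:
    explicit CartesianValueFunction(vector<int> distances)
        : goal_distances(move(distances)) {}

    // Takes unpacked state values directly (a State in the real code).
    int get_value(const vector<int> &state_values) const {
        // Toy rule: the first state value indexes an abstract state.
        return goal_distances[state_values[0]];
    }
};
```

A wrapping heuristic (such as AdditiveCartesianHeuristic) could then hold a
vector of these objects and sum their values, with no evaluation machinery in
between.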
|
|
Date | User | Action | Args
2016-06-03 13:07:21 | florian | set | status: chatting -> resolved; messages: + msg5421
2016-06-03 12:08:57 | silvan | set | nosy: + silvan; messages: + msg5420
2016-06-02 14:01:28 | malte | set | messages: + msg5418
2016-06-02 10:20:12 | florian | set | messages: + msg5416
2016-06-01 18:52:50 | malte | set | messages: + msg5415
2016-06-01 18:47:40 | malte | set | messages: + msg5414
2016-06-01 18:44:49 | malte | set | messages: + msg5413
2016-06-01 18:29:37 | florian | set | messages: + msg5412
2016-06-01 18:25:45 | malte | set | messages: + msg5411
2016-06-01 18:23:29 | florian | set | messages: + msg5410
2016-05-20 10:17:55 | florian | set | messages: + msg5379
2016-05-13 14:07:16 | florian | set | messages: + msg5369
2016-05-12 16:31:46 | malte | set | messages: + msg5364
2016-05-12 16:30:52 | florian | set | messages: + msg5363
2016-05-12 11:02:10 | florian | set | messages: + msg5351
2016-05-11 18:59:50 | malte | set | messages: + msg5343
2016-05-11 18:28:38 | florian | set | messages: + msg5342
2016-05-11 18:04:59 | malte | set | messages: + msg5341
2016-05-11 18:02:04 | florian | set | messages: + msg5340
2016-05-11 13:49:36 | florian | set | messages: + msg5335
2016-05-11 13:43:01 | malte | set | messages: + msg5333
2016-05-11 12:33:54 | florian | set | messages: + msg5332
2016-04-25 18:07:46 | malte | set | messages: + msg5280
2016-04-25 15:59:16 | florian | set | messages: + msg5278
2016-02-09 14:41:46 | florian | set | messages: + msg5175
2016-02-09 14:25:03 | florian | set | messages: + msg5173
2016-02-09 01:49:15 | florian | set | messages: + msg5170
2016-01-22 17:33:46 | jendrik | set | messages: + msg5146
2016-01-22 08:34:30 | florian | set | messages: + msg5134
2016-01-19 23:00:45 | florian | set | messages: + msg5118
2016-01-19 11:31:24 | jendrik | set | messages: + msg5114
2016-01-19 11:26:27 | florian | set | messages: + msg5113
2016-01-19 11:15:10 | jendrik | set | messages: + msg5112
2016-01-18 19:54:54 | florian | set | status: unread -> chatting; assignedto: florian; messages: + msg5105
2016-01-16 10:58:35 | florian | create |