|
>> it doesn't make sense that when you delete an item, its siblings should be affected
i totally agree.
what should happen in the implementation of DeleteItem() is that the sibling of the item being deleted should be set to NULL _before_ deleting it.
the reason i did it this way was purely that it struck me as an elegant solution to cleaning up an item. because of the way i solved having items of the same tag (a linked list of siblings), deletion of an item must always be done using DeleteItem() in order to keep the linked list valid. as a result i felt that having an item delete its sibling in its ~tor would be acceptable. unless of course there was a bug!
the reason it has not yet caused a problem is that DeleteItem() is only called once in CToDoCtrl::AddItem() and the item that is being deleted is always the last one on the sibling chain. how's that for luck!
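to illustrate, here's a simplified sketch of the 'detach before delete' idea (hypothetical types, not the actual CXmlItem code):

```cpp
#include <cstddef>

// hypothetical, simplified types - not the actual CXmlItem code
struct Item
{
    Item* pSibling;

    Item() : pSibling(NULL) {}
    ~Item() { delete pSibling; } // the ~tor still deletes the rest of the chain
};

// remove just 'pVictim' from the chain headed by 'pHead', leaving its
// siblings alive; returns the (possibly new) head of the chain
Item* DeleteFromChain(Item* pHead, Item* pVictim)
{
    if (pHead == pVictim)
    {
        Item* pNext = pVictim->pSibling;
        pVictim->pSibling = NULL; // detach _before_ deleting
        delete pVictim;
        return pNext;
    }

    Item* pPrev = pHead;

    while (pPrev->pSibling != pVictim)
        pPrev = pPrev->pSibling;

    pPrev->pSibling = pVictim->pSibling; // unlink from the list
    pVictim->pSibling = NULL;            // detach _before_ deleting
    delete pVictim;
    return pHead;
}
```

the point being that once the sibling pointer is NULL, the ~tor's recursive delete can't reach the nodes that should survive.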
thank you for bringing this to my attention.
rgds
.dan.g.
AbstractSpoon Software
|
I still don't think the ~tor should delete the siblings. With a definition like "deleting an item will also delete the siblings after it", it's hard to write new methods for this class. Similarly, I don't think that when adding an item (AddItem), its siblings (again, a subset of the siblings) should be added as well.
I basically added two methods: Flatten and UnFlatten, which are the reverse of one another. They convert between the tree and a flat list (a PARENT attribute is added to each flat item to be able to go back to the tree). The flat list is always sorted by ID, thus making the modification of a task as local as it can get in terms of the xml file. This should make it source-control-friendly, which is why I'm going through all this trouble in the first place. (Flatten is called before saving; UnFlatten is called after loading.)
Anyway, to make a long story short, I had to rewrite the first version after I learned the true class behavior (which I think is unnatural). My fix to DeleteItem() keeps the same behavior:
replacing:
if (!pXIPrevSibling)
{
    if (!pNextSibling)
        m_mapItems.RemoveKey(sName);
    else
        m_mapItems[sName] = pNextSibling;
}
with:
if (!pXIPrevSibling)
{
    m_mapItems.RemoveKey(sName);
}
The new methods seem to work fine and I can save and load. One thing I still need to get rid of is the POS attribute. Can I safely comment out:
AddAttributeToItem(pXIItem, TDL_TASKPOS, nPos, TDCC_POSITION, filter);
in ToDoCtrl.cpp?
What is POS for anyway?
|
'POS' is used when exporting so that the item positions will be preserved if only a subset of the task list is being exported. this might not be something that affects you.
its a very interesting idea that you're investigating and it prompts a number of thoughts (some of which you have obviously already had):
1. ordering by ID is definitely the key because a task's ID won't change.
2. it may not be necessary to flatten the tree to solve the problem though. sorting only occurs within a branch, as we know. so maybe the 'POS' element can store the underlying order much as it does at present, and so long as each branch is sorted by ID, the saved structure will hardly change even after a sort.
ie pseudo(ish)-code:
CXmlFileEx file(..);
AddChildren(.., file.GetRoot(), ..);
SortSubTasks(file.GetRoot());
:
file.Save(..);
where SortSubTasks might look like this:
void SortSubTasks(CXmlItem* pXI)
{
    CXmlItem* pXIChild = pXI->GetItem(TDL_TASK);

    if (!pXIChild)
        return;

    CPtrArray aSubTasks;

    while (pXIChild)
    {
        SortSubTasks(pXIChild); // sort each branch depth-first
        aSubTasks.Add(pXIChild);
        pXIChild = pXIChild->GetSibling();
    }

    SortTasks(aSubTasks); // then sort this level by ID
}
then when the file is reloaded, the items can be repositioned according to their 'POS' attribute.
3. if i were to drag a task to a different position without changing its parent would that change be recognized when i next load the tasklist (in your implementation)?
rgds
.dan.g.
AbstractSpoon Software
|
The following scenario should demonstrate why 'POS' is problematic. Let's say we have one task A with three subtasks a, b, c (in that order). Users X and Y both check out the xml file via, say, CVS. User X clicks on the "Due" tab, which reorders the display to b, c, a. Meanwhile, user Y changes the priority of task a. Now X checks his copy in. Then Y tries to do the same, but now there is a conflict (two changes to a). Intuitively, this shouldn't have produced a conflict, because user X didn't really change the data; only Y did.
I understand it is sometimes hard to draw a line between what is data and what is a viewing attribute of the data (like a sort order). I would like to think of POS as a viewing attribute, as each user may want to view the tasks in any order they like. This would mean that 'POS' should not even be in the xml file.
In answer to your question in (3), dragging a task to a different position would have no effect on the saved file, so it would not be recognized when you next load the tasklist. This is the desired behavior for me, because dragging a task is really no different from sorting tasks.
I considered not flattening; but flattening has an advantage. Imagine having a large task T (with many subtasks that are active). If someone decides to move T down one level, this large chunk of subtasks will have to physically move from one place to another in the xml file. Any other users who may have changed an attribute in any of these tasks in the meantime will get a conflict when they try to check in. With flattening, the effect on the xml file of the "move one level down" operation is local to task T (a change in its PARENT attribute).
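To illustrate the locality argument (with made-up types, not the actual code): in the flat format each task is a record whose PARENT attribute encodes the tree, so re-parenting a subtree changes exactly one record:

```cpp
#include <map>

// illustrative sketch - field names are made up. In the flattened format each
// task is a flat record whose PARENT attribute encodes the tree, and the
// records are saved in ID order.
struct FlatTask
{
    int parentId;
};

// "move one level down": only the moved task's own record changes; its
// subtasks still point at the same ID, so their lines in the file are untouched
void MoveSubtree(std::map<int, FlatTask>& tasks, int id, int newParentId)
{
    tasks[id].parentId = newParentId;
}
```

However large the moved subtree, the textual diff in the saved file stays local to one record.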
Here's basically what I did. In CToDoCtrl::Save :
file.Root()->Flatten(TDL_TASK,TDL_TASKID);
if (file.Save(szFilePath))
{ ...
and in CToDoCtrl::Load() :
if (file.LoadEx())
{
file.Root()->UnFlatten(TDL_TASK,TDL_TASKID);
...
and in CToDoCtrl::AddItem() , deleted this line:
AddAttributeToItem(pXIItem, TDL_TASKPOS, nPos, TDCC_POSITION, filter);
In XmlFile.cpp , I did the fix to CXmlItem::DeleteItem() given in my previous reply, and added this:
void CXmlItem::Flatten(CString sParentName, CString sSortKey)
{
    // flag the file as being in the flat format
    CXmlItem* f = GetItem("FLATFORMAT");

    if (!f)
        AddItem("FLATFORMAT", 1);
    else
        f->SetValue(1);

    s_sFlattenParentName = sParentName;
    ltKey::s_sSortKey = sSortKey;
    s_FlatSet.clear();
    s_nLevel = 0;

    Flatten(); // collects every task into s_FlatSet, sorted by ID

    CXmlItem XI;

    for (std::set<CXmlItem*, CXmlItem::ltKey>::iterator i = s_FlatSet.begin(); i != s_FlatSet.end(); i++)
    {
        // temporarily detach the sibling chain so AddItem copies this item only
        CXmlItem* tmp = (*i)->m_pSibling;
        (*i)->m_pSibling = NULL;

        CXmlItem* pXIChild = XI.AddItem(**i);

        (*i)->m_pSibling = tmp;

        // the flat copy must not keep its nested subtasks
        CXmlItem* pXIRepeated = pXIChild->GetItem(s_sFlattenParentName);

        if (pXIRepeated)
            pXIChild->DeleteItem(pXIRepeated);
    }

    // swap the tree of tasks for the flat list
    CXmlItem* pXIChild = GetItem(s_sFlattenParentName);
    DeleteItem(pXIChild);

    pXIChild = XI.GetItem(s_sFlattenParentName);
    AddItem(*pXIChild);
}
void CXmlItem::Flatten()
{
    CXmlItem* pXIChild = GetItem(s_sFlattenParentName);

    s_nLevel++;

    while (pXIChild)
    {
        CXmlItem* test = pXIChild->GetItem(CString("PARENT"));
        ASSERT(test == NULL);

        // top-level tasks get a PARENT of -1, all others their parent's ID
        int id = -1;

        if (s_nLevel != 1)
            id = GetItemValueI(ltKey::s_sSortKey);

        pXIChild->AddItem(CString("PARENT"), id);
        s_FlatSet.insert(pXIChild);

        pXIChild->Flatten();
        pXIChild = pXIChild->GetSibling();
    }

    s_nLevel--;
}
void CXmlItem::UnFlatten(CString sParentName, CString sSortKey)
{
    if (GetItemValueI("FLATFORMAT") != 1)
        return;

    s_sFlattenParentName = sParentName;
    ltKey::s_sSortKey = sSortKey;
    s_FlatSet.clear();

    // index every task by its PARENT id
    std::multimap<int, CXmlItem*> children;
    typedef std::multimap<int, CXmlItem*>::iterator mmi;

    CXmlItem* pXIChild = GetItem(s_sFlattenParentName);

    while (pXIChild)
    {
        int parent_id = pXIChild->GetItemValueI(CString("PARENT"));
        children.insert(std::pair<int, CXmlItem*>(parent_id, pXIChild));
        pXIChild = pXIChild->GetSibling();
    }

    // rebuild the tree breadth-first, starting from a dummy root with ID -1
    std::deque<CXmlItem*> q;
    CXmlItem XI;
    XI.AddItem(sSortKey, -1);
    q.push_back(&XI);

    while (!q.empty())
    {
        CXmlItem* Parent = q.front();
        int parent_id = Parent->GetItemValueI(sSortKey);
        q.pop_front();

        std::pair<mmi, mmi> r = children.equal_range(parent_id);

        for (mmi i = r.first; i != r.second; i++)
        {
            CXmlItem& item = *(i->second);

            // temporarily detach the sibling chain so AddItem copies this item only
            CXmlItem* tmp = item.m_pSibling;
            item.m_pSibling = NULL;

            CXmlItem* pXIChild = Parent->AddItem(item);

            item.m_pSibling = tmp;

            pXIChild->DeleteItem(pXIChild->GetItem("PARENT"));
            q.push_back(pXIChild);
        }
    }

    // swap the flat list for the rebuilt tree
    pXIChild = GetItem(s_sFlattenParentName);
    DeleteItem(pXIChild);

    pXIChild = XI.GetItem(s_sFlattenParentName);
    AddItem(*pXIChild);
}
CString CXmlItem::ltKey::s_sSortKey;
CString CXmlItem::s_sFlattenParentName;
std::set<CXmlItem *,CXmlItem::ltKey> CXmlItem::s_FlatSet;
int CXmlItem::s_nLevel;
and this in XmlFile.h (class CXmlItem ):
struct ltKey
{
    bool operator()(const CXmlItem* s1, const CXmlItem* s2) const
    {
        return s1->GetItemValueI(s_sSortKey) < s2->GetItemValueI(s_sSortKey);
    }

    static CString s_sSortKey;
};
void Flatten(CString sParentName, CString sSortKey);
void Flatten();
void UnFlatten(CString sParentName, CString sSortKey);
static std::set<CXmlItem *,ltKey> s_FlatSet;
static CString s_sFlattenParentName;
static int s_nLevel;
I've used stl's set, multimap, and deque. (I'm not too big on MFC containers, as you can tell.)
|
hmm... what happens when you want to transform the xml to html with xsl?
part of the decision to use xml was based on its ability to handle hierarchical data. removing that hierarchy and order from the file results in the loss of a lot of semantic data.
i appreciate that it might solve your needs but i don't think its appropriate for todolist in the general case. i'm going to finish the approach i outlined yesterday and then when its released, we can review it again.
i'm certainly not against making todolist more source control friendly but i also have to try to strike a balance between everyone's needs.
rgds
.dan.g.
AbstractSpoon Software
|
I think the ultimate solution to the source control issue would be a purpose-built external merge utility (which should be easy to make). This utility would understand the format (whichever it is, even unsorted) and would be able to merge changes appropriately. In the meantime, I think your approach (of sorting by ID but maintaining the hierarchy) should be sufficient for source control purposes. It is definitely better than the current behavior. So I'd say go for it.
The things I wanted to talk to you about next are:
1. Making POS more useful/meaningful
2. View-filtering features
Since these are not particularly related to source control (nor to each other), I'll start a fresh thread for each.
|
Hi,
Someone must be the first one jumping up and down, so here I am.
Will the change in the completion calculation be the one set "by default"? My problem is that I do not use time estimation very often (do not tell me that it is bad, my boss already does that for you), and I'm quite happy with the current system. What is going to change for people like me who do not use time estimations?
~RaGE();
|
>> What is going to change for people like me
not much if the individual branches of your tree are similar in size and structure.
only if you have significant differences in sibling tasks will you notice a difference.
if you like i can send you a demo build to try out. let me know.
.dan.g.
AbstractSpoon Software
|
How can I export to HTML only the tasks attributed to one person?
Thanks
Luciano
luciano@microstop.com.br
Lutch
|
sorry, luciano, but that's not possible at present.
how about i add this ability to the export dialog for the next feature release?
rgds
.dan.g.
AbstractSpoon Software
|
What would also be useful is to be able to filter the task list itself so it only displays my tasks.
|
i had hoped to get this done for the next release but its going to have to be the one after
rgds
.dan.g.
AbstractSpoon Software
|
Hi Dan,
thanks for your reply. I guess there's no one correct method to calculate the completion of a task.
Just one additional comment: if (in the previous example) task 1.1.1 is complete (100%), then with the current implementation the completion of the topmost task will always be at least 50%, independent of the number of sub-tasks and sub-sub-tasks of task 1.2. This is what seems 'not correct' to me.
But I thought again about my suggestion. And in fact, what I'd like to have can be stated quite simply:
In the way I use TDL, only leaf-tasks are real tasks (things I have to do). All parent tasks are only used to group/organize tasks. In the end, I would like to see, for any parent task, what the completion of all its subordinate leaf-tasks is.
Maybe, (if you don't like the idea of changing the calculation of a task's completion), it could also be solved by adding another number to a task, which expresses how many leaf-tasks are complete. But I don't know how this could be displayed, especially if you also want to take into account partially complete tasks (such as task 1.1.1 in the example).
And then TDL also has options to weight tasks based on their priority or time estimates. I had not thought about these features when I made my suggestion, and I have no idea how they could be handled.
Anyway, I'm curious to what solution (if any) a discussion leads.
Regards,
Martin
There are only 10 types of people in the world: those who understand binary and those who don't.
|
the more i think about it, martin, the more your suggestion acquires merit.
i think my original approach to calculating % completions was based on thinking that sibling tasks would have similar structures. ie you wouldn't have a single task alongside a multi-level task.
however, i now come to see that this assumption is largely based only on my own tasklist and will produce very skewed results for such an example.
the benefit of your suggestion, though, is that together with weighting it can still handle my way of thinking as well as yours.
as a worked example, consider two tasks both representing 'big' tasks, but while one has 50 subtasks, the other has none yet because they won't be filled in until near the end of the first task. ie phase 1 and phase 2.
without any weighting, each of the subtasks of the first task would have equal effect to the second task, which we would both agree is not necessarily realistic (in this scenario).
if, however, we were to assign 1 day estimates to each of the first task's subtasks and 50 days to the second task, and turn on 'weight by time estimates', we would get a much more realistic result: when all of task 1's subtasks are complete, the overall completion would be 50% and not 100 * (50/51) = 98%.
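as a sketch (hypothetical types, not TDL's actual code), 'weight by time estimates' amounts to this: each leaf contributes its % completion in proportion to its estimate:

```cpp
#include <vector>

// hypothetical types - a sketch of 'weight by time estimates', not TDL code
struct Leaf
{
    double pctDone; // 0..100
    double estDays; // time estimate
};

double WeightedCompletion(const std::vector<Leaf>& leaves)
{
    double done = 0, total = 0;

    for (const Leaf& l : leaves)
    {
        done += l.pctDone * l.estDays; // completion weighted by estimate
        total += l.estDays;
    }

    return (total > 0) ? (done / total) : 0;
}
```

with 50 completed one-day subtasks plus one untouched 50-day task this gives exactly 50%, as described above.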
an additional benefit of your suggestion is that its a good default for first time users to use since it assigns equal weighting to each 'real' task.
i've also just implemented a feature request in the next release to allow parent tasks to be displayed as folders which reflects this way of working too.
so what i'm going to do is this: post a comment saying that i'm going to change the method of calculation and see if anyone jumps up and down
rgds
.dan.g.
AbstractSpoon Software
|
Dan, thanks a lot.
Now I feel a little bit uncomfortable that I only gave this article/program a rating of 4 when I first downloaded TDL (many versions ago). But if it were possible to change the rating, I would have given it a 5+ long ago.
Regards,
Martin
|
>> Now I feel a little bit uncomfortable
don't be, your honesty does you credit. and thanks for helping sort out this issue.
rgds
.dan.g.
AbstractSpoon Software
|
Hi, I have a small suggestion for the way the completion of a task (with subtasks) is calculated. Let me explain how I'm using TDL:
I'm currently using top-level tasks for different projects. Then I use sub-tasks to group my real tasks (which are therefore on the third level). Here's an example
1. MyProject
1.1 Module 1
1.1.1 Module 1 - Task 1
1.2 Module 2
1.2.x Lots of tasks for Module 2
Now what happens if I complete task 1.1.1 is that MyProject will be shown as 50% complete, since one of its two subtasks is complete.
I thought, that for such cases it might be better if the completion of a task is calculated as the average of all its sub-tasks which are leaf-nodes (all tasks which have no more sub-tasks). This might give a better representation of what the completion of the main project really is.
I hope my description is understandable. Anyway, thanks a lot for ToDoList.
Regards,
Martin
|
hi martin,
if i read you right, you would like to be able to weight a task's completion according to the number of subtasks it has.
if this is correct then TDL already has a similar feature, hidden away on the 'Tasks > Attributes' preferences tab. check the option called 'Weight completion by time estimate', and then, provided you allocate time to each task, it will correctly calculate the % completion.
nevertheless, i do like the idea of also being able to weight tasks simply by the number of subtasks they contain, and can't quite think why i haven't already added it.
let me know if the existing functionality is sufficient.
rgds
.dan.g.
AbstractSpoon Software
|
Hi Dan,
I'm not sure we meant the same thing. What I meant is to have an option to calculate the completion of a task as the average of all its sub- and sub-sub-tasks which are leaf-tasks (where a leaf-task is a task without any further subtasks).
This gives every leaf-task the same weight in the calculation of the completion of all its parent tasks.
Example:
1. Project 1 (30%)
1.1 Module 1 (50%)
1.1.1 Task A (50%)
1.2 Module 2 (25%)
1.2.1 Module 2.1 (0%)
1.2.1.1 Task B (0%)
1.2.1.2 Task C (0%)
1.2.2 Module 2.2 (50%)
1.2.2.1 Task D (100%)
1.2.2.2 Task E (0%)
Completion of Module 2.2 is the average of Tasks D,E.
Completion of Module 2 is the average of Tasks B,C,D,E.
Completion of Project 1 is the average of Tasks A,B,C,D,E.
Hope this makes it clear.
Regards,
Martin
|
>> Hope this makes it clear
as crystal. thanks for the clarification.
however, in order not to create an area of confusion, i'll need to see how/if it can be tied in to the current approach to calculating percentage completion.
rgds
.dan.g.
AbstractSpoon Software
|
thanks martin,
i've had a bit more of a think about this and i think your suggestion raises questions about the current approach to calculating percentage completion, and i'm trying to work out whether it should all be changed to reflect your suggestion.
i think your suggestion has significant merit but i'm getting bogged down because intuitively something feels not quite right about it. please note that this is not a criticism and i would welcome being proved wrong. its also possible that each method (yours and the existing one) has strengths which the other does not.
so i'm going to try to summarize things so we can work it out.
in the existing method each parent node adds up all the percentages of its children and divides by the number of children. if a child has children, then its percentage is simply the average of its children's percentages and so on.
(curiously this sounds quite like your suggestion but we know that the end result has the potential to be quite different).
in fact using your example is the best way of summarizing it:
1. Completion of Module 2.2 is the average of Tasks D,E = (D+E)/2.
2. Completion of Module 2.1 is the average of Tasks B,C = (B+C)/2.
3. Completion of Module 2 is the average of Modules 2.2 and 2.1 = ((D+E)/2 + (B+C)/2)/2, which just happens to equal the average of Tasks B,C,D,E = (B+C+D+E)/4.
4. Completion of Project 1, though, is the average of Module 1 and Module 2 = ((B+C+D+E)/4 + A)/2, which is quite different from your suggestion, which equates to (A+B+C+D+E)/5.
so with the existing method the % completion works out to be (25+50)/2 = 37.5%, against 30% with your method.
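for anyone wanting to experiment, here's a small sketch (hypothetical tree type, not TDL's internals) computing both methods:

```cpp
#include <vector>

// hypothetical tree type - not TDL's internal task structure
struct Node
{
    double pct;                 // % complete; only meaningful for leaves
    std::vector<Node> children;
};

// existing method: each parent averages its immediate children
double Recursive(const Node& n)
{
    if (n.children.empty())
        return n.pct;

    double sum = 0;
    for (const Node& c : n.children)
        sum += Recursive(c);

    return sum / n.children.size();
}

// suggested method: average over the leaf tasks only
void SumLeaves(const Node& n, double& sum, int& count)
{
    if (n.children.empty())
    {
        sum += n.pct;
        count++;
        return;
    }

    for (const Node& c : n.children)
        SumLeaves(c, sum, count);
}

double LeafAverage(const Node& n)
{
    double sum = 0;
    int count = 0;
    SumLeaves(n, sum, count);
    return count ? (sum / count) : 0;
}
```

on the example tree above, Recursive() gives 37.5 and LeafAverage() gives 30, matching the hand calculation.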
now i don't 'know' if one method has more validity than the other, but i would like to have only one method, because explaining the difference between the two in the preferences dialog will be impossible.
ideally i'd like to get others' input too, so if we don't get any other responses here i'll repost it.
rgds
.dan.g.
AbstractSpoon Software
|
(or maybe I'm just too lazy to find them - sorry in that case)
Suggestion: When changing one of the edit fields (e.g. Estimate, ...), pressing RETURN should take me back to the task list. This would make editing times etc. conveniently easy. Priority: Insanely high
Suggestion: For entering times, allow formats like "2w1d2h30m" for 2 weeks 1 day 2 hours 30 minutes (ok, that specification is weird, but a "2d4h" is something nice to have, and one could type "2d" without switching between time units). Priority: Nice to have
I'd even be willing to get my hands dirty on these things - but I'd like to hear your comments first.
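A minimal sketch of what I have in mind for the parser (assumptions: 1w = 7d and 1d = 24h; the real thing should use the configured hours per day):

```cpp
#include <cctype>
#include <string>

// sketch of a 'dhm' duration parser, not TDL code. Unit sizes are assumptions:
// 1w = 7d, 1d = 24h. Returns total minutes, or -1 on an unrecognized unit.
long ParseDhmToMinutes(const std::string& s)
{
    long total = 0, value = 0;

    for (char ch : s)
    {
        if (isdigit((unsigned char)ch))
        {
            value = value * 10 + (ch - '0'); // accumulate the number
            continue;
        }

        switch (tolower((unsigned char)ch))
        {
            case 'w': total += value * 7 * 24 * 60; break;
            case 'd': total += value * 24 * 60;     break;
            case 'h': total += value * 60;          break;
            case 'm': total += value;               break;
            default:  return -1; // unrecognized unit
        }

        value = 0;
    }

    return total;
}
```

So "2d4h" parses to 3120 minutes without ever touching the time-unit combo box.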
we are here to help each other get through this thing, whatever it is Vonnegut jr. boost your code || Fold With Us! || sighist | doxygen
|
1. i don't see why not, but i'll need to investigate it so as to ensure that it works intuitively. it would also need to be controllable via a preference (disabled by default).
2. i like this idea but it'll need further discussion.
for instance, do you anticipate that the field would remain as '2d4h' or would it be converted to the current time units when the focus shifted away from it?
i would prefer the latter in order to reduce the amount of work that has to be done to recalculate the totals (which happens during the drawing cycle - not the best architecture, but that's how it is at present).
rgds
.dan.g.
AbstractSpoon Software
|
(1) I frequently find myself pondering how to get back to the list. I press e.g. Alt-I to get to the estimate, enter the estimate - then I want to get back to the list.
Currently RETURN does nothing, and it feels "natural" to confirm the input, so I wouldn't make it an option (but maybe we should hear what others say).
(2) Entering times in "dhm format" makes the list easier to use, and displaying them as such just looks better, so I think it would be ok.
But this would make a nice option: display *all* times in dhm format.
TDL is a bit slow already when drawing, so no overloading it. (The "unparse" could be done in very few cycles, though.)
|
>> TDL is a bit slow already when drawing
didn't you see the minimum spec: AMD 3600+, 1 GB ram, latest 4D video card (4D == 'time just disappears') ?
i'm always happy to revisit the performance issues with TDL especially if i can get a different perspective on it.
1. what is your machine spec ? (i develop on a 1100 Duron and a TNT2)
2. when do you most notice rendering slowdown?
one possible cause of slowdown (as i mentioned before) is that of recalculating parent sub-totals on the fly during the drawing cycle.
i've been slowly thinking about implementing cached totals so that this can at least be discounted as a factor or at best produce improved rendering.
however, the most substantial benefit of recalculating on the fly is that the displayed total is always up to date provided i refresh the screen after the user modifies a task.
my current preferred solution is this:
1. add additional fields (to the data structure that represents each task internally) where we want to cache calculated values.
2. add a 'dirty' flag which indicates whether the user has edited any task since these values were last drawn. this flag would be set to TRUE on _all_ tasks whenever a task was changed (unless this itself generates a performance hit!)
3. when drawing, we would check this flag and if FALSE then simply use the cached value, and if TRUE then recalc the value, cache it and reset the flag.
this should improve rendering in the following situations:
1) when scrolling
2) during drag'n'drop
3) when dragging a window over TDL
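a sketch of the caching idea (hypothetical types, not the actual TDL data structure):

```cpp
// hypothetical types - a sketch of the caching scheme, not TDL code
struct CachedTotal
{
    double cachedValue;
    bool   dirty; // set TRUE on every task edit (step 2)

    CachedTotal() : cachedValue(0), dirty(true) {}

    // step 3: only recalc when dirty, otherwise return the cached value
    template <class RecalcFn>
    double Get(RecalcFn recalc)
    {
        if (dirty)
        {
            cachedValue = recalc(); // the real walk over subtasks goes here
            dirty = false;
        }

        return cachedValue; // cheap path hit while scrolling/dragging
    }
};
```

during scrolling and drag'n'drop nothing is dirty, so the drawing cycle only ever takes the cheap path.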
rgds
.dan.g.
AbstractSpoon Software
|