How is the time complexity of Morris Traversal O(n)?
The original paper for Morris traversal is Traversing binary trees simply and cheaply. It claims that the time complexity is O(n) in the Introduction section:
It is also efficient, taking time proportional to the number of nodes in the tree and requiring neither a run-time stack nor 'flag' bits in the nodes.
The full paper should have an analysis of the time complexity, but the full paper can't be accessed for free.
The article Morris Traversal方法遍历二叉树(非递归,不用栈,O(1)空间) (in Chinese; the title translates to "Traversing a binary tree with the Morris Traversal method (non-recursive, no stack, O(1) space)") does some analysis. I have translated the relevant parts and made some corrections based on my understanding:
An n-node binary tree has n-1 edges. In a Morris traversal, each edge is visited at most 3 times. The 1st visit is for locating a node. The 2nd visit is for looking for the predecessor of some node. The 3rd/final visit is for restoring the right child of the predecessor to null. In the following binary tree, the red dashed line is for locating a node (1st visit). The black dashed line is for looking for a predecessor node (traveled twice: once for the 2nd visit and once for the 3rd). Node F will be visited when it is being located. It will be visited again when node H is looking for its predecessor. Finally, it will be visited when the right child of node G (H's predecessor) is restored to null.
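For reference, here is a minimal C++ sketch of the full Morris inorder traversal (the Node layout, the function name, and the printf visit step are my own illustration, not from the paper or the article), so the three kinds of visits above can be matched to concrete lines:

    #include <cstdio>

    struct Node {
        int val;
        Node *left, *right;
    };

    void morrisInorder(Node *root) {
        Node *current = root;
        while (current != NULL) {
            if (current->left == NULL) {
                printf("%d ", current->val);      // visit
                current = current->right;         // 1st kind: locating a node
            } else {
                // 2nd kind: travel down to the inorder predecessor,
                // the rightmost node of current's left subtree
                Node *pre = current->left;
                while (pre->right != NULL && pre->right != current)
                    pre = pre->right;
                if (pre->right == NULL) {
                    pre->right = current;         // thread predecessor -> current
                    current = current->left;
                } else {
                    pre->right = NULL;            // 3rd kind: restore the predecessor's
                    printf("%d ", current->val);  // right child to null, then visit
                    current = current->right;
                }
            }
        }
    }

The inner while loop travels the black-dashed edges once when the thread is created and once again when it is removed, which is exactly the 2nd and 3rd visits counted above.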
It is not going to increase the complexity, as the algorithm just rebuilds the tree in only one direction. Rebuilding takes only O(n), after which it is only O(n) again to print the nodes. They have merged both pieces of functionality into the same function and given the algorithm a special name; that's it.
Another way of looking at it is to find out how many times a tree node will be traversed. Since that number is constant (at most 3 times for a binary tree), we are looking at O(n).
First, I have a correction to make to a claim that you made:
In the traversal, whenever the node has a left child a copy of it is made to the right child of its predecessor
A copy of the [current] node is not made to the right child of its [current node's] predecessor - the right child of the current node's predecessor is pointed to the current node - a pointer does not make any copy; instead it just points to the current node that already exists.
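In code, the threading step is nothing more than a pointer assignment (a one-line illustration using the pre/current names from the snippet below; no copying is ever involved):

    pre->right = current;   // pre's right pointer now aims at the existing
                            // current node; no node is copied

Undoing it later is the same kind of assignment: pre->right = NULL.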
Now to answer your question about your code snippet:
// walk toward the rightmost node of current's left subtree (the inorder
// predecessor), stopping early if a thread back to current already exists
while (pre->right != NULL && pre->right != current)
    pre = pre->right;
- That loop does add time to the running of the algorithm [compared to if that loop were not in the code]
- But in the worst case, the time it adds is exactly n (taking the run time from n to 2n). This worst case happens when every single node must be visited an extra time in order to find all the predecessors. By the way, each such extra visit of a given node is the only extra time it is visited when finding predecessors: the search for one predecessor never travels over the nodes that were traveled through to find any other predecessor. That explains why the extra time takes the run time from n to 2n [linear] rather than from n to n^2 [quadratic]
- Even though 2n > n, when [Big-O] complexity is considered, O(2n) = O(n)
- In other words, taking longer time of 2n compared to n is not actual extra complexity: n & 2n runtimes both have complexities of identical O(n) [they are both "linear"]
Now even though it may have sounded like I was implying above that the entire algorithm runtime is 2n, it is not - it is actually 3n.
- The loop that is in just the code snippet itself contributes n time
- But the algorithm as a whole runs in 3n, because each node is visited at most 3 times:
  1. a 1st time, to "thread" it back to the node that it is a predecessor of (the ultimate goal that the code snippet helps achieve);
  2. a 2nd time, when it is arrived at in otherwise normal traversal [as opposed to anything to do with predecessor threading];
  3. a 3rd/final time, when it is found as a predecessor again [itself for the 2nd/final time] and its right-child pointer/thread, which points at the current node, is removed [directly before printing the current node]
- And again [just as complexity O(2n) = O(n)], complexity O(3n) = O(n)
- So to summarize: Yes, your code snippet loop does contribute time, but NOT extra time complexity
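To see those constants concretely, here is a hedged instrumentation sketch (C++; the steps counter, the function name, and the small test tree are my own additions). It counts every move of current and pre; the exact constant you measure depends on what you choose to count as a step, but it stays a small constant multiple of n, which is all O(n) requires:

    #include <cstdio>

    struct Node {
        int val;
        Node *left, *right;
    };

    long steps = 0;   // incremented on every pointer move

    void morrisInorderCounted(Node *root) {
        Node *current = root;
        while (current != NULL) {
            if (current->left == NULL) {
                current = current->right; steps++;
            } else {
                Node *pre = current->left; steps++;
                while (pre->right != NULL && pre->right != current) {
                    pre = pre->right; steps++;    // the snippet's loop
                }
                if (pre->right == NULL) {
                    pre->right = current;         // create thread
                    current = current->left; steps++;
                } else {
                    pre->right = NULL;            // remove thread
                    current = current->right; steps++;
                }
            }
        }
    }

    int main() {
        // test tree (n = 4):      4
        //                        /
        //                       2
        //                      / \
        //                     1   3
        Node n1 = {1, NULL, NULL}, n3 = {3, NULL, NULL};
        Node n2 = {2, &n1, &n3}, n4 = {4, &n2, NULL};
        morrisInorderCounted(&n4);
        printf("steps = %ld, n = 4\n", steps);    // prints steps = 12, i.e. 3n
    }

For this particular tree the count comes out to exactly 3n; other shapes land a little above or below, but always within a constant factor of n.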
By the way, I don't think this line (from the full algorithm) to remove the old predecessor "thread" reference is strictly necessary (although it doesn't hurt, and can be considered nice tidying-up):
pre->right = NULL;
I discuss Morris inorder traversal here: one node can be visited at most 4 times, so the time complexity is O(n).

Let's divide the visits of one node into two types:

- current visits the node. If the node has a left child, current visits it twice; otherwise, once.
- Building a loop and destroying the loop. If the node is inside a loop, it is visited twice; otherwise, zero times.

I drew the figure below (not reproduced here) and used A + B to represent the visit counts, where A is the current count and B is the loop count. For the node D, current moves from B to D and from E to D. D is also in the loop A B D F G, so D is visited when current moves from A to B (building the loop) and from A to H (destroying the loop). Therefore, node D is visited 2 + 2 = 4 times.
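This "at most 4" bound can also be checked mechanically. Below is a hedged C++ sketch (the Node layout, the function name, and the small test tree are my own, not the tree from the figure) that bumps a per-node counter every time current or pre lands on a node, then reports the maximum:

    #include <cstdio>

    struct Node {
        int val;
        Node *left, *right;
        int touches;   // how many times any pointer lands on this node
    };

    void morrisTouches(Node *current) {
        while (current != NULL) {
            current->touches++;                   // a "current" visit
            if (current->left == NULL) {
                current = current->right;
            } else {
                Node *pre = current->left;
                pre->touches++;                   // a loop build/destroy visit
                while (pre->right != NULL && pre->right != current) {
                    pre = pre->right;
                    pre->touches++;
                }
                if (pre->right == NULL) {
                    pre->right = current;         // build the loop
                    current = current->left;
                } else {
                    pre->right = NULL;            // destroy the loop
                    current = current->right;
                }
            }
        }
    }

    int main() {
        // test tree:      4       node 2 hits the worst case: two "current"
        //                /        visits plus two loop visits = 4
        //               2
        //              / \
        //             1   3
        Node n1 = {1, NULL, NULL, 0}, n3 = {3, NULL, NULL, 0};
        Node n2 = {2, &n1, &n3, 0}, n4 = {4, &n2, NULL, 0};
        morrisTouches(&n4);
        Node *nodes[] = {&n1, &n2, &n3, &n4};
        int mx = 0;
        for (int i = 0; i < 4; i++)
            if (nodes[i]->touches > mx) mx = nodes[i]->touches;
        printf("max touches per node = %d\n", mx);  // prints 4
    }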