Hello guys,
I decided to try a problem about analyzing the worst-case runtime of an algorithm.
Since I'm a beginner and I want to understand this properly, I could use some help.
I came across this problem in a book, which uses the following algorithm:
Input: A set of n points (x1, y1), . . . , (xn, yn) with n ≥ 2.
Output: The squared distance of a closest pair of points.
ClosePoints
1. if n = 2 then return (x1 − x2)^2 + (y1 − y2)^2
2. else
3.     d ← ∞
4.     for i ← 1 to n − 1 do
5.         for j ← i + 1 to n do
6.             t ← (xi − xj)^2 + (yi − yj)^2
7.             if t < d then
8.                 d ← t
9.     return d
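To convince myself about the iteration count, I wrote a quick Python version of the pseudocode (my own translation, not from the book) that also counts how many times the inner-loop body runs. Note that I initialize d to infinity: with d ← 0 the test t < d could never succeed for squared distances.

```python
def close_points(points):
    """Return the squared distance of a closest pair of points,
    together with the number of distance computations performed."""
    n = len(points)
    assert n >= 2
    if n == 2:  # line 1 of the pseudocode
        (x1, y1), (x2, y2) = points
        return (x1 - x2) ** 2 + (y1 - y2) ** 2, 1
    d = float("inf")              # smallest squared distance seen so far
    count = 0                     # how many times lines 6-8 execute
    for i in range(n - 1):        # i = 1 .. n-1 in the pseudocode
        for j in range(i + 1, n): # j = i+1 .. n
            (xi, yi), (xj, yj) = points[i], points[j]
            t = (xi - xj) ** 2 + (yi - yj) ** 2
            count += 1
            if t < d:
                d = t
    return d, count
```

For n points the counter comes out to exactly n(n−1)/2, which is the quantity both bounds in questions 1 and 2 hinge on.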
These are the questions:
T(n) represents the worst possible runtime.
1. Prove that T(n) = O(n^2). For this part you must use the O(·) notation along with any
appropriate properties.
2. Prove that T(n) = Ω(n^2). (In fact, the runtime of the algorithm is Ω(n^2) for all inputs of n
points.)
3. Is it true that T(n) = Θ(n^2)? Justify your answer very briefly.
Now, 3 depends on 1 and 2. As for 1, I know that we say f is O(g) (pronounced "f is big-O of g")
if and only if there exist n0 ∈ N and c > 0 in R such that for all n ≥ n0 we have
f(n) ≤ c·g(n).
Similarly, we say that f is Ω(g) if there exist
n0 ∈ N and c > 0 in R such that for all n ≥ n0 we have
f(n) ≥ c·g(n).
I was thinking of either using small positive constants c1, c2, c3, c4, ... to represent the cost of executing lines 1, 2, 3, 4, ..., or of just writing O(1) + O(1) + ... directly instead of named constants.
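With the per-line-constant approach, here is a sketch of the kind of bound I have in mind, where c covers the cost of one execution of the inner-loop body (lines 6–8) and c0 everything outside the loops:

```latex
\begin{aligned}
T(n) &\le c_0 + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} c
      = c_0 + c \sum_{i=1}^{n-1} (n - i)
      = c_0 + c \cdot \frac{n(n-1)}{2} \\
     &\le c_0 n^2 + \frac{c}{2} n^2
      = \left(c_0 + \frac{c}{2}\right) n^2
      \quad \text{for all } n \ge 1,
\end{aligned}
```

so the definition of O is satisfied with n0 = 1 and the constant c0 + c/2. For the Ω direction, the inner-loop body runs exactly n(n−1)/2 times, and n(n−1)/2 ≥ n^2/4 whenever n ≥ 2, which gives a matching lower bound. Does this look like the right way to set it up?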
How should I solve the questions?
Thanks in advance