
Big O Notation Help?

Can anyone help me with this problem and explain a little more about the omega, theta, and Big O? Thanks!

In each of the following situations, indicate whether f = O(g), f = Ω(g), or both (in which case f = Θ(g)).

    f(n)                  g(n)

a.  n - 100               n - 200

b.  n^(1/2)               n^(2/3)

c.  100n + log n          n + (log n)^2

d.  n log n               10 n log n

e.  log_2 n               log_3 n

f.  10 log n              log(n^2)

g.  n^1.01                n log^2 n

h.  n^2 / log n           n (log n)^2

i.  n^0.1                 (log n)^10

j.  (log n)^(log n)       n / log n

Thanks!

2 Answers

  • Me M
    Lv 5
    1 decade ago
    Favorite Answer

    I think I can help, but it has been a long time since I have dealt with Big-O notation, so my answers and explanations may be off; still, I think they are sound.

    Anyway, Big-O is an upper bound. Something like O(n^2) says that, for large enough n, your algorithm's cost is at most some constant multiplied by n^2. Also, when dealing with polynomials, only the term with the largest impact as n approaches infinity matters.

    So something like 6x^4 + 3x^2 + 4 would be O(x^4). Also, because O() is just an upper bound, you could put 6x^4 + 3x^2 + 4 under O(x^8) and it would still be technically correct, even though that bound is nowhere near as tight as you would want.
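    A quick numeric illustration of the point above (my own sketch, not part of the original answer): the ratio of 6x^4 + 3x^2 + 4 to x^4 settles toward the leading coefficient 6 as x grows, which is exactly why the lower-order terms get dropped in O(x^4).

    ```python
    def p(x):
        # Polynomial from the answer above: 6x^4 + 3x^2 + 4.
        return 6 * x**4 + 3 * x**2 + 4

    # As x grows, p(x) / x^4 approaches the leading coefficient 6.
    for x in (10, 100, 1000):
        print(x, p(x) / x**4)
    ```

    The lower-order terms 3x^2 + 4 still contribute, but their share of the total shrinks toward zero.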

    If I remember correctly, Omega is the analogous lower bound: f = Ω(g) means f is at least some constant multiple of g for large n.

    Finally, if I remember correctly, Theta is used when Big-O and Omega both hold, i.e. f and g bound each other up to constant factors.

    Some people tend to think of Big-O as an algorithm's worst case, Omega as its best case, and Theta as its average. While that can loosely hold, the notation is really just describing bounds on a function's growth and only cares about the dominant behavior as n approaches infinity.

    So you might think that an O(n^2) algorithm will always beat an O(n^4) one. That isn't true, because the first function could really be 5n^2 + 500n while the second is simply n^4. If these functions describe two algorithms and the input size will only ever be around 2, the second algorithm is faster, even though it is O(n^4). However, if you run the algorithms on much larger inputs, the first eventually becomes more efficient, because the constant multiplier and the 500n term matter less and less compared to the exponent.
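    The crossover described above can be found numerically. The two cost functions below are the hypothetical ones from the answer (5n^2 + 500n vs. n^4), not measurements of any real algorithm:

    ```python
    def cost_quadratic(n):
        # Hypothetical cost from the answer above: 5n^2 + 500n.
        return 5 * n**2 + 500 * n

    def cost_quartic(n):
        # Hypothetical cost from the answer above: n^4.
        return n**4

    # For small n the n^4 algorithm is actually cheaper; find the
    # smallest n where the quadratic-cost algorithm finally wins.
    n = 1
    while cost_quadratic(n) >= cost_quartic(n):
        n += 1
    print(n)  # -> 9
    ```

    So below n = 9 the "worse" O(n^4) algorithm is the cheaper one here, and above it the asymptotics take over.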

    It's been many years since I have dealt with this, so I am sorry I cannot give a better explanation; I hope what I did give you is accurate. I believe it is, but take it with a grain of salt.

    I would suggest you look at Wikipedia. I also did a quick search and found this: http://www.perlmonks.org/?node_id=573138 The article describes some things decently, and the first response to it appears to be even better (although I only read part of it).

    Source(s): I have a BS in computer science, but it has been 7 years since I have looked at big-oh notation.
  • Anonymous
    4 years ago

    The answer is yes to both. The big 'O' means 'on the order of' and simply expresses that (in the first case) the function increases roughly by a factor of four when n is doubled, while the function in the second case increases linearly with increasing n. HTH, Doug
