Big-O is an upper bound.

Big-Theta is a tight bound, i.e. upper and lower bound.

When people only worry about what’s the worst that can happen, big-O is sufficient; i.e. it says that “it can’t get much worse than this”. The tighter the bound the better, of course, but a tight bound isn’t always easy to compute.
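For reference, the standard formal definitions behind these informal statements (one common convention; the constants c and n₀ are arbitrary witnesses, and Ω is the lower-bound counterpart discussed further below):

```latex
\begin{align*}
f(n) \in O(g(n)) &\iff \exists\, c > 0,\ n_0 :\ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0 \\
f(n) \in \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 :\ f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0 \\
f(n) \in \Theta(g(n)) &\iff f(n) \in O(g(n)) \ \text{and} \ f(n) \in \Omega(g(n))
\end{align*}
```

So Θ is literally the conjunction of the other two: a function is Θ(g) exactly when it is squeezed between two positive constant multiples of g for all large enough n.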

See also

Wikipedia: Big O notation

Related question: What is the difference between Θ(n) and O(n)?

The following quote from Wikipedia also sheds some light:

Informally, especially in computer science, the Big O notation often is permitted to be somewhat abused to describe an asymptotic tight bound where using Big Theta notation might be more factually appropriate in a given context.

For example, when considering a function T(n) = 73n³ + 22n² + 58, all of the following are generally acceptable, but tightness of bound (i.e., bullets 2 and 3 below) is usually strongly preferred over laxness of bound (i.e., bullet 1 below).

T(n) = O(n¹⁰⁰), which is identical to T(n) ∈ O(n¹⁰⁰)

T(n) = O(n³), which is identical to T(n) ∈ O(n³)

T(n) = Θ(n³), which is identical to T(n) ∈ Θ(n³)

The equivalent English statements are respectively:

T(n) grows asymptotically no faster than n¹⁰⁰

T(n) grows asymptotically no faster than n³

T(n) grows asymptotically as fast as n³.

So while all three statements are true, progressively more information is contained in each. In some fields, however, the Big O notation (bullet 2 in the lists above) would be used more commonly than the Big Theta notation (bullet 3 in the lists above) because functions that grow more slowly are more desirable.
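The tight bound in the quoted example can be checked numerically — a sketch, assuming nothing beyond the polynomial T(n) = 73n³ + 22n² + 58 given above:

```python
# Sketch: observe that T(n) = 73n^3 + 22n^2 + 58 is Theta(n^3).
# The ratio T(n) / n^3 tends to 73, so for large n it is squeezed between
# positive constants (e.g. 72 and 74) -- exactly the two-sided bound.
def T(n):
    return 73 * n**3 + 22 * n**2 + 58

for n in (10, 100, 1000, 10_000):
    print(n, T(n) / n**3)
# The printed ratios decrease toward 73 from above.
```

The same ratio diverges to 0 against n¹⁰⁰, which is why O(n¹⁰⁰) is true but uselessly lax: it gives an upper bound with no matching lower bound.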

I’m a mathematician, and I have seen and needed big-O O(n), big-Theta Θ(n), and big-Omega Ω(n) notation time and again, and not just for the complexity of algorithms. As people have said, big-Theta is a two-sided bound. Strictly speaking, you should use it when you want to say that this is exactly how well an algorithm can do, and that either that algorithm can’t do better or that no algorithm can do better. For instance, if you say “sorting requires Θ(n log n) comparisons for worst-case input”, then you’re explaining two things: that there is a sorting algorithm that uses O(n log n) comparisons for any input; and that for every sorting algorithm, there is an input that forces it to make Ω(n log n) comparisons.
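The O(n log n) half of that claim can be made concrete with a standard merge sort — the counting harness here is my own sketch, used only to illustrate that the comparison count stays within the bound, not to certify Θ-optimality:

```python
import math
import random

def merge_sort(a, counter):
    """Sort a list, incrementing counter[0] once per element comparison."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1          # one comparison of two elements
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])         # leftovers need no further comparisons
    out.extend(right[j:])
    return out

n = 1024
counter = [0]
data = [random.random() for _ in range(n)]
assert merge_sort(data, counter) == sorted(data)
# Merge sort never exceeds n * log2(n) comparisons on n = 2^k elements,
# which witnesses the O(n log n) upper bound for any input.
print(counter[0], "<=", int(n * math.log2(n)))
```

The matching Ω(n log n) half is a statement about every comparison sort, not about this one algorithm, and comes from the information-theoretic counting argument over the n! possible input orderings.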

Now, one narrow reason that people use O instead of Ω is to drop disclaimers about worst or average cases. If you say “sorting requires O(n log n) comparisons”, then the statement still holds true for favorable input. Another narrow reason is that even if one algorithm to do X takes time Θ(f(n)), another algorithm might do better, so you can only say that the complexity of X itself is O(f(n)).

However, there is a broader reason that people informally use O. At a human level, it’s a pain to always make two-sided statements when the converse side is “obvious” from context. Since I’m a mathematician, I would ideally always be careful to say “I will take an umbrella if and only if it rains” or “I can juggle 4 balls but not 5”, instead of “I will take an umbrella if it rains” or “I can juggle 4 balls”. But the other halves of such statements are often obviously intended or obviously not intended. It’s just human nature to be sloppy about the obvious. It’s confusing to split hairs.

Unfortunately, in a rigorous area such as math or theory of algorithms, it’s also confusing not to split hairs. People will inevitably say O when they should have said Ω or Θ. Skipping details because they’re “obvious” always leads to misunderstandings. There is no solution for that.