Here we start a series of posts that aims to clarify some essential points of the computer science world. If you want to study or review this for an exam or whatever, I recommend reading it as if you were not studying but enjoying it.

**Complexity and asymptotic notation**

**What's the importance of this topic?**

When designing software, an essential part is how we design its data structures and algorithms, because they can determine the efficiency of the whole program.

This is especially important for those aiming to become great algorithm designers, since here we are evaluating the efficiency of our design. Let's dig into it.

**Why should I read this shitty post?**

When comparing the efficiency of two algorithms on a problem of uncertain size, what makes one better than the other is the way its cost rises. That is, at a certain problem size, one cost grows past the other.

Let's say we have three designs (A1, A2, A3), represented as functions that tell us the cost in time:

- TA1(n) = 6, a constant growth rate
- TA2(n) = 2n + n - 2 = 3n - 2, a linear growth rate
- TA3(n) = n^{3}, a cubic growth rate

As we can see in the graph, TA2, a degree-1 polynomial, becomes bigger than the constant one at roughly (2.8, 6). TA3, as a cubic function, gets bigger very fast.

When evaluating cost functions (again, details aside, essentially), we usually work with large problem sizes, so some terms in the equation become insignificant. What really matters is the remaining term, known as the "dominant term". For example, 10n^{2} + 45n + 1355 with n = 5420 is 294009255, of which 293764000 is due to the 10n^{2} term, the dominant term.

What really defines the function's behavior is (usually) the highest-degree term, and that's what we really need to know when evaluating.
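The dominant-term arithmetic above can be checked with a quick sketch (the function names are mine, chosen for illustration):

```python
# Hypothetical cost functions for the three designs from the text.
def t_a1(n):
    return 6              # constant

def t_a2(n):
    return 2 * n + n - 2  # = 3n - 2, linear

def t_a3(n):
    return n ** 3         # cubic

# The dominant-term example: 10n^2 + 45n + 1355 at n = 5420.
n = 5420
total = 10 * n**2 + 45 * n + 1355
dominant = 10 * n**2
print(total)             # 294009255
print(dominant)          # 293764000
print(dominant / total)  # ~0.999: almost all the cost comes from the n^2 term
```

Even for a moderate n, the lower-order terms contribute less than 0.1% of the total.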

**Is this growth rate acceptable?**

- A constant growth rate is ideal
- A linear rate seems acceptable
- A logarithmic rate is better than a linear rate. It's still a nice rate.
- An n·log(n) rate is still acceptable
- Quadratic, cubic, … higher polynomial functions really limit the problem size the algorithm can handle. They're unacceptable (of course, it depends on your goals)
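To get a feel for why the higher rates become unacceptable, here is a rough numeric comparison of the common growth rates at a few problem sizes (my own sketch, not from the original post):

```python
import math

# How fast each common growth rate climbs as the problem size grows.
for n in (10, 100, 1000, 10000):
    log_n = math.log2(n)
    print(f"n={n:>6}  log n={log_n:7.1f}  n log n={n * log_n:11.0f}  "
          f"n^2={n**2:>12}  n^3={n**3:>16}")
```

By n = 10000, the cubic column is already a million million while n·log(n) is still around 130 thousand; that gap is what limits the problem sizes a cubic algorithm can treat.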

**Algorithm complexity**

When looking at the complexity, we're really looking at **how the cost behaves as we increase the problem size**. Expressing the way the algorithm's cost increases with the problem size, using asymptotic notation, gives us the expression "asymptotic complexity".

**Asymptotic notation**

This allows us to express, in a formal way, the algorithm's complexity.

- Big Oh notation:

Represented with **Ο()**, it tells us an upper bound on the cost function. So, this describes the **worst case** for an algorithm. Let's say we have an array and we want to search for an element we will call "X". Since it's a sequential search, the worst case occurs when X is at the last position in the structure.

- Omega notation:

Represented with **Ω()**, it expresses a lower bound, from which we can know the cost of the **best case** of the algorithm. Following the previous example, this happens when "X" is at the position where we start searching.
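A minimal sketch of the sequential search from the example (the function name and the comparison counter are mine): counting comparisons makes the Ο(n) worst case and the Ω(1) best case concrete.

```python
def linear_search(arr, x):
    """Scan left to right; return (index, comparisons), or (-1, comparisons)."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == x:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 4, 1]
print(linear_search(data, 7))  # best case: X at the start -> 1 comparison
print(linear_search(data, 1))  # worst case: X at the end -> 5 comparisons
print(linear_search(data, 8))  # not found also costs all n comparisons
```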

- Theta notation:

Represented with **Θ()**, it expresses an upper and a lower bound at the same time. It allows us to formally describe (as the previous ones do) a tight bound on the cost.

Let's say it's a way to express that, for large problem sizes, the cost grows at exactly the rate of f(n), neither faster nor slower. (Note that this is a tight bound, not the "average case", which is a different concept.)

This is **Θ(f(n)) = Ο(f(n)) ∩ Ω(f(n))**
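A quick numerical check (my own sketch) that a function like the earlier 10n^{2} + 45n + 1355 is Θ(n^{2}): the ratio of the cost to n^{2} settles down between fixed constants, here approaching 10.

```python
# The full cost 10n^2 + 45n + 1355 is Theta(n^2): dividing by n^2,
# the ratio gets squeezed toward the leading coefficient 10.
def cost(n):
    return 10 * n**2 + 45 * n + 1355

for n in (10, 100, 1000, 10000):
    print(n, cost(n) / n**2)  # ratio shrinks toward 10 as n grows
```

Since the ratio is eventually trapped between, say, 9 and 11, the cost is both Ο(n^{2}) and Ω(n^{2}), hence Θ(n^{2}).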