# Intro to the Big-O Notation

What is the Big-O and why is it so important to Programmers?

The Big-O Notation establishes a framework for describing the performance of your code. You will often find multiple approaches to solving the same problem, but how can we determine which is the “best” implementation?

Programmers determine the “best” implementation based on runtime, memory, brevity, and readability. We want code that is legible, runs fast, and doesn’t require too much memory. The two complexities that determine Big-O are **Time** and **Space**.

**Time complexity** describes the runtime of an algorithm as the input size increases. Let’s take a look at two different implementations of a **prime?** method that returns a boolean indicating whether the input value **n** is prime:

OPTION 1:

```ruby
def prime?(n)
  return false if n <= 1

  if (2...n).any? { |x| n % x == 0 }
    false
  else
    true
  end
end
```

In this implementation of **prime?**, we set a baseline condition for a prime number and then send the input through an if/else block to determine whether **n** is prime. Pretty standard. Here is another way to implement **prime?**:

OPTION 2:

```ruby
def prime?(n)
  return false if n <= 1 # same guard as option 1, so both agree for n <= 1

  (2...n).none? { |x| n % x == 0 }
end
```

Option 2 yields the same result with less code and fewer operations to perform.
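As a quick sanity check (a sketch, with the hypothetical helper names `prime_v1?` and `prime_v2?` used only so both versions can live side by side), we can confirm the two implementations return the same results:

```ruby
# Option 1: explicit if/else block.
def prime_v1?(n)
  return false if n <= 1

  if (2...n).any? { |x| n % x == 0 }
    false
  else
    true
  end
end

# Option 2: the condensed version using Enumerable#none?,
# with the same n <= 1 guard so both agree on small inputs.
def prime_v2?(n)
  return false if n <= 1

  (2...n).none? { |x| n % x == 0 }
end

# Both implementations agree on every input in this range.
(0..50).each do |n|
  raise "mismatch at #{n}" unless prime_v1?(n) == prime_v2?(n)
end

p prime_v2?(13) # => true
p prime_v2?(15) # => false
```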

The problem with measuring **Time** directly is that different machines will record different times; even the same machine will record different runtimes from run to run. I sampled this in my terminal.
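To see this variability for yourself, Ruby’s built-in `Benchmark` module can time repeated runs (a sketch; the exact numbers will differ on every machine and on every run):

```ruby
require 'benchmark'

def prime?(n)
  return false if n <= 1

  (2...n).none? { |x| n % x == 0 }
end

# Time the same workload several times; the elapsed values will
# vary from run to run and from machine to machine.
3.times do |i|
  elapsed = Benchmark.realtime { (1..5_000).count { |n| prime?(n) } }
  puts "run #{i + 1}: #{elapsed.round(4)}s"
end
```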

Because speed measurements are not precise enough to characterize runtime, Big O instead looks at the **number** of simple operations the machine has to perform and how that count grows in relation to the input size. Just comparing the two examples, it is clear that option 2 would be the more effective option, with less code and fewer operations.

**Space Complexity** describes how much additional memory we need to allocate in order to run the code in our algorithm. This includes the input size.

**Auxiliary Space** refers just to the memory required by the algorithm, not including the input size.
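As an illustration (not from the original article), these two ways of summing an array take the same input but differ in auxiliary space:

```ruby
# O(1) auxiliary space: only a single accumulator is allocated
# beyond the input array itself.
def sum_constant_space(arr)
  total = 0
  arr.each { |x| total += x }
  total
end

# O(n) auxiliary space: builds a whole new array of running totals
# before reading off the answer.
def sum_linear_space(arr)
  running = []
  arr.each { |x| running << (running.last || 0) + x }
  running.last || 0
end

p sum_constant_space([1, 2, 3, 4]) # => 10
p sum_linear_space([1, 2, 3, 4])   # => 10
```

Both methods are O(n) in *time*; the difference is only in how much extra memory they allocate along the way.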

Big-O notation gives programmers an overview of performance, providing a way to place an algorithm on the spectrum between “best case scenario” and “worst case scenario”.

O(1) or “Big O of 1” is the “best case” scenario, presenting a constant number of operations regardless of the input size.
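A minimal sketch of constant time (the method name is illustrative): indexing into an array takes the same number of operations whether the array holds ten items or ten million.

```ruby
# O(1): array indexing does not get slower as the array grows.
def first_element(arr)
  arr[0]
end

p first_element([5, 6, 7]) # => 5
```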

O(log n) or “Big O of log n” grows logarithmically: doubling the input size adds only one more step, which makes logarithmic time and space complexity excellent in practice.
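The classic logarithmic-time example is binary search (a sketch, assuming a sorted array): each comparison halves the remaining search range, so a million elements need only about twenty comparisons.

```ruby
# O(log n): binary search halves the search range on every step.
def binary_search(sorted, target)
  low = 0
  high = sorted.length - 1

  while low <= high
    mid = (low + high) / 2
    return mid if sorted[mid] == target

    if sorted[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end

  nil # target not present
end

p binary_search([1, 3, 5, 7, 9, 11], 7) # => 3
```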

O(n) or “Big O of n” is linear: the number of operations grows in direct proportion to the input size.
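A simple linear-time sketch (the method name is illustrative): finding the largest value requires looking at every element exactly once, so doubling the input doubles the work.

```ruby
# O(n): every element is visited once.
def max_value(arr)
  best = arr.first
  arr.each { |x| best = x if x > best }
  best
end

p max_value([4, 9, 2, 7]) # => 9
```

Note that the `prime?` method above is also O(n), since in the worst case it tries every candidate divisor from 2 up to n.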

O(n²) or “Big O of n squared” is on the “worst case” end of the spectrum, presenting a quadratic progression that grows in proportion to the square of the input size. This is typically a function with a loop nested inside another loop. We want to avoid this because, as our application handles larger inputs, we will start to see performance slow down.
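A quadratic-time sketch (the method name is illustrative): the nested loops compare every element with every other element, so doubling the input roughly quadruples the work.

```ruby
# O(n^2): for each element, the inner loop scans the whole array again.
def has_duplicates?(arr)
  arr.each_with_index do |a, i|
    arr.each_with_index do |b, j|
      return true if i != j && a == b
    end
  end
  false
end

p has_duplicates?([1, 2, 3, 2]) # => true
p has_duplicates?([1, 2, 3])    # => false
```

This particular check could be done in O(n) time with a hash of seen values, which is exactly the kind of refactor that Big-O analysis points us toward.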

Understanding Big-O is fundamental for programmers: it provides a general analysis of a function’s runtime, whether constant, linear, or quadratic, which in turn guides us in refactoring our code and avoiding pain points in our application.