The theoretical study of computer-program **performance** and resource usage.

The objective is to understand what makes programs fast, and which factors influence performance, including hardware resources (RAM, SSDs).

But before analysing those factors, it is worth asking: what matters more than performance?

- User friendliness
- Security
- Correctness
- Simplicity
- Maintainability
- Scalability
- Precision
- Robustness
- Features
- Modularity

Then why study algorithms and performance at all?

- Performance is correlated with user friendliness: nothing is more frustrating than a slow-loading web page.
- Real-time constraints: some programs are useless if they do not respond fast enough.
- Performance draws the line between what is feasible and what is infeasible.
- Algorithms give us a language for talking about program behaviour.

Performance is like a currency: we spend it to buy the other things on the list above.

If we want to analyse the running time of an algorithm such as insertion sort, it depends on several things.

- It depends on the input itself (e.g. on an already-sorted array, insertion sort has very little work to do).
- It depends on the input size (6 elements vs 6 × 10⁹ elements).
- So we parameterise by input size: express running time as a function of n to see the behaviour as n grows.

- We generally want upper bounds, to guarantee the user a maximum amount of time the algorithm will take.
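To make the discussion concrete, here is a standard insertion-sort sketch in Python (my own rendering, not code from the lecture):

```python
def insertion_sort(a):
    """Sort list a in place by inserting each element into the sorted prefix."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift elements of the sorted prefix a[0..j-1] that are larger
        # than key one position to the right.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a
```

For example, `insertion_sort([5, 2, 4, 6, 1, 3])` returns `[1, 2, 3, 4, 5, 6]`. The inner while loop is what makes the running time input-dependent: on a sorted array it never executes.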

### Kinds of analysis:

There are usually three kinds of analysis we can do for an algorithm.

**Worst-case (usually):**

T(n) = maximum time on any input of size n

**Average-case (sometimes):**

T(n) = expected time over all inputs of size n (a weighted average)

To compute this average we need to know the probability of each input. Since we usually do not, we need an assumption about the statistical distribution; the most common one is that every input of size n is equally likely (a uniform distribution).
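Under the uniform assumption, the average case can be estimated empirically. A small simulation sketch (the input size, trial count, and seed are arbitrary choices of mine, not from the lecture) counts insertion sort's key comparisons on random permutations:

```python
import random

def comparisons(a):
    """Count key comparisons made by insertion sort on list a."""
    a = list(a)
    count = 0
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0:
            count += 1            # each a[i]-vs-key test is one comparison
            if a[i] <= key:
                break
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return count

random.seed(0)
n, trials = 100, 200
avg = sum(comparisons(random.sample(range(n), n)) for _ in range(trials)) / trials
# For uniformly random inputs the expected count is roughly n^2 / 4
# (about 2500 for n = 100), i.e. about half the worst case.
```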

**Best-case (never):**

Best-case analysis is no good: it lets us cheat by taking a poor algorithm that happens to run fast on one specific input, which tells us nothing about the vast majority of inputs. It is a way to make a slow algorithm look fast.
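For insertion sort specifically, counting key comparisons makes the gap between the cases concrete:

```python
# Worst case (reverse-sorted input): the j-th element is compared against
# all j elements of the sorted prefix, so T(n) = 1 + 2 + ... + (n-1) = n(n-1)/2.
# Best case (already-sorted input): one comparison per element, so T(n) = n - 1.
n = 1000
worst = n * (n - 1) // 2   # Theta(n^2): 499,500 comparisons for n = 1000
best = n - 1               # Theta(n):   999 comparisons for n = 1000
```

A best-case analysis would report the n − 1 figure, which says nothing about the other inputs.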

**So what is the worst-case of an algorithm?**

It depends on the computer: whether it is a supercomputer or a wristwatch.

We can compare **relative speed**: run two algorithms on the same machine and see which performs better.

We can also ask about **absolute speed**: does one algorithm always outperform another, no matter which machine we run on?

This raises a puzzle: how can we talk about the worst case of an algorithm by discussing only the software, when performance depends on hardware? The same algorithm on a faster machine will always run faster than on a slower one.

**BIG IDEA! Asymptotic Analysis**

Asymptotic analysis lets us reason about algorithms independently of the machine:

- Ignore machine-dependent constants.
- Look at the __growth__ of the running time instead of the running time itself: T(n) as n -> ∞.

### Asymptotic notation:

**Θ-notation (Theta notation):**

- Drop low-order terms.
- Ignore leading constants.

Ex: 3n³ + 90n² – 5n + 6046 = Θ(n³)
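A quick numerical check of why low-order terms can be dropped, using the example polynomial above:

```python
def T(n):
    """The example running time T(n) = 3n^3 + 90n^2 - 5n + 6046."""
    return 3 * n**3 + 90 * n**2 - 5 * n + 6046

# Fraction of T(n) contributed by the leading 3n^3 term: it approaches 1
# as n grows, so the lower-order terms become irrelevant for large inputs.
for n in (10, 100, 1000, 10**4):
    print(n, round(3 * n**3 / T(n), 4))
```

For small n the low-order terms still matter (at n = 10 the cubic term is only a small fraction of the total), but the share approaches 1 as n grows, which is exactly the regime asymptotic notation describes.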

As n -> ∞, a Θ(n²) algorithm always beats a Θ(n³) algorithm, even if we run the Θ(n²) algorithm on a slower computer and the Θ(n³) one on a faster computer.

So asymptotic notation resolves the puzzle of both relative and absolute speed.

On different machines we may get different constants, but the growth rate as the input size increases does not change.
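The claim above can be checked with hypothetical machine constants (the specific constants 100 and 2 are illustrative assumptions of mine, not numbers from the lecture):

```python
def slow_machine_quadratic(n):
    """Theta(n^2) algorithm on a slow machine: assume 100 * n^2 steps."""
    return 100 * n * n

def fast_machine_cubic(n):
    """Theta(n^3) algorithm on a fast machine: assume 2 * n^3 steps."""
    return 2 * n ** 3

# Smallest n at which the asymptotically better algorithm wins:
# 100n^2 < 2n^3  <=>  n > 50, so the crossover is at n = 51.
crossover = next(n for n in range(1, 10**6)
                 if slow_machine_quadratic(n) < fast_machine_cubic(n))
```

Below the crossover the fast machine's constant factor dominates; beyond it, the Θ(n²) algorithm wins no matter how the constants are chosen, only the crossover point moves.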