Introduction to algorithms
Problem Solving ------> Algorithms ------> Computer Program
There can be many solutions or algorithms for one problem.
P1 ------> {A1, A2, A3, … }
Before implementing any algorithm as a program, we have to find
out which algorithm among these is good in terms of the performance
measures: time and memory.
Analyse an algorithm in terms of performance measures.
Time : Which algorithm would execute faster
Memory : Which algorithm will take less memory
First we design all the possible algorithms we can use, then analyse
them to choose the best one; the best algorithm is then
implemented as our program, i.e. the solution.
Asymptotic Notations - notations used in algorithm analysis
Big O notation
Big O measures how well a computer algorithm scales as the amount of
data involved increases.
As the number of inputs given to the algorithm increases, the running time
increases according to the following graph.
The curve is a function f(n), where n is the number of inputs.
What is the worst case, i.e. the upper bound, for this function?
Take another function C·g(n) that is greater than the given f(n).
If, beyond some limit n₀, C·g(n) bounds the function f(n), then the value
of C·g(n) is greater than or equal to f(n):
f(n) ≤ C·g(n) for all n ≥ n₀ ; where C > 0 and n₀ ≥ 1
If the above condition is satisfied, we say
f(n) is big O of g(n) ----------> f(n) = O(g(n)), which means that
f(n) grows no faster than g(n).
Example 01
f(n) = 3n + 2 and g(n) = n
Can we say f(n) = O(g(n)) is true?
Find C and n₀ first.
f(n) ≤ C·g(n) ; for some C > 0 and all n ≥ n₀
3n + 2 ≤ C·n
When can 3n + 2 be less than or equal to C·n?
C can be any constant greater than 3.
The best option is C = 4, but anything above 3 is good.
So 3n + 2 ≤ 4n ; so we found C. What about n?
From 3n + 2 ≤ 4n we get
n ≥ 2
So, for every n ≥ 2 with C = 4: f(n) ≤ C·g(n)
Therefore f(n) = O(g(n)) ; f(n) is big O of g(n)
3n + 2 = O(n)
We found out that, if f(n) = 3n + 2 and g(n) = n,
then f(n) is big O of g(n) ; f(n) = O(g(n))
That means 3n + 2 can be bounded above by a constant multiplied by n.
Let f(n) and g(n) be functions mapping non-negative integers
to real numbers. We say that f(n) is big O of g(n) if there
exist a real constant C > 0 and an integer constant n₀ ≥ 1 such that
f(n) ≤ C·g(n) for all n ≥ n₀
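The definition above can be checked numerically. A minimal sketch (the helper name `satisfies_big_o` and the sampling range are my own; sampling a finite range is evidence for a choice of C and n₀, not a proof):

```python
# Empirically check the Big-O definition: given candidate constants C and n0,
# verify f(n) <= C*g(n) for every sampled n in [n0, upto].
def satisfies_big_o(f, g, C, n0, upto=10_000):
    return all(f(n) <= C * g(n) for n in range(n0, upto + 1))

# Example 01 from these notes: f(n) = 3n + 2, g(n) = n, with C = 4 and n0 = 2.
print(satisfies_big_o(lambda n: 3*n + 2, lambda n: n, C=4, n0=2))  # True
# C = 3 is not enough: 3n + 2 <= 3n never holds.
print(satisfies_big_o(lambda n: 3*n + 2, lambda n: n, C=3, n0=1))  # False
```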
Example 02
f(n) = 8n - 2 (a linear function)
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n
8n - 2 ≤ C·n
8n - 2 ≤ 8n
So, C = 8
f(n) ≤ 8n for n ≥ 1
8n - 2 = O(n)
Example 03
f(n) = 8n² + 3n + log n
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n²
8n² + 3n + log n ≤ C·n²
Since 3n ≤ 3n² and log n ≤ n² for n ≥ 1,
8n² + 3n + log n ≤ 12n²
So, C = 12
f(n) ≤ 12n² for n ≥ 1
8n² + 3n + log n = O(n²)
Example 04
f(n) = 3n⁵ + n log n + 3
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n⁵
3n⁵ + n log n + 3 ≤ C·n⁵
Since n log n ≤ n⁵ and 3 ≤ 3n⁵ for n ≥ 1,
3n⁵ + n log n + 3 ≤ 7n⁵
So, C = 7
f(n) ≤ 7n⁵ for n ≥ 1
3n⁵ + n log n + 3 = O(n⁵)
Example 05
f(n) = 3n log n + 3n
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n log n
3n log n + 3n ≤ C·n log n
Since log n ≥ 1 for n ≥ 2, we have 3n ≤ 3n log n, so
3n log n + 3n ≤ 6n log n
So, C = 6
f(n) ≤ 6n log n for n ≥ 2
3n log n + 3n = O(n log n)
Example 06
f(n) = 2ⁿ + 3
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = 2ⁿ
2ⁿ + 3 ≤ C·2ⁿ
Since 3 ≤ 2ⁿ for n ≥ 2,
2ⁿ + 3 ≤ 2·2ⁿ
So, C = 2
f(n) ≤ 2·2ⁿ for n ≥ 2
2ⁿ + 3 = O(2ⁿ)
Example 07
f(n) = 2ⁿ⁺³ + n²
f(n) = 2³·2ⁿ + n²
f(n) = 8·2ⁿ + n²
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = 2ⁿ
8·2ⁿ + n² ≤ C·2ⁿ
Since n² ≤ 2ⁿ for n ≥ 4,
8·2ⁿ + n² ≤ 9·2ⁿ
So, C = 9
f(n) ≤ 9·2ⁿ for n ≥ 4
2ⁿ⁺³ + n² = O(2ⁿ)
Example 08
f(n) = 3n + log n + 7
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n
3n + log n + 7 ≤ C·n
Since log n ≤ n and 7 ≤ 7n for n ≥ 1,
3n + log n + 7 ≤ 11n
So, C = 11
f(n) ≤ 11n for n ≥ 1
3n + log n + 7 = O(n)
Example 09
f(n) = n³ + n² + 100n
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n³
n³ + n² + 100n ≤ C·n³
Since n² ≤ n³ and 100n ≤ 100n³ for n ≥ 1,
n³ + n² + 100n ≤ 102n³
So, C = 102
f(n) ≤ 102n³ for n ≥ 1
n³ + n² + 100n = O(n³)
Example 10
f(n) = 2ⁿ + n³
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = 2ⁿ
2ⁿ + n³ ≤ C·2ⁿ
Since n³ ≤ 2ⁿ for n ≥ 10,
2ⁿ + n³ ≤ 2·2ⁿ
So, C = 2
f(n) ≤ 2·2ⁿ for n ≥ 10
2ⁿ + n³ = O(2ⁿ)
Example 11
f(n) = 3n - 2
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n
3n - 2 ≤ C·n
3n - 2 ≤ 3n
So, C = 3
f(n) ≤ 3n for n ≥ 1
3n - 2 = O(n)
Example 12
f(n) = n log n - n
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n log n
n log n - n ≤ C·n log n
n log n - n ≤ 1·n log n
So, C = 1
f(n) ≤ n log n for n ≥ 1
n log n - n = O(n log n)
Example 13
f(n) = 3n³ + 10n² - 21n log n
f(n) ≤ C·g(n) for n ≥ n₀
g(n) = n³
3n³ + 10n² - 21n log n ≤ C·n³
Since -21n log n ≤ 0 and 10n² ≤ 10n³ for n ≥ 1,
3n³ + 10n² - 21n log n ≤ 13n³
So, C = 13
f(n) ≤ 13n³ for n ≥ 1
3n³ + 10n² - 21n log n = O(n³)
------------------------------------------------------------------------------
Note:
2ⁿ > n⁵ > n⁴ > n³ > n² > n log n > n > log n   (for sufficiently large n)
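One way to see this ordering is to evaluate each function at a single n (a quick sketch; the choice n = 30 is mine, and for small n some orderings flip, e.g. 2ⁿ < n⁵ until n ≈ 23):

```python
import math

# Evaluate each function in the note at n = 30 and rank by value.
n = 30
values = {
    "2^n":     2**n,
    "n^5":     n**5,
    "n^4":     n**4,
    "n^3":     n**3,
    "n^2":     n**2,
    "n log n": n * math.log2(n),
    "n":       n,
    "log n":   math.log2(n),
}
ranked = sorted(values, key=values.get, reverse=True)
print(ranked)  # ['2^n', 'n^5', 'n^4', 'n^3', 'n^2', 'n log n', 'n', 'log n']
```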
Different Functions
- Constant Function
f(n) = C
One application of the constant function is the time required to execute a
primitive operation, which takes constant time.
Example: comparing an 8-bit register with another 8-bit register. (It will take
the same time for each bit.)
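A small sketch of constant time (the function name is hypothetical): indexing into a list is a primitive operation whose cost does not depend on the input size.

```python
# O(1): one indexing operation, regardless of how long the list is.
def get_first(items):
    return items[0]

print(get_first([7, 8, 9]))           # 7
print(get_first(list(range(10**6))))  # 0 -- same single step on a huge list
```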
- Logarithmic Function (Log function)
f(n) = log n ; f(n) = O(log n)
Even for a very large input, the running time stays very small.
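The claim above can be seen with a halving loop (a sketch; the helper name is my own): cutting n in half repeatedly takes about log₂(n) steps, so a billion inputs need only about 30 iterations.

```python
# Count how many times n can be halved before reaching 1: about log2(n) steps.
def halving_steps(n):
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(1024))   # 10  (2**10 == 1024)
print(halving_steps(10**9))  # 29  -- about log2 of a billion
```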
- Linear Function
f(n) = n ; f(n) = O(n)
A linear function arises in algorithm analysis when we have to do a single
basic operation for each of n elements.
Example : Comparing a number x to each element of an array of size n will
require n comparisons.
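The comparison example above is exactly linear search, which can be sketched as:

```python
# Linear search: compare x to each of the n elements -- n comparisons
# in the worst case, so O(n).
def linear_search(array, x):
    for i, value in enumerate(array):
        if value == x:
            return i   # found: return the index of x
    return -1          # not found after n comparisons

print(linear_search([4, 2, 9, 7], 9))  # 2
print(linear_search([4, 2, 9, 7], 5))  # -1
```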
- nlogn Function
f(n) = n log n ; f(n) = O(n log n)
This function grows a little faster than the linear function and a lot slower
than the quadratic function (f(n) = n²).
n     | log n | n log n
------|-------|--------
10    | 1     | 10
100   | 2     | 200
1000  | 3     | 3000
10000 | 4     | 40000
(log base 10 in this table)
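Merge sort is a classic example of an O(n log n) algorithm: there are about log n levels of splitting, with O(n) merging work per level. A minimal sketch:

```python
# Merge sort: split the list in half, sort each half recursively,
# then merge the two sorted halves -- O(n log n) overall.
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append any leftover elements

print(merge_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```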
- Quadratic Function
f(n) = n² ; f(n) = O(n²)
The quadratic function appears in many algorithms that have “nested loops”,
where the “inner loop” performs a linear number of operations and the “outer
loop” performs another linear number of operations.
In such cases, the algorithm will perform n * n = n² operations.
Example:
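A sketch of the nested-loops case described above (the counter-based function is my own illustration): the inner loop runs n times for each of the outer loop's n iterations.

```python
# Nested loops over n elements: n iterations outside, n inside,
# so n * n = n^2 primitive operations in total.
def count_pairs(n):
    operations = 0
    for i in range(n):        # outer loop: n iterations
        for j in range(n):    # inner loop: n iterations each
            operations += 1   # one operation per pair (i, j)
    return operations

print(count_pairs(10))  # 100 == 10^2
```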
- Cubic Function (and other polynomials)
f(n) = n³ ; f(n) = O(n³)
Polynomial :
f(n) = a₀ + a₁n + a₂n² + a₃n³ + ….. + aₖnᵏ
Here, a₀ , a₁ , a₂ , ……. , aₖ are constant coefficients.
- Exponential Function
f(n) = aⁿ ; f(n) = O(aⁿ)
In algorithm analysis, the most common base for the exponential function is
a = 2 , that is f(n) = 2ⁿ.
Example : if we have a loop that starts by performing one operation and
then doubles the number of operations performed with each iteration, then
the number of operations performed in the nth iteration grows as 2ⁿ.
No. of iteration | 1 | 2 | 3 | 4  | ... | n
Total operations | 2 | 4 | 8 | 16 | ... | 2ⁿ
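The doubling loop in the table above can be sketched as follows (starting from 2 operations at iteration 1, to match the table):

```python
# Work doubles every iteration, so by iteration n the count is 2^n.
def operations_at_iteration(n):
    operations = 2               # iteration 1 performs 2 operations (see table)
    for _ in range(n - 1):
        operations *= 2          # each later iteration doubles the work
    return operations

for k in (1, 2, 3, 4):
    print(k, operations_at_iteration(k))  # 2, 4, 8, 16 -- matches the table
```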
Time vs Problem Size (n = number of inputs) for different functions
(different types of algorithms)
Big O of some algorithms
(Time Complexity)
Algorithm                                              | Big O
-------------------------------------------------------|----------------
Bubble Sort                                            | n²
Insertion Sort                                         | n²
Selection Sort                                         | n²
Quick Sort                                             | n² (worst case)
Merge Sort                                             | n log n
Search (array, queue, stack, binary search tree,       | n
singly linked list, doubly linked list)                |
Search (red-black tree)                                | log n
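Bubble sort, the first entry in the table, illustrates where an n² bound comes from. A minimal sketch with a comparison counter added (the counter is my own addition, to make the n(n-1)/2 ≈ n² comparisons visible):

```python
# Bubble sort: nested passes over the list give about n^2 comparisons.
def bubble_sort(items):
    a = list(items)
    comparisons = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # swap out-of-order pair
    return a, comparisons

sorted_list, comps = bubble_sort([5, 1, 4, 2, 8])
print(sorted_list, comps)  # [1, 2, 4, 5, 8] 10  (n(n-1)/2 = 10 for n = 5)
```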
Q1) Determine the Big-O values of the following:
1. Find an item in an arbitrary array of size n. ----------------> n
2. Finding the maximum from an array of values. ----------------> n
Think about it mathematically: unless the array is sorted, there is nothing to
"cut in half" to give you the log n behaviour.
3. Finding the sum of an array of items. ----------------> n
temp_sum = 0
for i in range(len(array)):
    temp_sum = temp_sum + array[i]
4. Adding a value to an unsorted array. ----------------> 1 (append at the end)
5. Adding a value to a sorted array. ----------------> n (find the position, then shift elements to keep the order)
6. Input n no. of values to an array. ----------------> n
7. Finding a value in an array of size n using Binary search. ----------------> log n
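Binary search on a sorted array "cuts the search space in half" each step, which is what gives the log n behaviour in answer 7. A minimal sketch:

```python
# Binary search: each comparison halves the remaining range -- O(log n).
def binary_search(sorted_array, x):
    lo, hi = 0, len(sorted_array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_array[mid] == x:
            return mid             # found: index of x
        elif sorted_array[mid] < x:
            lo = mid + 1           # x must be in the upper half
        else:
            hi = mid - 1           # x must be in the lower half
    return -1                      # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```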