
Algorithms: Greedy Algorithms - Fractional Knapsack Problem, Task Scheduling Problem - with C Program Source Code




These are two popular examples of Greedy Algorithms.

1. Fractional Knapsack Problem


Given a set of items, each with a weight and a value, and the maximum weight that we can carry,
we have to maximize the total value we can collect. Unlike the 0-1 knapsack problem, here we are
allowed to take fractions of items; hence this problem is called the fractional knapsack problem.
The fractional knapsack problem is solvable by a greedy strategy.

Greedy algorithm - it obtains a solution to a problem by making a sequence of choices. At
each decision point, the best possible choice is made.

The basic idea is to compute, for each item, the ratio of value to weight, and to sort the items
in decreasing order of this ratio. We then take the items with the highest ratios, adding each one
whole until the next item no longer fits, and finally add as much of that next item as the
remaining capacity allows.
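
For example (an illustrative example, not from the original text): suppose the knapsack capacity is 50 and there are three items with (weight, value) pairs (10, 60), (20, 100) and (30, 120). The value/weight ratios are 6, 5 and 4, so the items are already in decreasing order of ratio. We take the first two items whole (total weight 30, total value 160); the third item does not fit whole, so we take 20 of its 30 units of weight, adding 120 * 20/30 = 80, for a maximum value of 240.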

Properties

If the items are already sorted into decreasing order of vi / wi, then the for-loop takes O(n) time, where n is the total number of items. Therefore, the total time including the sort is O(n log n), since quicksort's average-case time complexity is O(n log n).


Fractional Knapsack Problem - C Program Source Code 


#include<stdio.h>
#include<stdlib.h> /* for qsort */

/* Given the weight of each item along with its value and also the maximum weight that we can take,
   we have to maximise the value we can get. Unlike 0-1 knapsack, here we are allowed to take fractional
   items. Hence this problem is called fractional knapsack */

typedef struct item
{
        int weight;
        int value;
}item;

/* Comparator for qsort: sorts items in decreasing order of value/weight ratio. */
int compare(const void *x, const void *y)
{
        item *i1 = (item *)x, *i2 = (item *)y;
        double ratio1 = (*i1).value*1.0 / (*i1).weight;
        double ratio2 = (*i2).value*1.0 / (*i2).weight;
        if(ratio1 < ratio2) return 1;
        else if(ratio1 > ratio2) return -1;
        else return 0;
}

int main()
{
        int items;
        scanf("%d",&items);
        item I[items];
        int iter;
        for(iter=0;iter<items;iter++)
        {
                scanf("%d%d",&I[iter].weight,&I[iter].value);
        }

        /* Sort the items in decreasing order of value/weight ratio. */
        qsort(I,items,sizeof(item),compare);

        int maxWeight;
        scanf("%d",&maxWeight);

        double value = 0.0;    /* total value collected so far */
        int presentWeight = 0; /* total weight carried so far */

        for(iter=0;iter<items;iter++)
        {
                if(presentWeight + I[iter].weight < maxWeight)
                {
                        /* The whole item fits: take all of it. */
                        presentWeight = presentWeight + I[iter].weight;
                        value += I[iter].value;
                }
                else
                {
                        /* Only part of the item fits: take the fraction that fills the knapsack. */
                        int remaining = maxWeight - presentWeight;
                        value += I[iter].value*remaining*1.0/I[iter].weight;
                        break;
                }
        }
        printf("Maximum value that can be attained is %.6lf\n",value);
        return 0;
}
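
A sample run (illustrative input, not part of the original page), using the example above - capacity 50 and items (10, 60), (20, 100), (30, 120). The program reads the number of items, then one (weight, value) pair per line, and finally the maximum weight:

Input:
3
10 60
20 100
30 120
50

Output:
Maximum value that can be attained is 240.000000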


2. Task Scheduling Problem




Given a set of events, our goal is to maximize the number of events that we can attend. Let E = {1, 2, 3, …, n} be the events we could attend. Each event i has a start time si and a finish time fi, where si < fi. Events i and j are compatible if the intervals [si, fi) and [sj, fj) do not overlap (i.e., i and j are compatible if si ≥ fj or sj ≥ fi). The goal is to select a maximum-size set of mutually compatible events. A greedy algorithm solves this problem: arrange the events in increasing order of finish time, f1 ≤ f2 ≤ … ≤ fn, and repeatedly select the next event whose start time is not earlier than the finish time of the last event selected.
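
For instance (an illustrative example, not from the original text), consider five events with (start, finish) times (1, 4), (3, 5), (0, 6), (5, 7) and (8, 9). Sorted by finish time they are already in this order. The greedy algorithm picks (1, 4), skips (3, 5) and (0, 6) because they start before time 4, picks (5, 7), and finally picks (8, 9) - a maximum of 3 mutually compatible events.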

Properties

If the events are already sorted into increasing order of fi, then the for-loop takes O(n) time, where n is the total number of events. Therefore, the total time including the sort is O(n log n), since quicksort's average-case time complexity is O(n log n).


Task Scheduling - C Program Source Code



#include<stdio.h>
#include<stdlib.h> /* for qsort */

typedef struct event
{
        int start_time;
        int end_time;
        int event_number;
}event;

/* Comparator for qsort: sorts events in increasing order of finish time. */
int compare(const void *x, const void *y)
{
        event *e1 = (event *)x, *e2 = (event *)y;
        return (*e1).end_time - (*e2).end_time;
}

/* Given the list of events, our goal is to maximise the number of events we can attend. */
int main()
{
        int number_of_events;
        scanf("%d",&number_of_events);
        event T[number_of_events];
        int iter;
        for(iter=0;iter<number_of_events;iter++)
        {
                scanf("%d%d",&T[iter].start_time,&T[iter].end_time);
                T[iter].event_number = iter;
        }

        /* Sort the events according to their respective finish times. */
        qsort(T,number_of_events,sizeof(event),compare);

        int events[number_of_events]; /* stores the event numbers that can be attended */
        int possible_events = 0;      /* number of events selected so far */

        /* Always take the event that finishes first. */
        events[possible_events++] = T[0].event_number;
        int previous_event = 0;

        /* Select an event if it is compatible with the previously selected event. */
        for(iter=1;iter<number_of_events;iter++)
        {
                if(T[iter].start_time >= T[previous_event].end_time)
                {
                        events[possible_events++] = T[iter].event_number;
                        previous_event = iter;
                }
        }

        printf("Maximum possible events that can be attended are %d. They are\n",possible_events);
        for(iter=0;iter<possible_events;iter++)
        {
                printf("%d\n",events[iter]);
        }
        return 0;
}
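
A sample run (illustrative input, not part of the original page), using the five events above. The program reads the number of events, then one (start, finish) pair per line; note that it numbers the events from 0 in input order:

Input:
5
1 4
3 5
0 6
5 7
8 9

Output:
Maximum possible events that can be attended are 3. They are
0
3
4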

Related Tutorials ( Common examples of Greedy Algorithms ) :

Elementary cases : Fractional Knapsack Problem, Task Scheduling - Elementary problems in Greedy algorithms - Fractional Knapsack, Task Scheduling. Along with C Program source code.

Data Compression using Huffman Trees - Compression using Huffman Trees. A greedy technique for encoding information.



Basic Data Structures and Algorithms



Sorting - at a glance

Bubble Sort - One of the most elementary sorting algorithms to implement - and also very inefficient. Runs in quadratic time. A good starting point to understand sorting in general, before moving on to more advanced techniques and algorithms. A general idea of how the algorithm works and the code for a C program.

Insertion Sort - Another quadratic time sorting algorithm - an example of an incremental algorithm. An explanation and step-through of how the algorithm works, as well as the source code for a C program which performs insertion sort.

Selection Sort - Another quadratic time sorting algorithm - an example of a greedy algorithm. An explanation and step-through of how the algorithm works, as well as the source code for a C program which performs selection sort.

Shell Sort - An inefficient but interesting algorithm, the complexity of which is not exactly known.

Merge Sort - An example of a Divide and Conquer algorithm. Works in O(n log n) time. The memory complexity for this is a bit of a disadvantage.

Quick Sort - In the average case, this works in O(n log n) time. No additional memory overhead - so this is better than merge sort in this regard. A partition element is selected, and the array is restructured such that all elements greater or less than the partition are on opposite sides of it. These two parts of the array are then sorted recursively.

Heap Sort - An efficient sorting algorithm which runs in O(n log n) time. Uses the Heap data structure.

Binary Search Algorithm - Commonly used algorithm to find the position of an element in a sorted array. Runs in O(log n) time.

Basic Data Structures


Stacks - Last In First Out data structures ( LIFO ). Like a stack of cards from which you pick up the one on the top ( which is the last one to be placed on top of the stack ). Documentation of the various operations and the stages a stack passes through when elements are inserted or deleted. C program to help you get an idea of how a stack is implemented in code.

Queues - First In First Out data structure ( FIFO ). Like people waiting to buy tickets in a queue - the first one to stand in the queue gets the ticket first and gets to leave the queue first. Documentation of the various operations and the stages a queue passes through as elements are inserted or deleted. C Program source code to help you get an idea of how a queue is implemented in code.

Single Linked List - A self referential data structure. A list of elements, with a head and a tail; each element points to another of its own kind.

Double Linked List - A self referential data structure. A list of elements, with a head and a tail; each element points to another of its own kind in front of it, as well as another of its own kind behind it in the sequence.

Circular Linked List - A linked list with no head and tail - elements point to each other in a circular fashion.

Binary Search Trees - A basic form of tree data structures. Inserting and deleting elements in them. Different kinds of binary tree traversal algorithms.

Heaps - A tree-like data structure where every element is lesser (or greater) than the one above it. Heap formation, and sorting using heaps in O(n log n) time.

Height Balanced Trees - Ensuring that trees remain balanced to optimize the complexity of the operations which are performed on them.

Graphs

Depth First Search - Traversing through a graph using Depth First Search, in which unvisited neighbors of the current vertex are pushed into a stack and visited in that order.

Breadth First Search - Traversing through a graph using Breadth First Search, in which unvisited neighbors of the current vertex are pushed into a queue and then visited in that order.

Minimum Spanning Trees: Kruskal Algorithm - Finding the Minimum Spanning Tree using the Kruskal Algorithm, which is a greedy technique. Introducing the concept of Union Find.

Minimum Spanning Trees: Prim's Algorithm - Finding the Minimum Spanning Tree using Prim's Algorithm.

Dijkstra Algorithm for Shortest Paths - Popular algorithm for finding shortest paths: Dijkstra Algorithm.

Floyd Warshall Algorithm for Shortest Paths - The all-pairs shortest paths algorithm: Floyd Warshall Algorithm.

Bellman Ford Algorithm - Another common shortest path algorithm: Bellman Ford Algorithm.

Dynamic Programming - A technique used to solve optimization problems, based on identifying and solving sub-parts of a problem first.

Integer Knapsack Problem - An elementary problem, often used to introduce the concept of dynamic programming.

Matrix Chain Multiplication - Given a long chain of matrices of various sizes, how do you parenthesize them for the purpose of multiplication - how do you choose which ones to start multiplying first?

Longest Common Subsequence - Given two strings, find the longest common subsequence between them.

Elementary cases : Fractional Knapsack Problem, Task Scheduling - Elementary problems in Greedy algorithms - Fractional Knapsack, Task Scheduling. Along with C Program source code.

Data Compression using Huffman Trees - Compression using Huffman Trees. A greedy technique for encoding information.