Saturday, February 29, 2020

POINTERS

Pointers are special variables that store the l-value (address) of another variable.
Let us first clear up the concepts of l-value and r-value.
The r-value of a variable is the value stored in it.
The l-value, on the other hand, stands for location-value, i.e. the memory address where the variable is stored.


Consider the following declarations:
i) int a=20;
In the above initialization, a is a variable that refers to the value 20.
In other words, memory is reserved at some location, named 'a', and the value 20 is stored in that memory location.
Suppose that location happens to be address 400. Then a is the name, 20 is the r-value and 400 is its address or l-value.
Now that we are through with these, let's dive into the depths of the next syntax.

ii) int a=20;
int *p=&a;









Now see, in the first statement a variable named a holds the value 20, with l-value 400. In the second statement we have a pointer named p that stores the address of variable 'a', i.e. 400, while p itself lives at a separate memory address, say 600.


Now we shift to our second important topic under pointer.

POINTER AND ARRAY

An array is a contiguous memory allocation, i.e. its elements are stored one after another in memory.
For example suppose, we are given an array
int a[]={5,10,7,9,8};//line 1
int *p=a;//line 2
int (*p)[5]=&a;//line 3
What is the difference between the 3 lines?
line 1 initializes an array with certain values
line 2 is not an error. In the case of an array, the name of the array itself decays to a pointer to its first element, the base address.
line 3 is also fine, but note the type: &a is a pointer to the whole array, so it must be stored in an int (*)[5]. It states that a container is created which refers to the total array a.
Let us take an example.

This is an example of a pointer to the whole array, though it is still referenced through the first element of the array.

Now let’s look at certain notations in a pointer.
a[2]
*(a+2)
2[a]

Which of the above is correct?
Though it may not look like it, all three are accepted and are correct.
The first statement is obviously correct.
a[2] = *(a+2)
For any pointer a and integer index i,
*(a+i) = a[i].
Since addition commutes, *(a+i) = *(i+a), so
a[2] = 2[a]

Thus, all are the same.
Q) if A={2,5,6,7,8,0}
Find i) A[2] ii)3[A] 

Pointer addition
Now pointer addition is somewhat different from normal addition.
Suppose we are given a pointer named A and asked to find the value of
*(A+2)
The * here is the indirection (dereference) operator; the address it dereferences is calculated as
Resultant address = Base Address(A) + sizeof(data_type of A)*i ….. here * means normal multiplication.
Now that we are through with this, let's take the previous question and add a subpart to it.

iii) Find *(A+1)

Storage of pointers
Since we are on the topic of datatypes, let's make this clear: on a given platform, every pointer irrespective of its type occupies the same size. Many older books (assuming 16-bit compilers) state this size as 2B, the reasoning being that pointers always store addresses, addresses are necessarily numbers, and such numbers took 2B. On modern systems a pointer is typically 4B or 8B; the sums below stick to the 2B convention.
So, just solve the below sum
i)                sizeof(void *d)-sizeof(int *f)*sizeof(char *h)
ii)              sizeof(void *d)-sizeof(int *f)*sizeof(char *’c’)
Well, both look similar but they are not. The first one gives -2 (with 2B pointers), as sizeof() returns the size of a variable or datatype in Bytes(B), and multiplication binds tighter than subtraction: 2 - 2*2 = -2.
The second one is an error.
In pointer declarations we can only use variables or types, but not a constant like ‘c’, so char *’c’ is not a valid operand for sizeof.

Memory allocation of Pointers
To understand memory allocation, we first need to understand what the memory layout looks like.


A pointer that has not been assigned any address holds a garbage value; memory that is allocated at runtime resides in the heap.
There are generally 2 types of methods to allocate memory.
Static allocation: the type of allocation that takes place during compilation.
Dynamic allocation: the type of allocation in which the amount of memory to be allocated is known only at runtime. These values are stored in the Heap section.
The space in between the heap and stack memory is freely expandable: the Heap expands in the downward direction and the Stack expands in the upward direction. When the two collide and there is no space left for any further allocation, the program crashes.
Dangling pointers
Dangling pointers arise during object destruction when an object that has an incoming reference is deleted or deallocated, without modifying the value of the pointer, so that the pointer still points to the memory location of the deallocated memory.
Suppose we have a pointer that was pointing to a certain value. If that value gets deallocated but the pointer keeps the old address as its r-value, we get a dangling pointer situation.
Let’s see this with an example
int *p = malloc(sizeof(int));
*p = 30;
int *s = NULL;
s = p;
free(p);            /* the memory p pointed to is released */
printf("%d", *s);   /* not a compile-time error, but the value is irrelevant:
                       s now dangles into freed memory */
Why may the notion of a pointer become dangerous?
Pointers may appear to be harmless, but they can do great damage. Some cases are mentioned below:
1)     Suppose the pointer takes some system address as its value; then it can change those system values and we may face serious problems.
Let's take an example to understand. The world's simplest virus could be made by just finding the address of the Caps-Lock status flag and assigning it to a pointer. Caps-Lock is either ON/OFF, denoted by 1/0 respectively. Now if we keep performing a bitwise OR of that flag value with 1, the answer will always remain 1, i.e. Caps-Lock stays ON. We then could not sensibly write any C, C++, Java or Python code, because they are all case sensitive.
Remedy: Always initialize your pointers to NULL and do not allow the pointer to randomly take up a value.

int *p = NULL;
Here we are not allowing the pointer p to take up any garbage value as data.

2)     Another prominent case: suppose we dynamically allocate some memory but forget to free it. The memory remains allocated, the Heap keeps growing, and the program may eventually crash completely.
Remedy: wherever dynamic allocation takes place, the matching deallocation must take place too, once the memory is no longer needed.
Because of these problems certain programming languages like Java and Kotlin have removed raw pointers totally.
Self-Referential structure
Suppose the definitions of two structures are given as below:

struct demo1                                                                           struct demo2
{                                                                                               {
            int a;                                                                                        int b;
            struct demo2 p;                                                                       struct demo1 r;
}k;                                                                                            }m;

Now, if an object of struct demo1 were to be created, it would have to contain a whole demo2, which in turn contains a whole demo1, and so on without end. The size of each structure would be infinite, so the compiler rejects such a definition outright (each struct contains the other as an incomplete type).

How to solve this problem?
Well, again pointers coming to our rescue. Let us see how
struct demo1                                                                           struct demo2
{                                                                                               {
            int a;                                                                                        int b;
            struct demo2 *p;                                                                      struct demo1 *r;
}k;                                                                                            }m;
Now when we create an object of demo1, it need not contain a whole object of demo2, only a pointer to one. Taking the size of any pointer as 2B (and int as 2B, as on a 16-bit compiler), the size of demo1 is easily calculated as 4B; similarly for demo2 it is 4B.

This is how a pointer to an object of another structure solves the problem. Although the above structure is stable, it is definitely not a self-referential structure.
A self-referential structure is one which calls another object of same type in the same structure. Let us see an example.
struct Tree
{
            int key;
            struct Tree *left,*right;
}root;

Here we find that the structure refers to its own type inside itself. Such structures are self-referential structures.
SOME COMPLEX FUNCTION PROTOTYPES
1)     int f(float)   >>>A function named f taking a float value and returning an integer value
2)     int *f(int)      >>>A function taking an integer value and returning a pointer to int
3)     int (*f)(int)    >>>A pointer to a function, referred to by ‘f’, which takes an integer argument and returns an integer value
4)     void (*f[10])(int)>>An array of 10 pointers to functions which take an int argument and return no value
5)     int *(*a[5])(int)  >>>An array of 5 pointers to functions which take an integer argument and return a pointer to integer in each case
6)     float ***f(double ***)>> A function that takes pointer to pointer to pointer to double as argument and returns pointer to pointer to pointer to float
7)     char (*f)(int (*)[3])>>>>A pointer to a function, referred to by f, that takes a pointer to an int array of size 3 as argument and returns a char
8)     char (*(*f[3])(int))[5];>>>An array of 3 pointers to functions which take an integer argument and return a pointer to a char array of size 5
9)     float (*F(int *))[3]>>>>A function that takes an integer pointer as argument and returns a pointer to a float array of size 3
10)   float *F(int *)[3]>>>>>>Intended as a function which takes an integer pointer and returns an array of pointers to float; note that this form is actually illegal in C, since a function cannot return an array (use the form in 9 instead)
11)  char (*(*F())[3])(int)>>>A function F which takes no argument and returns a pointer to an array of 3 pointers to functions, each of which takes an integer argument and returns a char value
12)  void (*F)(int,void*(*)())>>>A pointer to a function ‘F’ that takes an integer and a pointer to a function returning a void pointer, and itself returns no value
13)  int **(*F)(int **,int **(A)(int **,int **))>>>A pointer to a function ‘F’ which takes a pointer to pointer to int, and a function A which takes two pointer-to-pointer-to-int arguments and returns a pointer to pointer to int; F itself returns a pointer to pointer to int




                       

Thursday, February 20, 2020

Inverted Page table

Inverted Paging

Inverted paging is a concept that was introduced after the conventional paging system, in order to reduce the memory overhead that normal page tables impose.
In the conventional paging system, each process has its own page table, and these tables are stored in the main memory.
A small depiction of the page table has been shown below.

As shown in the above diagram, we have a page table. The page table is referenced by a logical address. We will come back to the logical address a bit later.
First, we take a look at the total page table storage process.
Page table diagram

Now, look at the above picture. Here on the left side, we find the process number and frame number mapping; together with the page number this forms the translation from logical address to physical frame. The system keeps one such table per process, with an entry for every page of that process.
Whenever a process refers to a page, its page table is consulted: the page number indexes the table, and if the entry is valid, the corresponding frame is read from main memory. If the required page is not present, it must first be brought into main memory and the table entry updated before the translation can succeed.
One might feel: fine, but what is the problem? I mean, what was the need for this inverted page table? The answer is space. A page table must have an entry for every page of every process, and with large address spaces these tables eat up a great deal of main memory.
To reduce this overhead, inverted paging was proposed. But it has some grave disadvantages too.


Now first let us see what is this inverted paging.
In inverted paging, everything is the same except the direction of the mapping: there is one single table with an entry per physical frame, and each frame is mapped to the process number and page number that currently occupy it, rather than each process page being mapped to a frame.
I know it looks confusing, so let's just look at the diagram to get a better insight. And definitely, I suggest comparing the page table and the inverted page table diagrams to understand better.

Inverted page table
Now here we can see there is only a small change: the frame number is now the key, against which the process number and page number are mapped.
So, we can loosely say that the roles of physical address and logical address are interchanged.
One can also depict this as a mapping from physical address to logical address, if we take our ordinary page table as the base of comparison.
Well, now that you clearly see the comparison between the 2 diagrams, let's go ahead with the discussion.
Clearly, the inverted page table was invented to reduce the size of the page tables: one table for the whole system, with one entry per frame, instead of one large table per process.
The price is lookup time. To translate an address we must search the table for the matching (process number, page number) pair, which takes linear time, i.e. O(n) over the frames, for every reference. This might be tolerable with few processes, but in multiprocessing systems with a large number of processes the search cost ruins much of the benefit unless it is supplemented with hashing. Hence the plain form is not commonly used.
In case of any doubt feel free to ask your doubts.

Tuesday, February 18, 2020

Stable and unstable sorting


We have heard of this Stable term in several contexts. Well, to add to the list there is a comparison of sorting algorithms on the basis of Stable and unstable terms.
Let us understand the meaning of the Stable sorting algorithm.
Well, in general terms of Physics, what do we know as a stable state?
A stable state is one in which, when we displace the body slightly, it settles itself and goes back to its original position.
Even in general terms, the word stable means fixed in a certain posture.
In the same way, if input values that compare equal keep their original input order in the sorted output, then the algorithm is called a stable sorting algorithm.
Seems confusing?
Ok, let's take an example and then try to understand this fact.
For example, if our inputted array is

Array: 3, 2, 4₁, 5, 7, 6, 4₂

NOTE: The two 4s are subscripted to denote that 4₁ was entered first and then 4₂ was entered.

Ok now that we are clear upon the aforementioned facts. Let us see how the sorted array will look. Let the sorting be done in the ascending order.
Thus, on sorting the array there can be two outputs.
Don't believe me?
Ok, I'll show you how.

1st output: 2, 3, 4₁, 4₂, 5, 6, 7

2nd output: 2, 3, 4₂, 4₁, 5, 6, 7

You might find there is no real difference between the two outputs. Well, there is no difference in the sorted keys, but there is one small difference.
Note the positions of 4₁ and 4₂. In the 1st output 4₁ is before 4₂, whereas in the second output 4₁ comes after 4₂.
According to the input, 4₁ was entered first and then 4₂. That order is preserved in the 1st output but not in the second. Hence, the former comes from a stable sorting algorithm whereas the latter comes from an unstable sorting algorithm.

Examples of stable sorting algorithm: Bubble sort, Insertion sort, Merge sort, Counting sort, Tim sort
Examples of unstable sorting algorithm: Selection sort, Heap sort, Shell sort, Quicksort.

One may ask how Selection sort is an unstable sorting algorithm. But it is true: its long-distance swaps can carry an element past another element with an equal key.

Though this property can be controlled by appropriate use of conditional statements and relational operators, the standard form of the algorithm can produce an unstable result. Hence, it falls under unstable sorting algorithms.

Bubble sort and Merge sort, on the other hand, only ever reorder elements that compare strictly out of order, and by no means give an unstable result. Hence they are not unstable sorting algorithms.


Monday, February 17, 2020

Finding the strongly connected components using Kosaraju's algorithm

Kosaraju's Algorithm

When we know what strongly connected components are, we must know or devise an algorithm to find them. The algorithm commonly used is Kosaraju's algorithm.
Let us dive deep into Kosaraju's algorithm.
First let's list out the steps involved.
Then we will devise an algorithm and finally go for the test case or example.
STEP 1:
Perform DFS of the graph starting from any vertex, marking each visited vertex in a data structure V.
STEP 2:
Whenever there are no more unvisited directions to go, retrace back, pushing each finished vertex onto a stack S.
STEP 3:
When every vertex has been visited, reverse each edge of the graph.
STEP 4:
Pop the top element of S and, if it is not yet visited (visits are reset after STEP 3), perform DFS from it on the reversed graph. Every vertex this DFS reaches forms one strongly connected component; mark them all as visited in V.
STEP 5:
Repeat STEP 4 until the stack is empty; at that point all the components are found.



Let us now dive deep into the algorithm

#include <stdio.h>

#define MAXV 10

int V;                        /* number of vertices */
int adj[MAXV][MAXV];          /* adjacency matrix of the graph */
int visited[MAXV];
int S[MAXV], top = 0;         /* stack of finished vertices */

/* Pass 1: ordinary DFS; push a vertex once all its edges are explored */
void dfs1(int u)
{
    visited[u] = 1;
    for (int v = 0; v < V; v++)
        if (adj[u][v] && !visited[v])
            dfs1(v);
    S[top++] = u;             /* STEP 2: push on finishing */
}

/* Pass 2: DFS on the reversed graph (adj[v][u] instead of adj[u][v]) */
void dfs2(int u)
{
    visited[u] = 1;
    printf("%d-", u);
    for (int v = 0; v < V; v++)
        if (adj[v][u] && !visited[v])
            dfs2(v);
}

int kosaraju(void)
{
    int c = 0;
    for (int i = 0; i < V; i++) visited[i] = 0;
    for (int i = 0; i < V; i++)        /* STEPs 1 and 2 */
        if (!visited[i]) dfs1(i);
    for (int i = 0; i < V; i++) visited[i] = 0;
    while (top > 0) {                  /* STEPs 4 and 5 */
        int u = S[--top];
        if (!visited[u]) {
            c++;
            dfs2(u);                   /* one strongly connected component */
            printf("\n");
        }
    }
    printf("Number of strongly connected components:%d\n", c);
    return c;
}

Now let us move to our example.
We take the following graph
Now the graph needs certain naming, so let's name the vertices as below.

Now, this graph is perfect. We take a stack.
S: (empty)

We perform our 1st DFS move, taking '1' as our selected vertex.
Thus, after our first vertex movement:
S: (still empty)
V: 1

After the 1st DFS step, the visited data structure has 1 marked as visited.
After this, we move to our next vertex, 0.
S: (still empty)
V: 1, 0
Next, we move to 2.
S: (still empty)
V: 1, 0, 2
After this there is no place for 2 to go, so it leaves the DFS and 2 is pushed on the stack.
Now we are back at 0. From 0, we can also move to 3; hence our next vertex visited is 3.
S: 2
V: 1, 0, 2, 3
Now we move to vertex 4.
S: 2
V: 1, 0, 2, 3, 4

Now since all vertices are visited, we retrace back.
We move back to 3 and push 4 on the stack.
S: 2, 4
V: 1, 0, 2, 3, 4
Now we retrace back to 0 and push 3.
S: 2, 4, 3
V: 1, 0, 2, 3, 4
We now retrace back to 1 and push 0.
S: 2, 4, 3, 0
V: 1, 0, 2, 3, 4
We now move out of the loop, pushing 1.
S: 2, 4, 3, 0, 1
V: 1, 0, 2, 3, 4


Now we need to reverse every edge of the graph, and reset the visited structure.
S: 2, 4, 3, 0, 1
V: (empty)
Now we pop the top value from the stack and choose it as our starting element for DFS().
S.pop() = 1
So we add 1 to our visited data structure.
S: 2, 4, 3, 0
V: 1
output: 1-
Next, the DFS reaches 2.
S: 2, 4, 3, 0
V: 1, 2
output: 1-2-
Next, the DFS reaches 0.
S: 2, 4, 3, 0
V: 1, 2, 0
output: 1-2-0-
Next, no more DFS is possible from 0, so we retrace back.
'2' also has no more paths to go, so we retrace back to '1'.
Now '1' also has no path to go, so this DFS ends: {1, 2, 0} is one strongly connected component.
Now S.pop() = 0,
but 0 is already visited, hence it is skipped.
S: 2, 4, 3
V: 1, 2, 0
output: 1-2-0-

S.pop() = 3
'3' is not visited until now, so next we DFS from '3'.
S: 2, 4
V: 1, 2, 0, 3
output:
1-2-0-
3-
Next, there is no path to go, hence it retraces back: {3} is a component by itself.

Next, S.pop() = 4, which is not visited till now.
S: 2
V: 1, 2, 0, 3, 4
output:
1-2-0-
3-
4-
Next, there is no unvisited place for 4 to go, hence it retraces back: {4} is a component.
S.pop() = 2,
but 2 is already visited, hence ignored.
Hence the final status:
S: empty
V: 1, 2, 0, 3, 4
output:
1-2-0-
3-
4-
Number of strongly connected components:3
Hence this is our required output: the three strongly connected components {1,2,0}, {3} and {4}.




Sunday, February 16, 2020

Ways to find connected components using DFS

Finding connected components using DFS

Ok, now that we are thorough with Connected Components and Strongly Connected Components, let's check how to find the connected components using DFS.
For finding the components we follow the following steps
STEP 1:
Take an array consisting of all the vertices. That is the size of array = number of vertices.
STEP 2:
Assign each vertex value in the array as -1
STEP 3:
Start from each vertex and apply DFS to them if and only if vertex value=-1 Else move to the next vertex.
STEP 4:
At last, count the number of times DFS was initiated from the parent loop. That many are the components, and each component consists of the vertices traversed during its DFS.

Let's create an algorithm to justify my point.
#include <stdio.h>

#define MAXV 10

struct depth { int value; struct depth *next; };  /* adjacency-list node */

struct depth *G[MAXV];    /* G[i]: list of neighbours of vertex i */
int arr[MAXV];            /* arr[i] == -1 means vertex i is unvisited */
void DFS(int s);

void Connected_components(int v)
{
    int c = 0, i;
    for (i = 0; i < v; i++)        /* STEP 2 */
        arr[i] = -1;
    for (i = 0; i < v; i++)        /* STEP 3 */
    {
        if (arr[i] == -1)
        {
            c++;
            printf("\n");
            DFS(i);
        }
    }
    printf("\nNumber of components:%d", c);
}

void DFS(int s)
{
    arr[s] = 0;                    /* mark s visited */
    printf("%d-", s);
    for (struct depth *t = G[s]; t != NULL; t = t->next)
        if (arr[t->value] == -1)   /* visit each unvisited neighbour */
            DFS(t->value);
}

Now that we have devised an algorithm. Let's take an example and solve it.
Let us take the graph below and find the number of components as well as the vertices in each component.

 As shown here we have a partly connected and partly disconnected undirected graph. 
We need to find the number of components and the contents of each component respectively.





Before starting I must briefly tell about the 'G' variable in the algorithm.
G is built from a self-referential structure: each node stores a vertex value and a pointer to the next node, defined as follows.
struct depth
{
               int value;
               struct depth *next;
};
Now let's apply step 1 and convert each vertex value to -1.

AFTER APPLYING STEP 1 the graph looks like
Thus as shown each value is -1.
Now we choose our first vertex as 0. Any vertex could be chosen, but let us go serially; hence, we choose vertex '0'.
 Thus, we start performing DFS from 0.
First, c=0 becomes c=c+1 which is '1'
Hence, 0 is printed on the screen and DFS() is called again
Now we change the value of the '0' vertex to 0 and move to 2.

ON THE SCREEN
0-
After 2, DFS() moves to the next neighbour, which is 4, after changing the value of the '2' vertex to 0.

ON THE SCREEN NOW WE GET
0-2-
Now that 2 is also completed, we move to 4 and make its value 0.
ON THE SCREEN DISPLAY
0-2-4-
Now the DFS cannot send it to any other node; hence it moves out of DFS() back to the parent function, Connected_components().
Now the parent loop moves to vertex '1' and finds that it is still -1, so c++ gets executed and c becomes 2. Also, DFS() is called.
Thus, the '1' vertex value gets changed to 0.
ON THE SCREEN NOW IT DISPLAYS
0-2-4-
1-
DFS() gets called again and the pointer moves to 3 now.
ON THE SCREEN NOW IT DISPLAYS
0-2-4-
1-3-
Now since DFS() cannot reach any other node, it retraces back to the parent function.
In the parent loop, vertex '2' is visited next, but DFS cannot be called as its value is no longer '-1'.
Similarly, the loop visits vertex '3' and vertex '4', but DFS is not called as their values are not '-1'.
Hence the c value also does not increase.
Now the loop points to vertex '5'. It is still -1, hence c increases from 2 to 3 and a new line is printed.
Thus DFS() is called,
and finally, vertex '5' is visited.
ON THE SCREEN IT DISPLAYS
0-2-4-
1-3-
5-
Now since DFS cannot go to any other node, it breaks out.
Back in the parent function, there is no unvisited vertex left; the loop counter reaches 6 and the loop ends.
Then it prints 3, which is the number of components.
Finally the output of the code on the screen
0-2-4-
1-3-
5-
Number of components:3

Saturday, February 15, 2020

Connected Components and Strongly Connected Components


In a directed graph, if we can reach every vertex starting from any vertex when the directions of the edges are ignored, then such a set of vertices is called a connected component.
But what are strongly connected components? Well, a strongly connected component is a subset of a connected component. The difference is that in a connected component we only need the vertices to be linked somehow, whereas in a strongly connected component we need a two-way connection, i.e. if we can get from vertex A to vertex B then we must also be able to get from B to A.
So does the above-mentioned statement contradict the fact that it is a directed graph?
What the hell are you speaking?

Well, not actually. Firstly, a directed graph is definitely not an undirected graph. But most importantly, the statement doesn't say that we need a direct edge from A to B and from B to A; there might be intermediate vertices along the way.

Let's have a look into this through an image
Take a thorough look into the above diagram and try to get the connected and strongly connected components.
Well I was just kidding. Let's just find them together.
The strongly connected components are
A. 1-2-3
B. 4
C. 5
But the connected components are not the same
Connected components are
A. 1-2-3-4-5
Do you agree?
Definitely, you do. If you get anything else. Try doing again.
But why are the strongly connected components not the same as the connected components?
This is because, in the above diagram, the component 1-2-3 can reach any vertex (out of 1,2 and 3) starting from any vertex in the component.
But now if we try to add 4 to the above component and make 1-2-3-4 a single component, we observe that we can no longer reach every vertex from every vertex: if we start from 4, we cannot reach 1, 2 or 3. Hence it violates the law of strongly connected components, and 4 is a separate strongly connected component of its own.
Similarly, from 5 we cannot reach 1, 2, 3 or 4, hence it too is a single, separate strongly connected component.

If you think you have got the point comfortably then go for the following questions. Else drop in our comment box, the part you are not comfortable with.

Q1. Can a single undirected edge be called a Strongly connected component?
Q2. Rahul's teacher asks him to apply DFS on a given graph of 7 vertices. Rahul on doing so came up with the following conclusion:
a) Each vertex has the same in-degree and out-degree sequence
b) Edge set is empty
Then find A and B where A is the number of components that are present in strongly connected set and B is the number of components present in the connected components.
Q3. Let G=(V, E) be a directed graph where V is the set of vertices and E the set of edges. Then which one of the following graphs has the same strongly connected components as G ?






Q4. Does a cyclic graph have strongly connected components that are the same as its connected components?

Answers
A1. Strongly connected components require mutual reachability. In a single undirected edge we can reach either vertex from the other, so yes, it is strongly connected.

A2. The number of vertices = 7 and the edge set is empty.
Thus the number of strongly connected components = number of vertices = 7.
Similarly, the number of connected components = 7.
So, A=7 and B=7.

A3. option B
The option is pretty clear though. If we reverse the directions of all arcs in a graph, the new graph has the same set of strongly connected components as the original graph.

A4. A cyclic graph is formed by connecting the vertices in a cycle. It definitely has only 1 connected component, but if the edges are directed we may not be able to reach every vertex from every other vertex, so it may have more than 1 strongly connected component. Only when the graph is undirected (or the cycle runs through all the vertices in one direction) do the strongly connected components coincide with the connected components.


In case of any doubt please feel free to ask.