Friday, April 3, 2020

Demonstration of Neural Networks without using Tensorflow or Keras in Python


A detailed demonstration of Neural Networks, better known as ANNs, without using any fancy libraries such as Tensorflow or Keras.

The images used for the demonstration are shown below.

The link to the videos is given below.

Sunday, March 22, 2020

IIIT Hyderabad 2018 question paper and solutions

IIITH 2018 PGEE question paper and solutions

As we all know, the IIITH PGEE exam is coming close, so here is the solution to the Section C (Computer Science) part for all the aspirants.


The Mathematics section will be coming soon. Here, the Computer Science section is given.

CLICK HERE TO DOWNLOAD THE QUESTION PAPER OF CSE 2018 IIITH PGEE

CLICK HERE TO DOWNLOAD THE ANSWER OF CSE PAPER

The rest of the papers are coming soon, so stay tuned.

Monday, March 16, 2020

NIC NIELIT_SC_B_2018

As we all know, the NIC exam is coming close, so here is the solution to the Section C (Computer Science) part for all the aspirants.


The Mathematics and General Aptitude portions will be coming soon. Here, the Computer Science section is given.




Stay tuned for further NIC and other solution papers.

In case of any doubt, put it in the comments.

Saturday, February 29, 2020

POINTERS

Pointers are special variables that store the l-value of another variable.
Let us first clear up the conception of l-value and r-value.
Well, r-value stands for right-value (or read-value): the data itself.
On the other hand, l-value stands for location-value: the address at which the data is stored.


Suppose we consider the following declarations as given below
i) int a=20;
In the above initialization, a is a variable that refers to the value 20.
In other words, a memory location is reserved at a random address, with name 'a', and the value stored in that memory location is 20.
Say that address happens to be 400: then a is the name, 20 is the r-value and 400 is its address or l-value.
Now that we are through with these, let's dive into the depths of the next syntax.

ii) int a=20;
int *p=&a;


Now see: in the first statement, a variable named a holds the value 20 at l-value 400. In the second statement, we have a pointer named p that stores the address of the variable 'a' (400) as its own value, in a separate memory location, say 600.


Now we shift to our second important topic under pointers.

POINTER AND ARRAY

An array is a contiguous memory allocation, where the elements are stored one after another.
For example, suppose we are given
int a[]={5,10,7,9,8};//line 1
int *p=a;//line 2
int (*q)[5]=&a;//line 3
What is the difference between the 3 lines?
Line 1 initializes an array with certain values.
Line 2 is not an error. In the case of an array, the name of the array itself decays to a pointer to its first element, the base address.
Line 3 is also fine, but note the type: &a is a pointer to the whole array, of type int (*)[5], so the declaration must say so. It creates a container that refers to the total array a.
Both p and q hold the same address, but they are referenced differently: p+1 moves past one int, while q+1 moves past the whole array, even though the container of the array is still referenced through the first index of the array.

Now let’s look at certain notations with pointers. Given the array a above, consider:
a[2]
*(a+2)
2[a]

Which of the above is correct?
Though it may not look like it, all three are accepted and are correct.
The first expression is obviously correct, and by definition
a[2]=*(a+2)
since for any pointer operation
*(a+i)=a[i] where i is a constant.
Now *(a+i)=*(i+a), so
a[2]=2[a]

Thus, all are the same.
Q) If A={2,5,6,7,8,0},
find i) A[2] ii) 3[A]

Pointer addition
Pointer addition is somewhat different from normal addition.
Suppose we are given a pointer (or array name) A and asked to find the value of
*(A+2)
The * operator here is the dereference (value-at-address) operator. The address it reads from is calculated as
Resultant address = Base Address(A) + sizeof(data type of A)*i ….. here * means normal multiplication.
Now that we are through with this, let's take the previous question and add a subpart to it.

iii) Find *(A+1)

Storage of pointers
Since we are on the topic of datatypes, let's make one thing clear: every pointer, irrespective of the type it points to, occupies the same space, because a pointer always stores an address and addresses are just fixed-width numbers. On the old 16-bit compilers these notes assume (e.g. Turbo C), that width is 2B; on a modern 64-bit system it is 8B.
So, just solve the sums below (taking 2B pointers):
i)                sizeof(void *)-sizeof(int *)*sizeof(char *)
ii)              sizeof(void *)-sizeof(int *)*sizeof(char *’c’)
Well, both look similar, but they are not. The first one gives 2-2*2 = -2, as sizeof() returns the size of a variable or datatype in Bytes (B).
The second one is an error.
sizeof() can take a type or a variable, but char *'c' is neither: we cannot form a pointer declaration out of a constant like ‘c’.

Memory allocation of pointers
To understand memory allocation, we need to understand what the memory layout looks like.

Memory that a pointer receives dynamically lives in the heap.
There are generally 2 methods of allocating memory.
Static allocation: allocation that takes place during compilation, when the required size is already known.
Dynamic allocation: allocation in which the amount of memory needed becomes known only at runtime. These values are stored in the Heap section.
The space between the Heap and the Stack is freely expandable: the two grow toward each other. When they collide and there is no space left for any further allocation, the program crashes.
Dangling pointers
A dangling pointer arises during object destruction: an object that still has an incoming reference is deleted or deallocated without modifying the value of the pointer, so the pointer still points to the deallocated memory location.
Suppose we have a pointer that was pointing to a certain value. If that value gets freed somehow but the pointer does not change its own r-value (the stored address), we land in a dangling-pointer situation.
Let’s see this with an example:
int *p=malloc(sizeof(int));
*p=30;
int *s=NULL;
s=p;
free(p);
printf("%d",*s);// not a compile error, but the value is irrelevant: s still holds the freed address. Hence this is the dangling-pointer error.
Why can the notion of the pointer become dangerous?
Pointers may appear to be harmless, but they can do some great damage. Some cases are mentioned below:
1) Suppose the pointer takes some system address as its value: it can then cause those system values to change, and we may face problems.
Let’s take an example to understand. The world’s most simple virus could be made by just checking the address of the Caps-Lock state and assigning it to a pointer. Caps-Lock is either ON or OFF, denoted by 1/0 respectively. Now if we take that value and perform a bitwise OR with 1, the answer will always remain 1: Caps-Lock is stuck ON. Hence we cannot comfortably write any C, C++, Java or Python code, because they are all case sensitive.
Remedy: Always initialize your pointers to NULL and do not allow a pointer to randomly take up a value.

int *p=NULL;
Here we are not allowing the pointer p to take up any garbage value as data.

2)     Another prominent case: suppose we dynamically allocate some memory but forget to free it. The memory remains allocated, the Heap keeps growing, and the program may eventually crash completely.
Remedy: The only remedy in such a situation is discipline: wherever there is a dynamic allocation, a matching deallocation must take place once the memory is no longer needed.
Because of these problems, certain programming languages like Java and Kotlin have removed raw pointers totally.
Self-Referential structures
Suppose the definitions of two structures are given as below:

struct demo1
{
            int a;
            struct demo2 p;
}k;

struct demo2
{
            int b;
            struct demo1 r;
}m;

Now suppose an object of struct demo1 is to be created: it would require creating a demo2 inside it, which in turn would require a demo1, and this process would continue till the total memory is exhausted. (In practice a compiler simply rejects such a definition, since the structure's size would be infinite.)

How to solve this problem?
Well, again pointers come to our rescue. Let us see how:

struct demo1
{
            int a;
            struct demo2 *p;
}k;

struct demo2
{
            int b;
            struct demo1 *r;
}m;

Now when we create an object of demo1, the object need not create an object of demo2; it only holds its address. Taking the 2B pointers (and 2B ints) assumed earlier, the size of demo1 is easily calculated as 4B, and similarly 4B for demo2.

This is how a pointer to an object of another structure solves the crashing problem. Although the above structure is stable, it is definitely not yet a self-referential structure.
A self-referential structure is one which refers to another object of the same type inside the same structure. Let us see an example:
struct Tree
{
            int key;
            struct Tree *left,*right;
}root;

Here we find that inside the structure, a pointer to the same structure type is declared. This is a self-referential structure.
SOME COMPLEX FUNCTION PROTOTYPES
1)     int f(float)   >>>A function named f taking a float value and returning an integer value
2)     int *f(int)      >>>A function taking an integer value and returning an int type pointer
3)     int (*f)(int)    >>>A pointer to a function, referred to by ‘f’, which takes an integer argument and returns an integer value
4)     void (*f[10])(int)>>An array of 10 pointers to functions which take an int value as argument and return no value
5)     int *(*a[5])(int)  >>>An array of 5 pointers to functions which take an integer value as argument and return a pointer to integer in each case
6)     float ***f(double ***)>> A function that takes a pointer to pointer to pointer to double as argument and returns a pointer to pointer to pointer to float as return value
7)     char (*f)(int (*)[3])>>>>A pointer to a function, referred to by ‘f’, that takes a pointer to an array of 3 ints as argument and returns a char type value
8)     char (*(*f[3])(int))[5];>>>An array of 3 pointers to functions which take an integer value and return a pointer to an array of 5 chars
9)     float (*f(int *))[3]>>>>A function that takes an integer pointer as argument and returns a pointer to an array of 3 floats
10)   float *f(int *)[3]>>>>>>Ill-formed: a function cannot return an array, so this does not compile; the pointer-to-array form in 9) is what is meant
11)  char (*(*f())[3])(int)>>>A function which takes no arguments and returns a pointer to an array of 3 pointers to functions, each taking an integer argument and returning a char value
12)  void (*f)(int,void *(*)())>>>A pointer to a function ‘f’ that takes an integer and a pointer to a function returning a void pointer, and itself returns no value
13)  int **(*f)(int **,int **A(int **,int **))>>>A pointer to a function, referred to by ‘f’, which takes a pointer to pointer to integer and a function A (taking two pointer-to-pointer-to-integer arguments and returning pointer to pointer to integer), and returns a pointer to pointer to integer





Thursday, February 20, 2020

Inverted Page table

Inverted Paging

Inverted paging is a concept that was introduced after the conventional paging system, in order to remove some problems the normal paging system was facing.
In the conventional paging system, the page tables are stored in main memory.
A small depiction of the page table has been shown below.

As shown in the above diagram, we have a page table. The page table is referenced by a logical address. We will come back to the logical address a bit later.
First, we take a look at the total page-table storage process.
Page table diagram

Now, look at the above picture. On the left side we find the process number and page number: together these form the logical address. The system creates this virtual-memory table, storing a process number against each page number.
Whenever a certain page of a process is referred to, the table is searched for the matching process number. Once the process number matches, it checks whether the entry belongs to the required page by comparing the page numbers. If a match is found, that data is retrieved from main memory. If no match is found, it needs to go to main memory, search for the page, copy the required entry into the page table and compare again.
One might ask: fine, but what is the problem? What was the need for this inverted page table? The answer is the searching time and the process-retrieval time.
To reduce this overhead, inverted paging was proposed. But it had some grave disadvantages too.


Now first let us see what this inverted paging is.
In inverted paging, everything stays the same; the difference is that a process number is not mapped to a page number, but rather a page (frame) number is mapped to a process number, giving one entry per physical frame.
I know it looks confusing, but let's just look at the diagram to get a better insight. And definitely, I suggest comparing the page table and the inverted page table diagrams to understand better.

Inverted page table
Now here we can see there is only a small change: the page number is now the key against which the process number is mapped.
So, we can simply say that the physical address becomes the lookup key and the logical address the stored value, and vice-versa.
One can also depict this as a mapping from physical address to logical address, if we take our page table as the base of comparison.
Well, now that you clearly see the comparison between the 2 diagrams, let's go ahead with the discussion.
Clearly, the inverted page table cuts the table down to one entry per physical frame: a single table for the whole system instead of one per process.
This structure might suit uniprocessing, where we need to map against one process only, but it is a disaster in multiprocessing systems with a large number of processes. There, finding which frame is mapped against which process again takes linear time, i.e. O(n) complexity, as we need to compare each and every entry, ruining the benefit unless a hashed lookup is added on top. Hence the plain form is rarely used.
In case of any doubt, feel free to ask in the comments.

Tuesday, February 18, 2020

Stable and unstable sorting

Stable and unstable sorting

We have heard the term Stable in several contexts. Well, to add to the list, sorting algorithms are compared on the basis of being stable or unstable.
Let us understand the meaning of a Stable sorting algorithm.
In general terms of Physics, what do we know as a stable state?
A stable state is one in which, when we displace the Centre of Gravity from one point, it settles itself and goes back to its original position.
Even in general terms, stable means fixed in a certain posture.
In the same way, if equal input values do not change their relative order in the sorted output, the algorithm is called a Stable sorting algorithm.
Seems confusing?
Ok, let's take an example and then try to understand this fact.
For example, if our inputted array is

Array: 3, 2, 4a, 5, 7, 6, 4b

NOTE: The two 4s carry tags to show that 4a was entered first and 4b was entered after it (the original post used red and black colours for this).

Ok, now that we are clear on the aforementioned facts, let us see how the sorted array will look. Let the sorting be done in ascending order.
On sorting the array there can be two outputs.
Don't believe me?
Ok, I'll show you how.

1st output: 2, 3, 4a, 4b, 5, 6, 7

2nd output: 2, 3, 4b, 4a, 5, 6, 7

You might find there is no difference between the two as sorted sequences. Well, there is no difference in the sorting, but there is one small difference.
Note the positions of 4a and 4b. In the 1st output, 4a is before 4b, whereas in the 2nd output, 4a is after 4b.
According to the input, 4a was entered first and then 4b. That pattern is followed in the 1st output but not in the second. Hence, the former comes from a Stable sorting algorithm whereas the latter comes from an unstable sorting algorithm.

Examples of stable sorting algorithm: Bubble sort, Insertion sort, Merge sort, Counting sort, Tim sort
Examples of unstable sorting algorithm: Selection sort, Heap sort, Shell sort, Quicksort.

Although one may argue: how is Selection sort an unstable sorting algorithm? But it is true. This is because its long-distance swaps can move one of two equal elements past the other.

Though it is true that this property can be controlled by appropriate use of conditional statements and relational operators, the standard algorithm can still produce an unstable ordering. Hence, it falls under unstable sorting algorithms.

On the other hand, Bubble sort and Merge sort keep equal elements in their original relative positions (they only exchange elements that are strictly out of order) and by no means give an unstable result. Hence, they are not unstable sorting algorithms.


Monday, February 17, 2020

Finding the strongly connected components using Kosaraju's algorithm

Kosaraju's Algorithm

Once we know what strongly connected components are, we must know or devise an algorithm to find them. The one commonly used is Kosaraju's algorithm (often called Kosaraju's theorem).
Let us dive deep into Kosaraju's algorithm.
First, let's list out the steps involved.
Then we will write out the algorithm and finally go through a worked example.
STEP 1:
Perform a DFS of the graph starting from any vertex, marking each vertex visited in a data structure D.
STEP 2:
When there are no more unvisited directions to go from a vertex, retrace back, pushing the finished vertices onto a stack S.
STEP 3:
When every vertex has been visited, reverse each edge of the graph.
STEP 4:
Pop the top element of S and, if it is unvisited, perform a DFS on the reversed graph. Wherever that DFS stops, everything it reached is one strongly connected component. At the same time, mark the visited vertices in D.
STEP 5:
Repeat step 4 until the stack is empty: at last, all the components are found.



Let us now dive deep into the algorithm (the original sketch is tightened here into compilable C over an adjacency matrix):

#include <stdio.h>

#define MAXV 10

int adj[MAXV][MAXV];   /* the graph           */
int radj[MAXV][MAXV];  /* the reversed graph  */
int V[MAXV];           /* visited marks (D)   */
int S[MAXV], top = 0;  /* the stack S         */

/* STEP 1-2: DFS on the original graph; push a vertex when it finishes */
void DFS1(int u, int v)
{
    V[u] = 1;
    for (int w = 0; w < v; w++)
        if (adj[u][w] && !V[w])
            DFS1(w, v);
    S[top++] = u;
}

/* STEP 4: DFS on the reversed graph; prints one component */
void DFS2(int u, int v)
{
    V[u] = 1;
    printf("%d-", u);
    for (int w = 0; w < v; w++)
        if (radj[u][w] && !V[w])
            DFS2(w, v);
}

int kosaraju(int v)
{
    int c = 0;
    for (int i = 0; i < v; i++)
        if (!V[i])
            DFS1(i, v);
    /* STEP 3: reverse every edge */
    for (int i = 0; i < v; i++)
        for (int j = 0; j < v; j++)
            radj[i][j] = adj[j][i];
    for (int i = 0; i < v; i++)
        V[i] = 0;
    /* STEP 4-5: pop and DFS on the reversed graph */
    while (top > 0) {
        int u = S[--top];
        if (!V[u]) {
            DFS2(u, v);
            printf("\n");
            c++;
        }
    }
    printf("Number of strongly connected components: %d\n", c);
    return c;
}

Now let us move to our example.
We take the following graph.
The graph needs certain naming, so let's name the vertices 0 to 4 as below.

Now, this graph is perfect. We take a stack S, which is empty to begin with.
We perform our 1st DFS move taking '1' as our starting vertex.
Thus, after our first vertex is visited, the status is
Stack S: (still empty)
V: 1

After the 1st DFS step, the visited data structure has 1 marked as visited.
After this, we move to our next vertex, 0.
Stack S: (still empty)
V: 1 0
Next, we move to 2.
S: (still empty)
V: 1 0 2
After this there is no unvisited place for 2 to go (its only edge leads back to the visited vertex 1), so it finishes and is pushed onto the stack.
Now we retrace back to 0. From 0, we can also move to 3.
Hence, our next vertex visited is 3.
S: 2
V: 1 0 2 3
Now we move to vertex 4.
S: 2
V: 1 0 2 3 4

Now, since there is nowhere left to go from 4, we retrace back.
We move back to 3 and push 4 onto the stack.
S: 2 4
V: 1 0 2 3 4
Now we retrace back to 0 and push 3.
S: 2 4 3
V: 1 0 2 3 4
We now retrace back to 1 and push 0.
S: 2 4 3 0
V: 1 0 2 3 4
We now move out of the loop, pushing 1 last.
S: 2 4 3 0 1
V: 1 0 2 3 4


Now we reverse every edge of the graph, as below, and clear the visited structure.
S: 2 4 3 0 1
V: (empty)
Now we pop the top value from the stack and choose it as our starting vertex for DFS() on the reversed graph.
S.pop()=1
So we add 1 to our visited data structure.
S: 2 4 3 0
V: 1
output: 1-
Next, the DFS reaches 2.
S: 2 4 3 0
V: 1 2
output: 1-2-
Next, the DFS reaches 0.
S: 2 4 3 0
V: 1 2 0
output: 1-2-0-
Next, no more DFS moves are possible from 0, so we retrace back.
'2' also has no more unvisited paths to go, so we retrace back to '1'.
'1' also has no path to go, so this DFS ends: {1, 2, 0} is our first strongly connected component.
Now S.pop()=0,
but 0 is already visited, hence we continue.
S: 2 4 3
V: 1 2 0
output: 1-2-0-

S.pop()=3
'3' is not visited until now, so next we DFS from '3'.
S: 2 4
V: 1 2 0 3
output:
1-2-0-
3-
Next, there is no path for 3 to go, hence it retraces back: {3} is a component by itself.

Next S.pop()=4, which is not visited till now.
S: 2
V: 1 2 0 3 4
output:
1-2-0-
3-
4-
Next, there is no unvisited place for 4 to go,
hence it retraces back: {4} is also a component by itself.
S.pop()=2,
but 2 is already visited, hence ignored.
Hence the final status:
S: empty
V: 1 2 0 3 4
output:
1-2-0-
3-
4-
Hence the three strongly connected components {1, 2, 0}, {3} and {4} form our required output.