Let’s take an example from sorting algorithms: insertion sort and heap sort don’t create a new array during sorting, as they are in-place sorting techniques, but merge sort allocates an extra array while merging, which takes additional space. So if space is a concern, one will obviously prefer insertion sort or heap sort.

Suppose you are given the task of sorting an array. There are many sorting algorithms, but you will always choose the most optimised and efficient one, and to choose it you have to analyze the different algorithms on the basis of their space and time complexity.

## Definition of Space Complexity

`Space complexity` is the amount of memory space that an algorithm (or the solution to a problem) takes during its execution.

The `space complexity` is not calculated only from the space used by the variables in the algorithm; it also includes and considers the space taken by the input values.

### Space Complexity Vs Auxiliary Space

There is a sort of confusion among people between `space complexity` and auxiliary space, so let’s be clear about that. Auxiliary space is the extra or temporary space an algorithm uses during its execution, not counting the input. It is not equal to the space complexity, because space complexity includes the space for the input values as well.

So we can say that space complexity is the sum of the auxiliary space and the space used by the input values.

`Space Complexity = Auxiliary Space + Space used for input values`

Let’s take an example:

```
// Sum of the first N natural numbers
int sum(int n)
{
    int i, sum = 0;
    for (i = n; i >= 1; i--)
        sum = sum + i;
    return sum;
}
```

So in the above example the input value `'n'` is a single integer, which takes O(1) space. The auxiliary space is also O(1), because `'i'` and `'sum'` are just two more constant-size variables.

Hence the total space complexity is `O(1)`.

## Algorithm to Evaluate Space Complexity in Data Structures

To evaluate space complexity in data structures, follow these steps:

- Analyze the memory usage of every variable and data structure in the algorithm, considering primitive types, arrays, and linked structures.
- Calculate the total memory, distinguishing between auxiliary space and input space.
- Express the space complexity as a function of the input size using Big-O notation.
- Explore alternative data structures to optimize memory usage, considering the trade-offs between time and space complexity.

This methodical approach gives a comprehensive understanding of an algorithm’s space requirements and aids in designing memory-efficient solutions.

## Need to Evaluate Space Complexity in Data Structures

Nowadays most systems come with a large amount of memory, so space is often not considered. But to make an algorithm more efficient, so that it can run in less space, we still have to analyze its `space complexity`.

Developers of real-world applications are constrained by the memory space of the systems their software runs on. This is where `space complexity` comes into play, as we never want a function or process to consume more space than the system has available at any given time.

## Methods to Calculate Space Complexity

Now let’s understand how to calculate the `space complexity` of an algorithm.

To calculate it, we need to know the amount of memory used by the different data-type variables, the program instructions, constant values, and a few other things such as function calls and the recursion stack (in the recursive case). The amount of memory used by each data type varies by operating system and compiler, but the way of calculating the space complexity remains the same.

**A typical C compiler allocates the following space:**

Type | Size
---|---
bool, char, unsigned char, signed char, __int8 | 1 byte
__int16, short, unsigned short, wchar_t, __wchar_t | 2 bytes
float, __int32, int, unsigned int, long, unsigned long | 4 bytes
double, __int64, long double, long long | 8 bytes

Now let’s understand, with an example, how to calculate the space complexity of an algorithm.

### Example 1: Addition of Numbers

```
int add(int x, int y, int z)
{
    int a = x + y + z;
    return a;
}
```

So in the above example there are `4` integer variables, `a, x, y, z`, and each takes `4` bytes of space (as given in the table above). An extra `4-byte` space is also added to the total for the return value, which is `a`.

**Hence, the total space complexity = 4\*4 + 4 = 20 bytes**

But for this example the complexity is fixed: the same variables are used regardless of the input values, so such space complexities are considered **constant space complexity, or so-called O(1) space complexity.**

### Example 2: Factorial of a number(Recursive)

```
// Factorial of N using recursion
int factorial(int N)
{
    if (N <= 1)
    {
        return 1;
    }
    else
    {
        return N * factorial(N - 1);
    }
}
```

Here we have an algorithm that finds the factorial of a number using recursion.

Now,

- “**N**” is an integer variable that stores the value whose factorial we have to find; no matter what the value is, it takes just “**4 bytes**” of space.
- The **function call**, the “**if**” condition, the “**else**” branch, and the **return** statement all come under **auxiliary space**. Let’s assume these take a combined “**4 bytes**” per call. The key point is that the function is called recursively `"N"` times, so the auxiliary space of the recursion stack is “**4\*N bytes**”, where N is the number whose factorial has to be found.

**Hence,** **Total Space Complexity** = **(4 + 4\*N) bytes**

But the first 4 bytes are constant, so we do not consider them, and after removing the constant factor (the 4 in 4\*N) we can finally say that this algorithm has a space complexity of “**O(N)**”.

## Space Complexity Vs Time Complexity

Aspect | Space Complexity | Time Complexity
---|---|---
Calculation Focus | Estimates the memory space required | Calculates the running time required
Counting Scope | Memory space is counted for all variables, inputs, and outputs | Time is counted for all statements
Primary Determinant | Primarily determined by the auxiliary variable size | The size of the input data is the primary determinant
Optimization Emphasis | More essential in terms of solution optimization | More crucial in terms of solution optimization

## Algorithm Analysis

The assessment of algorithms typically occurs in two distinct phases: before implementation and after implementation.

**A priori analysis** involves the theoretical examination of an algorithm. This analysis assumes that variables like processor speed are constant and have no influence on the implementation. It aims to determine the efficiency of an algorithm based on theoretical considerations.

**Empirical analysis**, on the other hand, is a posteriori analysis. This involves implementing the chosen algorithm in a programming language and deploying it on the targeted computer system. Practical data, including running time and space requirements, is then collected to conduct a comprehensive investigation into the algorithm’s performance in real-world scenarios.

## Algorithm Complexity

When we symbolize the size of input data as N and designate X as an algorithm, the effectiveness of algorithm X is predominantly shaped by the resources it consumes, specifically in terms of time and space.

**Time Factor** – This involves a meticulous examination of critical operations, such as comparisons in sorting algorithms, to gauge the duration it takes for the algorithm to execute.

**Space Factor** – The evaluation of space complexity entails aggregating the memory usage of the algorithm, and determining how much space it requires for effective operation.

In the context of N representing the size of input data, the complexity of an algorithm, denoted as f(N), provides insight into the amount of both running time and storage space essential for the method’s execution.

## Conclusion

- Try to use as little space as possible so as to keep the space complexity of the program minimal.
- Always analyze the worst-case scenario so that the algorithm can handle large inputs and remain adaptable and maintainable.
- Finalizing the algorithm after considering the worst case keeps the resources of the systems built with it in proper condition, without overloading them.