Open Addressing
...

Like separate chaining, open addressing is a method for handling collisions. In open addressing, all elements are stored in the hash table itself, so at any point the size of the table must be greater than or equal to the total number of keys (note that we can increase the table size by copying the old data if needed). This approach is also known as closed hashing. The entire procedure is based upon probing; we will look at the different types of probing further below. The basic operations work as follows:

  • Insert(k): Keep probing until an empty slot is found. Once an empty slot is found, insert k. 
  • Search(k): Keep probing until the slot’s key becomes equal to k or an empty slot is reached. 
  • Delete(k): The delete operation is interesting. If we simply empty a deleted key’s slot, a later search may fail, because it would stop at that empty slot even though the key it is looking for was inserted further along the probe sequence. So slots of deleted keys are specially marked as “deleted”. Insert can reuse a deleted slot, but search does not stop at a deleted slot, which makes searches take longer and reduces performance. A minimal sketch of all three operations follows this list.
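
Below is a minimal sketch of these three operations in Python, using linear probing (covered next) and a tombstone sentinel for deleted slots. The class name, the fixed table size, and the use of Python’s built-in hash() are illustrative assumptions, not part of any particular library.

class OpenAddressingHashTable:
    """Minimal open-addressing hash table with linear probing and tombstones."""

    _DELETED = object()  # sentinel marking slots whose key has been deleted

    def __init__(self, size=11):
        self.size = size
        self.slots = [None] * size   # None = never used, _DELETED = tombstone

    def _probe(self, key):
        """Yield slot indices in probe order: h(k), h(k)+1, h(k)+2, ... mod size."""
        start = hash(key) % self.size
        for i in range(self.size):
            yield (start + i) % self.size

    def insert(self, key):
        for idx in self._probe(key):
            slot = self.slots[idx]
            # An empty or deleted slot can be (re)used; an equal key is overwritten.
            if slot is None or slot is self._DELETED or slot == key:
                self.slots[idx] = key
                return
        raise RuntimeError("table is full; grow the table and rehash")

    def search(self, key):
        for idx in self._probe(key):
            slot = self.slots[idx]
            if slot is None:                       # truly empty: stop, key absent
                return False
            if slot is not self._DELETED and slot == key:
                return True                        # note: we do not stop at tombstones
        return False

    def delete(self, key):
        for idx in self._probe(key):
            slot = self.slots[idx]
            if slot is None:                       # key is not in the table
                return
            if slot is not self._DELETED and slot == key:
                self.slots[idx] = self._DELETED    # mark as deleted instead of emptying
                return

For example, with the default size of 11 the keys 42 and 53 collide (both map to slot 9), and deleting 42 leaves a tombstone that a later search for 53 correctly probes past:

t = OpenAddressingHashTable()
t.insert(42); t.insert(53)
t.delete(42)
print(t.search(53))   # True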

Different ways of Open Addressing:
...

1. Linear Probing:
...

In linear probing, the hash table is searched sequentially, starting from the original hash location. If the location we get is already occupied, then we check the next location, and so on.
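
Using the same notation as the later sections (S is the table size), the linear probing sequence is:

let hash(x) be the slot index computed using hash function.  
If slot hash(x) % S is full, then we try (hash(x) + 1) % S  
If (hash(x) + 1) % S is also full, then we try (hash(x) + 2) % S  
If (hash(x) + 2) % S is also full, then we try (hash(x) + 3) % S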

Applications of linear probing:
...

Linear probing is a collision handling technique used in hashing, where the algorithm looks for the next available slot in the hash table to store the collided key. Some of the applications of linear probing include:

  • Symbol tables: Linear probing is commonly used in symbol tables, which are used in compilers and interpreters to store variables and their associated values. Since symbol tables can grow dynamically, linear probing can be used to handle collisions and ensure that variables are stored efficiently.
  • Caching: Linear probing can be used in caching systems to store frequently accessed data in memory. When a cache miss occurs, the data can be loaded into the cache using linear probing, and when a collision occurs, the next available slot in the cache can be used to store the data.
  • Databases: Linear probing can be used in databases to store records and their associated keys. When a collision occurs, linear probing can be used to find the next available slot to store the record.
  • Compiler design: Linear probing can be used in compiler design to implement symbol tables, error recovery mechanisms, and syntax analysis.
  • Spell checking: Linear probing can be used in spell-checking software to store the dictionary of words and their associated frequency counts. When a collision occurs, linear probing can be used to store the word in the next available slot.

Challenges in Linear Probing:
...

  • Primary Clustering: One of the problems with linear probing is primary clustering: many consecutive elements form long runs of occupied slots, and it starts taking more time to find a free slot or to search for an element.
  • Secondary Clustering: Secondary clustering is less severe; two records have the same collision chain (probe sequence) only if their initial position is the same.

2. Quadratic Probing
...

If you observe carefully, you will see that the interval between probes grows quadratically with the probe number. Quadratic probing is a method that mitigates the primary clustering problem discussed above. In this method, we look for the i²-th slot in the i-th iteration, always starting from the original hash location. If that location is occupied, we check the subsequent slots in the sequence below.

let hash(x) be the slot index computed using hash function.  
If slot hash(x) % S is full, then we try (hash(x) + 1*1) % S  
If (hash(x) + 1*1) % S is also full, then we try (hash(x) + 2*2) % S  
If (hash(x) + 2*2) % S is also full, then we try (hash(x) + 3*3) % S
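
As a minimal sketch of this loop (assuming integer keys, hash(x) = x % S, and a table represented as a Python list in which None marks an empty slot):

def quadratic_probing_insert(table, key):
    """Insert key into table using quadratic probing; returns the slot used."""
    S = len(table)
    for i in range(S):
        idx = (key % S + i * i) % S    # i-th probe: (hash(x) + i*i) % S
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("no free slot found along the probe sequence")

Note that quadratic probing does not necessarily visit every slot; a standard way to guarantee that an insertion succeeds is to keep the table size prime and the load factor below 1/2.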

3. Double Hashing
...


The intervals between probes are computed by another hash function. Double hashing is a technique that reduces clustering further. In this technique, the increments for the probing sequence are computed by a second hash function hash2(x), and we look for the (hash(x) + i * hash2(x))-th slot in the i-th iteration.

let hash(x) be the slot index computed using hash function.  
If slot hash(x) % S is full, then we try (hash(x) + 1*hash2(x)) % S  
If (hash(x) + 1*hash2(x)) % S is also full, then we try (hash(x) + 2*hash2(x)) % S  
If (hash(x) + 2*hash2(x)) % S is also full, then we try (hash(x) + 3*hash2(x)) % S
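
A corresponding sketch in Python, where the choice hash2(x) = PRIME - (x % PRIME), with PRIME a prime smaller than S, is a common textbook example (any second hash function works as long as it never evaluates to 0):

def double_hashing_insert(table, key, prime=7):
    """Insert key into table using double hashing; returns the slot used."""
    S = len(table)                  # table size, ideally prime and larger than `prime`
    h1 = key % S                    # primary hash: hash(x)
    h2 = prime - (key % prime)      # secondary hash: hash2(x), never 0
    for i in range(S):
        idx = (h1 + i * h2) % S     # i-th probe: (hash(x) + i*hash2(x)) % S
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("no free slot found along the probe sequence")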

Time Complexity
...

Inserting Elements
...

If our data grows beyond the current capacity, we can handle this with the same method used for dynamic arrays: double the capacity and copy the old data into the new table.
If we want to calculate the time an insertion takes, there are two cases: when a free slot is available, the insertion itself takes O(1) on average; when the table is full, we must allocate a table of double the size and copy all n existing elements, which takes O(n).

Now, if we insert n numbers one after another, the total cost is n cheap insertions plus the copying work done at each doubling (copies of size 1, 2, 4, ..., up to about n), and that copying work sums to less than 2n. So n insertions take O(n) time in total, which is O(1) amortized per insertion.

This is a good bound: although some insertions are expensive (the ones that trigger a resize) and most are cheap, the final amortized complexity is reasonable.
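
As a small sketch of this argument, the simulation below (with illustrative function and variable names) counts how many elements are copied across all doublings while inserting n items, and the total stays below 2n:

def total_copy_cost(n):
    """Count elements copied while inserting n items into a table that
    doubles its capacity whenever it fills up (starting capacity 1)."""
    capacity, used, copies = 1, 0, 0
    for _ in range(n):
        if used == capacity:        # table is full: double and copy everything
            copies += used
            capacity *= 2
        used += 1
    return copies

print(total_copy_cost(1000))        # 1023, which is below 2 * 1000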

If we search for an element that does not exist in our data, then after hashing it and probing along its sequence, the average number of probes (under the uniform hashing assumption) is approximately 1 / (1 - α), where α = n/m is the load factor of a table holding n keys in m slots.
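
A quick sketch evaluating this estimate for a few load factors (purely illustrative numbers):

# Expected probes for an unsuccessful search under uniform hashing: 1 / (1 - alpha)
for alpha in (0.25, 0.5, 0.75, 0.9):
    print(f"load factor {alpha}: about {1 / (1 - alpha):.1f} probes")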