Search algorithms are designed to check or retrieve an element from a data structure. The most commonly used ones are designed for list searching.
In this article, I will summarize the most famous search algorithms and their complexities.
Linear search is the simplest type of search and the most expensive (don’t mind me, I could make up a worse algorithm, but it wouldn’t make sense). The method is as simple as iterating over the list from start to finish, or until the item is found:
index = 0
foreach n in list:
    if n == item:
        break
    index += 1
Complexity: worst-case: O(n).
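The pseudocode above can be sketched in Python as follows; the function name `linear_search` is my own choice for illustration:

```python
def linear_search(items, target):
    """Scan the list front to back; return the index of target, or -1 if absent."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1
```

Returning -1 for a missing item is a common convention, mirroring what the later pseudocode in this article does.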
Binary search works on sorted lists only. It looks at the value in the middle of the list, which divides the list into two halves. If the search value is equal to the middle item, the search is over! If it’s bigger than the middle item, it must be in the second half, so we only search there. If it’s smaller, it must be in the first half, so we ignore the second half. We keep dividing the list in the middle and dropping a half that way until we either find the item or end up with a single item that isn’t equal to ours, which means it’s not in the list.
A natural way to implement binary search is with a recursive function:
function binarySearch(list, item, left_limit, right_limit):
    if left_limit > right_limit:
        return -1
    middle = floor((left_limit + right_limit) / 2)
    if item == list[middle]:
        return middle
    else if item > list[middle]:
        return binarySearch(list, item, middle + 1, right_limit)
    else:
        return binarySearch(list, item, left_limit, middle - 1)
Complexity: worst-case: O(log n).
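The same halving idea can also be written iteratively, which avoids the recursion overhead; this is a minimal Python sketch, with `binary_search` as a hypothetical name:

```python
def binary_search(items, target):
    """Search a sorted list by repeatedly halving the candidate range."""
    left, right = 0, len(items) - 1
    while left <= right:
        middle = (left + right) // 2
        if items[middle] == target:
            return middle
        elif items[middle] < target:
            left = middle + 1   # target can only be in the right half
        else:
            right = middle - 1  # target can only be in the left half
    return -1
```

Each iteration discards half of the remaining range, which is where the O(log n) bound comes from.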
Jump search is also for sorted lists only. It has a parameter called the block size, m = √n, for a list of length n. The algorithm starts by jumping to the item at index m and comparing it with the searched value. If the value is equal to the item at m, the search is over. If it’s less, the algorithm searches linearly between index 0 and m. If it’s bigger, it jumps to the item at index 2m and repeats, until it finds the block that could contain the item or reaches the end of the list.
function jumpSearch(list, item):
    n = length(list)
    step = floor(√n)
    prev = 0
    while list[min(step, n) - 1] < item:
        prev = step
        step += floor(√n)
        if prev >= n:
            return -1
    while list[prev] < item:
        prev += 1
        if prev == min(step, n):
            return -1
    if list[prev] == item:
        return prev
    return -1
Complexity: worst-case: O(√n).
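Here is a runnable Python sketch of the jump-then-scan idea, assuming the same -1-on-failure convention as before:

```python
import math

def jump_search(items, target):
    """Jump ahead in blocks of size sqrt(n), then scan the last block linearly."""
    n = len(items)
    if n == 0:
        return -1
    step = max(math.isqrt(n), 1)  # block size m = floor(sqrt(n))
    prev = 0
    # Jump forward block by block until the block's last value reaches target.
    while items[min(prev + step, n) - 1] < target:
        prev += step
        if prev >= n:
            return -1
    # Linear scan inside the one block that could contain target.
    for i in range(prev, min(prev + step, n)):
        if items[i] == target:
            return i
    return -1
```

The worst case is about √n jumps plus √n comparisons inside one block, hence O(√n).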
Interpolation search is an improvement over binary search for sorted lists whose values are uniformly distributed. Based on that assumption, the algorithm doesn’t probe the middle value; instead, the probe is skewed towards the end whose value is closer to the searched item. The probing position is computed with this formula:
pos = low + (x - list[low]) * (high - low) / (list[high] - list[low])
Complexity: average-case O(log log n) for uniformly distributed values; worst-case O(n).
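The formula above plugs into the same narrowing loop as binary search; this is a minimal Python sketch (the guard for equal endpoint values avoids a division by zero, an edge case the formula alone doesn’t cover):

```python
def interpolation_search(items, target):
    """Probe where a uniformly distributed target would sit, instead of the middle."""
    low, high = 0, len(items) - 1
    while low <= high and items[low] <= target <= items[high]:
        if items[high] == items[low]:
            # All remaining values are equal; avoid dividing by zero.
            return low if items[low] == target else -1
        # The interpolation formula from the article, with integer division:
        pos = low + (target - items[low]) * (high - low) // (items[high] - items[low])
        if items[pos] == target:
            return pos
        elif items[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1
```

On the list [10, 20, 30, 40, 50] the very first probe for 40 lands exactly on index 3, illustrating why uniform data makes this so fast.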
There is no way to avoid linear search when the data is not sorted. For sorted lists, binary search is better than jump search, but jump search has the advantage of traversing backwards at most once. So if the underlying data structure has an expensive backward traversal, jump search performs better. Finally, interpolation search improves on binary search when the list is also uniformly distributed.