**Introduction**

As a programmer, the top ten sorting algorithms are fundamentals every qualified developer is expected to master, and the popular ones such as quick sort and merge sort are often probed in extra detail: you need to know each algorithm's behavior and complexity. As a blogger focused on Java, data structures, and algorithms, bigsai can't afford any gaps here. Follow this article and you'll get through all ten classic sorting algorithms with ease!

First of all, for most people "sorting" still means bubble sort or `Arrays.sort()` in the JDK. Writing a sort by hand feels like a luxury to many, let alone all ten classic algorithms. Luckily, you've found this article!

Sorting algorithms can be classified along several dimensions: complexity, internal vs. external, comparison vs. non-comparison. The ten algorithms we usually talk about are all internal sorts, and here we split them along the **「comparison vs. non-comparison」** dimension.

The non-comparison class contains **「bucket sort, radix sort, and counting sort」**. Some people speak of the "big eight" sorts instead, because radix sort and counting sort are based on bucket sort (or are special cases of it); but since each has its own characteristics, they are counted separately here, giving ten classic algorithms. The comparison class can be subdivided further: exchange-based, insertion-based, selection-based, and merge-based. The mind map below shows the details.

(Mind map: classification of the ten sorting algorithms)

**Exchange-based sorts**

**Bubble sort**

Bubble sort is a classic exchange-based sort and also the conceptual basis of quick sort. It is a stable sorting algorithm with time complexity O(n^2). The basic idea: **「traverse the sequence repeatedly, swapping adjacent out-of-order pairs so that each pass bubbles one maximum (or minimum) element into its final position; after enough passes the whole sequence is sorted」** (equivalently, small elements can be bubbled forward from the back).

The specific ideas are ( Back up the big elements. )：

- Go back and forth from the first element , At each position, judge whether it is larger than the following element , If it's bigger than the next element , Then swap the two sizes , Then go back , In this way, it can be guaranteed after one round
**「 The largest number is swapped to the last position to be determined 」**. - The second time also judged from the beginning backward , If the current position is larger than the next position, then exchange it with the number behind it . But it's important to note that , It doesn't have to be judged this time , Just judge to the next to last position ( Because for the first time we've determined that the biggest one is the last one , This time, the goal is to determine the penultimate )
- Empathy , The length of the subsequent traversal is reduced by one at a time , Until the first element makes the whole element ordered .

For example, take `2 5 3 1 4`.

The sorting process is as follows:

- Pass 1: `2 3 1 4 5` (5 bubbles to the end)
- Pass 2: `2 1 3 4 5` (4 is fixed)
- Pass 3: `1 2 3 4 5`
- Pass 4: no swaps; the sequence is sorted.

The implementation code:

```java
public void maopaosort(int[] a) {
    for (int i = a.length - 1; i >= 0; i--) {
        for (int j = 0; j < i; j++) {
            if (a[j] > a[j + 1]) {   // adjacent pair out of order: swap
                int team = a[j];
                a[j] = a[j + 1];
                a[j + 1] = team;
            }
        }
    }
}
```
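One common refinement, not shown in the code above, is an early-exit flag: if a whole pass makes no swaps, the array is already sorted and the remaining passes can be skipped. This is what gives bubble sort the O(n) best case listed in the table at the end of this article. A sketch (the class and method names are illustrative):

```java
public class BubbleSortOpt {
    // Bubble sort with an early-exit flag: stops as soon as a pass makes no swaps.
    public static void bubbleSort(int[] a) {
        for (int i = a.length - 1; i > 0; i--) {
            boolean swapped = false;
            for (int j = 0; j < i; j++) {
                if (a[j] > a[j + 1]) {
                    int t = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = t;
                    swapped = true;
                }
            }
            if (!swapped) {   // no swaps in this pass: the array is already sorted
                break;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {2, 5, 3, 1, 4};
        bubbleSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```

On an already-sorted input the first pass makes no swaps, so only n-1 comparisons are done in total.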

**Quick sort**

Quick sort is an improvement on bubble sort that solves the problem recursively, by divide and conquer. Unlike bubble sort, quick sort is unstable; its worst-case time complexity is O(n^2), its average O(nlogn), and its best O(nlogn).

For quick sort, the **「basic idea」** is this:

- Each round partitions the sequence around one number so that **「everything on its left is smaller」** and **「everything on its right is larger」**; then, recursively, the left part is sorted as a complete sequence in its own right, and so is the right part.
- That number (the pivot) can be any element of the sequence: the leftmost, the rightmost, or a randomly chosen one. **「Usually」**, without further optimization, we take the leftmost.

The implementation code:

```java
public void quicksort(int[] a, int left, int right) {
    int low = left;
    int high = right;
    // The base-case check MUST come before reading a[low]:
    // when low > high, a[low] may be out of bounds!
    if (low > high) {        // termination condition
        return;
    }
    int k = a[low];          // pivot: take the leftmost element as the yardstick;
                             // afterwards the left side is smaller, the right side larger
    while (low < high) {     // this round: left side <= k, right side >= k
        while (low < high && a[high] >= k) { // scan from the right for the first element < k
            high--;
        }
        a[low] = a[high];    // drop it into the hole at low
        while (low < high && a[low] <= k) {  // scan from the left for the first element > k
            low++;
        }
        a[high] = a[low];    // drop it into the hole at high
    }
    a[low] = k;              // place the pivot, then recursively divide and conquer both sides
    quicksort(a, left, low - 1);
    quicksort(a, low + 1, right);
}
```
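As the text notes, always taking the leftmost element as the pivot is the unoptimized choice: on already-sorted input it degrades to O(n^2). A common fix is to swap a randomly chosen element to the left end before partitioning. A minimal sketch (class name illustrative; the partition scheme is the same hole-filling one as above):

```java
import java.util.Random;

public class RandomPivotQuickSort {
    private static final Random RAND = new Random();

    // Same partitioning as before, but a random element is swapped to the
    // left end first, so sorted input no longer hits the worst case.
    public static void quicksort(int[] a, int left, int right) {
        if (left >= right) return;
        int p = left + RAND.nextInt(right - left + 1); // random pivot index
        int tmp = a[left]; a[left] = a[p]; a[p] = tmp;

        int low = left, high = right;
        int k = a[low];                   // pivot value
        while (low < high) {
            while (low < high && a[high] >= k) high--;
            a[low] = a[high];
            while (low < high && a[low] <= k) low++;
            a[high] = a[low];
        }
        a[low] = k;
        quicksort(a, left, low - 1);
        quicksort(a, low + 1, right);
    }

    public static void main(String[] args) {
        int[] a = {4, 2, 4, 1, 9, 3};
        quicksort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 4, 9]
    }
}
```

The expected partition count per level is unchanged; randomization only makes the pathological pivot sequence vanishingly unlikely.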

**Insertion-based sorts**

**Direct insertion sort**

Direct insertion sort is one of the simplest sorting methods of all. It is like lining up by height at school: in a crowd of tall and short people in no particular order, starting from the first person, anyone with a taller person ahead of them steps out and inserts themselves into the right place. **「Once the last person has been inserted」**, the whole queue is in order.

In direct insertion sort, each traversal costs O(n) comparisons and each insertion costs O(n) moves, so n rounds give a total time complexity of O(n^2). Some people ask whether binary (half-interval) search can optimize insertion sort to O(nlogn). The answer is no: binary search only reduces the cost of *finding* the position to O(logn), while the insertion itself (shifting elements) is still O(n), so the total remains O(n^2).
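To make that point concrete, here is a sketch of binary insertion sort (class and method names illustrative): the binary search cuts comparisons to O(log n) per element, but the shift underneath is still O(n), which dominates.

```java
import java.util.Arrays;

public class BinaryInsertionSort {
    public static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            // Binary search for the insertion point in a[0..i-1]: O(log n) comparisons.
            int lo = 0, hi = i;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (a[mid] <= key) lo = mid + 1; // '<=' keeps equal keys in order (stable)
                else hi = mid;
            }
            // Shifting the tail right by one slot is still O(n): this is why
            // the total complexity stays O(n^2).
            System.arraycopy(a, lo, a, lo + 1, i - lo);
            a[lo] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 6, 1, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
    }
}
```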

How insertion sort proceeds:

- Take the current position (everything before it is already sorted); the goal is to insert the current element into the appropriate place among its predecessors.
- Enumerate backward (or binary search) to find the position to insert at.
- Shift the array and assign, achieving the insertion.

The implementation code:

```java
public void insertsort(int[] a) {
    int team = 0;
    for (int i = 1; i < a.length; i++) {
        System.out.println(Arrays.toString(a));  // prints the array before each insertion (debug)
        team = a[i];                             // element to insert
        for (int j = i - 1; j >= 0; j--) {
            if (a[j] > team) {                   // shift larger elements one slot to the right
                a[j + 1] = a[j];
                a[j] = team;
            } else {
                break;                           // found the insertion point
            }
        }
    }
}
```

**Shell sort**

Because direct insertion sort is O(n^2), it becomes inefficient when the data set is large or elements have to move a long way. Many sorts try splitting the sequence and then combining; Shell sort is a special kind of preprocessing that designs the algorithm around two dimensions: **「data volume and orderliness」**. It first gets small elements roughly toward the front and large ones toward the back through several rounds of grouped computation, where the final round is one complete direct insertion sort.

For a long sequence, Shell sort first splits it (non-contiguously) into groups **「according to a mathematical rule」** (by remainder, the way people count off 1, 2, 3, 4, 1, 2, 3, 4), and **「runs direct insertion sort inside each group」**. This way, **「a small element sitting near the back」** can **「move to the relative front in far fewer steps」**. The gap is then gradually reduced, merging the groups into longer ones, nudging each element a little at a time.

Because every such round leaves the sequence more ordered, and direct insertion sort is cheap on nearly-ordered sequences, by the final merge the small elements are mostly at the front and the large ones at the back, so the cost keeps shrinking. This is how Shell sort saves a great deal of time over plain insertion sort.

The implementation code:

```java
public void shellsort(int[] a) {
    int d = a.length;
    int team = 0;                                 // temporary variable
    for (; d >= 1; d /= 2) {                      // split into d groups, halving the gap each round
        for (int i = d; i < a.length; i++) {      // each element i belongs to group i % d
            team = a[i];
            for (int j = i - d; j >= 0; j -= d) { // insertion sort within the group, stride d
                if (a[j] > team) {
                    a[j + d] = a[j];
                    a[j] = team;
                } else {
                    break;
                }
            }
        }
    }
}
```

**Selection-based sorts**

**Simple selection sort**

Simple selection sort (selection sort) is a simple, intuitive sorting algorithm. How it works: first find the minimum (or maximum) element in the unsorted part and put it at the start of the sorted part; then keep finding the minimum (or maximum) of the remaining unsorted elements and appending it to **「the end of the sorted sequence」**; and so on, until all elements are sorted.

The implementation code:

```java
public void selectSort(int[] arr) {
    for (int i = 0; i < arr.length - 1; i++) {
        int min = i;                       // index of the current minimum
        for (int j = i + 1; j < arr.length; j++) {
            if (arr[j] < arr[min]) {
                min = j;                   // found a smaller element
            }
        }
        if (min != i) {
            swap(arr, i, min);             // move it to position i
        }
    }
}

private void swap(int[] arr, int i, int j) {
    int temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
}
```

**Heap sort**

Heap sort is based on the heap, which is a complete binary tree. We first need the notions of big root heap and small root heap: every node in the complete binary tree is greater than (or less than) its child nodes, so there are two cases:

- If every node is **「greater than」** its children's values, the heap is called a **「big root heap」** (max-heap), and the maximum sits at the root.
- If every node is **「less than」** its children's values, the heap is called a **「small root heap」** (min-heap), and the minimum sits at the root.
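Since the complete binary tree is stored in a plain array, the parent/child relations are pure index arithmetic. A quick sketch of the 0-based conventions that the heap code in this section relies on (class and method names are illustrative):

```java
public class HeapIndex {
    // 0-based array representation of a complete binary tree.
    static int leftChild(int i)  { return 2 * i + 1; }
    static int rightChild(int i) { return 2 * i + 2; }
    static int parent(int i)     { return (i - 1) / 2; }

    // Min-heap (small root heap) check: every node <= its children.
    static boolean isMinHeap(int[] a) {
        for (int i = 0; i < a.length; i++) {
            int l = leftChild(i), r = rightChild(i);
            if (l < a.length && a[l] < a[i]) return false;
            if (r < a.length && a[r] < a[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isMinHeap(new int[]{1, 3, 2, 7, 4})); // true
        System.out.println(isMinHeap(new int[]{3, 1, 2}));       // false
    }
}
```

Note that nodes at indices `length / 2` and beyond have no children, which is why building a heap only needs to adjust the first half of the array.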

Heap sort begins by **「building the heap」** and then adjusting it. For the binary tree (stored in an array), we adjust bottom-up, starting from the last non-leaf node and working toward the root. The adjustment rules are as follows:

Building the heap is an O(n) process; once it is built, sorting proceeds by repeatedly deleting the top. Given the array, build the heap (creatHeap):

① Starting from the last non-leaf node, judge and sift down (shiftDown) so that the current node and its children keep the heap property.

② A single swap may be harmless, but if it breaks the heap property of the child's subtree, keep sifting the swapped node down (shiftDown) until it comes to rest.

Once the heap is built, the top element is the minimum (or maximum). After removing it, the left and right subtrees still satisfy the heap property, but the top is missing. Promoting children upward to fill the hole would mean a lot of movement and could break the heap structure, so instead:

① Move the last element of the heap to the top first; then a single sift down (shiftDown) suffices. Note, however, that the size of the heap has changed: the abandoned tail positions are no longer logically part of it, so the function needs a heap-size parameter.

② Repeat the above until every element has been extracted from the heap.

As for the complexity of heap sort: building the heap is O(n), and each deletion of the top sifts down at most logn levels, so the n deletions cost O(nlogn). The total time complexity is O(n) + O(nlogn) = O(nlogn).

The implementation code:

```java
static void swap(int[] arr, int m, int n) {
    int team = arr[m];
    arr[m] = arr[n];
    arr[n] = team;
}

// Sift down: restore the (min-)heap property at the current node
static void shiftDown(int[] arr, int index, int len) {
    int leftchild = index * 2 + 1;   // left child
    int rightchild = index * 2 + 2;  // right child
    if (leftchild >= len) {
        return;
    } else if (rightchild < len && arr[rightchild] < arr[index] && arr[rightchild] < arr[leftchild]) {
        // right child is in range and is the smallest: swap with it
        swap(arr, index, rightchild);
        shiftDown(arr, rightchild, len); // the child's subtree may now be broken: keep sifting
    } else if (arr[leftchild] < arr[index]) {
        // otherwise swap with the left child if it is smaller
        swap(arr, index, leftchild);
        shiftDown(arr, leftchild, len);
    }
}

// Turn the array into a heap
static void creatHeap(int[] arr) {
    for (int i = arr.length / 2; i >= 0; i--) {
        shiftDown(arr, i, arr.length);
    }
}

static void heapSort(int[] arr) {
    System.out.println("Original array: " + Arrays.toString(arr));
    int[] val = new int[arr.length];         // temporary result storage
    // step 1: build the heap
    creatHeap(arr);
    System.out.println("After building the heap: " + Arrays.toString(arr));
    // step 2: extract the top n times; each top goes into val, yielding an ascending sequence
    for (int i = 0; i < arr.length; i++) {
        val[i] = arr[0];                     // take the heap top
        arr[0] = arr[arr.length - 1 - i];    // remove it by moving the last element to the top
        shiftDown(arr, 0, arr.length - i);   // restore the min-heap; note the (logical) length shrinks
    }
    // copy the result back
    for (int i = 0; i < arr.length; i++) {
        arr[i] = val[i];
    }
    System.out.println("After heap sort: " + Arrays.toString(arr));
}
```

**Merge-based sorts**

Under merge-based sorts we only cover merge sort, which comes in two-way and multi-way variants; here we discuss the more common two-way merge sort, implemented recursively.

**Merge sort**

Merge sort, like quick sort, is **「based on divide and conquer」**. Divide and conquer has many applications and is often implemented with recursion, but **「divide and conquer and recursion are two different things」**: divide and conquer means splitting a problem into subproblems, and it can be implemented recursively or iteratively (non-recursively). Merge sort decomposes the problem into cheaper subproblems and then completes the sort with a low-cost merge of the sub-results.

The idea of merging goes like this:

- First round: the whole sequence is treated as single elements, and adjacent singles are merged pairwise into ordered runs of two (`12 34 56 ----`), ending the round with a locally ordered sequence (`xx xx xx xx ----`).
- Second round: the runs of two are merged pairwise into runs of four (`1234 5678 ----`), so that **「every small part is ordered」**.
- This continues until only one run is left, which takes logn rounds in total. Each round costs `O(n)`, so the total time complexity is `O(nlogn)`.

Merging two ordered runs is the O(n) step: repeatedly compare the two heads, take the smaller, and once one run is exhausted, append the other's remainder. For example, merging `1 3 5` with `2 4 6` gives `1 2 3 4 5 6`.

The implementation code:

```java
private static void mergesort(int[] array, int left, int right) {
    int mid = (left + right) / 2;
    if (left < right) {
        mergesort(array, left, mid);        // sort the left half
        mergesort(array, mid + 1, right);   // sort the right half
        merge(array, left, mid, right);     // merge the two ordered halves
    }
}

private static void merge(int[] array, int l, int mid, int r) {
    int lindex = l;
    int rindex = mid + 1;
    int[] team = new int[r - l + 1];
    int teamindex = 0;
    while (lindex <= mid && rindex <= r) {  // compare the two heads first
        if (array[lindex] <= array[rindex]) {
            team[teamindex++] = array[lindex++];
        } else {
            team[teamindex++] = array[rindex++];
        }
    }
    while (lindex <= mid) {                 // one side exhausted: copy the rest in order
        team[teamindex++] = array[lindex++];
    }
    while (rindex <= r) {
        team[teamindex++] = array[rindex++];
    }
    for (int i = 0; i < teamindex; i++) {   // copy the merged run back
        array[l + i] = team[i];
    }
}
```

**Bucket-based sorts**

**Bucket sort**

Bucket sort trades space for time. The most important thing about bucket sort is the idea rather than any concrete implementation; at best its time complexity can be linear, O(n). Bucket sort is not comparison-based but distribution-based. Taking the name literally:

- Buckets: several buckets; the sort distributes the data among them.
- Capacity: each bucket is a container with some capacity, so a single bucket may hold several elements.
- Balance: overall, the sort wants the buckets to be reasonably even, neither overflowing (too many elements) nor nearly empty.

The idea of bucket sort: **「distribute the sequence to be sorted into several buckets, then sort the elements inside each bucket individually.」** The choice of bucketing scheme depends on the concrete data; bucket sort is a fairly broad concept, counting sort is a special case of it, and radix sort builds on it. When the data is uniformly distributed and each bucket receives about the same number of elements, the time complexity can reach O(n); but when the value range is large while the values cluster together, bucket sort is a poor fit.

A simple bucket sort implementation:

```java
import java.util.ArrayList;
import java.util.List;

// WeChat official account: bigsai
public class bucketSort {
    public static void main(String[] args) {
        int[] a = {1, 8, 7, 44, 42, 46, 38, 34, 33, 17, 15, 16, 27, 28, 24};
        List[] buckets = new ArrayList[5];
        for (int i = 0; i < buckets.length; i++) {        // initialize the buckets
            buckets[i] = new ArrayList<Integer>();
        }
        for (int i = 0; i < a.length; i++) {              // distribute each value into its bucket
            int index = a[i] / 10;                        // bucket number
            buckets[index].add(a[i]);
        }
        for (int i = 0; i < buckets.length; i++) {        // sort inside each bucket (using the built-in sort)
            buckets[i].sort(null);
            for (int j = 0; j < buckets[i].size(); j++) { // print the result while we're at it
                System.out.print(buckets[i].get(j) + " ");
            }
        }
    }
}
```

**Counting sort**

Counting sort is a special case of bucket sort in which every bucket has size 1; instead of a List, each bucket is usually just a counter.

**「When designing the concrete algorithm」**, first find the minimum min and the maximum max, then create an array spanning that range and count positions starting from min. This compresses the space as much as possible and uses it efficiently.

```java
public static void countSort(int[] a) {
    int min = Integer.MAX_VALUE;
    int max = Integer.MIN_VALUE;
    for (int i = 0; i < a.length; i++) {    // find max and min
        if (a[i] < min) min = a[i];
        if (a[i] > max) max = a[i];
    }
    int[] count = new int[max - min + 1];   // count the elements
    for (int i = 0; i < a.length; i++) {
        count[a[i] - min]++;
    }
    // write the values back in order
    int index = 0;
    for (int i = 0; i < count.length; i++) {
        while (count[i]-- > 0) {
            a[index++] = i + min;           // add min back to recover the real value
        }
    }
}
```
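The version above regenerates values from the counts, which is fine for bare ints but discards each element's original identity. A prefix-sum variant (a sketch; class name illustrative) computes every element's final position instead, which preserves the relative order of equal keys. This stable form is exactly what radix sort, covered next, relies on for each digit pass:

```java
import java.util.Arrays;

public class StableCountingSort {
    public static int[] sort(int[] a) {
        if (a.length == 0) return new int[0];
        int min = Arrays.stream(a).min().getAsInt();
        int max = Arrays.stream(a).max().getAsInt();
        int[] count = new int[max - min + 1];
        for (int v : a) count[v - min]++;
        // Prefix sums: count[k] becomes the index one past the last slot for key k.
        for (int k = 1; k < count.length; k++) count[k] += count[k - 1];
        int[] out = new int[a.length];
        // Walk the input backwards so equal keys keep their original order (stability).
        for (int i = a.length - 1; i >= 0; i--) {
            out[--count[a[i] - min]] = a[i];
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{4, 1, 3, 4, 2}))); // [1, 2, 3, 4, 4]
    }
}
```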

**Radix sort**

Radix sort is easy to understand but harder to implement (and optimize) well. Also known as card sort, radix sort works by applying counting sort several times (counting sort being a special case of bucket sort). What distinguishes it from the earlier bucket and counting sorts is that **「radix sort does not distribute each value into a bucket as a whole」**: it breaks each value into its component digits, and for each digit position, in order (front-to-back or back-to-front), distributes the values into buckets and then collects them back. After every position has been distributed and collected, the sequence is ordered.

For sorting numbers, the buckets only need to cover the digits 0-9; but for characters, you need to mind the range of ASCII values.

So in this case the radix sort idea is simple: take 934, 241, 3366, 4399 and process them digit by digit. The first pass distributes and collects by the units digit, giving `241 934 3366 4399`.

With the units distributed and collected, the second pass distributes and collects by the tens digit on top of that, after which the sequence is ordered by its last two digits: `934 241 3366 4399`.

The third pass distributes and collects by the hundreds digit; once it finishes, the numbers are ordered by everything up to the hundreds: `241 3366 4399 934`.

The last pass handles the thousands digit, with the shorter numbers padded by leading zeros; after it finishes, the thousands and everything below are in order, that is, the whole sequence is sorted: `241 934 3366 4399`.

The simple implementation code:

```java
static void radixSort(int[] arr) { // int type, digits processed right to left
    List<Integer>[] bucket = new ArrayList[10];
    for (int i = 0; i < 10; i++) {
        bucket[i] = new ArrayList<Integer>();
    }
    // find the maximum (assuming all values are non-negative)
    int max = 0;
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] > max) max = arr[i];
    }
    int divideNum = 1;          // 1, 10, 100, ... selects the current digit
    while (max > 0) {           // max and divideNum control the number of passes
        for (int num : arr) {
            bucket[(num / divideNum) % 10].add(num); // distribute: drop the number into its digit's bucket
        }
        divideNum *= 10;
        max /= 10;
        int idx = 0;
        for (List<Integer> list : bucket) {          // collect: read the numbers back out
            for (int num : list) {
                arr[idx++] = num;
            }
            list.clear();       // empty the bucket for reuse in the next pass
        }
    }
}
```

Of course, radix sort also has variants worth studying: sorting strings, keys of unequal length, one-dimensional-array optimizations, and so on. For details, see the other articles on the official account.

**Conclusion**

And that's the whole top-ten tour, done in one breath; I hope it's all clear now! To summarize the algorithms and save you unnecessary work, here is the comparison table:

| Sorting algorithm | Average time complexity | Best | Worst | Space complexity | Stability |
|---|---|---|---|---|---|
| Bubble sort | O(n^2) | O(n) | O(n^2) | O(1) | Stable |
| Quick sort | O(nlogn) | O(nlogn) | O(n^2) | O(logn) | Unstable |
| Insertion sort | O(n^2) | O(n) | O(n^2) | O(1) | Stable |
| Shell sort | O(n^1.3) | O(n) | O(nlog2n) | O(1) | Unstable |
| Selection sort | O(n^2) | O(n^2) | O(n^2) | O(1) | Unstable |
| Heap sort | O(nlogn) | O(nlogn) | O(nlogn) | O(1) | Unstable |
| Merge sort | O(nlogn) | O(nlogn) | O(nlogn) | O(n) | Stable |
| Bucket sort | O(n+k) | O(n+k) | O(n+k) | O(n+k) | Stable |
| Counting sort | O(n+k) | O(n+k) | O(n+k) | O(k) | Stable |
| Radix sort | O(n*k) | O(n*k) | O(n*k) | O(n+k) | Stable |

This article is from the WeChat official account High performance server development (easyserverdev).


Original publication time: 2021-04-01


Copyright notice

This article was created by [Fan Li]; please include a link to the original when reprinting. Thank you.

https://cdmana.com/2021/04/20210408111751116B.html