Week 3
Sorting Implementation Overviews and Big O Analysis
Insertion:
- For insertion sort, I used a for loop in conjunction with a while loop
- The while loop's condition uses a temp index that decreases each pass, shifting the new element down until it is in place
- A comparison counter and a swap counter keep track of this data
- Timing is recorded with the duration feature
- Big O: useful for small data sets; runs in O(n^2) because of the nested loops, so it is not a fast sorting algorithm
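The for-plus-while structure above could look like this sketch (written in Python for brevity; the function and counter names are my own, not from the original program):

```python
def insertion_sort(data):
    """Sort data in place, counting comparisons and swaps as in the notes."""
    comparisons = 0
    swaps = 0
    for i in range(1, len(data)):
        temp = i  # temp index decreases as the element shifts down
        while temp > 0:
            comparisons += 1
            if data[temp - 1] > data[temp]:
                # swap the element down one position
                data[temp - 1], data[temp] = data[temp], data[temp - 1]
                swaps += 1
                temp -= 1
            else:
                break  # element is in place
    return comparisons, swaps
```

Each pass inserts one more element into the sorted prefix, which is why nearly-sorted input finishes quickly but reversed input costs the full O(n^2).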
Merge:
- This is definitely the hardest sort to implement because of its use of recursion
- It needed a second helper method for the merging step, which was hard to create
- A comparison counter and a swap counter keep track of this data
- Timing is recorded with the duration feature
- Big O: merge sort does log n levels of merging (each level doubles the size of the merged lists) and does n work per level because it looks at every item -> runs in O(n log n)
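The recursive split plus the separate merge step might be sketched like this (Python, my own naming; merge sort copies elements rather than swapping neighbors, so this version counts comparisons only):

```python
def merge_sort(data, stats):
    """Return a sorted copy of data, tallying comparisons in stats."""
    if len(data) <= 1:
        return data  # base case: a single element is already sorted
    mid = len(data) // 2
    left = merge_sort(data[:mid], stats)   # recurse on each half
    right = merge_sort(data[mid:], stats)

    # the "second method" part: merge the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        stats["comparisons"] += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one side is exhausted; append the rest
    merged.extend(right[j:])
    return merged
```

The log n recursion depth times the n-per-level merge work is exactly where the O(n log n) bound comes from.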
Bubble:
- For bubble sort, I used nested for loops; whenever a value was greater than the next one, I swapped them, and that continued for the entire array
- I used a temp variable to hold one value while the swap occurs so nothing is lost
- A comparison counter and a swap counter keep track of this data
- Timing is recorded with the duration feature
- Big O: bubble sort is not very efficient because it uses nested loops; it is only useful for small data sets -> runs in O(n^2)
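The nested loops with the temp-variable swap could look like this sketch (Python; names are mine):

```python
def bubble_sort(data):
    """Sort data in place, bubbling the largest remaining value to the end."""
    comparisons = 0
    swaps = 0
    n = len(data)
    for i in range(n - 1):
        # after pass i, the last i elements are already in final position
        for j in range(n - 1 - i):
            comparisons += 1
            if data[j] > data[j + 1]:
                temp = data[j]          # temp holds the value being overwritten
                data[j] = data[j + 1]
                data[j + 1] = temp
                swaps += 1
    return comparisons, swaps
```

The inner loop shrinks each pass, but the total comparison count is still roughly n^2/2, which is why the Big O stays at O(n^2).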
Selection:
- Here I also used nested for loops to iterate through the data set
- The inner loop finds the minimum value and its index in the unsorted portion and saves them
- The minimum is then swapped into place at the front of the unsorted portion
- A comparison counter and a swap counter keep track of this data
- Timing is recorded with the duration feature
- Big O: selection sort is not fast because it uses nested for loops; it is only useful for small data sets -> runs in O(n^2)
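The minimum-index/minimum-value bookkeeping described above might be sketched like so (Python; naming is mine):

```python
def selection_sort(data):
    """Sort data in place by repeatedly selecting the minimum of the rest."""
    comparisons = 0
    swaps = 0
    n = len(data)
    for i in range(n - 1):
        min_index = i          # save both the index and the value
        min_value = data[i]
        for j in range(i + 1, n):
            comparisons += 1
            if data[j] < min_value:
                min_index = j
                min_value = data[j]
        if min_index != i:
            # switch the values based on their magnitudes
            data[i], data[min_index] = data[min_index], data[i]
            swaps += 1
    return comparisons, swaps
```

One design note: selection sort always does the full O(n^2) comparisons, but it makes at most n - 1 swaps, far fewer than bubble sort on the same input.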
Analysis (An Option on the Repl Menu):
- From my research and Big O analysis, it is clear that although it is the most difficult to implement, merge sort is the best algorithm
- This came from averaging the results for each sort: run each at least 12 times on 5000 elements, then throw out the high and low before averaging
- It has the lowest counts of comparisons and swaps
- It does not use nested for loops/while loops
- It has the lowest time by far
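The run-12-times, drop-high-and-low averaging procedure could be sketched as a small harness like this (Python; the function name and defaults are my own, not from the Repl menu code):

```python
import random
import time

def trimmed_mean_runtime(sort_fn, runs=12, size=5000):
    """Time sort_fn on `runs` fresh random lists of `size` elements,
    throw out the highest and lowest timing, and average the rest."""
    times = []
    for _ in range(runs):
        data = [random.randint(0, 10**6) for _ in range(size)]
        start = time.perf_counter()
        sort_fn(data)
        times.append(time.perf_counter() - start)
    times.sort()
    trimmed = times[1:-1]  # discard the high and low outliers
    return sum(trimmed) / len(trimmed)
```

Trimming the extremes keeps a single slow run (for example, one hit by garbage collection) from skewing the comparison between the four sorts.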