Best algorithm to sort and unique a large list of items?

I have a complex sorting situation that I'd like to optimize. I have a working implementation but I think it could be better.

I have a large list (millions of entries) of strings, and there are likely many redundant entries. I'd like to take this list as input and output a list that is sorted and unique, with a count of the number of times each item appeared in the original list. So for example, if I had a list like:

3 5 2 3 9 1 1 3 3

I'd like to output

(1, 2) (2, 1) (3, 4) (5, 1) (9, 1)

(the first number is the item, the second is the number of occurrences; note that the output is sorted by the first number). The output format is irrelevant, I just used this format to be concise.

The main problem here is that the original list is too large to fit into memory in many cases, so I cannot sort the list in place. Does anyone have any suggestions for an efficient solution?

Update:

Let me explain my current solution. I'm using a PHP command line script in Linux (not the most ideal programming language, but its associative arrays and easy file handling are nice).

I read the data in, using an array with the item as the key and the number of occurrences as the value. When I find I'm nearing the memory limit, I sort the array by key using PHP's ksort function, write the array out to a temp file, purge the data from memory, and start fresh, continuing through the data.

I do this until I'm done with the data, and then I merge all of the temp files. When I use this script on a data set small enough to fit into memory (and therefore skip the spill-to-disk and merging steps), it's faster than the equivalent sort | uniq -c pipeline in Linux. However, the sort command is more memory efficient: once my script starts running out of memory and spilling to disk, it takes longer than the Linux sort command does on data that sort can still handle.
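
In rough, simplified PHP, the approach looks something like this (the chunk limit, file names, and merge loop here are illustrative, not my exact script):

<?php
// Count items per chunk, spill sorted chunks to disk, then merge the chunk files.
function spillChunk(array $counts, int $chunkNo): string {
    ksort($counts, SORT_STRING);              // sort this chunk by item
    $path = "chunk_$chunkNo.tmp";
    $fh = fopen($path, 'w');
    foreach ($counts as $item => $n) {
        fwrite($fh, "$item\t$n\n");           // one "item<TAB>count" line per key
    }
    fclose($fh);
    return $path;
}

$chunkLimit = 1000000;                        // illustrative cap on distinct keys held in memory
$counts = [];
$chunks = [];
$in = fopen('input.txt', 'r');
while (($line = fgets($in)) !== false) {
    $item = rtrim($line, "\n");
    $counts[$item] = ($counts[$item] ?? 0) + 1;
    if (count($counts) >= $chunkLimit) {      // nearing the memory limit: spill and reset
        $chunks[] = spillChunk($counts, count($chunks));
        $counts = [];
    }
}
fclose($in);
if ($counts) {
    $chunks[] = spillChunk($counts, count($chunks));
}

// Merge the sorted chunk files: repeatedly emit the smallest current item,
// summing its counts across every chunk file it appears in.
$files = [];
$heads = [];
foreach ($chunks as $path) {
    $f = fopen($path, 'r');
    $files[] = $f;
    $heads[] = fgets($f);                     // current (smallest unread) line of each chunk
}
while (true) {
    $minKey = null;
    foreach ($heads as $h) {
        if ($h === false) continue;
        $k = explode("\t", rtrim($h, "\n"))[0];
        if ($minKey === null || strcmp($k, $minKey) < 0) $minKey = $k;
    }
    if ($minKey === null) break;              // every chunk file is exhausted
    $total = 0;
    foreach ($heads as $i => $h) {
        if ($h === false) continue;
        [$k, $n] = explode("\t", rtrim($h, "\n"));
        if ($k === $minKey) {
            $total += (int)$n;
            $heads[$i] = fgets($files[$i]);   // advance this chunk file
        }
    }
    echo "$minKey\t$total\n";                 // sorted, unique, with counts
}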

Update 2:

The AVL tree idea seems interesting. I will explore this option a bit and let you know how that goes. However, I do not think it effectively solves the problem of the data being too large to fit entirely in memory.

2 Answers

  • 1 decade ago
    Favorite Answer

    [UPDATED 2]

    Ok, here is a solution:

    Read the data into an AVL tree. Your tree node should store a word as well as the number of occurrences. Your insert function should insert the word if it doesn't exist in the tree. If the word already exists, then your function should increment the number of occurrences of that word without inserting it again. Now, to print the sorted data you should use an in-order traversal.

    Using an AVL tree rather than a regular binary search tree will ensure that your tree is balanced, and thus each insertion will take O(log n) time (that's fast) and printing the data in sorted order will take O(n) (that's fast too).

    If you need to know what an AVL tree is and how to implement it, I can help you with that. Let me know your decision.

    You are right that this AVL solution doesn't solve the memory problem. I can't think of a way to do this other than the merge-files approach you described. However, using an AVL tree rather than an array should still make each chunk faster to sort and write out to its file, since the tree stays sorted as you insert.
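
    If it helps, here is a rough PHP sketch of such a counting AVL tree (illustrative and untested; the class and function names are just placeholders):

    <?php
    // AVL tree keyed by word; each node also stores how many times the word was seen.
    class Node {
        public $word;
        public $count = 1;
        public $height = 1;
        public $left = null;
        public $right = null;
        public function __construct($word) { $this->word = $word; }
    }

    function height($n) { return $n === null ? 0 : $n->height; }
    function update($n) { $n->height = 1 + max(height($n->left), height($n->right)); }
    function balance($n) { return height($n->left) - height($n->right); }

    function rotateRight($y) {
        $x = $y->left;
        $y->left = $x->right;
        $x->right = $y;
        update($y);
        update($x);
        return $x;
    }

    function rotateLeft($x) {
        $y = $x->right;
        $x->right = $y->left;
        $y->left = $x;
        update($x);
        update($y);
        return $y;
    }

    // Insert $word, or just increment its count if it is already present.
    function insert($node, $word) {
        if ($node === null) return new Node($word);
        $cmp = strcmp($word, $node->word);
        if ($cmp === 0) { $node->count++; return $node; }
        if ($cmp < 0) $node->left = insert($node->left, $word);
        else          $node->right = insert($node->right, $word);

        update($node);
        $b = balance($node);
        if ($b > 1 && strcmp($word, $node->left->word) < 0)   return rotateRight($node);         // left-left
        if ($b < -1 && strcmp($word, $node->right->word) > 0) return rotateLeft($node);          // right-right
        if ($b > 1)  { $node->left  = rotateLeft($node->left);   return rotateRight($node); }    // left-right
        if ($b < -1) { $node->right = rotateRight($node->right); return rotateLeft($node); }     // right-left
        return $node;
    }

    // In-order traversal prints the words in sorted order with their counts.
    function dump($node) {
        if ($node === null) return;
        dump($node->left);
        echo "{$node->word}\t{$node->count}\n";
        dump($node->right);
    }

    $root = null;
    foreach (['3', '5', '2', '3', '9', '1', '1', '3', '3'] as $w) {
        $root = insert($root, $w);
    }
    dump($root);   // prints 1 2, 2 1, 3 4, 5 1, 9 1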

  • 1 decade ago

    Memory usage patterns and index sorting:

    When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.

    For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.

    One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".
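
    As a rough PHP illustration of the idea (the record layout and field names here are made up): sort a small array of record positions by the key field, leaving the large records themselves untouched.

    <?php
    // Illustrative "tag sort": sort an index of positions instead of the records themselves.
    $records = [
        ['id' => 42, 'payload' => str_repeat('x', 1000)],    // imagine large rows
        ['id' => 7,  'payload' => str_repeat('y', 1000)],
        ['id' => 19, 'payload' => str_repeat('z', 1000)],
    ];

    $index = array_keys($records);                            // small array of positions: [0, 1, 2]
    usort($index, function ($a, $b) use ($records) {
        return $records[$a]['id'] <=> $records[$b]['id'];     // compare only the small key field
    });

    foreach ($index as $i) {                                  // walk the records in sorted order
        echo $records[$i]['id'], "\n";                        // prints 7, 19, 42
    }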

    Another technique for overcoming the memory-size problem is to combine two algorithms in a way that takes advantage of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit easily in RAM (say, a few thousand elements), the chunks sorted using an efficient algorithm (such as quicksort or heapsort), and the results merged as per mergesort. This is less efficient than a single in-memory quicksort of the whole array would be if everything fit in RAM, but it requires far less physical RAM to be practical than a full quicksort on the whole array.

    Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.

    //////////////////////////////////////////////////////////////////////////////////////////////////////////

    Bubble Sort:

    Bubble sort is a straightforward and simplistic method of sorting data that is used in computer science education. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. Although simple, this algorithm is highly inefficient and is rarely used except in education. A slightly better variant, cocktail sort, works by inverting the ordering criteria and the pass direction on alternating passes. For very small data sets (e.g. around 20 elements), such simple sorts can nevertheless beat Quicksort because of their lower overhead.
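
    A minimal PHP sketch, purely for illustration:

    <?php
    // Bubble sort: swap adjacent out-of-order pairs until a full pass makes no swaps.
    function bubbleSort(array $a): array {
        $n = count($a);
        do {
            $swapped = false;
            for ($i = 1; $i < $n; $i++) {
                if ($a[$i - 1] > $a[$i]) {
                    [$a[$i - 1], $a[$i]] = [$a[$i], $a[$i - 1]];
                    $swapped = true;
                }
            }
            $n--;                      // the largest remaining element has bubbled to the end
        } while ($swapped);
        return $a;
    }

    print_r(bubbleSort([3, 5, 2, 3, 9, 1, 1, 3, 3]));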

    Insertion Sort:

    Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists, and often is used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. The insertion sort works just like its name suggests - it inserts each item into its proper place in the final list. The simplest implementation of this requires two list structures - the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists. This method is much more efficient than the bubble sort, though it has more constraints.
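
    An in-place PHP sketch of the shifting variant described above:

    <?php
    // Insertion sort: grow a sorted prefix, shifting larger elements right to make room.
    function insertionSort(array $a): array {
        for ($i = 1; $i < count($a); $i++) {
            $key = $a[$i];
            $j = $i - 1;
            while ($j >= 0 && $a[$j] > $key) {
                $a[$j + 1] = $a[$j];   // shift the larger element one slot to the right
                $j--;
            }
            $a[$j + 1] = $key;         // drop the current item into its proper place
        }
        return $a;
    }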

    Shell Sort:

    Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out of order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (sets with less than 1000 or so elements). Another advantage of this algorithm is that it requires relatively small amounts of memory.
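
    A short PHP sketch using the simple halving gap sequence (real implementations often use better gap sequences):

    <?php
    // Shell sort: gapped insertion sorts with shrinking gaps (N/2, N/4, ..., 1).
    function shellSort(array $a): array {
        $n = count($a);
        for ($gap = intdiv($n, 2); $gap > 0; $gap = intdiv($gap, 2)) {
            for ($i = $gap; $i < $n; $i++) {
                $key = $a[$i];
                $j = $i;
                while ($j >= $gap && $a[$j - $gap] > $key) {
                    $a[$j] = $a[$j - $gap];   // move elements $gap positions at a time
                    $j -= $gap;
                }
                $a[$j] = $key;
            }
        }
        return $a;
    }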

    Merge Sort:

    Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e. 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists.
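
    A compact top-down PHP sketch of the idea:

    <?php
    // Merge sort: split in half, sort each half recursively, then merge the sorted halves.
    function mergeSort(array $a): array {
        if (count($a) <= 1) return $a;
        $mid   = intdiv(count($a), 2);
        $left  = mergeSort(array_slice($a, 0, $mid));
        $right = mergeSort(array_slice($a, $mid));

        $out = [];
        $i = $j = 0;
        while ($i < count($left) && $j < count($right)) {
            // Take from the left half on ties, which keeps the sort stable.
            $out[] = ($left[$i] <= $right[$j]) ? $left[$i++] : $right[$j++];
        }
        while ($i < count($left))  $out[] = $left[$i++];
        while ($j < count($right)) $out[] = $right[$j++];
        return $out;
    }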

    Heap Sort:

    Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. It is removed and placed at the end of the list, then the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time.
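
    A PHP sketch of heapsort using an in-array max-heap:

    <?php
    // Restore the max-heap property for the subtree rooted at $start, within $a[0..$end].
    function siftDown(array &$a, $start, $end) {
        $root = $start;
        while (2 * $root + 1 <= $end) {
            $child = 2 * $root + 1;
            if ($child + 1 <= $end && $a[$child] < $a[$child + 1]) $child++;   // pick the larger child
            if ($a[$root] >= $a[$child]) return;
            [$a[$root], $a[$child]] = [$a[$child], $a[$root]];
            $root = $child;
        }
    }

    // Heapsort: build a max-heap, then repeatedly move the root (maximum) behind the heap.
    function heapSort(array $a): array {
        $n = count($a);
        for ($i = intdiv($n, 2) - 1; $i >= 0; $i--) siftDown($a, $i, $n - 1);  // heapify
        for ($end = $n - 1; $end > 0; $end--) {
            [$a[0], $a[$end]] = [$a[$end], $a[0]];   // current maximum goes to the end
            siftDown($a, 0, $end - 1);
        }
        return $a;
    }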

    Quick Sort:

    Quicksort is a divide and conquer algorithm which relies on a partition operation: to partition an array, we choose an element, called a pivot, move all smaller elements before the pivot, and move all greater elements after it. This can be done efficiently in linear time and in-place. We then recursively sort the lesser and greater sublists. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quicksort one of the most popular sorting algorithms, available in many standard libraries. The most complex issue in quicksort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower (O(n²)) performance, but if at each step we choose the median as the pivot then it works in O(n log n).
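
    A PHP sketch with in-place Lomuto partitioning (the last element is used as the pivot, which is simple but not a robust pivot choice):

    <?php
    // Quicksort: partition around a pivot, then recurse on the two sides.
    function quickSort(array &$a, $lo = 0, $hi = null) {
        if ($hi === null) $hi = count($a) - 1;
        if ($lo >= $hi) return;
        $pivot = $a[$hi];
        $i = $lo;
        for ($j = $lo; $j < $hi; $j++) {
            if ($a[$j] < $pivot) {                   // smaller elements go to the front
                [$a[$i], $a[$j]] = [$a[$j], $a[$i]];
                $i++;
            }
        }
        [$a[$i], $a[$hi]] = [$a[$hi], $a[$i]];       // pivot lands between the two partitions
        quickSort($a, $lo, $i - 1);
        quickSort($a, $i + 1, $hi);
    }

    $data = [3, 5, 2, 3, 9, 1, 1, 3, 3];
    quickSort($data);                                // $data is now sorted in place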

    Radix Sort:

    Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n · k) time by treating them as bit strings. We first sort the list by the least significant bit while preserving their relative order using a stable sort. Then we sort them by the next bit, and so on from right to left, and the list will end up sorted. Most often, the counting sort algorithm is used to accomplish the bitwise sorting, since the number of values a bit can have is small.
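
    A toy PHP sketch for non-negative integers, processing one bit per pass with a stable two-bucket split (real implementations usually use wider digits and counting sort):

    <?php
    // LSD radix sort on non-negative integers, one bit per pass.
    function radixSort(array $a): array {
        if (count($a) <= 1) return $a;
        $max = max($a);
        for ($bit = 0; ($max >> $bit) > 0; $bit++) {
            $zeros = $ones = [];
            foreach ($a as $x) {                 // stable: order within each bucket is preserved
                if (($x >> $bit) & 1) $ones[] = $x; else $zeros[] = $x;
            }
            $a = array_merge($zeros, $ones);     // zeros first, then ones, for this bit
        }
        return $a;
    }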

    //////////////////////////////////////////////////////////////////////////////////////////////////////////
