When you work with text files in Linux, the data is often not in any useful order. The sort command solves this: it arranges lines alphabetically, numerically, or even randomly.
The sort command is easy to use and combines well with other commands. You can sort the contents of a file from A to Z, find the top IP addresses hitting your web server, or correctly order a list of file sizes that use K, M, and G suffixes.
This guide provides 14 examples of how to use the sort command in Linux, from simple alphabetical sorting to more complex tasks like sorting a CSV file by a specific column. You will also see how sort can speed up large jobs with parallel processing. All of the examples have been tested on RHEL, Rocky Linux, and Ubuntu, so you can follow along regardless of which distribution you use. If you want to learn more about text processing tools, visit LinuxTeck for a guide on Linux text processing commands.
sort arranges lines of text — it does not parse structured data formats on its own. To sort a CSV by a specific column, you will pair it with the -t and -k flags, or combine it with cut for more precise field extraction before sorting.
Examples
# Sort a file alphabetically (A–Z, default)
sort names.txt
# Sort numerically in reverse (largest first)
sort -rn numbers.txt
# Sort a CSV by the 3rd column numerically
sort -t ',' -k3 -n data.csv
# Read from stdin inside a pipe
cat names.txt | sort
Without the -n flag, sort compares every character from left to right, just like a dictionary does. That means "10" lands before "2" because the character 1 is smaller than 2 in the ASCII table — not because 10 is less than 2. The moment you are sorting anything that contains numbers, always reach for -n. It is the single most common mistake beginners make with this command.
Basic Alphabetical Sort
The most straightforward use of the sort command in Linux is to arrange the lines of a plain text file from A to Z. No flags needed — just point it at a file and let it run. Every line is treated as a unit, and the comparison starts from the very first character. This is the go-to move when you have a list of names, hostnames, or labels that need to be in a predictable order before you process them further.
Alice
Bob
Charlie
Diana
Eve
Frank
When no file is specified, sort reads from standard input. This makes it a natural fit inside pipelines — for example, echo -e "Zara\nAdam\nMike" | sort works exactly as expected. See the I/O redirection cheat sheet for more on piping and stdin.
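A minimal, self-contained sketch (the names.txt file here is created on the fly purely for illustration):

```shell
# Create a small sample file, then sort it A-Z (sort's default)
printf 'Diana\nAlice\nFrank\nBob\nEve\nCharlie\n' > names.txt
sort names.txt
```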
Sort in Reverse Order
Flip the default alphabetical direction with the -r flag and you get a Z–A descending sort. This is handy whenever you want the last item alphabetically to appear at the top — think of pulling the most recent version tag, the highest-ranked name, or the last entry in a sorted log sequence.
Frank
Eve
Diana
Charlie
Bob
Alice
The -r flag stacks with nearly every other sort option. You will see it combined with -n (numeric reverse) and -h (human-size reverse) throughout this guide. Think of it as a modifier you attach whenever you want the opposite direction.
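As a quick sketch you can paste into any shell:

```shell
# -r flips the default A-Z order to Z-A
printf 'Alice\nBob\nCharlie\n' | sort -r
```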
Sort File Contents Numerically (Numeric Sort Linux)
Numeric sorting is one of the most common needs around this command, and for good reason. Once you see the default behaviour mangle a number list, you never forget to add -n again. The flag tells sort to parse each line's leading value as an actual number before comparing, so 2 correctly falls before 10 instead of the other way around.
# Without -n (default text order):
1
10
2
20
3
30
# With -n (correct numeric order):
1
2
3
10
20
30
The default sort treats every line as a string of characters. "10" sorts before "2" because the character 1 precedes 2 in character order. Any time your file contains integers, port numbers, process IDs, percentages, or timestamps in numeric form, the -n flag is not optional — it is mandatory for a correct result.
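A runnable sketch of the difference (numbers.txt is a throwaway sample file):

```shell
# Build a sample file of integers, then compare the two modes
printf '1\n10\n2\n20\n3\n30\n' > numbers.txt
sort numbers.txt      # text order: 1, 10, 2, 20, 3, 30
sort -n numbers.txt   # numeric order: 1, 2, 3, 10, 20, 30
```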
Sort Numerically in Reverse (Largest First)
Combining -r and -n together gives you a largest-to-smallest numeric sort. This is the pattern you reach for when building quick "top N" reports — highest memory consumers, biggest file sizes, most active processes. The result places the largest value at line one, making it trivial to pipe into head afterwards.
20
10
3
2
1
Append | head -5 to instantly extract the five largest values from any numeric list. This combination is a staple in performance monitoring scripts and quick log triage. Explore more monitoring patterns in the Linux system monitoring cheat sheet.
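A sketch of that "top N" pattern on a small inline list:

```shell
# -rn ranks largest first; head keeps the top 3
printf '3\n20\n1\n10\n2\n' | sort -rn | head -3    # 20, 10, 3
```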
Sort and Remove Duplicates in One Step
The -u flag quietly removes duplicate lines as part of the sort pass — no extra uniq command needed. This is ideal when you want a clean, deduplicated list: unique usernames, one-of-each hostname, or a tidy set of IP addresses pulled from a log. The output keeps only the first occurrence of any line that repeats.
Alice
Bob
Alice
Charlie
Bob
# Output with -u:
Alice
Bob
Charlie
sort -u is faster than piping into uniq because deduplication happens during the single sort pass rather than as a separate step. However, sort | uniq -c is the combination you want when you also need a count of how many times each line appears — that frequency count is invaluable for log analysis and finding repeated patterns.
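A side-by-side sketch of both approaches:

```shell
# -u dedupes during the single sort pass
printf 'Alice\nBob\nAlice\nCharlie\nBob\n' | sort -u
# sort | uniq -c also counts each line (uniq needs sorted input)
printf 'Alice\nBob\nAlice\nCharlie\nBob\n' | sort | uniq -c
```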
Sort by a Specific Column (Field)
Real-world data files rarely contain just one column. The -k flag lets you pick exactly which field drives the sort decision. By default, fields are separated by whitespace, so -k2 sorts the entire file based on whatever appears in the second space-delimited column — useful for space-separated logs, ps output, or user data files where each field has a fixed position.
charlie 25 developer
alice 31 designer
bob 19 analyst
# Sorted by column 2 (age) — but as text, not numeric:
bob 19 analyst
charlie 25 developer
alice 31 designer
Column indexing in sort starts at 1, not 0. To sort that same file by age correctly as a number, combine both flags: sort -k2 -n data.txt. The -k and -n flags are fully independent and stack cleanly.
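A runnable sketch of the corrected command on the same data:

```shell
# Sort by the 2nd whitespace-delimited field, numerically
printf 'charlie 25 developer\nalice 31 designer\nbob 19 analyst\n' | sort -k2 -n
```

A tighter form is -k2,2n, which limits the sort key to field 2 alone instead of "field 2 to end of line".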
Sort by Column with a Custom Delimiter (CSV Sort)
When your file uses a comma, colon, pipe, or any other character as a separator, you declare that delimiter with -t and then tell sort which resulting column to sort on with -k. Sorting CSV files by a numeric column — like a sales figure or a port number — is a one-liner combining all three: -t ',', -k3, and -n.
alice,designer,8500
bob,analyst,6200
charlie,developer,9100
diana,manager,11000
# Sorted by column 3 (salary) ascending:
bob,analyst,6200
alice,designer,8500
charlie,developer,9100
diana,manager,11000
For more precise CSV field extraction before sorting — especially when fields contain embedded spaces — consider combining sort with cut to isolate the target column first, then pipe the result back through sort.
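A self-contained sketch of the full one-liner on the sample data above:

```shell
# -t sets the delimiter, -k3 picks the column, -n compares numerically
printf 'alice,designer,8500\nbob,analyst,6200\ncharlie,developer,9100\ndiana,manager,11000\n' \
  | sort -t ',' -k3 -n
```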
Case-Insensitive Sort
Without any intervention, uppercase letters sort before lowercase ones in the ASCII character table — so "Zebra" appears above "apple" in a default sort. The -f flag folds all characters to their uppercase equivalent for comparison purposes only (the original casing is preserved in the output). This produces a natural alphabetical order that matches what most humans actually expect.
zebra
Apple
mango
Banana
# Output with -f (case-insensitive):
Apple
Banana
mango
zebra
The -f flag only affects the comparison logic — it does not transform the actual content of your file. "Apple" will still appear as "Apple" in the output, not as "apple". Your original formatting stays intact.
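Because many modern locales already ignore case when collating, the contrast is easiest to see under the byte-order C locale. A sketch:

```shell
printf 'apple\nZebra\nMango\n' | LC_ALL=C sort      # Mango, Zebra, apple (uppercase first)
printf 'apple\nZebra\nMango\n' | LC_ALL=C sort -f   # apple, Mango, Zebra (case folded)
```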
Sort by Human-Readable File Sizes
If you have ever piped the output of du through a regular sort and ended up with 1G sitting below 500M, you already know why -h exists. The human-numeric sort flag understands that K < M < G < T and handles the suffix conversion internally before ranking — making it the only correct way to sort du or df style output. See the du command guide for more disk usage workflows that pair naturally with this.
1G
200M
50K
3.5G
800M
# Sorted with -h (correct size order):
50K
200M
800M
1G
3.5G
Combine with -r to see your largest directories first: du -sh /var/* | sort -rh. This is one of the fastest ways to pinpoint what is eating your disk space on a production server.
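A sketch using an inline list of sizes (note that -h is a GNU coreutils feature):

```shell
# -h ranks K < M < G correctly; add -r for largest-first
printf '1G\n200M\n50K\n3.5G\n800M\n' | sort -h
printf '1G\n200M\n50K\n3.5G\n800M\n' | sort -rh | head -1   # 3.5G
```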
Sort by Month Name
When your data contains three-letter month abbreviations — Jan, Feb, Mar, and so on — a default alphabetical sort produces nonsense (Apr before Aug before Dec before Feb...). The -M flag recognises these abbreviations as calendar months and sorts them in their natural January-through-December sequence. This is a lifesaver when processing log files, cron job records, or any report that uses abbreviated month labels.
Nov
Mar
Jul
Jan
Sep
Apr
# Sorted with -M (calendar order):
Jan
Mar
Apr
Jul
Sep
Nov
Month comparison with -M is case-insensitive — jan, JAN, and Jan are all treated identically. This means you do not need to pre-process inconsistently cased month labels before sorting.
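A sketch of the example above as a pipeline (-M is a GNU coreutils feature):

```shell
# Sort three-letter month abbreviations into calendar order
printf 'Nov\nMar\nJul\nJan\nSep\nApr\n' | sort -M
```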
Random Shuffle Sort
Sometimes you do not want order at all — you want chaos. The -R flag shuffles lines into a random sequence on every run. Strictly speaking, GNU sort -R sorts by a random hash of each line, so identical lines still end up grouped together; for a true shuffle of data that contains duplicates, use shuf instead. Either way, -R is handy for generating randomised test datasets, picking a random sample from a large list, creating lottery-style draws, or shuffling a list of commands for unpredictable batch testing.
Charlie
Alice
Frank
Diana
Eve
Bob
To pull a random sample of exactly 3 lines from a file, pipe through head: sort -R names.txt | head -3. This is a quick and clean way to select test cases or demo data without writing a dedicated script.
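The output changes on every run, so there is no fixed result to show — but a re-sort proves no lines were lost. A sketch (names.txt is a throwaway sample):

```shell
printf 'Alice\nBob\nCharlie\nDiana\nEve\nFrank\n' > names.txt
sort -R names.txt | head -3   # a different 3 names each run
sort -R names.txt | sort      # re-sorting restores the original A-Z order
```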
Save Sorted Output Directly to a File
By default, sort sends its result to the terminal — your original file stays unchanged. To persist the sorted output, you have two approaches: use the -o flag to write to a named file, or redirect with >. The crucial difference is that -o is safe even when the output file is the same as the input file, whereas redirecting a file to itself causes catastrophic data loss.
$ sort -o sorted.txt names.txt
# Verify the file was created:
$ cat sorted.txt
Alice
Bob
Charlie
Diana
Eve
Frank
Running sort names.txt > names.txt will destroy your data. The shell opens and truncates names.txt for writing before sort even begins reading it — the file is empty by the time sort tries to read it. You end up with a completely blank file and no way to recover the original content.
The safe pattern for in-place sorting is sort -o names.txt names.txt. With -o, sort finishes reading all of its input (spilling to temporary files on disk if it does not fit in memory) before it opens the output file for writing — so the input is never at risk.
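A self-contained sketch of the in-place pattern (the file is created on the fly for illustration):

```shell
# Safe in-place sort: with -o, the output file may be the input file
printf 'Charlie\nAlice\nBob\n' > names.txt
sort -o names.txt names.txt
cat names.txt   # Alice, Bob, Charlie
```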
Speed Up Sorting with Parallel Processing
Sorting a multi-gigabyte log file or a massive CSV export on a single thread can take a frustratingly long time. The --parallel option tells sort to split the work across multiple CPU cores simultaneously. On a modern server with 8 or 16 cores, this can cut sorting time by a significant factor — particularly on files that exceed available RAM and require merge-sort passes on disk.
# Benefit is speed: use `time` to measure the difference:
$ time sort --parallel=4 bigfile.txt > sorted_big.txt
real 0m4.312s
user 0m15.891s
sys 0m0.441s
# Compare without parallel (single-threaded):
$ time sort bigfile.txt > sorted_big.txt
real 0m14.987s
A reasonable starting value is half the number of available CPU cores — check your core count with nproc first. Going beyond the physical core count rarely improves speed and can actually slow things down due to thread contention. For batch scripts that regularly process large files, this flag is worth benchmarking on your specific hardware.
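A self-contained benchmark sketch (--parallel is a GNU coreutils option; the test file here is generated with seq and shuf purely for illustration):

```shell
# Generate a shuffled 100,000-line file, then sort it on 2 threads
seq 100000 | shuf > bigfile.txt
sort --parallel=2 -n bigfile.txt > sorted_big.txt
head -1 sorted_big.txt   # 1
```

Wrap either command in `time` to compare thread counts on your own hardware.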
Real-World Pipeline — Top 10 IPs from an Access Log
This is where sort earns its place in every sysadmin's toolkit. By chaining cut, sort, uniq, and head together, you can answer "which IP addresses hit my server the most?" in a single line — no Python script, no database, no external tool required. This pattern extracts the first field (the client IP) from an Apache or Nginx access log, tallies how many times each one appears, and returns the top 10 offenders in descending order.
$ cut -d ' ' -f1 access.log | sort | uniq -c | sort -rn | head -10
3102 198.51.100.7
2876 192.0.2.33
1954 203.0.113.12
1731 198.51.100.88
1420 192.0.2.17
987 203.0.113.99
854 10.0.0.15
712 172.16.0.4
601 192.168.1.200
Here is what each stage does: cut -d ' ' -f1 extracts just the IP address from each log line. The first sort groups identical IPs together (required by uniq). uniq -c counts each group and prepends the count. The second sort -rn ranks by count, largest first. Finally head -10 keeps only the top results. See the cut command guide for more field extraction techniques that work seamlessly in this kind of pipeline.
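A self-contained sketch of the whole pipeline against a tiny synthetic log (the IPs and the access.log name are made up for illustration):

```shell
# Fake three requests from one IP and two from another
printf '198.51.100.7 - - "GET /"\n192.0.2.33 - - "GET /"\n198.51.100.7 - - "GET /"\n192.0.2.33 - - "GET /"\n198.51.100.7 - - "GET /"\n' > access.log
# Count hits per client IP, largest first
cut -d ' ' -f1 access.log | sort | uniq -c | sort -rn | head -10
```

The first column of each output line is the count and the second is the IP, so 198.51.100.7 ranks first here with a count of 3.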
People Also Ask
Does sort modify the original file?
No — by default, sort sends its output to the terminal (standard output) and leaves the source file completely untouched. To save the sorted result, either use the -o flag followed by a filename (sort -o output.txt input.txt) or redirect with > to a different file. Never redirect output back to the same input file using > — that truncates the file before sorting begins and permanently destroys its contents.
Why does sort put 10 before 2?
Without the -n flag, sort compares lines as plain character sequences — the same way words are ordered in a dictionary. In character order, "10" sorts before "2" because the first character 1 has a lower ASCII value than 2. The -n flag switches the comparison engine to treat leading values as real numbers, so 2 correctly appears before 10. Always use -n whenever your lines start with integers, decimals, or any numeric value.
What is the difference between sort -u and sort | uniq?
sort -u performs deduplication as part of the sort itself — it is a single-pass operation and generally faster. sort | uniq achieves the same deduplication result by piping into a separate process, but it adds a second command. The key reason to choose sort | uniq -c over sort -u is the -c flag on uniq, which prepends a count of how many times each line appeared — essential for frequency analysis in log files and similar tasks.
How do I sort a CSV file by a specific column?
Use the -t flag to declare the delimiter and the -k flag to specify the column number (1-based). For a comma-separated file sorted by the third column numerically, the command is: sort -t ',' -k3 -n file.csv. Add -r to reverse the order. If the column contains text rather than numbers, simply omit -n. For files where fields may contain spaces, pairing sort with cut to pre-extract the target column often gives cleaner results.
Can sort read from a pipe or standard input?
Yes — if you do not supply a filename, sort automatically reads from standard input. This is what makes it so versatile inside shell pipelines. You can pass data to it from any command that produces output: ps aux | sort -k3 -rn | head -5 ranks processes by CPU usage, all without touching the filesystem. For an overview of how Linux pipes and redirection work, check the I/O redirection cheat sheet.
How do I sort human-readable file sizes like 500M and 1G?
Use the -h (human-numeric) flag. Without it, sort treats size strings as text, which puts 1G below 200M because 1 sorts before 2 as characters. The -h flag understands the unit hierarchy (K < M < G < T < P) and converts values to a common unit before comparing. The most common use case is du -sh * | sort -rh to find your largest directories ranked from biggest to smallest.