The cut command in Linux is one of the most practical text-processing utilities available in any terminal session. Whether you are working with structured CSV files, system configuration files like /etc/passwd, or server log entries, cut lets you isolate exactly the data you need — by bytes, characters, or delimited fields — without writing a single line of scripting code. It ships pre-installed on every major Linux distribution including RHEL, CentOS, Ubuntu, and Alpine, so there is nothing to install or configure before you start.
In this guide you will work through 10 real-world cut command examples that cover every common use case: extracting individual and multiple CSV columns, reading structured system files, trimming characters from log lines, inverting a selection with --complement, reformatting output separators, and combining cut with grep, sort, and uniq inside powerful shell pipelines. Every example includes the exact command, real terminal output, and a plain-English explanation so you can adapt it to your own files right away.
cut reads from a file or standard input and always writes to standard output — it never modifies the source file. To save extracted data, redirect the output: cut -d ',' -f2 data.csv > names.txt. For more on output redirection see the Linux I/O Redirection Cheat Sheet.
Examples
# Extract bytes 1 through 5 from each line
cut -b 1-5 report.txt
# Extract characters 1 through 10 from each line
cut -c 1-10 report.txt
# Extract field 2 using comma as delimiter
cut -d ',' -f2 data.csv
# No file provided — cut reads from stdin via pipe
echo "sara:x:1001:1001:/home/sara:/bin/bash" | cut -d ':' -f1
For everyday log files, CSV files, and system files that contain only ASCII characters, -b and -c produce exactly the same result because every character is exactly one byte. The distinction only matters when your file contains accented letters, emoji, or any other multi-byte Unicode codepoint. Be aware that while POSIX defines -c as selecting characters, GNU coreutils cut currently implements -c identically to -b, so neither flag is multibyte-safe on a typical Linux system. When you need character-accurate slicing of UTF-8 text, use awk or grep -o instead.
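As a quick sanity check, here is a minimal demonstration (assuming a UTF-8 locale and GNU coreutils) that -b and -c agree on ASCII input, while byte-based selection can split a multi-byte character:

```shell
# ASCII input: byte positions and character positions coincide
printf 'EMP-01,Sara\n' | cut -b 1-6   # -> EMP-01
printf 'EMP-01,Sara\n' | cut -c 1-6   # -> EMP-01

# Multi-byte input: 'é' occupies two bytes in UTF-8, so selecting a
# single byte splits the codepoint and prints an invalid fragment
printf 'école\n' | cut -b 1
```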
Extract the First 5 Bytes from Each Line (-b)
When working with fixed-width files — legacy reports, mainframe exports, or binary-safe log formats where every record occupies a predictable number of bytes — the -b flag is the right tool. The command below reads report.txt and outputs only the first five bytes of every line, discarding everything to the right of position five.
cut -b 1-5 report.txt
EMP-0
EMP-0
EMP-0
You can select non-contiguous byte positions using a comma-separated list. For example, cut -b 1,3,5 report.txt outputs only bytes 1, 3, and 5 of each line. You can also select from a position to the end of the line with an open-ended range: cut -b 6- report.txt outputs everything from byte 6 onwards.
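Both forms can be sketched on a single sample line, using printf as a stand-in for a line of report.txt:

```shell
# Non-contiguous list: bytes 1, 3, and 5 only
printf 'EMP-01,Sara\n' | cut -b 1,3,5   # -> EP0

# Open-ended range: byte 6 through the end of the line
printf 'EMP-01,Sara\n' | cut -b 6-      # -> 1,Sara
```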
Extract the First 10 Characters from Each Line (-c)
The -c flag operates on characters rather than raw bytes, making it the correct choice whenever your file may contain UTF-8 text. A classic use case is trimming a fixed-length timestamp prefix from log entries so you can focus on the message text. The command below extracts characters 1 through 10 from every line of report.txt.
cut -c 1-10 report.txt
EMP-01,Sar
EMP-02,Lia
EMP-03,Pri
Use an open-ended range to select from a position to the end of each line without knowing the line length. For example, cut -c 12- outputs everything from character 12 onwards — useful for stripping a fixed-length date prefix like 2026-03-17 from every log line.
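For example, assuming each log line begins with a ten-character date and one space, characters 12 onward hold the message:

```shell
# Drop the fixed-width 'YYYY-MM-DD ' prefix, keep the message
printf '2026-03-17 disk usage at 81 percent\n' | cut -c 12-
# -> disk usage at 81 percent
```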
Extract a Single Column from a CSV File Using a Delimiter
Delimiter-based field extraction with -d and -f is the feature you will use most often. Suppose you have a CSV file named roster.csv containing employee records and you need only the Name column. Set the delimiter to a comma with -d ',' and request field 2 with -f2.
Contents of roster.csv used in examples 03 through 05 and 09 through 10:
EMP-01,Sara,Engineer,92000
EMP-02,Liam,Designer,78000
EMP-03,Priya,Analyst,65000
cut -d ',' -f2 roster.csv
Sara
Liam
Priya
Fields in cut are numbered from 1, not 0. So -f1 is the first column, -f2 the second, and so on. This is different from most programming languages that use zero-based indexing. If you need to add a header row to the output, pipe the result through standard Linux commands such as sed.
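One way to add that header with sed looks like this (the 1i command inserts a line before the first output row; the one-line form shown here is a GNU sed convenience):

```shell
# Recreate the article's sample roster.csv, then extract the
# Name column and prepend a header row with sed
printf '%s\n' 'EMP-01,Sara,Engineer,92000' \
              'EMP-02,Liam,Designer,78000' \
              'EMP-03,Priya,Analyst,65000' > roster.csv
cut -d ',' -f2 roster.csv | sed '1i Name'
# -> Name, then Sara, Liam, Priya, one per line
```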
Extract Multiple Non-Adjacent Fields in One Pass
You are not limited to a single field. Pass a comma-separated list of field numbers to -f to pull several non-adjacent columns simultaneously. The command below extracts field 1 (Employee ID) and field 3 (Role) while skipping field 2 (Name) and field 4 (Salary), reading every matching pair in a single pass through the file.
cut -d ',' -f1,3 roster.csv
EMP-01,Engineer
EMP-02,Designer
EMP-03,Analyst
cut always outputs selected fields in their original left-to-right document order, regardless of the order you list them in -f. Writing -f3,1 produces exactly the same result as -f1,3. If you need columns reordered or transposed, use awk '{print $3, $1}' instead.
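A quick illustration of the difference, on a single sample record:

```shell
line='EMP-01,Sara,Engineer,92000'

# cut ignores the order of the -f list: both commands print EMP-01,Engineer
printf '%s\n' "$line" | cut -d ',' -f1,3   # -> EMP-01,Engineer
printf '%s\n' "$line" | cut -d ',' -f3,1   # -> EMP-01,Engineer

# awk prints fields in the order you ask for them
printf '%s\n' "$line" | awk -F',' '{print $3 "," $1}'   # -> Engineer,EMP-01
```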
Extract a Consecutive Range of Fields
When the columns you need are adjacent, use a hyphen to define a range rather than listing every field number individually. -f2-4 selects fields 2, 3, and 4 in a single expression. This is especially practical when working with wide files that have dozens of columns and you need a contiguous block in the middle.
cut -d ',' -f2-4 roster.csv
Sara,Engineer,92000
Liam,Designer,78000
Priya,Analyst,65000
Open-ended ranges work too. -f3- selects field 3 through the last field on every line, even when the number of fields varies between records. This is particularly useful for log files where the final column is a free-text message of variable length — you can safely capture the entire tail of each line.
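Using the article's sample log format, the free-text message starts at field 6, so an open-ended range captures it whole:

```shell
log='2026-03-17 08:14:22 app.py ERROR ERR-5001 Connection pool exhausted on db01'
printf '%s\n' "$log" | cut -d ' ' -f6-
# -> Connection pool exhausted on db01
```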
Pull All Login Names from /etc/passwd
The /etc/passwd file holds one user account per line in a colon-delimited format: username:password:UID:GID:comment:home_directory:shell. It is one of the most common real-world targets for cut on any Linux server. Setting the delimiter to a colon and selecting field 1 produces a clean list of every account on the machine in a single command — no awk, no scripting required.
cut -d ':' -f1 /etc/passwd
root
daemon
bin
sys
sync
www-data
sara
liam
priya
Sort the list alphabetically with a pipe: cut -d ':' -f1 /etc/passwd | sort. This is one of the fastest ways to audit all accounts on a newly provisioned server. For a full list of user management commands see the User Management Command Cheat Sheet, and for server hardening steps visit the Linux Server Hardening Checklist.
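The same audit can be rehearsed on a few sample passwd-style lines (stand-ins for the real /etc/passwd, whose contents vary by machine):

```shell
# Sample colon-delimited account records piped straight into cut
printf '%s\n' 'root:x:0:0:root:/root:/bin/bash' \
              'sara:x:1001:1001::/home/sara:/bin/bash' \
              'bin:x:2:2:bin:/bin:/usr/sbin/nologin' |
  cut -d ':' -f1 | sort
# -> bin, root, sara (one per line, alphabetical)
```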
Show the Default Login Shell for Every User Account
The seventh field in each /etc/passwd entry is the path to the user's default login shell — for example /bin/bash, /bin/sh, or /usr/sbin/nologin for service accounts that should not have interactive access. Extracting this column is a quick audit to confirm that locked accounts and system daemons are correctly assigned a no-login shell.
cut -d ':' -f7 /etc/passwd
/usr/sbin/nologin
/usr/sbin/nologin
/bin/sync
/usr/sbin/nologin
/bin/bash
Pair fields 1 and 7 to see each username alongside its assigned shell: cut -d ':' -f1,7 /etc/passwd. The output is formatted as username:/bin/bash, giving you a clear and scannable picture of which accounts have interactive shell access on the server.
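For example, on a single passwd-style record:

```shell
# Username (field 1) and login shell (field 7) side by side
printf 'sara:x:1001:1001::/home/sara:/bin/bash\n' | cut -d ':' -f1,7
# -> sara:/bin/bash
```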
Count Unique Error Codes in a Log File with grep, cut, sort, and uniq
The true power of cut in a production environment emerges inside pipelines. The workflow below reads server.log, filters for lines containing the word ERROR, extracts the fifth space-separated field (the error code), sorts the results, and counts each unique code. This is a daily-use pattern for sysadmins and DevOps engineers who need to quickly identify which errors are occurring most frequently.
Assume each log line follows this format:
2026-03-17 08:14:22 app.py ERROR ERR-5001 Connection pool exhausted on db01
grep 'ERROR' server.log | cut -d ' ' -f5 | sort | uniq -c | sort -rn
31 ERR-4031
12 ERR-4404
8 ERR-5003
2 ERR-4010
If the output looks wrong, run each stage independently and inspect the result before adding the next pipe. Start with grep 'ERROR' server.log, then append | cut -d ' ' -f5 and verify the field number matches your actual log format, then add | sort | uniq -c. Adjusting the -f number to match your log schema is usually all that needs changing. For a broader overview of log analysis tools see the Linux System Monitoring Cheat Sheet.
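The stage-by-stage approach can be rehearsed on a small generated log (the file name and contents below are made up for the demo):

```shell
# Generate a three-line sample log in the article's format
printf '%s\n' \
  '2026-03-17 08:14:22 app.py ERROR ERR-5001 pool exhausted' \
  '2026-03-17 08:15:01 app.py ERROR ERR-4031 auth failed' \
  '2026-03-17 08:15:09 app.py ERROR ERR-4031 auth failed' > server.log

grep 'ERROR' server.log                   # stage 1: the matching lines
grep 'ERROR' server.log | cut -d ' ' -f5  # stage 2: just the error codes
grep 'ERROR' server.log | cut -d ' ' -f5 | sort | uniq -c | sort -rn
# stage 3: one count per unique code, most frequent first
```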
Remove a Specific Column Using --complement
When a file has many columns and you only want to drop one of them, it is easier to say which column you do not want rather than listing all the ones you do. The --complement flag inverts the selection: it prints every field except the ones named with -f. The command below outputs all four columns from roster.csv except field 2, effectively stripping the Name column from the output.
cut --complement -d ',' -f2 roster.csv
EMP-01,Engineer,92000
EMP-02,Designer,78000
EMP-03,Analyst,65000
--complement is a GNU cut extension and is available on all major Linux distributions (RHEL, CentOS, Ubuntu, Debian, Arch). It is not supported on the BSD cut that ships with macOS. If you are on macOS and need to drop a column, use awk with a conditional skip: awk -F',' 'BEGIN{OFS=","}{$2=""; gsub(/,,/,","); print}' roster.csv.
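When you know the file has at least three fields, a portable alternative that also works on BSD cut is simply to select every field except the one you want to drop:

```shell
record='EMP-01,Sara,Engineer,92000'

# GNU-only inversion of the selection
printf '%s\n' "$record" | cut --complement -d ',' -f2  # -> EMP-01,Engineer,92000

# Portable equivalent: field 1 plus everything from field 3 onwards
printf '%s\n' "$record" | cut -d ',' -f1,3-            # -> EMP-01,Engineer,92000
```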
Reformat Column Output with a Custom --output-delimiter
By default cut outputs the fields it selects separated by the same delimiter you specified with -d. The --output-delimiter option lets you substitute that separator with any string you choose — turning a comma-separated extract into pipe-delimited, tab-delimited, or any other format before passing it on to another tool or saving it to a file.
cut -d ',' -f1,3 roster.csv --output-delimiter=' | '
EMP-01 | Engineer
EMP-02 | Designer
EMP-03 | Analyst
A common production use case is converting a comma-separated file to tab-separated format for import into a database or spreadsheet tool: cut -d ',' -f1- roster.csv --output-delimiter=$'\t'. The $'\t' shell syntax produces a literal tab character that --output-delimiter inserts between every selected field, transforming the entire file in a single pass with no temporary files needed.
The -d flag accepts exactly one character. If your data uses a multi-character separator, such as ' | ' (space, pipe, space), '::', or a tab followed by a space, cut cannot parse it correctly and will produce unexpected results.
For multi-character delimiters, irregular whitespace between fields, or any case requiring conditional field logic, switch to awk: awk -F ' \| ' '{print $2}' data.txt. You can also use sed for regex-based column extraction. For a full comparison of Linux text processing tools see the Text Processing Commands cheat sheet.
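For instance, a record separated by space-pipe-space can be split with an awk bracket expression, which matches the literal pipe unambiguously (sample data made up for the demo):

```shell
# -F takes a regex: ' [|] ' matches space, literal pipe, space
printf 'EMP-01 | Sara | Engineer\n' | awk -F ' [|] ' '{print $2}'
# -> Sara
```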
People Also Ask
What is the cut command in Linux used for?
The cut command in Linux is used to extract specific sections from each line of a file or from standard input piped from another command. You can extract data by raw byte position (-b), by character position (-c), or by field number when the input is structured with a consistent delimiter (-d combined with -f). Common real-world uses include pulling columns from CSV files, extracting usernames from /etc/passwd, isolating error codes from log entries, and preprocessing data for import into databases or spreadsheets.
What is the difference between -b and -c in cut?
On files that contain only standard ASCII characters, which covers the vast majority of log files, CSV files, and system configuration files, -b and -c behave identically because each character is exactly one byte. The difference surfaces with multi-byte UTF-8 characters such as accented letters (é, ñ), Chinese or Arabic characters, or emoji: -b counts raw bytes, which can cut through the middle of a multi-byte codepoint and produce garbled output. POSIX specifies -c as counting whole characters, but note that GNU coreutils cut currently implements -c identically to -b, so for reliable character-accurate extraction from non-ASCII text use awk or grep -o instead.
How do I extract multiple fields or columns with cut?
Pass a comma-separated list of field numbers to -f to select non-adjacent columns: cut -d ',' -f1,3,5 data.csv extracts fields 1, 3, and 5. For a consecutive range, use a hyphen: -f2-4 extracts fields 2, 3, and 4. You can also mix both notations in a single expression: -f1,3-5,7 selects field 1, fields 3 through 5, and field 7. Remember that fields are numbered from 1, not 0, and cut always outputs them in their original left-to-right order regardless of how you list them.
Why is cut not splitting my file correctly?
The most common causes are: (1) the actual delimiter in your file does not match what you passed to -d — run cat -A yourfile.txt to reveal invisible characters (a ^I indicates a tab, which is the default delimiter when -d is omitted); (2) your data uses a multi-character separator, which cut does not support — switch to awk -F in that case; (3) there are leading or trailing spaces around the delimiter that prevent the field from being recognised as a clean match.
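The cat -A check looks like this (GNU cat; ^I marks a tab, $ marks the end of each line):

```shell
# A hidden tab separator made visible
printf 'EMP-01\tSara\n' | cat -A
# -> EMP-01^ISara$
```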
Does cut modify the original file?
No, never. cut is a read-only tool — it reads from its input and writes to standard output, leaving the source file completely untouched. To save the extracted data to a new file, use output redirection: cut -d ',' -f2 roster.csv > names.txt. To replace the original file you would need to write to a temporary file first and then rename it, or use a tool like sed -i — but that is rarely the right approach for straightforward column extraction.
When should I use awk instead of cut?
Use awk instead of cut when: (1) your delimiter is more than one character, (2) fields are separated by variable amounts of whitespace, (3) you need to reorder or combine columns in the output, (4) you need conditional logic (print only rows where field 4 exceeds 80000), or (5) you need arithmetic on field values. For simple, single-pass column extraction from consistently delimited files, cut is faster to type, easier to read, and slightly more efficient. For complex transformations, awk is the better tool.