The tail command is your go-to utility for viewing the end of files and monitoring logs in real time. For example, when you’re troubleshooting a web server crash and need the last 50 error log entries, watching authentication attempts as they happen, or tracking deployment logs during a live release, tail gives you instant access to the freshest data without opening massive files in an editor.
These sections cover displaying specific line counts, following files in real time with automatic rotation handling, inspecting binary data byte-by-byte, monitoring multiple logs simultaneously, and piping tail output through grep, awk, and sed for powerful filtering workflows. Additional scenarios explain exactly when to use tail -f versus tail -F, how to extract specific fields from access logs, and how to build monitoring commands suited for production cron jobs and deployment scripts.
Installing the tail Command
The tail command ships as part of the GNU coreutils package, which comes pre-installed on virtually every modern Linux distribution. If you encounter a minimal container or embedded system without it, you can add coreutils with a single package command.
Verify tail is available:
tail --version
If the command returns version information (typically GNU coreutils), you’re ready to use tail. Otherwise, install coreutils for your distribution:
Ubuntu and Debian-based distributions:
sudo apt install coreutils
Fedora, RHEL, Rocky Linux, and AlmaLinux:
sudo dnf install coreutils
Arch Linux and Manjaro:
sudo pacman -S coreutils
openSUSE:
sudo zypper install coreutils
Alpine Linux:
sudo apk add coreutils
In practice, you’ll rarely need to install tail manually since coreutils is a fundamental dependency for virtually all Linux systems.
Understand the tail Command
tail Command Basics
Even if you’re new to tail, think of it as a spotlight that shows you the end of a file without loading the entire thing. While a text editor like nano or vim opens the whole file (which can take seconds or minutes for multi-gigabyte logs), tail seeks directly to the end and shows just what you need. Windows users can relate it to running Get-Content -Tail 10 -Wait in PowerShell, where the console streams the latest lines without launching another application.
Next, the basic syntax follows this pattern:
tail [OPTIONS] [FILE]
- OPTIONS: Flags that modify behavior (e.g., -n 20 for the last 20 lines, -f to follow in real time)
- FILE: The file to read. Omit this to read from standard input (useful when piping from other commands)
For instance, tail /var/log/syslog displays the last 10 lines of the system log (10 is the default). Need more context? Add -n 50 to see the last 50 lines. Want live updates? Add -f to watch new entries appear as the system writes them.
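Because tail reads standard input when no FILE is given, it also slots into the end of a pipeline. A quick sketch, using seq as a stand-in for any command that produces output:

```shell
# tail with no FILE argument reads standard input,
# so it can trim the output of any command in a pipeline
seq 1 100 | tail -n 3
# prints:
# 98
# 99
# 100
```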
Essential Options by Task
The table below organizes common options by task, not just alphabetically, so you can quickly find the right flag for your goal:
| Task | Options | What They Do |
|---|---|---|
| View specific number of lines | -n NUMBER | Shows the last NUMBER lines instead of default 10 (e.g., tail -n 100 app.log) |
| View specific bytes | -c BYTES | Shows the last BYTES bytes, useful for binary files or checking file endings (e.g., tail -c 50 data.bin) |
| Follow file in real time | -f | Watches file and prints new lines as they’re written; press Ctrl+C to stop (e.g., tail -f /var/log/nginx/access.log) |
| Follow with rotation handling | -F | Like -f but reopens file if renamed (log rotation safe); equivalent to --follow=name --retry |
| Monitor multiple files | -f file1 file2 | Follows multiple files, printing headers to distinguish sources (e.g., tail -f access.log error.log) |
| Suppress headers | -q | Hides file name headers when viewing multiple files (cleaner output for scripts) |
| Force headers | -v | Always shows file name headers, even for single files (useful for clarity in complex pipelines) |
| Stop following when process exits | --pid PID | Automatically stops tail when process PID terminates (e.g., tail --pid 1234 -f app.log) |
| Adjust polling interval | -s SECONDS | Checks file every SECONDS instead of continuously (saves CPU on slow systems; e.g., tail -s 5 -f log) |
When “optional” matters: Most tail options are truly optional in the sense that tail works fine without them, but scenarios like live log monitoring (-f), handling rotated logs (-F), or inspecting binary data (-c) make specific options essential for those tasks. Ultimately, start with the basics (tail -n and tail -f) and add others as your monitoring workflows demand them.
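One lesser-known twist worth learning early: prefixing the line count with a plus sign flips -n from "last N lines" to "start at line N". A small sketch using a throwaway file:

```shell
# tail -n +2 prints from line 2 onward (skipping a header row),
# rather than printing the last 2 lines
printf 'header\nrow1\nrow2\n' > /tmp/table-demo.txt
tail -n +2 /tmp/table-demo.txt
# prints:
# row1
# row2
```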
Practical Use Cases for the tail Command with Examples
Example 1: View Recent Log Entries
In practice, when you need the latest entries in a log file, tail shows you exactly what you need without loading the entire file. This makes troubleshooting faster, whether you’re diagnosing system crashes or tracking application behavior.
Scenario 1: Troubleshooting a Web Server Crash
You’re troubleshooting an Apache web server crash and need to check the last 20 lines of the error log to find the cause.
Command (Debian/Ubuntu):
tail -n 20 /var/log/apache2/error.log
Command (RHEL/Fedora/Rocky Linux):
tail -n 20 /var/log/httpd/error_log
Output:
[Wed Jan 02 12:45:10.123456] [error] AH00128: File does not exist: /var/www/html/favicon.ico
[Wed Jan 02 12:46:02.654321] [error] AH00037: ModSecurity: Request denied with code 403
Why It Matters:
- Shows recent errors that might have caused the crash.
- Saves time by avoiding large file loads.
Scenario 2: Verifying System Boot Logs
After a reboot, you want to check if any warnings or errors occurred during the startup process.
Command:
tail -n 10 /var/log/syslog
Output:
Jan 02 12:50:00 myhost systemd[1]: Starting Cleanup of Temporary Directories...
Jan 02 12:50:10 myhost systemd[1]: Finished Cleanup of Temporary Directories.
Jan 02 12:50:15 myhost kernel: [12345.678] Warning: Disk usage nearing capacity.
Why It Matters:
- Shows system health right after reboot.
- Helps you catch warnings like low disk space before they become problems.
Scenario 3: Checking Application Logs After an Update
You’ve updated an application and need to ensure the latest changes were applied successfully without introducing errors.
Command:
tail -n 15 /var/log/app.log
Output:
Jan 02 12:55:00 [INFO] Application update started.
Jan 02 12:55:10 [INFO] Feature X deployed successfully.
Jan 02 12:55:15 [ERROR] Feature X: Missing configuration file.
Why It Matters:
- Confirms if updates completed successfully and highlights any errors that need attention.
- Focuses on recent events, making deployment debugging faster.
Scenario 4: Reviewing Recent Authentication Attempts
Similarly, you suspect unauthorized login attempts and want to inspect the most recent entries in the authentication log.
Command (Debian/Ubuntu):
tail -n 10 /var/log/auth.log
Command (RHEL/Fedora/Rocky Linux):
tail -n 10 /var/log/secure
Output:
Jan 02 13:00:30 myhost sshd[1234]: Failed password for invalid user admin from 192.168.1.100
Jan 02 13:01:05 myhost sshd[1235]: Accepted password for user1 from 192.168.1.101
Why It Matters:
- Spots unauthorized login attempts and potential brute-force attacks in real time.
- Shows you successful and failed logins quickly for audit or security purposes.
- Lets you take immediate action, such as blocking malicious IPs or tightening authentication policies.
Always double-check log locations for your distribution. Some appliances move authentication logs to custom paths.
Example 2: Monitor Logs in Real Time
The -f (follow) option lets you watch files in real time. This is crucial for monitoring live log updates, diagnosing ongoing issues, and making sure systems run as expected. Press Ctrl+C when you’re ready to stop following a file.
If you are monitoring log files that might be rotated (e.g., renamed by a log rotation utility), the default -f option will continue to follow the original file descriptor, which might point to the renamed (and no longer active) file. In such cases, use tail -F. The -F option is equivalent to --follow=name --retry; it will follow the filename and automatically reopen the file if it gets recreated or renamed, making it more robust for production log monitoring.
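You can watch the difference without waiting for a real rotation. The sketch below (the temporary paths are arbitrary) simulates a rotation by renaming the log and recreating it; tail -F reattaches to the new file where plain -f would keep following the renamed one:

```shell
#!/bin/sh
# Simulate log rotation and confirm tail -F picks up the recreated file
LOG=/tmp/rotate-demo.log
OUT=/tmp/rotate-demo.out
: > "$LOG"
tail -F "$LOG" > "$OUT" 2>/dev/null &
TAIL_PID=$!
sleep 1
echo "before rotation" >> "$LOG"
sleep 1
mv "$LOG" "$LOG.1"   # rotation renames the active file...
: > "$LOG"           # ...and a fresh file appears under the old name
sleep 2              # give tail time to notice and reopen
echo "after rotation" >> "$LOG"
sleep 1
kill "$TAIL_PID"
cat "$OUT"           # both lines appear, proving tail reattached
```

With plain tail -f, only "before rotation" would be captured, because tail would still be reading the renamed file.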
Scenario 1: Monitoring Web Traffic in Real Time
You’re managing an NGINX web server and need to monitor incoming traffic to identify user activity and potential issues.
Command:
tail -f /var/log/nginx/access.log
Output (Real-Time Updates):
192.168.1.2 - - [02/Jan/2025:13:05:10 +0000] "GET /index.html HTTP/1.1" 200 1234
192.168.1.3 - - [02/Jan/2025:13:05:15 +0000] "POST /api/login HTTP/1.1" 401 512
Why It Matters:
- Lets you spot problematic trends immediately, such as repeated 401 errors from specific IPs.
- Helps identify high-traffic endpoints or check server performance under load.
Scenario 2: Debugging Application Deployments
During a live application deployment, you want to monitor log files for errors and confirm that the application starts successfully.
Command:
tail -f /var/log/app.log
Output (Real-Time Updates):
Jan 02 13:06:00 [INFO] Starting application...
Jan 02 13:06:10 [INFO] Listening on port 8080.
Jan 02 13:06:15 [ERROR] Failed to load configuration file.
Why It Matters:
- Catches and fixes deployment issues quickly, minimizing downtime.
- Shows a clear timeline of events during startup, helping you match logs with user-reported problems.
Scenario 3: Tracking Security Events
You suspect unauthorized access attempts and need to monitor failed login attempts in real time.
Command (Debian/Ubuntu):
tail -f /var/log/auth.log
Command (RHEL/Fedora/Rocky Linux):
tail -f /var/log/secure
Output (Real-Time Updates):
Jan 02 13:07:10 myhost sshd[1234]: Failed password for invalid user admin from 192.168.1.100
Jan 02 13:07:15 myhost sshd[1235]: Failed password for root from 10.0.0.5
Why It Matters:
- Shows potential brute-force attacks or unauthorized access attempts as they happen.
- Lets you act quickly, such as blocking malicious IP addresses or tightening security settings.
On systems that rely heavily on the systemd journal, run journalctl -f or journalctl -u servicename -f for a similar live feed.
Scenario 4: Analyzing Temporary Files
Additionally, you’re debugging a script that writes logs to a temporary file, and you need to monitor the file’s output as it’s updated.
Command:
tail -f /tmp/debug-output.log
Output:
[INFO] Script execution started.
[WARNING] Missing configuration detected.
[ERROR] Operation failed due to invalid input.
Why It Matters:
- Temporary files are often used by scripts for debugging or intermediate storage.
- Monitoring these files in real time helps you catch and resolve issues during execution without needing persistent logs.
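When a follow should end on its own, --pid ties tail's lifetime to another process. The sketch below (the paths and the loop are illustrative) starts a short background job, then follows its log only for as long as the job runs:

```shell
# Follow a log only while the writing process is alive:
# tail exits on its own shortly after the background job finishes,
# so no Ctrl+C is needed in scripts
: > /tmp/job-demo.log
( for i in 1 2 3; do echo "step $i" >> /tmp/job-demo.log; sleep 0.3; done ) &
JOB_PID=$!
tail --pid "$JOB_PID" -f /tmp/job-demo.log
```

This pattern is handy in deployment scripts: stream the log while an installer or migration runs, and let tail terminate automatically when it completes.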
Example 3: Analyze Files by Bytes
The -c option shows you the last portion of a file by bytes instead of lines. This helps when working with binary files, debugging corrupted data, or analyzing truncated logs.
Scenario 1: Debugging a Corrupted Binary File
For example, you need to inspect the last 100 bytes of a corrupted binary log file to identify potential data loss or errors.
Command:
tail -c 100 /var/log/database.bin
Output (Raw Bytes, example shown as Hexadecimal for readability):
EF 4B 21 35 00 7F 3D 2A ... [truncated]
tail -c outputs raw bytes. To interpret them as hexadecimal, pipe the command into xxd or hexdump -C, for example tail -c 100 /var/log/database.bin | xxd.
Why It Matters:
- Pinpoints the specific byte range causing errors in file processing.
- Gives you a starting point for recovery tools or debugging workflows.
Scenario 2: Inspecting Truncated Log Files
Likewise, you are analyzing an incomplete log file and need to verify if the final portion contains critical entries.
Command:
tail -c 200 /var/log/server.log
Output:
Jan 02 15:00:00 Service started
Jan 02 15:05:45 Connection to database lost
Jan 02 15:06:00 Attempting reconnection...
Why It Matters:
- Shows quickly whether key information exists in a truncated log, saving time during troubleshooting.
- Lets you inspect only relevant parts of oversized logs efficiently.
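A tiny but common use of -c deserves a mention here: checking whether a file ends with a trailing newline, which matters to tools like diff and POSIX-strict parsers. A sketch on a throwaway file:

```shell
# The last byte of a well-terminated text file is a newline.
# Command substitution strips trailing newlines, so the result is
# empty for a properly terminated file and non-empty otherwise.
printf 'complete line\nno newline here' > /tmp/ending-demo.txt
if [ -n "$(tail -c 1 /tmp/ending-demo.txt)" ]; then
  echo "missing trailing newline"
fi
# prints: missing trailing newline
```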
Scenario 3: Validating Binary Data
Additionally, you’re debugging a networked application that writes raw binary payloads to disk, and analyzing the last bytes can confirm if data was transmitted correctly.
Command:
tail -c 50 /var/log/network_payload.bin | xxd
Output (Hexadecimal View):
00000000: 7856 3412 ffaa 99cc ... [truncated]
Why It Matters:
- Helps identify anomalies or corrupted bytes in network transmissions.
- Aids in debugging file transfer protocols or compression algorithms.
Example 4: Monitor Multiple Files
Furthermore, the tail command lets you monitor multiple files at once, making it invaluable for troubleshooting systems with interconnected logs. Each file’s output appears under a header that clearly marks its source.
Scenario 1: Monitoring Access and Error Logs Side-by-Side
You are debugging a web application and need to monitor NGINX access and error logs concurrently to identify how user requests correlate with errors.
Command:
tail -f /var/log/nginx/access.log /var/log/nginx/error.log
Output:
==> /var/log/nginx/access.log <==
192.168.1.1 - - [02/Jan/2025:14:30:11] "GET /index.html HTTP/1.1" 200 1234
192.168.1.2 - - [02/Jan/2025:14:30:15] "POST /api/login HTTP/1.1" 403 512
==> /var/log/nginx/error.log <==
2025/01/02 14:30:11 [error] 1234#0: *1 open() "/var/www/html/missing.html" failed (2: No such file or directory)
2025/01/02 14:30:15 [error] 5678#0: *2 client sent invalid request body
Why It Matters:
- This workflow helps diagnose issues in real time during heavy traffic or while debugging deployments.
- Matching access and error logs helps pinpoint which user requests trigger server errors.
Scenario 2: Comparing Logs Across Services
Likewise, you’re managing a multi-service application and need to monitor both the application’s backend logs and database logs to troubleshoot an issue.
Command:
tail -f /var/log/app.log /var/log/db.log
Output:
==> /var/log/app.log <==
Jan 02 14:31:00 [INFO] User submitted form on /contact.
Jan 02 14:31:02 [ERROR] Database query timeout.
==> /var/log/db.log <==
Jan 02 14:31:00 Query received: SELECT * FROM contacts WHERE id=5;
Jan 02 14:31:02 Query failed: Connection timeout.
Why It Matters:
- Shows how one service’s issue impacts another by monitoring both logs at once.
- This is crucial for troubleshooting complex systems with dependent services.
Scenario 3: Tracking System and Security Logs
Finally, you’re investigating a security breach and need to analyze both system logs and authentication logs in real time.
Command (Debian/Ubuntu):
tail -f /var/log/syslog /var/log/auth.log
Command (RHEL/Fedora/Rocky Linux):
tail -f /var/log/messages /var/log/secure
Output:
==> /var/log/syslog <==
Jan 02 14:32:15 myhost systemd[1]: Starting Update Job...
Jan 02 14:32:17 myhost kernel: [12345.678] Warning: Unusual disk activity detected.
==> /var/log/auth.log <==
Jan 02 14:32:16 myhost sshd[1238]: Failed password for invalid user guest from 192.168.1.105
Jan 02 14:32:20 myhost sshd[1239]: Accepted password for admin from 10.0.0.10
Why It Matters:
- Combining logs lets you see system-level activities (like disk warnings) alongside user-level actions (like login attempts).
- This complete view helps you identify and mitigate potential breaches.
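The -q and -v flags from the options table control those ==> headers. A quick sketch with two throwaway files shows the contrast:

```shell
# Default multi-file output separates sources with ==> name <== headers;
# -q suppresses them, which is handy when piping into other tools
printf 'alpha\n' > /tmp/a-demo.log
printf 'beta\n'  > /tmp/b-demo.log
tail -n 1 /tmp/a-demo.log /tmp/b-demo.log     # headers between files
tail -q -n 1 /tmp/a-demo.log /tmp/b-demo.log  # just the lines
```

Use -q whenever the combined stream feeds a parser that would choke on the header lines, and -v when you want a header even for a single file.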
Example 5: Filter Log Entries with grep
Furthermore, the grep command is a powerful text search tool that, when combined with tail, lets you filter log entries precisely in real time. This workflow helps when working with large logs where finding specific information quickly matters.
Scenario 1: Monitoring Login Failures in Real Time
For example, you are investigating unauthorized access attempts on your system and want to track login failures as they happen.
Command (Debian/Ubuntu):
tail -f /var/log/auth.log | grep --line-buffered "Failed password"
The --line-buffered flag forces grep to flush each matching line immediately so real-time monitoring stays responsive.
Command (RHEL/Fedora/Rocky Linux):
tail -f /var/log/secure | grep --line-buffered "Failed password"
Output:
Jan 02 14:15:30 myhost sshd[1234]: Failed password for invalid user admin from 192.168.1.100
Jan 02 14:16:05 myhost sshd[1235]: Failed password for root from 10.0.0.5
Why It Matters:
- This workflow surfaces suspicious login attempts immediately, letting you respond faster to potential security threats.
- You can monitor specific keywords (like “Failed password”) in real time without sifting through irrelevant log entries.
Scenario 2: Tracking Specific User Activity
Additionally, you want to filter authentication logs for actions involving a specific user, such as johndoe.
Command (Debian/Ubuntu):
tail -n 100 /var/log/auth.log | grep "johndoe"
Command (RHEL/Fedora/Rocky Linux):
tail -n 100 /var/log/secure | grep "johndoe"
Output:
Jan 02 14:17:00 myhost sshd[1236]: Accepted password for johndoe from 192.168.1.101
Jan 02 14:19:15 myhost sshd[1237]: Failed password for johndoe from 10.0.0.6
Why It Matters:
- Narrowing the scope to a single user allows you to track their activity comprehensively, such as login attempts or suspicious behavior.
Scenario 3: Identifying IP-Based Patterns
Likewise, you are troubleshooting a DDoS attack and need to isolate repeated login attempts from a specific IP address.
Command (Debian/Ubuntu):
tail -f /var/log/auth.log | grep --line-buffered "192.168.1.100"
Command (RHEL/Fedora/Rocky Linux):
tail -f /var/log/secure | grep --line-buffered "192.168.1.100"
Output:
Jan 02 14:15:30 myhost sshd[1234]: Failed password for invalid user admin from 192.168.1.100
Jan 02 14:15:35 myhost sshd[1235]: Failed password for root from 192.168.1.100
Why It Matters:
- Filtering by IP helps identify malicious sources and aids in blocking attackers at the firewall or server level.
- This technique is vital during live incident response scenarios.
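Building on these filters, a short pipeline can turn raw matches into a ranked list of offending IPs. The sketch below runs against a hypothetical sample file standing in for your real auth log (the field layout assumes standard sshd messages):

```shell
# Count failed-password attempts per source IP over recent history.
# /tmp/auth-sample.log stands in for /var/log/auth.log or /var/log/secure.
cat > /tmp/auth-sample.log <<'EOF'
Jan 02 14:15:30 myhost sshd[1234]: Failed password for invalid user admin from 192.168.1.100
Jan 02 14:15:35 myhost sshd[1235]: Failed password for root from 192.168.1.100
Jan 02 14:16:05 myhost sshd[1236]: Failed password for root from 10.0.0.5
Jan 02 14:17:00 myhost sshd[1237]: Accepted password for johndoe from 192.168.1.101
EOF
tail -n 1000 /tmp/auth-sample.log \
  | grep "Failed password" \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | sort | uniq -c | sort -rn
# the worst offender (192.168.1.100, seen twice) is printed first
```

The top of this list is a ready-made input for firewall rules or fail2ban-style tooling.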
Example 6: Extract Data with awk and sed
Moreover, enhance tail workflows by processing its output with powerful text-processing tools like awk and sed. These utilities let you filter, manipulate, and format log data dynamically, making it easier to extract useful insights.
Scenario 1: Extracting Specific Fields from Log Data
For instance, you are monitoring an NGINX access log and want to isolate the IP addresses and URLs being requested.
Command:
tail -f /var/log/nginx/access.log | awk '{print $1, $7}'
Output:
192.168.1.1 /home.html
10.0.0.2 /login.html
Why It Matters: This workflow helps identify which clients (IP addresses) access specific resources (URLs) in real time. For example:
- Detecting unusual traffic patterns or spikes in requests for specific endpoints.
- Debugging issues caused by requests to specific files or APIs.
The awk '{print $1, $7}' command in this example assumes the standard NGINX combined log format, where fields are separated by spaces, the IP address is the first field ($1), and the requested URL path is the seventh field ($7). If your NGINX log_format is different, you may need to adjust the field numbers accordingly.
Scenario 2: Normalizing Log Formats for Analysis
Meanwhile, you’re comparing error patterns across different application versions, but the log format changed slightly between releases. Using sed to normalize timestamp formats helps you focus on actual differences.
Command:
tail -n 50 /var/log/app.log | sed 's/[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}/DATE/g'
Output:
DATE ERROR: Unable to connect to database.
DATE ERROR: Timeout occurred.
Why It Matters:
- Replacing variable data like timestamps or request IDs makes it easier to spot patterns when comparing logs from different time periods or sources.
- This technique works great when using diff tools or frequency analysis to find recurring issues.
When using sed for pattern matching, avoid non-POSIX shorthands like \s: GNU sed accepts it as an extension, but BSD sed treats it as a literal ‘s’. Use [[:space:]] instead for cross-platform compatibility. Need a refresher on pattern replacements? Review our sed command guide for more substitution examples.
Scenario 3: Combining Multiple Filters
Finally, you need to extract full error messages, including their timestamps, from the last 50 lines of a system log, filtering for lines containing ‘error’.
Command:
tail -n 50 /var/log/syslog | awk '/error/ {print $0}'
Output:
Jan 02 14:15:12 myhost systemd[1]: error: Disk space low.
Jan 02 14:16:34 myhost kernel: error: Network unreachable.
Why It Matters:
- This tailored command extracts only relevant error messages while retaining critical metadata (timestamps).
- It helps pinpoint issues faster by showing exactly when and where errors occurred.
Example 7: Advanced tail Techniques
Overall, once you have the basics down, a few extra switches can streamline daily operations:
- tail --pid 1234 -f /var/log/app.log: Automatically stops following when process 1234 exits, ideal during service restarts or batch jobs.
- tail -s 2 -f logfile: Polls the file every two seconds instead of aggressively checking for changes, which saves CPU on quiet systems.
- tail -n +1 -F /var/log/app.log: Dumps the file from the first line and then keeps following it, reopening the handle automatically if log rotation swaps the underlying file.
- Journald tip: if services log exclusively to the systemd journal, use journalctl -f or journalctl -u servicename -f, then pipe into tools like grep or awk as needed.
Common Pitfalls and Troubleshooting
- Buffering delays when piping: If you notice lag when piping tail output through grep or awk, some programs buffer output. Use grep --line-buffered or stdbuf -oL to force line buffering.
- Log rotation gotchas: tail -f follows the file descriptor, not the name. If logs rotate and the file gets replaced, use tail -F (or --follow=name --retry) to track the filename instead.
- Performance on huge files: On multi-gigabyte logs, tail still performs well because it seeks to the end rather than reading the entire file. For byte-level inspection of massive files, combine -c with tools like xxd or hexdump.
- Permission errors: Many system logs require root access. Use sudo tail -f /var/log/syslog when you encounter “Permission denied” messages.
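The stdbuf fix for the buffering pitfall is easiest to see in pipeline form. The sketch below uses a finite printf as stand-in input; with tail -f feeding the same pipeline, -oL keeps each grep match flowing to awk immediately instead of waiting for an output buffer to fill:

```shell
# stdbuf -oL forces the middle command to flush each output line;
# without it, grep's output is block-buffered when writing to a pipe
printf 'INFO start\nERROR disk full\nINFO done\n' \
  | stdbuf -oL grep ERROR \
  | awk '{print $2, $3}'
# prints: disk full
```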
Conclusion
The tail command delivers instant visibility into log files, deployment outputs, and data streams without the overhead of opening entire files. By mastering options like -f for real-time monitoring, -F for rotation-safe following, -c for byte-level inspection, and piping workflows with grep, awk, and sed, you can diagnose issues faster, track security events as they occur, and build reliable monitoring scripts that scale from single servers to production fleets.