Exposing a Kubernetes cluster over VPN

Bao Nguyen
4 min read · Feb 8, 2017


Another exercise I worked on in the last few weeks was setting up and testing a Kubernetes cluster, and one of the things that bothered me is that I could not access the Kubernetes pods and services directly; I had to use kubectl port forwarding, which is really inconvenient. If you are not familiar with Kubernetes: it is as if Kubernetes sets up another network inside your cluster. You can read the Kubernetes networking documentation for more detail. So I set a small challenge for myself: make Kubernetes accessible over VPN, meaning that once you connect to a VPN gateway you can easily reach pods and services.
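
For example, without the VPN, reaching a single pod's port looks something like this (the pod name here is hypothetical):

# forward local port 8080 to port 80 of one specific pod
kubectl port-forward my-app-pod-1234 8080:80
# then, in another terminal
curl http://localhost:8080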

The deployment can be summarized as below:
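
Roughly, using the addresses that appear later in this post:

  VPN client (10.8.0.0/24)
        |
  VPN node / OpenVPN gateway (10.1.0.49), running flanneld and kube-proxy
        |
  Kubernetes cluster (pod and service network under 10.233.0.0/16)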

The goal: after connecting to the VPN gateway, a user should be able to reach pod IPs and service IPs directly.

There is an easy way to do this: set up a VPN server inside the Kubernetes cluster, then expose it via NodePort. But I wanted to go the hard way, not because I like hard things, but because it would help me understand more about Kubernetes networking. And it really did!
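
For reference, the easy path would look roughly like this, assuming you already have an OpenVPN deployment running in the cluster (the deployment name is illustrative):

# expose an in-cluster OpenVPN deployment on a randomly assigned node port
kubectl expose deployment openvpn --type=NodePort --port=1194 --protocol=UDP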

In summary, there are three steps: (1) connect your VPN node to the Kubernetes pod network, (2) connect your VPN node to Kubernetes services, and (3) adjust your VPN configuration accordingly. For context: I am running Kubernetes 1.5.2 on CoreOS with the Flannel network add-on, and I am using OpenVPN for the VPN server.

Connect the VPN node to the Kubernetes cluster

Kubernetes sets up an overlay network and uses it to manage the pod network. In my case, this is equivalent to connecting my VPN node to the Flannel overlay network, which is quite easy. You need to download the flannel binary, identify the correct etcd nodes, and pass the right runtime parameters. For me, this translates to the following command:

./flanneld \
  -etcd-endpoints https://10.1.128.163:2379,https://10.1.128.131:2379,https://10.1.128.104:2379 \
  -etcd-prefix /cluster.local/network \
  -etcd-cafile /opt/flannel-config/ca_cert.crt \
  -etcd-certfile /opt/flannel-config/cert.crt \
  -etcd-keyfile /opt/flannel-config/key.pem \
  -public-ip 10.1.0.49

Some explanation: I am using multiple etcd servers with custom certs, the prefix is /cluster.local/network, and the VPN node's address is 10.1.0.49. After running the command, it takes about 10 seconds, and then I can start pinging the 10.233.0.0/16 subnet, which is the pod subnet of the Kubernetes cluster.
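
To sanity-check the overlay from the VPN node, you can look at the route flanneld installed and ping any pod (the pod IP below is just an example; pick a real one from kubectl get pods -o wide):

# flanneld should have added a route for the pod subnet via the flannel device
ip route | grep 10.233
# ping an example pod IP
ping 10.233.64.5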

Connect the VPN node to Kubernetes services

After connecting to the Flannel network, you still cannot access Kubernetes services. Why? Because a service cluster IP is a virtual IP: it is managed by kube-proxy and routed via iptables. You can read more detail in the Services documentation. So what I did is download kube-proxy and start it on the VPN node. You will need to make sure your kubeconfig is available so that kube-proxy can connect to the API server:

kube-proxy \
  --kubeconfig=./kube-config/config.yaml \
  --bind-address=10.1.0.49 \
  --cluster-cidr=10.233.64.0/18 \
  --proxy-mode=iptables \
  --masquerade-all

After a few seconds, you will be able to reach service cluster IPs from the VPN node.
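
A quick way to verify is to hit a service cluster IP directly. For instance, the API server's service IP is usually the first address of the service range; on my setup that would be 10.233.0.1, but check it first:

# confirm the cluster IP of the built-in kubernetes service
kubectl get svc kubernetes
# then curl it from the VPN node (self-signed cert, hence -k)
curl -k https://10.233.0.1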

Adjust the VPN configuration

So now, from the VPN node, we can reach Kubernetes pods and services. I just need to adjust the OpenVPN server configuration to declare that the Kubernetes subnet is reachable over the VPN:

push "route 10.233.0.0 255.255.0.0"

However, you will soon notice that from a VPN client you still cannot reach the service IPs. The main reason is that they are not real IPs; they are handled via iptables. So what I did is add an SNAT rule to rewrite the packets so that they will be handled by iptables on the VPN node, as follows:

iptables -t nat -I POSTROUTING -s 10.8.0.0/24 -d 10.233.64.0/18 -j SNAT --to-source 10.233.110.0

One note: the IP 10.233.110.0 is the VPN node's flannel IP. Additionally, you can push a DNS option so that you can use Kubernetes DNS over the VPN.
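
For example, in the OpenVPN server config (the DNS address below is an assumption; use the cluster IP of your kube-dns service, which you can find with kubectl get svc -n kube-system):

# example kube-dns cluster IP; replace with yours
push "dhcp-option DNS 10.233.0.3"
push "dhcp-option DOMAIN cluster.local"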

Conclusion

After these three simple steps (which actually took me almost half a day to figure out, along with how everything works...), I managed to expose the whole cluster over VPN. It is quite convenient, since I no longer have to do port forwarding or run a SOCKS proxy. I believe this is reusable for your cluster as well, perhaps with small differences if you use Weave or another network plugin. I hope you find this helpful, and feel free to share your thoughts in the comments section.

Originally published at www.nqbao.com on February 8, 2017.
