Kubernetes preStop hook doesn't work with env variables - Docker

I have a deployment template as below, but the preStop hook is never executed at all.
The idea here is to set the ZooKeeper node offline before the pod is terminated.
I am running kubectl rollout to restart the pods, and when the old pod terminates, the preStop hook is not run. Could someone please check what's wrong?
Basically, how is preStop executed in the case of a successful stop? I need this feature because ZooKeeper is involved here and the API connects to ZooKeeper to send requests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abcd
  labels:
    app: abcd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: abcd
  template:
    metadata:
      labels:
        app: abcd
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      # terminationGracePeriodSeconds: 1
      containers:
        - name: se
          image: "xxx"
          lifecycle:
            preStop:
              exec:
                command: ["zookeepercli","--servers","zk-hs", "-c", "set", "$HOSTNAME", "offline"]
          ports:
            - containerPort: 2345
        - name: pe-1
          image: "xxx"
          lifecycle:
            preStop:
              exec:
                command: ["zookeepercli","--servers","zk-hs", "-c", "set", "$HOSTNAME", "offline"]
          ports:
            - containerPort: 2313

As user2511126 mentioned in their comment:
the preStop hook doesn't use env variables. I moved to a bash script and it works now
According to the Kubernetes documentation:
PreStop
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler.
A more detailed description of the termination behavior can be found in Termination of Pods.
No parameters can be passed to the handler; this includes environment variables.
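A common workaround, sketched below under the assumption that the image ships /bin/sh and has zookeepercli on its PATH, is to wrap the hook command in a shell so that $HOSTNAME is expanded at run time inside the container instead of being passed as a literal string:

lifecycle:
  preStop:
    exec:
      # The exec handler does not run a shell and performs no variable
      # substitution, so expand $HOSTNAME explicitly via sh -c.
      command: ["/bin/sh", "-c", "zookeepercli --servers zk-hs -c set $HOSTNAME offline"]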

Related

Multi Container ASP.NET Core app in a Kubernetes Pod gives error address already in use

I have an ASP.NET Core multi-container Docker app which I am now trying to host on a Kubernetes cluster on my local PC. But unfortunately one container starts and the other gives the error "address already in use".
The Deployment file is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
        - name: cmultiapp
          image: multiapp
          imagePullPolicy: Never
          ports:
            - containerPort: 80
        - name: cmultiapi
          image: multiapi
          imagePullPolicy: Never
          ports:
            - containerPort: 81
The full log of the failing container is:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:80: address already in use.
---> Microsoft.AspNetCore.Connections.AddressInUseException: Address already in use
---> System.Net.Sockets.SocketException (98): Address already in use
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint endPoint, ConnectionDelegate connectionDelegate, EndpointConfig endpointConfig)
Note that I already tried assigning another port to that container in the YAML file:
ports:
  - containerPort: 81
But it doesn't seem to work. How can I fix it?
To quote this answer: https://stackoverflow.com/a/62057548/12201084
containerPort as part of the pod definition is for informational purposes only.
This means that setting containerPort does not have any influence on what port the application opens. You can even skip it and not set it at all.
If you want your application to open a specific port, you need to tell the application itself. That's usually done with flags, env vars or config files. Setting a port in the pod/container YAML definition won't change a thing.
You have to remember that the k8s network model is different from Docker's and Docker Compose's model.
So why does the containerPort field exist if it doesn't do anything? - you may ask.
Well, actually that's not completely true. Its main purpose is indeed informational/documentation, but it may also be used with Services: you can name a port in the pod definition and then use that name to reference the port in the Service definition YAML (this only applies to the targetPort field).
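For illustration, a minimal sketch of that named-port pattern (the names my-app and http here are made up, the image is the OP's):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: multiapp
      ports:
        - name: http            # named port, purely descriptive on the pod side
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: http          # refers to the containerPort named "http" above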
Check whether your images expose or try to use the same port (look at the images' Dockerfiles).
I suppose both of your images may be trying to start something on the same port, so the first container starts fine, but when the second container is created it tries to use the same port and gets the bind: address already in use error.
You can look at the pod logs for each of your containers (with kubectl logs <pod_name> <container_name>) to confirm this.
I tried your YAML with one of my Docker images (which starts a server on port 8080), and after applying the YAML below I got the same error you got.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
        - name: cmultiapp
          image: shahincsejnu/httpapiserver:v1.0.5
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
        - name: cmultiapi
          image: shahincsejnu/httpapiserver:v1.0.5
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
I looked at the log of the first container, which ran successfully, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapp, and the result is:
int port : :8080
start called
Then I looked at the log of the second container, which crashed, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapi, and saw the error below:
int port : :8080
start called
2021/03/20 13:49:24 listen tcp :8080: bind: address already in use # this is the reason of the error
So, I suppose your images also do something like that.
What works
The YAMLs below ran both containers successfully:
1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
        - name: cmultiapp
          image: shahincsejnu/httpapiserver:v1.0.5
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
        - name: cmultiapi
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 80
2.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
        - name: cmultiapp
          image: shahincsejnu/httpapiserver:v1.0.5
          imagePullPolicy: Always
          ports:
            - containerPort: 80
        - name: cmultiapi
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
If you have a Docker Compose YAML, please use the Kompose tool to convert it into Kubernetes objects.
Below is the documentation link
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
Please use kubectl explain to understand every field of your deployment YAML.
As can be seen in the explanation for ports below, the ports list in the deployment YAML is primarily informational.
Since both containers in the Pod share the same network namespace, the processes running inside the containers cannot use the same port.
kubectl explain deployment.spec.template.spec.containers.ports

KIND:     Deployment
VERSION:  apps/v1

RESOURCE: ports <[]Object>

DESCRIPTION:
     List of ports to expose from the container. Exposing a port here gives the
     system additional information about the network connections a container
     uses, but is primarily informational. Not specifying a port here DOES NOT
     prevent that port from being exposed. Any port which is listening on the
     default "0.0.0.0" address inside a container will be accessible from the
     network. Cannot be updated.

     ContainerPort represents a network port in a single container.

FIELDS:
   containerPort    <integer> -required-
     Number of port to expose on the pod's IP address. This must be a valid port
     number, 0 < x < 65536.

   hostIP   <string>
     What host IP to bind the external port to.

   hostPort <integer>
     Number of port to expose on the host. If specified, this must be a valid
     port number, 0 < x < 65536. If HostNetwork is specified, this must match
     ContainerPort. Most containers do not need this.

   name     <string>
     If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
     named port in a pod must have a unique name. Name for the port that can be
     referred to by services.

   protocol <string>
     Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
For further help, please provide the Dockerfiles for both images, and the Docker Compose files, docker run commands, or docker service create commands for the existing multi-container Docker application.
I solved this by using environment variables and assigning the ASP.NET URL to port 81.
- name: cmultiapi
  image: multiapi
  imagePullPolicy: Never
  ports:
    - containerPort: 81
  env:
    - name: ASPNETCORE_URLS
      value: http://+:81
I would also like to mention the URL where I got the necessary help. Link is here.

Eventstore doesn't work in Kubernetes (but works in Docker)

I want to run Eventstore on a Kubernetes node. I start the node with minikube start, then I apply this YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventstore-deployment
spec:
  selector:
    matchLabels:
      app: eventstore
  replicas: 1
  template:
    metadata:
      labels:
        app: eventstore
    spec:
      containers:
        - name: eventstore
          image: eventstore/eventstore
          ports:
            - containerPort: 1113
              protocol: TCP
            - containerPort: 2113
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: eventstore
spec:
  selector:
    app: eventstore
  ports:
    - protocol: TCP
      port: 1113
      targetPort: 1113
---
apiVersion: v1
kind: Service
metadata:
  name: eventstore-dashboard
spec:
  selector:
    app: eventstore
  ports:
    - protocol: TCP
      port: 2113
      targetPort: 2113
      nodePort: 30113
  type: NodePort
The deployment, the replica set and the pod start, but nothing happens: Eventstore doesn't print anything to the log and I can't open its dashboard. Other services also can't connect to eventstore:1113. There are no errors and the pod doesn't crash.
The only thing I see in the logs is "The selected container has not logged any messages yet".
I've tried a clean vanilla minikube node with different VM drivers, and also a node with Ambassador + Linkerd configured. The results are the same.
But when I run Eventstore in Docker with this YAML file via docker-compose:
eventstore:
  image: eventstore/eventstore
  ports:
    - '1113:1113'
    - '2113:2113'
Everything works fine: Eventstore writes to the logs, other services can connect to it, and I can open its dashboard on port 2113.
UPDATE: Eventstore started working about 30-40 minutes after deployment. I've tried several times and had to wait every time. Other pods start working almost immediately (30 secs - 1 min) after deployment.
As @ligowsky confirmed in the comment section, the issue was caused by VM performance. Posting this as Community Wiki for better visibility.
Minikube by default runs with 2 CPUs and 2048 MB of memory. More details can be found here.
You can change this if your VM has more resources.
- During Minikube start
$ sudo minikube start --cpus 2 --memory 8192 --vm-driver=<driverType>
- When Minikube is already running (however, Minikube needs to be restarted for the change to take effect):
$ minikube config set memory 4096
⚠️ These changes will take effect upon a minikube delete and then a minikube start
More commands can be found in Minikube docs.
In my case, when the Minikube resources were 4 CPUs and 8192 MB of memory, I didn't have any issues with Eventstore.
OP's Solution
OP used Kind to run eventstore deployment.
Kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind is primarily designed for testing Kubernetes 1.11+.
Kind documentation can be found here.
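For reference, a minimal kind setup might look like the sketch below (the file name and single-node layout are assumptions, not the OP's exact configuration); the cluster is then created with kind create cluster --config kind-config.yaml:

# kind-config.yaml - a single control-plane node; kind nodes run as Docker
# containers and share the resources of the local Docker daemon rather than
# a fixed-size VM, which avoids the Minikube CPU/memory limits described above.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane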

Kubernetes and Docker: how to let two services communicate correctly

I have two Java microservices (caller.jar, which calls called.jar).
We can set the caller service's HTTP port through an env var CALLERPORT and the address of the called service through an env var CALLEDADDRESS.
So caller uses two env vars.
We must also set the called service's env var CALLEDPORT in order to set the specific HTTP port on which the called service listens for HTTP requests.
I don't know exactly how to simply expose these variables from a Dockerfile, in order to set them using Kubernetes.
Here is how I made the two Dockerfiles:
Dockerfile of caller
FROM openjdk:8-jdk-alpine
# ENV CALLERPORT (it's own port)
# ENV CALLEDADDRESS (the other service address)
ADD caller.jar /
CMD ["java", "-jar", "caller.jar"]
Dockerfile of called
FROM openjdk:8-jdk-alpine
# ENV CALLEDPORT (it's own port)
ADD called.jar /
CMD ["java", "-jar", "called.jar"]
With these I've made two Docker images:
myaccount/caller
myaccount/called
Then I've made the two deployment YAMLs so that K8s deploys (on minikube) the two microservices using replicas and load balancers.
deployment-caller.yaml
apiVersion: v1
kind: Service
metadata:
  name: caller-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: caller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: caller
  labels:
    app: caller
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: caller
      tier: caller
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: caller
        tier: caller
    spec:
      containers:
        - image: myaccount/caller
          name: caller
          env:
            - name: CALLERPORT
              value: "8080"
            - name: CALLEDADDRESS
              value: called-loadbalancer # WHAT TO PUT HERE?!
          ports:
            - containerPort: 8080
              name: caller
And deployment-called.yaml
apiVersion: v1
kind: Service
metadata:
  name: called-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
  selector:
    app: called
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: called
  labels:
    app: called
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: called
      tier: called
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: called
        tier: called
    spec:
      containers:
        - image: myaccount/called
          name: called
          env:
            - name: CALLEDPORT
              value: "8081"
          ports:
            - containerPort: 8081
              name: called
IMPORTANT:
The individual services work well when called on their own (such as calling a healthcheck endpoint), but when calling the endpoint that involves communication between the two services, I get this error:
java.net.UnknownHostException: called
The pods are correctly running and active, but I guess the problem is the part of the deployment.yaml in which I must define how to find the target service, i.e. here:
spec:
  containers:
    - image: myaccount/caller
      name: caller
      env:
        - name: CALLERPORT
          value: "8080"
        - name: CALLEDADDRESS
          value: called-loadbalancer # WHAT TO PUT HERE?!
      ports:
        - containerPort: 8080
          name: caller
Neither called, nor called-loadbalancer, nor http://caller works when put in that line of the deployment.yaml. Here is my current state:
kubectl get pods,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/called-855cc4d89b-4gf97 1/1 Running 0 3m23s 172.17.0.4 minikube <none> <none>
pod/called-855cc4d89b-6268l 1/1 Running 0 3m23s 172.17.0.5 minikube <none> <none>
pod/caller-696956867b-9n7zc 1/1 Running 0 106s 172.17.0.6 minikube <none> <none>
pod/caller-696956867b-djwsn 1/1 Running 0 106s 172.17.0.7 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/called-loadbalancer LoadBalancer 10.99.14.91 <pending> 8081:30161/TCP 171m app=called
service/caller-loadbalancer LoadBalancer 10.107.9.108 <pending> 8080:30078/TCP 65m app=caller
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 177m <none>
So what should I put in that line?
The short answer is that you don't need to expose them in the Dockerfile. You can set any environment variables you want when you start a container; they don't have to be specified upfront in the Dockerfile.
You can verify this by starting a container using 'docker run' with '-e' to set env vars and '-it' to get an interactive session. Then echo the value of your env var and you'll see it is set.
You can also get a terminal session in one of the containers of your running Kubernetes Pod with 'kubectl exec' (https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/). From there you can echo environment variables to see that they are set. You can see them more quickly with 'kubectl describe pod <pod-name>' after getting the pod name with 'kubectl get pods'.
Since you are having problems, you also want to check whether your services are working correctly. Since you are using minikube, you can run 'minikube service <service-name>' to check that they can be accessed externally. You'll also want to check internal access - see Accessing spring boot controller endpoint in kubernetes pod.
Your approach of using service names and ports is valid. With a bit of debugging you should be able to get it working. Your setup is similar to an illustration I did in https://dzone.com/articles/kubernetes-namespaces-explained so referring to that might help (except you are using env vars directly instead of through a configmap but it amounts to the same).
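For illustration, a minimal sketch of the ConfigMap variant mentioned above (the name caller-config is made up):

apiVersion: v1
kind: ConfigMap
metadata:
  name: caller-config
data:
  CALLERPORT: "8080"
  CALLEDADDRESS: called-loadbalancer

and in the caller container spec the env list would be replaced by:

envFrom:
  - configMapRef:
      name: caller-config   # injects every key of the ConfigMap as an env var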
I think that in the caller you are injecting the wrong port in the env var - you are putting the caller's own port and not the port of what it is trying to call.
To access services inside Kubernetes you should use these DNS names:
http://caller-loadbalancer.default.svc.cluster.local:8080
http://called-loadbalancer.default.svc.cluster.local:8081
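A minimal sketch of how that could be wired into the caller's container spec (assuming both Services are in the default namespace and the called Service keeps port 8081):

env:
  - name: CALLERPORT
    value: "8080"
  - name: CALLEDADDRESS
    # fully qualified in-cluster DNS name of the called Service
    value: http://called-loadbalancer.default.svc.cluster.local:8081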
First of all, it's very hard to understand what you want. Your post starts with:
We can set...
We must set...
Nobody here knows what you want to do, and it would be much more useful to see some definition of done that you are expecting.
That having been said, let me turn to your substantive question...
env:
  - name: CALLERPORT
    value: "8080"
  - name: CALLEDADDRESS
    value: called-loadbalancer # WHAT TO PUT HERE?!
ports:
  - containerPort: 8080
    name: caller
These things are exported by k8s automatically. For example, I have a service kibana with port: 80 in its service definition:
svc/kibana ClusterIP 10.222.81.249 <none> 80/TCP 1y app=kibana
This is how I can get it from a different pod in the same namespace:
root@some-pod:/app# env | grep -i kibana
KIBANA_SERVICE_PORT=80
KIBANA_SERVICE_HOST=10.222.81.249
Moving forward, why do you use LoadBalancer? Without a cloud provider it behaves much like NodePort, and it seems like ClusterIP is all you need.
Next, the service ports can be the same and there won't be any port collisions, because each ClusterIP is unique and therefore the socket is unique for each service. Your services could be described like this:
apiVersion: v1
kind: Service
metadata:
  name: caller-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 80 <--------------------
      targetPort: 8080
  selector:
    app: caller

apiVersion: v1
kind: Service
metadata:
  name: called-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 80 <------------------
      targetPort: 8081
  selector:
    app: called
That would simplify things: you could use the service names alone, without specifying ports:
http://caller-loadbalancer.default.svc.cluster.local
http://called-loadbalancer.default.svc.cluster.local
or
http://caller-loadbalancer.default
http://called-loadbalancer.default
or (within the similar namespace):
http://caller-loadbalancer
http://called-loadbalancer
or (depending on the lib)
caller-loadbalancer
called-loadbalancer
The same goes for containerPort/targetPort! Why do you use 8081 and 8080? Who cares about internal container ports? I agree different cases happen, but in this case you have a single process inside and you are definitely not going to run more processes there, are you? So they could also be the same.
I'd like to advise you to use Stack Overflow in a different way: don't ask how to do something your way; it's much better to ask how to do something the best way.

How does Kubernetes invoke a Docker image?

I am attempting to run a Flask app via uWSGI in a Kubernetes deployment. When I run the Docker container locally, everything appears to work fine. However, when I create the Kubernetes deployment on Google Kubernetes Engine, the deployment goes into CrashLoopBackOff because uWSGI complains:
uwsgi: unrecognized option '--http 127.0.0.1:8080'.
The image definitely has the http option because:
a. uWSGI was installed via pip3 which includes the http plugin.
b. When I run the deployment with --list-plugins, the http plugin is listed.
c. The http option is recognized correctly when run locally.
I am running the Docker image locally with:
$: docker run <image_name> uwsgi --http 127.0.0.1:8080
The container Kubernetes YAML config is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: launch-service-example
  name: launch-service-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: launch-service-example
    spec:
      containers:
        - name: launch-service-example
          image: <image_name>
          command: ["uwsgi"]
          args:
            - "--http 127.0.0.1:8080"
            - "--module code.experimental.launch_service_example.__main__"
            - "--callable APP"
            - "--master"
            - "--processes=2"
            - "--enable-threads"
            - "--pyargv --test1=3--test2=abc--test3=true"
          ports:
            - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: launch-service-example-service
spec:
  selector:
    app: launch-service-example
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
The container is exactly the same, which leads me to believe that the way the container is invoked by Kubernetes may be causing the issue. As a side note, I have tried passing everything via the command list with no args, which leads to the same result. Any help would be greatly appreciated.
It is happening because of the difference between how arguments are processed on the command line and in the YAML configuration. In a shell, uwsgi --http 127.0.0.1:8080 is split into two argv entries, whereas in Kubernetes each item of the args list is passed to the process as a single argv entry, so uwsgi sees the literal string "--http 127.0.0.1:8080" as one (unrecognized) option.
To fix it, split each option and its value into separate list items:
args:
  - "--http"
  - "127.0.0.1:8080"
  - "--module"
  - "code.experimental.launch_service_example.__main__"
  - "--callable"
  - "APP"
  - "--master"
  - "--processes=2"
  - "--enable-threads"
  - "--pyargv"
  - "--test1=3--test2=abc--test3=true"

Establishing a PPTP connection in a Kubernetes pod

I'm trying to set up a pod running a pptp-client.
I want to access a single machine behind the VPN, and this works fine locally: my Docker container adds records to my localhost's routing table, and all is well.
ip route add x.x.x.x dev ppp0
I am only able to establish a connection to the VPN server as long as privileged is set to true and network_mode is set to "host".
The production environment is a bit different: the "localhost" would be one of the three nodes in our Google Container cluster.
I don't know whether the route added after the connection is established would be accessible only to the containers running on that node, but that is a later problem.
docker-compose.yml
version: '2'
services:
  pptp-tunnel:
    build: ./
    image: eu.gcr.io/project/image
    environment:
      - VPN_SERVER=X.X.X.X
      - VPN_USER=XXXX
      - VPN_PASSWORD=XXXX
    privileged: true
    network_mode: "host"
This seems to be more difficult to achieve with Kubernetes, though both options (hostNetwork, privileged) exist and are declared in my manifest, as you can see.
Kubernetes Version
Version 1.6.6
pptp-tunnel.yml
apiVersion: v1
kind: Service
metadata:
  name: pptp-tunnel
  namespace: default
  labels:
spec:
  type: ClusterIP
  selector:
    app: pptp-tunnel
  ports:
    - name: pptp
      port: 1723
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pptp-tunnel
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: pptp-tunnel
  template:
    metadata:
      labels:
        app: pptp-tunnel
    spec:
      hostNetwork: true
      containers:
        - name: pptp-tunnel
          env:
            - name: VPN_SERVER
              value: X.X.X.X
            - name: VPN_USER
              value: XXXX
            - name: VPN_PASSWORD
              value: 'XXXXX'
          securityContext:
            privileged: true
            capabilities:
              add: ["NET_ADMIN"]
          image: eu.gcr.io/project/image
          imagePullPolicy: Always
          ports:
            - containerPort: 1723
I've also tried adding the NET_ADMIN capability, as you can see, without effect. Setting the container to privileged mode should disable the security restrictions anyway, so I shouldn't need both.
It would be nice not to have to set the container to privileged mode and instead just rely on capabilities to bring the ppp0 interface up and add the routing.
What happens when the pod starts is that the pptp client simply keeps sending requests and times out.
(This happens with my Docker container locally as well, until I turn network_mode "host" on.)
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xa43cd4b4> <pcomp> <accomp>]
LCP: timeout sending Config-Requests
But this is without hostNetwork enabled; if I enable it, I simply get a single request sent, followed by a modem hangup.
using channel 42
Using interface ppp0
Connect: ppp0 <--> /dev/pts/0
sent [LCP ConfReq id=0x7 <asyncmap 0x0> <magic 0xcdae15b8> <pcomp> <accomp>]
Script ?? finished (pid 59), status = 0x0
Script pptp XX.XX.XX.XX --nolaunchpppd finished (pid 60), status = 0x0
Script ?? finished (pid 67), status = 0x0
Modem hangup
Connection terminated.
Declaring the hostNetwork boolean lets me see multiple interfaces shared from the host, so that part is working, but somehow I'm not able to establish a connection and I can't figure out why.
Perhaps there is a better solution? I will still need to establish a connection to the VPN server, but adding a routing record to the host may not be the best solution.
Any help is greatly appreciated!
