matching service names in docker-compose and those of kubernetes - docker

I have a configuration in Google Cloud Kubernetes that I would like to emulate with docker-compose for local development. My problem is that docker-compose derives the container's DNS name from the project folder name (purplecloud) plus an underscore at the front, and an underscore plus "1" at the end, while Kubernetes does not. Further, Kubernetes does not let me use service names containing "_". This leaves me the extra step of modifying my nginx config, which routes to this microservice and to other microservices with the same naming problem.
Is there a way to name the service in docker-compose to be same as kubernetes?
My Google Cloud YAML includes:
apiVersion: v1
kind: Service
metadata:
  name: account-service # matches name in nginx.conf for rule "location /account/" ie http://account-service:80/
spec:
  ports:
    - port: 80
      targetPort: 80
  type: NodePort
  selector:
    app: account-pod
I have an nginx pod that needs to route to the above account microservice. Nginx can route to this service using:
http://account-service:80/
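For reference, the relevant nginx rule presumably looks something like this (a sketch reconstructed from the comment in the Service above, not the actual nginx.conf):

location /account/ {
    proxy_pass http://account-service:80/;
}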
My docker-compose YAML includes:
version: '3.1' # must specify or else version 1 will be used
services:
  account: # DNS name is http://purplecloud_account_1:80/
    build:
      context: ./account
      dockerfile: account.Dockerfile
    image: account_get_jwt
    ports:
      - '4001:80'
      - '42126:42126' # chrome debugger
    environment:
      - PORT=80
I have an nginx container that needs to route to the above account microservice. Under docker-compose, nginx can only route to this service using:
http://purplecloud_account_1:80/
So I need to swap out the nginx config whenever I move between docker-compose and Kubernetes. Is there a way to make the service name in docker-compose the same as in Kubernetes?
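One possibility (a sketch, not an answer from the original thread): with Compose file format 2 and later, every service is also reachable from other containers on the project network by its plain service name, and Compose service names may contain hyphens. Renaming the service to match the Kubernetes Service name should therefore let the same nginx config work in both environments:

version: '3.1'
services:
  account-service: # now resolvable as http://account-service:80/ from other containers
    build:
      context: ./account
      dockerfile: account.Dockerfile
    image: account_get_jwt
    ports:
      - '4001:80'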

Related

How can I access an app running on my computer's localhost from inside a Kubernetes pod?

I have a question about Kubernetes networking.
My working scenario:
I have a Jenkins container on my localhost, up and running. Inside Jenkins, I have a job. To access Jenkins, I use the URL "http://localhost:8080". (Jenkins is not running inside Kubernetes.)
My Flask app triggers the Jenkins job with this code:
import jenkins
from flask import Flask, request

app = Flask(__name__)

@app.route("/create", methods=["GET", "POST"])
def create():
    if request.method == "POST":
        dosya_adi = request.form["sendmail"]
        # python-jenkins client pointed at the host-local Jenkins
        server = jenkins.Jenkins('http://localhost:8080/', username='my-user-name', password='my-password')
        server.build_job('jenkins_openvpn', {'FILE_NAME': dosya_adi}, token='my-token')
Then I Dockerized this Flask app. My image name is "jenkins-app".
If I run this command, everything works perfectly:
docker run -it --network="host" --name=jenkins-app jenkins-app
But I want to do the same thing with Kubernetes. For that I wrote this YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-pod
spec:
  hostNetwork: true
  containers:
    - name: jenkins-app
      image: jenkins-app:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 5000
With this YAML file I can access the Flask app on port 5000, but when I try to trigger the Jenkins job I get an error: requests.exceptions.ConnectionError.
Would you suggest a way to do this with Kubernetes?
I created an endpoints YAML file with the content below, and this solved my problem:
apiVersion: v1
kind: Endpoints
metadata:
  name: jenkins-server
subsets:
  - addresses:
      - ip: my-ps-ip
    ports:
      - port: 8080
Then I changed this line in my Flask app accordingly:
server = jenkins.Jenkins('http://my-ps-ip:8080/', username='my-user-name', password='my-password')
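Note that an Endpoints object is normally paired with a selector-less Service of the same name so that an in-cluster DNS name resolves to it; a minimal sketch of such a companion Service (an assumption, not part of the original answer):

apiVersion: v1
kind: Service
metadata:
  name: jenkins-server # must match the Endpoints name above
spec:
  ports:
    - port: 8080

With that in place, the Flask app could use http://jenkins-server:8080/ instead of the raw IP.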
First you expose your Jenkins server:
kubectl expose pod jenkins-pod --port=8080 --target-port 5000
Then you check the existence of the service:
kubectl get svc jenkins-pod -o yaml
Use it for your Flask app to connect to your Jenkins server via this service:
... jenkins.Jenkins('http://jenkins-pod.default.svc.cluster.local:8080/...'
Note: this assumes you run everything in the default namespace; otherwise change "default" to your namespace.

Kubernetes / Docker - SSL certificates for web service use

I have a Python web service that collects data from frontend clients. Every few seconds, it creates a Pulsar producer on our topic and sends the collected data. I have also set up a Dockerfile to build an image and am working on deploying it to our organization's Kubernetes cluster.
The Pulsar code relies on certificate and key .pem files for TLS authentication, which are loaded via file paths in the test code. However, if the .pem files are included in the built Docker image, this results in an obvious compliance violation from the Twistlock scan on our Kubernetes instance.
I am pretty inexperienced with Docker, Kubernetes, and security with certificates in general. What would be the best way to store and load the .pem files for use with this web service?
You can mount the certificates into the Pod with a Kubernetes Secret.
First, you need to create the Secret.
(Copy your certificate to a machine where kubectl is configured for your Kubernetes cluster, e.g. put mykey.pem into the /opt/certs folder.)
kubectl create secret generic mykey-pem --from-file=/opt/certs/
Confirm it was created correctly:
kubectl describe secret mykey-pem
Mount your Secret in your deployment (for example, an nginx Deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - mountPath: "/etc/nginx/ssl"
              name: nginx-ssl
              readOnly: true
          ports:
            - containerPort: 80
      volumes:
        - name: nginx-ssl
          secret:
            secretName: mykey-pem
      restartPolicy: Always
After that, the .pem files will be available inside the container (here under /etc/nginx/ssl) and you don't need to include them in the Docker image.
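On the application side, the service can then load the mounted files at runtime instead of shipping them in the image. A minimal sketch using the pulsar-client Python library, assuming your own Deployment mounts the Secret at /etc/certs (broker URL, file names, and topic are placeholders):

import pulsar

# Assumption: the Secret is mounted at /etc/certs in this service's Deployment
CERT_PATH = "/etc/certs/mycert.pem"
KEY_PATH = "/etc/certs/mykey.pem"

client = pulsar.Client(
    "pulsar+ssl://broker.example.com:6651",  # placeholder broker URL
    authentication=pulsar.AuthenticationTLS(CERT_PATH, KEY_PATH),
    tls_trust_certs_file_path="/etc/certs/ca.pem",  # placeholder CA bundle
)
producer = client.create_producer("persistent://tenant/ns/topic")  # placeholder topic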

kubernetes - Use ports of other containers on the same pod for setting environment variables

I want the containers in the same pod to communicate with each other programmatically.
So I decided to set the ports of the auxiliary containers (bar1-container and bar2-container in this example) as environment variables of the primary container (i.e. foo-container).
However, I could not figure out how the exposed ports of the auxiliary containers can be passed implicitly in the .yaml file for my deployment configuration:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web
        tier: frontend
    spec:
      containers:
        # Only container to be exposed outside the pod
        - name: foo-container
          image: foo
          env:
            - name: BAR_1_PORT
              # HOW TO GET PORT HERE IMPLICITLY ???
              value: XXXX
            - name: BAR_2_PORT
              # HOW TO GET PORT HERE IMPLICITLY ???
              value: XXXX
          ports:
            - name: https
              containerPort: 443
            - name: http
              containerPort: 80
        # SubContainer 1
        - name: bar1-container
          image: bar1
        # SubContainer 2
        - name: bar2-container
          image: bar2
I wonder if there is a way to reference the ports like ${some-variable-or-so-here} instead of hard-coding 5300, 80, 9000, or whichever port is exposed from the container.
P.S.: I deliberately did not specify ports or containerPort values for the auxiliary containers in the YAML configuration above, as they will not be exposed outside the pod.
You are mixing up containers, pods, and services here. If you have multiple containers within the same pod, you need no Service at all for them to communicate, nor do you need to point at any hostname, because they share the same network namespace. All you need to do is connect to localhost on the port that the particular container is listening on. For example, an nginx container (listening on 80) can reach a php-fpm container in the same pod via localhost:9000.
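In other words, the env values can simply be agreed-upon constants, since every container in the pod sees the others on localhost. A sketch for the deployment above (9001 and 9002 are hypothetical listen ports for bar1 and bar2):

env:
  - name: BAR_1_PORT
    value: "9001" # bar1-container listens on localhost:9001
  - name: BAR_2_PORT
    value: "9002" # bar2-container listens on localhost:9002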

Linking Containers in POD in K8S

I want to link my selenium/hub container to my chrome and firefox node containers in a Pod.
In Docker, this was easily defined in the docker-compose YAML file.
I want to know how to achieve this linking in Kubernetes.
My pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: mytestingpod
spec:
  containers:
    - name: seleniumhub
      image: selenium/hub
      ports:
        - containerPort: 4444
          hostPort: 4444
    - name: chromenode
      image: selenium/node-chrome-debug
      ports:
        - containerPort: 5901
      links: seleniumhub:hub
    - name: firefoxnode
      image: selenium/node-firefox-debug
      ports:
        - containerPort: 5902
      links: seleniumhub:hub
You don't need to link them. The way Kubernetes works, all the containers in the same Pod are already in the same network namespace, meaning that they can just talk to each other through localhost and the right port.
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports. Each pod has an IP address in a flat shared networking space that has full communication with other physical computers and pods across the network.
If you want to access the chromenode container from the seleniumhub container, just send a request to localhost:5901.
If you want to access the seleniumhub container from the chromenode container, just send a request to localhost:4444.
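So instead of the links lines, the node containers just need to be told where the hub is. A sketch for the pod above (HUB_HOST and HUB_PORT are the variables the selenium node-debug images are documented to read; treat the exact names as an assumption):

    - name: chromenode
      image: selenium/node-chrome-debug
      ports:
        - containerPort: 5901
      env:
        - name: HUB_HOST
          value: "localhost" # the hub runs in the same pod
        - name: HUB_PORT
          value: "4444"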
Simply use kompose, described in "Translate a Docker Compose File to Kubernetes Resources": it will translate your docker-compose.yml file into Kubernetes YAML files.
You will then see how the selenium/hub container declaration is translated into kubernetes config files.
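For example, a minimal invocation (per the kompose documentation):

kompose convert -f docker-compose.yml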
Note though that docker links are obsolete.
Try instead to follow the Kubernetes examples/selenium described here.
The way you connect applications in Kubernetes is through a Service:
See "Connecting Applications with Services".

Inject code/files directly into a container in Kubernetes on Google Cloud Engine

How can I inject code/files directly into a container in Kubernetes on Google Cloud Engine, similar to the way you can mount host files/directories with Docker, e.g.:
docker run -d --name nginx -p 443:443 -v "/nginx.ssl.conf:/etc/nginx/conf.d/default.conf"
Thanks
It is possible to use ConfigMaps to achieve that goal.
The following example mounts a mariadb configuration file into a mariadb Pod:
ConfigMap
apiVersion: v1
data:
  charset.cnf: |
    [client]
    # Default is Latin1, if you need UTF-8 set this (also in server section)
    default-character-set = utf8

    [mysqld]
    #
    # * Character sets
    #
    # Default is Latin1, if you need UTF-8 set all this (also in client section)
    #
    character-set-server = utf8
    collation-server = utf8_unicode_ci
kind: ConfigMap
metadata:
  name: mariadb-configmap
MariaDB deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
        version: 10.1.16
    spec:
      containers:
        - name: mariadb
          image: mariadb:10.1.16
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb
                  key: rootpassword
          volumeMounts:
            - name: mariadb-data
              mountPath: /var/lib/mysql
            - name: mariadb-config-file
              mountPath: /etc/mysql/conf.d
      volumes:
        - name: mariadb-data
          hostPath:
            path: /var/lib/data/mariadb
        - name: mariadb-config-file
          configMap:
            name: mariadb-configmap
It is also possible to use the subPath feature, available in Kubernetes since version 1.3, as stated here.
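A sketch of the subPath variant (adapted from the deployment above, not part of the original answer), which mounts only charset.cnf instead of shadowing the whole /etc/mysql/conf.d directory:

          volumeMounts:
            - name: mariadb-config-file
              mountPath: /etc/mysql/conf.d/charset.cnf
              subPath: charset.cnf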
I'm not sure you can do exactly that. Kubernetes does things quite differently from Docker, and isn't really designed for the kind of interaction with the 'host' you are probably used to with Docker.
A few alternative possibilities come to mind. First, and probably least ideal but closest to what you are asking, would be to add the file after the container is running, either by adding commands or args to the pod spec, or by using kubectl exec and echoing the contents into the file. Second would be to create a volume where that file already exists, e.g. create a GCE or EBS disk, add that file, and then mount the file location (read-only) in the container's spec. Third would be to create a new Docker image where that file or other code already exists.
As for the first option, kubectl exec is for one-off jobs; it isn't very scalable or repeatable. Any creation or fetching at runtime adds that much overhead to the container's start time, so I normally go with the third option, building a new Docker image whenever the file or code changes. The more often it changes, the more you'll probably want a CI system (like Drone) to help automate the process.
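For illustration, a one-off version of the first option might look like this (pod name and file paths are hypothetical):

kubectl exec -i nginx-pod -- sh -c 'cat > /etc/nginx/conf.d/default.conf' < nginx.ssl.conf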
Add a comment if I should expand any of these options with more details.
Kubernetes allows you to mount volumes into your pod. One such volume type is hostPath, which allows you to mount a directory from the host into the pod.
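A minimal sketch of the hostPath approach, mirroring the docker run -v example from the question (names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /etc/nginx/conf.d/default.conf
          name: nginx-conf
          readOnly: true
  volumes:
    - name: nginx-conf
      hostPath:
        path: /nginx.ssl.conf
        type: File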
