I'm using the following Docker image: https://github.com/budtmo/docker-android. It's a Docker image for Android emulators.
I run it using Kubernetes with the following deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: android-deployment
spec:
  selector:
    matchLabels:
      app: android-emulator
  replicas: 10
  template:
    metadata:
      labels:
        app: android-emulator
    spec:
      containers:
      - name: android-emulator
        image: budtmo/docker-android-x86-8.1
        ports:
        - containerPort: 6080
        - containerPort: 5554
        - containerPort: 5555
        env:
        - name: DEVICE
          value: "Samsung Galaxy S8"
After the container is running, it automatically starts the Android emulator (I don't know exactly how).
I need to run a Python script automatically after the container is up, for each running container.
How can I do it? What should I change in my deployment file?
You could simply create a Dockerfile to build your own image from the budtmo/docker-android-x86-8.1 base image and deploy this. Within the Dockerfile you define the start command or entrypoint.
UPDATE
I think I understand; correct me if I am wrong: you want to run your Python script against the Android emulator running in Kubernetes.
As I said, I am not really familiar with Kubernetes, but couldn't you run the Android emulator as an init container and the Python script itself in the "main" container?
Like described here: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
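Since the emulator has to keep running, a strict init container (which must exit before the app container starts) may not fit; a second container in the same Pod is probably closer to what you need. A minimal sketch of the Deployment's template section, assuming a hypothetical image my-registry/my-python-script that bundles your script:

  template:
    metadata:
      labels:
        app: android-emulator
    spec:
      containers:
      - name: android-emulator
        image: budtmo/docker-android-x86-8.1
        ports:
        - containerPort: 6080
        env:
        - name: DEVICE
          value: "Samsung Galaxy S8"
      - name: script-runner
        # hypothetical image containing your Python script; it can reach the
        # emulator via localhost because containers in a Pod share the network
        image: my-registry/my-python-script
        command: ["python", "/app/script.py"]

With this in your Deployment, every replica gets its own emulator plus a script container that starts alongside it.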
I have a pod running Linux, and I have let others use it. Now I need to save the changes made by others. Since I sometimes need to delete/restart the pod, the changes are reverted and a new pod gets created. So I want to save the pod's container as a Docker image and use that image to create a pod.
I have tried kubectl debug node/pool-89899hhdyhd-bygy -it --image=ubuntu and then installed docker and dockerd inside, but they don't have root permission to perform operations. I also installed crictl; it listed the containers, but it has no option to save them.
I also created a privileged Docker image, created a pod from it, then used the command kubectl exec --stdin --tty app-7ff786bc77-d5dhg -- /bin/sh and tried to list the running containers, but none were listed. Below is the deployment I used for the privileged Docker container:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: app
  labels:
    app: backend-app
    backend-app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-app
      task: app
  template:
    metadata:
      labels:
        app: backend-app
        task: app
    spec:
      nodeSelector:
        kubernetes.io/hostname: pool-58i9au7bq-mgs6d
      volumes:
      - name: task-pv-storage
        hostPath:
          path: /run/docker.sock
          type: Socket
      containers:
      - name: app
        image: registry.digitalocean.com/my_registry/docker_app@sha256:b95016bd9653631277455466b2f60f5dc027f0963633881b5d9b9e2304c57098
        ports:
        - containerPort: 80
        volumeMounts:
        - name: task-pv-storage
          mountPath: /var/run/docker.sock
Is there any way I can achieve this, i.e. get the pod's container and save it as a Docker image? I am using DigitalOcean to run my Kubernetes apps, and I do not have SSH access to the node.
This is not a feature of Kubernetes or the CRI. Docker does support snapshotting a running container to an image (docker commit); however, Kubernetes no longer supports Docker as a container runtime.
Thank you all for your help and suggestions. I found a way to achieve it using the tool nerdctl - https://github.com/containerd/nerdctl.
I'm new to Kubernetes. I'm making my first ever attempt to deploy an application to Kubernetes and expose it to the public. However, when I try and deploy my configuration, I get this error:
error: unable to recognize "deployment.yml": no matches for kind "Service" in version "apps/v1"
So, let's run through the details.
I'm on Ubuntu 18.04. I'm using MiniKube with VirtualBox as the hypervisor driver. Here is all the version info:
MiniKube = v1.11.0
VirtualBox = 6.1.0
Kubectl = Client Version 1.18.3, Server Version 1.18.3
The app I'm trying to deploy is a super-simple express.js app that returns Hello World on request.
const express = require('express');
const app = express();
app.get('/hello', (req, res) => res.send('Hello World'));
app.listen(3000, () => console.log('Running'));
I have a build script I've used before for deploying Express apps to Docker; it zips up all the source files. Then I've got my Dockerfile:
FROM node:12.16.1
WORKDIR /usr/src/app
COPY ./build/TestServer-*.zip ./TestServer.zip
RUN unzip TestServer.zip
RUN yarn
CMD ["yarn", "start"]
So now I run some commands. eval $(minikube docker-env) makes me use MiniKube's docker environment so I don't need to deploy this container to the cloud. docker build -t testserver:v1 . builds and tags the image.
Now, let's go to my deployment.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testserver
  template:
    metadata:
      labels:
        app: testserver
    spec:
      containers:
      - name: testserver
        image: testserver:v1
        ports:
        - containerPort: 3000
        env:
        imagePullPolicy: Never
---
apiVersion: apps/v1
kind: Service
metadata:
  name: testserver
spec:
  selector:
    app: testserver
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
I'm trying to create a deployment with a pod and a service to expose it. I'm sure there are various issues in here; this is the newest part to me, and I'm still trying to learn and understand the spec. However, the problem I'm asking for help with occurs when I try to use this config. I run the create command and get the error:
kubectl create -f deployment.yml
deployment.apps/testserver created
error: unable to recognize "deployment.yml": no matches for kind "Service" in version "apps/v1"
The result of this is I see my app listed as a deployment and as a pod, but the service part has failed. I've been scouring the internet for documentation on why this is happening, but I've got nothing.
A Service has apiVersion: v1 rather than apiVersion: apps/v1 (which Deployments use). You can check it in the official docs. You also need to use a Service of type NodePort (or ClusterIP) if you want to expose your deployment. Type LoadBalancer will not work out of the box in minikube; it is mostly used in managed cloud clusters, where a Service of type LoadBalancer creates a load balancer (like an ALB in AWS).
To check the apigroup of a resource you can use: kubectl api-resources
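For reference, a corrected Service for the manifest in the question could look like this (a minimal sketch using NodePort as suggested above, with the ports copied from the question):

apiVersion: v1
kind: Service
metadata:
  name: testserver
spec:
  type: NodePort
  selector:
    app: testserver
  ports:
  - port: 80
    targetPort: 3000

With minikube you can then reach it via minikube service testserver, which prints the URL for the exposed service.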
I'm using Kubernetes + Helm, and I want to ask whether it is possible to get the Docker image version as specified in spec.containers. So, for example, I have the deployment below:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: myrepo.com/animage:0.0.3
        imagePullPolicy: Always
        command: ["/bin/sh","-c"]
        args: ["work"]
        envFrom:
        - configMapRef:
            name: test
I then have another deployment where I would like to get that Docker version number 0.0.3 and set it as an env var.
Any ideas appreciated.
Thanks.
Short answer: no, at least not directly. There are, however, two workarounds that you might find viable.
First, apart from providing the image with your version tag, set a label/annotation on the pod indicating its version and use the Downward API to pass that data down to your container (a sketch follows the links below):
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
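A minimal sketch of that approach, assuming you add an illustrative version label to the pod template and keep it in sync with the image tag (e.g. from your Helm values):

template:
  metadata:
    labels:
      app: test
      version: "0.0.3"
  spec:
    containers:
    - name: test
      image: myrepo.com/animage:0.0.3
      env:
      - name: IMAGE_VERSION
        # Downward API: expose the pod label as an env var inside the container
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['version']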
Second, if you actually own the process that builds that image, you can easily bake the version in during docker build with something like:
Dockerfile:
FROM ubuntu
ARG VERSION=undefined
ENV VERSION=$VERSION
with a build command like
docker build --build-arg VERSION=0.0.3 -t myrepo.com/animage:0.0.3 .
which will give you an image with a baked-in env var containing your version value.
I have this repo, and docker-compose up will launch the project, create 2 containers (a DB and API), and everything works.
Now I want to build and deploy to Kubernetes. I try docker-compose build but it complains there's no Dockerfile. So I start writing a Dockerfile and then discover that docker/Dockerfiles don't support loading ENV vars from an env_file or .env file. What gives? How am I expected to build this image? Could somebody please enlighten me?
What is the intended workflow for building a docker image with the appropriate environment variables?
Those environment variables shouldn't be set at the docker build step, but when running the application on Kubernetes or with docker-compose.
So:
Write a Dockerfile and place it in the root folder. Something like this:
FROM node
COPY package.json .
RUN npm install
COPY . .
ENTRYPOINT ["npm", "start"]
Modify docker-compose.yaml. In the image field you must specify the name for the image to be built. It should be something like this:
image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
There is no need to set user and working_dir; a sketch of the resulting compose file is below.
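A minimal sketch of what docker-compose.yaml could look like after that change (the service names, port, and env file name are assumptions; adjust them to the repo):

version: "3"
services:
  api:
    build: .
    image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
    env_file: .env        # variables are injected at run time, not at build time
    ports:
      - "8080:8080"
  db:
    image: arangodb
    environment:
      - ARANGO_ROOT_PASSWORD=localhost

docker-compose build then uses the Dockerfile above, and docker-compose up injects the variables from .env when the container starts.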
Build the image with docker-compose build (you can also do this with docker build)
Now you can use docker-compose up to run your app locally, with the .env file
To deploy it on Kubernetes you need to publish your image on Docker Hub (unless you run Kubernetes locally):
docker push YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
Finally, create a Kubernetes manifest. Sadly, Kubernetes doesn't support env files the way docker-compose does, so you'll need to set these variables in the manifest manually:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-api
  labels:
    app: platform-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-api
  template:
    metadata:
      labels:
        app: platform-api
    spec:
      containers:
      - name: platform-api
        image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-db
  labels:
    app: platform-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-db
  template:
    metadata:
      labels:
        app: platform-db
    spec:
      containers:
      - name: arangodb
        image: arangodb
        ports:
        - containerPort: 8529
        env:
        - name: ARANGO_ROOT_PASSWORD
          value: localhost
Deploy it with kubectl create
Please note that this code is just indicative; I don't know your exact use case. Find more information in the docker-compose and Kubernetes docs and tutorials. Good luck!
I've updated the project on GitHub; it now all works, and the readme documents how to run it.
I realized that env vars are considered runtime vars, which is why --env-file is an option for docker run and not docker build. This must also be why docker-compose.yml has the env_file option, which (I assume) passes those variables to the container at run time rather than to docker build. And in Kubernetes, these are passed in from a ConfigMap. This is done so the image remains more portable; the same project can be run with different vars passed in, no rebuild required.
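For completeness, a minimal sketch of the ConfigMap approach (the ConfigMap name and key are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: platform-api-env
data:
  NODE_ENV: develop
---
# fragment of the Deployment's container spec:
      containers:
      - name: platform-api
        image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
        envFrom:
        - configMapRef:
            name: platform-api-env

envFrom turns every key in the ConfigMap into an environment variable, which is the closest Kubernetes equivalent of docker-compose's env_file.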
Thanks ignacio-millán for the input.
I am attempting to run a Flask app via uWSGI in a Kubernetes deployment. When I run the Docker container locally, everything appears to be working fine. However, when I create the Kubernetes deployment on Google Kubernetes Engine, the deployment goes into CrashLoopBackOff because uWSGI complains:
uwsgi: unrecognized option '--http 127.0.0.1:8080'.
The image definitely has the http option because:
a. uWSGI was installed via pip3 which includes the http plugin.
b. When I run the deployment with --list-plugins, the http plugin is listed.
c. The http option is recognized correctly when run locally.
I am running the Docker image locally with:
$: docker run <image_name> uwsgi --http 127.0.0.1:8080
The Kubernetes YAML config for the container is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: launch-service-example
  name: launch-service-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: launch-service-example
    spec:
      containers:
      - name: launch-service-example
        image: <image_name>
        command: ["uwsgi"]
        args:
          - "--http 127.0.0.1:8080"
          - "--module code.experimental.launch_service_example.__main__"
          - "--callable APP"
          - "--master"
          - "--processes=2"
          - "--enable-threads"
          - "--pyargv --test1=3--test2=abc--test3=true"
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: launch-service-example-service
spec:
  selector:
    app: launch-service-example
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
The container is exactly the same, which leads me to believe that the way the container is invoked by Kubernetes may be causing the issue. As a side note, I have tried passing everything via the command list with no args, which leads to the same result. Any help would be greatly appreciated.
It is happening because of the difference between how arguments are processed on the command line and in the configuration: the shell splits uwsgi --http 127.0.0.1:8080 into two arguments, while each entry of the Kubernetes args list is passed to the process as a single argument, so uWSGI sees the literal option name "--http 127.0.0.1:8080".
To fix it, split each option and its value into separate list entries, like this:
args:
  - "--http"
  - "127.0.0.1:8080"
  - "--module"
  - "code.experimental.launch_service_example.__main__"
  - "--callable"
  - "APP"
  - "--master"
  - "--processes=2"
  - "--enable-threads"
  - "--pyargv"
  - "--test1=3--test2=abc--test3=true"