kubectl run fails in interactive mode on CI - docker

I have an issue with
kubectl run -ti
in GitLab CI.
For testing in CI we run a Docker container with the "npm t" command in interactive mode, and it worked perfectly with plain Docker.
After migrating to Kubernetes, kubectl run gives the following error: Unable to use a TTY - input is not a terminal or the right kind of file
The job runs in the image lachlanevenson/k8s-kubectl.
If I run kubectl run from my local machine, everything works.
Please help.

The PodSpec container: has a tty attribute, which defaults to false but which one can set to true (that's what the -t option, which is a shortcut for --tty=true, does in kubectl exec). You can experiment with setting stdin: true but at your peril, since it can hang the Pod waiting for "someone" to type something.
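Since a GitLab CI job has no terminal attached, one workaround is to drop -t entirely and let kubectl run attach to the pod's output non-interactively. A minimal sketch, assuming a placeholder image name and a one-off test pod; whether the container's exit code is propagated to the CI job depends on the kubectl version, so verify that in your pipeline:
# CI has no terminal, so don't request one: attach to stdout/stderr only,
# run the pod to completion and clean it up afterwards.
# registry.example.com/my-app:test is a placeholder image name.
kubectl run npm-test \
  --image=registry.example.com/my-app:test \
  --restart=Never --rm --attach \
  -- npm t
The stdin: true / tty: true fields in the PodSpec only matter when you actually need to type into the container, which a CI job does not.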

Related

How to stop I/O logs in a container in Kubernetes

When I access a Docker container in GKE, e.g. with kubectl exec -it [pod-name] bash -n seunghwan,
I see lots of logs (I don't know what they are):
The container I accessed runs a Node.js application, but I tried a MySQL server and it looks the same.
I am the only one who can see these logs; others can't, so I think the problem is on my local machine.
If you need more info, please tell me. Thank you.
Expected behaviour: when I access a container in GKE, no such logs should appear in the bash terminal.
I managed to reproduce the issue by adding DEBUG=true before the exec command, such as:
DEBUG=true kubectl exec -it [pod-name] bash -n seunghwan
Try setting the DEBUG environment variable to false, either at the beginning of your command or in your environment variables:
DEBUG=false kubectl exec -it [pod-name] bash -n seunghwan
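Since only your machine shows the extra output, the variable is most likely exported somewhere in your local shell configuration. A small sketch for tracking it down; the rc file names are just the usual suspects, adjust them to whatever shell you use:
# Is DEBUG exported in the current shell?
env | grep '^DEBUG='

# Where is it being set? (adjust the file list to your shell)
grep -n 'DEBUG' ~/.bashrc ~/.bash_profile ~/.zshrc 2>/dev/null

# Unset it for this session, then exec into the pod again
unset DEBUG
kubectl exec -it [pod-name] -n seunghwan -- bash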

Is it possible to run sls files in parallel?

I am running an sls file that starts up a docker container that should remain active in the background. It does work; the container is up and running. However, until I kill the containers on my minions, I am unable to run any other state.apply commands because I get:
The function "state.apply" is running as PID 44455 and was started at 2020, Aug 19 18:49:13.242099 with jid 20200819184913242099
Now, I have found the following documentation: https://docs.saltstack.com/en/latest/ref/states/parallel.html which would imply that it actually is possible. However, when I add it to my SLS file, it does not work. I am still unable to call a new state.apply until I kill the containers.
This is what my file looks like:
docker.io:
  pkg.installed: []
  require:
    - pkgrepo: docker_prerequisites
    - pkg: docker_prerequisites
  service.running:
    - parallel: True
    - name: docker
    - enable: True
    - restart: True
    - image: ubuntu
    - port_bindings: 800:80
docker:
  cmd.run:
    - name: docker run -t ubuntu
Am I using the command wrong? This is how I figured it should be based on the documentation. Or is there possibly a different way to start a docker container that stays active from an sls file?
If you check your salt logs or your process manager, you will very likely be able to find out what's happening here. The state keeps running for as long as the command runs.
The problem is within your cmd.run. Salt executes the cmd.run for as long as the command inside it is being executed, and it will only start the next cmd.run once no other cmd.run is currently running; otherwise you get the error you mentioned.
- name: docker run -t ubuntu
The command you used attaches the container's shell (-t). Salt therefore waits for the command to end, and in this case it will not end until you close the container's shell.
The solution is to detach the container's shell so the container runs as a background process; you can attach to it again afterwards.
docker:
  cmd.run:
    - name: docker run -t -d ubuntu
Simply by adding the -d parameter you detach the container's shell, and the container runs in the background. With this change your salt state should finish.
Now that the container runs in the background, you can attach to its shell with the following command:
docker exec -it <container_id> /bin/bash
However, here is how you run salt states in parallel:
When applying a salt state to your target minion, add the following parameter at the end of your salt or salt-call command: concurrent=true
Please refer to this documentation and search for concurrent: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html
That documentation describes the behaviour. Be careful with it, though: running the same salt module twice simultaneously can be dangerous, so try to avoid it where possible.
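A minimal sketch of how the flag is passed, assuming a state file named docker_containers.sls and minions matching 'minion*' (both placeholders):
# From the master: allow this run to proceed even if another state run is active
salt 'minion*' state.apply docker_containers concurrent=True

# Or directly on a minion
salt-call state.apply docker_containers concurrent=True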

Kubernetes Helm pod restarts infinitely

I'm trying to deploy Spinnaker into a Kubernetes cluster. To do so, I use Halyard, which uses Helm.
When I try to run my Helm pod, I get the following output:
Cluster "default" set.
Context "default" created.
User "user" set.
Context "default" modified.
Switched to context "default".
Creating /home/spinnaker/.helm
Creating /home/spinnaker/.helm/repository
Creating /home/spinnaker/.helm/repository/cache
Creating /home/spinnaker/.helm/repository/local
Creating /home/spinnaker/.helm/plugins
Creating /home/spinnaker/.helm/starters
Creating /home/spinnaker/.helm/cache/archive
Creating /home/spinnaker/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/spinnaker/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
Everything seems correct, but my pod raises a CrashLoopBackOff event without any other error and restarts again for no apparent reason.
The Dockerfile I'm using to build the Helm Docker image is the following:
FROM gcr.io/spinnaker-marketplace/halyard:stable
ARG GCP_SPINNAKER_GCR_KEY
# install helm
WORKDIR /home/spinnaker
# get helm
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
RUN sed -i 's/\/usr\/local\/bin/\/home\/spinnaker/g' get_helm.sh
# sudo user workaround
RUN sed -i 's/sudo //g' get_helm.sh
RUN chmod u+x get_helm.sh
# add the current folder to the path
ENV PATH="/home/spinnaker:${PATH}"
# install helm
RUN ./get_helm.sh
# importing the configuration script
ADD shell/halyard-configure.sh .
# authorize the spinnaker user to execute the configuration script
USER root
RUN chown -R spinnaker halyard-configure.sh
USER spinnaker
# create the gcp key directory for docker registry
RUN mkdir -p ~/.gcp
RUN echo $GCP_SPINNAKER_GCR_KEY | base64 -d > ~/.gcp/gcr-account.json
ENTRYPOINT [ "./halyard-configure.sh" ]
CMD "/opt/halyard/bin/halyard"
And here is the content of the halyard-configure.sh shell script:
#!/usr/bin/env bash
set -e
# configure kubectl
kubectl config set-cluster default --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-context default --cluster=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-credentials user --token=$token
kubectl config set-context default --user=user
kubectl config use-context default
# configure helm
helm init --service-account tiller --upgrade
Your entrypoint script needs to end with the magic line exec "$@".
In Docker in general, a container startup launches the container entrypoint, passing it the command as parameters. (A Kubernetes pod spec refers to these parts as "command" and "args".) Once the entrypoint completes, the container exits. Since your entrypoint script just runs kubectl config and helm init commands which all complete promptly, the container exits almost immediately; when it does, Kubernetes restarts it; and when it has to restart it more than two or three times, it goes into CrashLoopBackOff state.
The usual way to get around this is to set up the entrypoint script to do any required first-time setup, then exec the command that was passed to it as parameters. Then the command (in your case, /opt/halyard/bin/halyard) winds up being "the main container process", and has the magic process ID 1, and will receive signals at container termination time.
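A minimal sketch of that change, keeping the existing configuration steps as they are; only the last line of halyard-configure.sh changes:
#!/usr/bin/env bash
set -e

# ... the existing kubectl config and "helm init" steps stay unchanged ...

# Hand control over to the command passed in as arguments (the Dockerfile's
# CMD, i.e. /opt/halyard/bin/halyard), so it becomes PID 1 in the container.
exec "$@"
In the Dockerfile, the exec form CMD ["/opt/halyard/bin/halyard"] is the safer companion to this pattern, since the shell form inserts an extra shell process between the entrypoint and the command.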
Also note that there is a reasonably standard pattern for accessing the Kubernetes API from a pod that involves configuring a service account for the pod and using an official API, or else launching a kubectl proxy sidecar. You might be able to use that in place of the manual setup steps you have here. (I've never tried to launch Helm from inside a Kubernetes pod, though.)
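For reference, a hedged sketch of that pattern: when the pod runs under its own service account, kubectl inside the container picks up the in-cluster configuration automatically, so the manual set-cluster/set-credentials steps can usually be dropped. The account name spinnaker-halyard and the cluster-admin binding here are placeholders; scope the role to what you actually need.
# Create a dedicated service account and (for this sketch) give it broad rights.
kubectl create serviceaccount spinnaker-halyard -n default
kubectl create clusterrolebinding spinnaker-halyard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:spinnaker-halyard

# Reference it from the pod spec (serviceAccountName: spinnaker-halyard);
# inside the container, a plain "kubectl get pods" then works with no
# explicit kubeconfig, because kubectl falls back to the in-cluster config.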

How to debug container images using OpenShift

Let's say I have a docker image created using a Dockerfile. At the time of writing the Dockerfile I had to test it repeatedly to realize what I did wrong. To debug a docker image I can simply run a test container and look at its stdout/stderr to see what's wrong with the image.
IMAGE_NAME=authoritative-dns-bind
IMAGE_OPTIONS="
-v $(pwd)/config.yaml:/config.yaml:ro
-p 127.0.0.1:53:53
-p 127.0.0.1:53:53/udp"
docker run -t -i $IMAGE_OPTIONS $IMAGE_NAME
Learning the above was good enough to iteratively create and debug a minimal working Docker container. Now I'm looking for a way to do the same for OpenShift.
I'm pretty much aware of the fact that the container is not ready for OpenShift. My plan is to run it and watch its stdout/stderr like I did with Docker. One of the people I asked for help came up with a command that looked like exactly what I needed.
oc run -i -t --image $IMAGE_NAME --command test-pod -- bash
The above command worked for me with the fedora:24 and fedora:latest images from the Docker registry, and I got a working shell. But the same doesn't happen with my derived image that contains a containerized service. My explanation is that it probably does something entirely different: instead of starting the command interactively, it starts it non-interactively and then tries to run bash inside a failed container.
So what I'm looking for is a reasonable way to debug a container image in OpenShift. I expected that I would be able to at least capture and view stdin/stdout of OpenShift containers.
Any ideas?
Update
According to the comment by Graham, oc run should indeed work like docker run, but that doesn't seem to be the case. With the original Fedora images, bash always appears, at least after hitting Enter.
# oc run -i -t --image authoritative-dns-bind --command test-auth13 -- bash
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
...
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
^C
#
I wasn't able to try the suggested oc debug yet, as it seems to require more configuration than just a simple image. There's another problem with oc run: that command keeps creating new containers that I don't really need. I hope there is a way to start debugging easily and have the container destroyed automatically afterwards.
There are three main commands to debug pods:
oc describe pod $pod-name -- detailed info about the pod
oc logs $pod-name -- stdout and stderr of the pod
oc exec -ti $pod-name -- bash -- get a shell in running pod
To your specific problem: the default pull policy of oc run is set to Always. This means that OpenShift will keep trying to pull the image until it succeeds and refuses to use the local one.
Once this kubernetes patch lands in OpenShift Origin, the pull policy will be easily configurable.
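A short walk-through of those commands with the pod name from the question as a placeholder, plus the --previous variant for a pod that keeps restarting:
POD=test-auth13-1-lyng3          # placeholder pod name

oc describe pod $POD             # the Events section usually explains Pending/ImagePull problems
oc logs $POD                     # stdout and stderr of the current container
oc logs --previous $POD          # output of the last crashed container, if it restarted
oc exec -ti $POD -- bash         # interactive shell in a running container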
Please do not consider this a final answer to the question and supersede it with your own better answers...
I'm now using a pod configuration file like the following...
apiVersion: v1
kind: Pod
metadata:
  name: "authoritative-dns-server"            # pod name, your reference from command line
  namespace: "myproject"                      # default namespace in `oc cluster up`
spec:
  containers:
  - command:
    - "bash"
    image: "authoritative-dns-bind"           # use your image!
    name: "authoritative-dns-bind-container"  # required
    imagePullPolicy: "Never"                  # important! you want openshift to use your local image
    stdin: true
    tty: true
  restartPolicy: "Never"
Note the command is explicitly set to bash. You can then create the pod, attach to the container, and run the container's own command yourself.
oc create -f pod.yaml
oc attach -t -i authoritative-dns-server
/files/run-bind.py
This looks far from ideal and it doesn't really help you debug an ordinary OpenShift container with a standard pod configuration, but at least it is now possible to debug. Looking forward to better answers.
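Since restartPolicy is Never, the pod is not cleaned up by itself; a small sketch of removing it once you are done, using the pod name and manifest file from above:
# Delete the debug pod when finished
oc delete pod authoritative-dns-server -n myproject

# or, using the same manifest file
oc delete -f pod.yaml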

"Running Kubernetes Locally via Docker" Guide is not working at all for MacOS, ssh command just hanging

I am following this guide at http://kubernetes.io/docs/getting-started-guides/docker/ to start using Kubernetes on macOS. Is this guide valid?
When I am doing this step:
docker-machine ssh `docker-machine active` -N -L 8080:localhost:8080
The command hangs, with no response at all.
Looking at docker ps -l, I have
CONTAINER ID   IMAGE                                             COMMAND                  CREATED             STATUS             PORTS   NAMES
b1e95e26f46d   gcr.io/google_containers/hyperkube-amd64:v1.2.3   "/hyperkube apiserver"   About an hour ago   Up About an hour           k8s_apiserver.c08c1df_k8s-master-127.0.0.1_default_d95a6048198f747c5fcb74ee23f1f25c_d0c6d2fc
So it means Kubernetes is running.
I run this command:
docker-machine ssh `docker-machine active` -L 8080:localhost:8080
I can log into the docker machine and then exit, but running kubectl get nodes afterwards just hangs again, with no response.
Is anything wrong here?
If I can't get past this step, how can I use Kubernetes?
"docker-machine ssh docker-machine active -N -L 8080:localhost:8080" sets up a ssh tunnel. Similar to ssh tunnel, you can run that command in the background by passing the -f option. More useful tips here
I'd recommend running the ssh command in a separate terminal, so that it will be easy to bring down the tunnel.
As long as the above command is running, kubectl should work.
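A small sketch of both options; -f and -N are standard ssh flags that docker-machine ssh passes through, and the port numbers match the guide:
# Option 1: keep the tunnel in a dedicated terminal (easy to tear down with Ctrl+C)
docker-machine ssh `docker-machine active` -N -L 8080:localhost:8080

# Option 2: push the tunnel into the background with -f, then use kubectl as usual
docker-machine ssh `docker-machine active` -f -N -L 8080:localhost:8080
kubectl get nodes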

Resources