Here is my code:

- step:
    name: SSH to Digital Ocean and update docker image
    script:
      - head ~/.ssh/config
      - ssh -i ~/.ssh/config root@XXX.XXX.XXX.XXX
      - docker ps
      - docker rm -f gvcontainer
      - docker image rm -f myrepo/myimage:tag
      - docker pull myrepo/myimage:tag
      - docker run --name gvcontainer -p 12345:80 -d=true --restart=always myrepo/myimage:tag
    services:
      - docker
Here I can see that the pipeline SSHes into my DO droplet successfully, but for some reason it could not find the container. (My guess is that it ran "docker ps" too quickly and should wait a few seconds first, but I don't know how to delay the operation.)
So I manually SSHed into my droplet and checked: the gvcontainer is there.
Please enlighten me with any possible reasons.
Thanks
The commands listed after your SSH command are not being run on the remote system; they're being run in Pipelines. Since the Pipelines container doesn't have a gvcontainer to remove, it returns that error.
You have several options, one of which I outlined in answering your other question (pass the commands as arguments to SSH, as in ssh -i /path/to/key user@host "command1 && command2"). Another option would be to put a script on the droplet that does all the things you want, and have Pipelines execute it via SSH (ssh -i /path/to/key user@host "./do-all-the-things.sh").
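As a rough sketch of the first option applied to the step above (the key path is a placeholder for your actual private key, not something taken from the original config):

- step:
    name: SSH to Digital Ocean and update docker image
    script:
      # Everything inside the quoted string runs on the droplet, not in the Pipelines container.
      # "|| true" keeps the chain going on the first run, when gvcontainer does not exist yet.
      - ssh -i ~/.ssh/my_deploy_key root@XXX.XXX.XXX.XXX "docker pull myrepo/myimage:tag; docker rm -f gvcontainer || true; docker run --name gvcontainer -p 12345:80 -d --restart=always myrepo/myimage:tag"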
I have Python code that passes all its tests on my local machine. The code uses tkinter and provides a GUI. However, none of the test functions actually open the GUI (they do call tk.Tk(), though).
I created a Docker container locally and could use X11 forwarding to pass the tests in the "local" container as well.
Now I'm trying to run the tests on Jenkins, which I have set up on an EC2 instance. Jenkins is supposed to create a Docker container using the Dockerfile in my repository, and then call "docker run -e ... -v ..." (similar to what I had on my local computer) to run the tests. I understand that my EC2 instance does not have a GUI, so X11 forwarding is not as simple as it was on my computer. There should be a way to run GUI tests through a Jenkins setup on AWS. Any help is appreciated.
EDIT
Here is the build script that I have on AWS; it creates the docker container using the Dockerfile:
IMAGE_NAME="test-image"
CONTAINER_NAME="deidentifier_clinical"
echo "Check current working directory"
pwd
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
echo $DISPLAY
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY $IMAGE_NAME bash -c "cd /$CONTAINER_NAME;make test"
echo "Copy coverage.xml into Jenkins container"
rm -rf reports; mkdir reports
docker cp $CONTAINER_NAME:/deidentifier_clinical/htmlcov/* reports/.
echo "Cleanup"
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME
docker rmi $IMAGE_NAME
This fails on the docker run line. This same script runs with no problem on my local computer after setting up the X11-forwarding.
Here's the situation:
I have a Docker container (Jenkins). I've mounted the socket into my container so that I can run docker commands inside my Jenkins container.
Manually, everything works in the container. However, when Jenkins executes the job, it doesn't "wait" for the docker exec command to run to completion.
Below is an extract from the Jenkinsfile. The short-lived printenv command runs correctly and prints the environment variables. The next command (python) just gets started, and Jenkins immediately moves on without waiting for it to complete. The Jenkins agent (slave) is running on an Ubuntu image. Running all of these commands outside Jenkins works as expected.
echo "Running the app docker container in detached tty mode to keep it up"
docker run --detach --tty --name "${CONTAINER_NAME}" "${IMAGE_NAME}"
echo "Listing environment variables"
docker exec --interactive "${CONTAINER_NAME}" bash -c "printenv"
echo "Running test coverage"
docker exec --interactive "${CONTAINER_NAME}" bash -c "python -m coverage run --source . --branch -m pytest -vs"
It seems it may be related to this question.
Can anyone please explain how to get Jenkins to wait for the docker exec command to complete before proceeding to the next step?
I have considered alternatives, like the Docker Pipeline plugin, but would much prefer to use something close to what I have above where possible.
OK, as another approach, I've tried using the Docker Pipeline plugin here.
You can mount docker.sock as a volume to orchestrate containers on your host machine, like this in your docker-compose.yml:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Depending on your setup you might need to run
chmod 666 /var/run/docker.sock
to get going in the first place.
This works on macOS as well as Linux.
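For illustration, a minimal docker-compose.yml for a Jenkins agent with the socket mounted might look like the following; the service and image names are placeholders, not from the original setup:

version: "3"
services:
  jenkins-agent:
    image: your-jenkins-agent-image   # any image with the docker CLI installed
    volumes:
      # Expose the host's Docker daemon inside the container, so docker
      # commands run by jobs act on the host.
      - /var/run/docker.sock:/var/run/docker.sock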
Ugh. This was down to the way I'd set up Docker support on the slave container.
I'd used socat to provide a TCP server proxy. Instead, I switched that out for a plain old docker.sock volume mount between host and container.
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
The very first time, I also had to sort out a permissions issue by doing (inside the container):
rm -Rf ~/.docker
chmod 666 /var/run/docker.sock
After that, everything "just worked". Very painful experience.
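For completeness: if the agent container is started with a plain docker run rather than compose, the equivalent bind mount would be the following (the image name is a placeholder):

docker run -d --name jenkins-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  your-jenkins-agent-image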
I'm new to CI/CD and I'm trying to build my app with CircleCI and push it to Docker Hub.
I researched a few things on the internet and tried them, without success.
I'm getting this error:
#!/bin/bash -eo pipefail
sudo docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
sudo docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
sudo docker push $DOCKER_LOGIN/$HUB_NAMEr
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Exited with code 1
My config.yml, where I'm having trouble:
# run tests!
- run: mvn integration-test

- setup_remote_docker

- run:
    name: Build and deploy docker images
    command: |
      docker build -t $HUB_NAME:latest .

- deploy:
    name: Push application Docker image
    command: |
      sudo docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
      sudo docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
      sudo docker push $DOCKER_LOGIN/$HUB_NAME
It seems to me you should not be using sudo docker in your login, tag and push commands.
Just use docker login, docker tag and docker push without sudo and you should be good to go.
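For reference, the deploy step from the config above would then look like this, with only the sudo prefixes removed:

- deploy:
    name: Push application Docker image
    command: |
      docker login -u $DOCKER_LOGIN -p $DOCKER_PASSWORD
      docker tag $HUB_NAME $DOCKER_LOGIN/$HUB_NAME
      docker push $DOCKER_LOGIN/$HUB_NAME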
Explanation
The whole point of the setup_remote_docker step, which you are using in your configuration, is to set the environment variables that allow the docker command to access a remote docker environment with the current docker user.
In your pipeline output, if you open the step with the label Setup a remote Docker engine, you'll likely see an output like:
Allocating a remote Docker Engine
[ ... skip some output ...]
Remote Docker engine created. Using VM '...'
Created container accessible with:
DOCKER_CERT_PATH=/tmp/docker-certs(...)
DOCKER_HOST=tcp://XXX.XXX.XXX.XXX:YYYY
DOCKER_MACHINE_NAME=ZZZZ
DOCKER_TLS_VERIFY=1
NO_PROXY=127.0.0.1,localhost,circleci-internal-outer-build-agent,XXX.XXX.XXX.XXX:YYYY
[ ... some more output ...]
If you sudo into another user, you'll be missing out on those environment variables, and the docker command will attempt to connect to the standard docker unix socket in the local machine. Which is why you see:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Check the Building Docker Images documentation to see that they don't use sudo anywhere.
You probably copied those sudo commands from your own environment where your local machine restricts access to the docker unix socket.
I'm taking my first steps with Docker and CI/CD.
For that, I'm trying to deploy a raw create-react-app to my Digital Ocean droplet (Docker One-Click Application) using GitLab CI. These are my files:
Dockerfile.yml
# STAGE 1 - Building assets
FROM node:alpine as building_assets_stage
WORKDIR /workspace
## Preparing the image (installing dependencies and building static files)
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build
# STAGE 2 - Serving static content
FROM nginx as serving_static_content_stage
ENV NGINX_STATIC_FILE_SERVING_PATH=/usr/share/nginx/html
EXPOSE 80
COPY --from=building_assets_stage /workspace/build ${NGINX_STATIC_FILE_SERVING_PATH}
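For local verification (before wiring this into CI), the multi-stage image can be built and run with something like the following; the image name is just a placeholder:

docker build -t my-react-app .
docker run -d -p 80:80 my-react-app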
.gitlab-ci.yml
## Use a Docker image with "docker-compose" installed on top of it.
image: tmaier/docker-compose:latest

services:
  - docker:dind

variables:
  DOCKER_CONTAINER_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: ${SECRETS_DOCKER_LOGIN_USERNAME}/${CI_PROJECT_NAME}:latest

before_script:
  ## Install the ssh agent (so we can access the Digital Ocean droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give it the right permissions.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking.
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test that everything is set up correctly.
  - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}

stages:
  - deploy

deploy:
  stage: deploy
  script:
    ## Log this machine into the Docker registry, create a production build and push it to the registry.
    - docker login -u ${SECRETS_DOCKER_LOGIN_USERNAME} -p ${SECRETS_DOCKER_LOGIN_PASSWORD}
    - docker build -t ${DOCKER_IMAGE_TAG} .
    - docker push ${DOCKER_IMAGE_TAG}
    ## Connect to the Digital Ocean droplet, stop/remove all running containers, pull the latest image and run it.
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
    - docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
    - docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
    # Everything works, exit.
    - exit 0
  only:
    - master
In a nutshell, on Gitlab CI, I do the following:
(before_script) Install the ssh agent and add my private SSH key to it, so we can connect to the Digital Ocean droplet;
(deploy) I build my image and push it to my public docker hub repository;
(deploy) I connect to my Digital Ocean Droplet via SSH, pull the image I've just built and run it.
The problem is that if I do everything from my computer's terminal, the container is created and the application is deployed successfully.
If I execute it from the GitLab CI job, the container is created but nothing is deployed, because the container dies right after (click here to see the CI job output).
I can guarantee that the container is being erased, because if I manually SSH into the server and run docker ps -a, it doesn't list anything.
I'm mostly confused by the fact that this image's CMD is CMD ["nginx", "-g", "daemon off;"], which shouldn't let my container get deleted, because it has a process running.
What am I doing wrong? I'm lost.
Thank you in advance.
My question was answered by d g - thank you very much!
The problem lies in the fact that I was connecting to my Digital Ocean droplet via SSH and then executing commands in its shell, when I should have been passing the entire command sequence as an argument to the ssh instruction.
Changed my .gitlab-ci.yml file from:
## Connect to the Digital Ocean droplet, stop/remove all running containers, pull latest image and execute it.
- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
- docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
- docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
To:
# Execute as follows:
# ssh -t digital-ocean-server "docker cmd1; docker cmd2"
- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker ps -q --filter \"name=${DOCKER_CONTAINER_NAME}\" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}; docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}"
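Another option, not part of this fix, is to keep the remote commands in a shell script on the droplet and call it over SSH; a sketch with a hypothetical script name:

- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "./redeploy.sh"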
On a VPS server I use the following command to run Jenkins:
docker run -d -p 8080:8080 jenkins
But sometimes a config error of mine stops the container, and then all the jobs I configured in Jenkins are lost.
I followed this video to run Jenkins in Docker.
Is this the right way to run Jenkins in Docker? How do I keep my jobs when running the pulled Jenkins image?
You have to attach a volume to the container that points to the Jenkins home directory. Usually I use:
docker run -d -p 80:8080 -v /my-absolute-path/where-is-jenkins_home:/var/jenkins_home jenkins
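As an alternative that is not in the original answer, a Docker named volume avoids depending on a specific host path while still persisting the Jenkins home directory; a minimal sketch:

docker volume create jenkins_home
docker run -d -p 80:8080 -v jenkins_home:/var/jenkins_home jenkins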