How to execute a "docker run" command in circleci job - docker

I need to add a CircleCI job. After pulling a Docker image (abc), I need to execute a "docker run" command against the container created from image abc to finish the job.
circleci_job:
  docker:
    - image: xyz.ecr.us-west-2.amazonaws.com/abc
  steps:
    - checkout
    - run:
        name: execute docker run command
        command: |
          export env1=https://example.com
          docker run abc --some command
I am getting the error below:
/bin/bash: line 1: docker: command not found
I wanted to know whether I am using the wrong executor type, or whether I am missing something here.

I see two issues here.
You need to use an image that already has the Docker client installed, or install it on the fly in your job. Right now it appears that the image xyz.ecr.us-west-2.amazonaws.com/abc doesn't have the Docker client installed.
With the Docker executor, for Docker commands such as docker run or docker pull to work, the special CircleCI step setup_remote_docker must run BEFORE you try using Docker (a minimal sketch follows below).
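As a minimal sketch of a working setup (the cimg/base image choice is an assumption about an image that ships the Docker CLI; the run command is copied from the question), the job could look like this:

circleci_job:
  docker:
    - image: cimg/base:stable   # assumption: any image with the Docker client installed works
  steps:
    - checkout
    - setup_remote_docker       # must run before any docker command in this job
    - run:
        name: execute docker run command
        command: |
          export env1=https://example.com
          docker run abc --some command   # command copied from the question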

Related

GitLab CI/CD - deploy images

I have a problem with GitLab CI/CD. I am trying to build an image and run it on the server where my runner is. My gitlab-ci.yaml:
image: docker:latest
services:
  - docker:dind
variables:
  TEST_NAME: registry.gitlab.com/pawelcyrklaf/learn-devops:$CI_COMMIT_REF_NAME
stages:
  - build
  - deploy
before_script:
  - docker login -u pawelcyrklaf -p examplepass registry.gitlab.com
build_image:
  stage: build
  script:
    - docker build -t $TEST_NAME .
    - docker push $TEST_NAME
deploy_image:
  stage: deploy
  script:
    - docker pull $TEST_NAME
    - docker kill $(docker ps -q) || true
    - docker rm $(docker ps -a -q) || true
    - docker run -dt -p 8080:80 --name gitlab_learn $TEST_NAME
My Dockerfile
FROM centos:centos7
RUN yum install httpd -y
COPY index.html /var/www/html/
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
EXPOSE 80
The Docker image builds successfully and it is in the registry, and the deploy stage also succeeds, but when I execute docker ps I don't see this image running.
I did everything the same as in this tutorial: https://www.youtube.com/watch?v=eeXfb05ysg4
What am I doing wrong?
The job is scheduled in a container together with another service container that has Docker inside (docker:dind). It works and it does start your container, but when the job finishes, the neighbouring service container with Docker stops too; you then check the host and see no container.
Try to remove:
services:
  - docker:dind
Also, check out the predefined list of CI variables; you can avoid hard-coding credentials and the image path (see the sketch below).
P.S. You kill and rm all containers, so your CI will someday remove containers that are not managed by this repo...
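For reference, a sketch of how the hard-coded values could be replaced by GitLab's predefined CI variables (the login line assumes you push to the project's own GitLab registry):

variables:
  TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY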
when I execute docker ps, I don't see this image running
You didn't mention how you are checking for the running container, so I assume the following:
Make sure you are physically checking on the right runner.
Since you didn't set any tags on the jobs, GitLab picks the first available runner; the job page shows which runner the job executed on.
Make sure your container is not down or finished.
To see all containers, use docker ps -a: it shows every container, even stopped ones. There will be an exit code by which you can determine the reason; debug it with docker logs {container_id} (put the container_id without braces).
Gitlab.com:
I am not sure you can run a Docker application inside your GitLab CI on gitlab.com. Try removing the -d option from your docker run command; -d is what runs the container in the background.
$ docker run -t -p 8080:80 --name gitlab_learn $TEST_NAME
If this does work, it will probably force the pipeline to never finish and it will drain your CI/CD minutes.
Self-hosted Gitlab:
Your GitLab CI is meant to run actions that build and deploy your application, so it doesn't make sense to have the application running on the same instance as your GitLab CI runner. Even if you do want to run the app on the same instance, it shouldn't run in the same container as the runner; to achieve that, you should configure the GitLab CI runner to use the Docker daemon on the host.
Anyway, I would strongly recommend deploying somewhere outside of where your GitLab runner is running, and even better to a managed container service such as Kubernetes or AWS ECS.
You did not specify what your setup is, but based on information in your question I can deduce that you're using gitlab.com (as opposed to private GitLab instance) and self-hosted runner with Docker executor.
You cannot use a runner with Docker executor to deploy containers directly to the underlying Docker installation.
There are two ways to do this:
1. Use a runner with a shell executor, as described in the YouTube tutorial you posted.
2. Prepare a helper image that uses SSH to connect to your server and run the docker commands there (a rough sketch follows below).
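A rough sketch of the second option (the helper image, the SSH user and host, and the SSH_PRIVATE_KEY variable are assumptions, not from the question):

deploy_image:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -   # SSH_PRIVATE_KEY defined as a CI/CD variable
  script:
    - ssh -o StrictHostKeyChecking=no deploy@your-server "docker pull $TEST_NAME && docker run -dt -p 8080:80 --name gitlab_learn $TEST_NAME"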

Docker-compose with Gitlab CI

I'd like to set up continuous integration with Gitlab. My application is set up through a number of docker containers, which are put together using docker-compose. My .gitlab-ci.yml looks like:
image: "docker/compose:1.25.0-rc2-debian"
before_script:
- docker --version
- docker info
- docker-compose build
- ./bin/start-docker
rspec:
script:
- bundle exec rspec
rubocop:
script:
- bundle exec rubocop
When I push, it tries to run docker-compose build, which in turn fails to find the docker daemon. This is not completely surprising, because I haven't tried to start the docker daemon. But I would usually do that with systemctl start docker - this fails because the runner doesn't use systemd.
How can I get docker-compose to build?
Some notes: docker --version and docker-compose --version indicate that both docker and docker-compose are installed correctly. If I try docker info, then I get the "cannot find docker daemon" error.
image: "docker/compose:1.25.0-rc2-debian" indicates that you are running your pipeline on docker runner. Try running it on shell runner with docker and docker-compose installed and docker daemon running.
Other way would be to rewrite your docker-compose to .gitlab-ci.yml with proper dependencies.
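As a rough illustration of that second option (the ruby image tag and the postgres service are assumptions about what the compose file contains, since it isn't shown in the question), each docker-compose service becomes a GitLab CI service and the job runs in the application's own image:

rspec:
  image: ruby:2.6                      # assumption: the app's runtime image instead of docker/compose
  services:
    - postgres:12                      # assumption: a service that docker-compose used to start
  variables:
    POSTGRES_HOST_AUTH_METHOD: trust   # let the test job connect without a password
  script:
    - bundle install
    - bundle exec rspec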

Jenkins docker setup

I am using Jenkins to make builds of a project, but now my client wants the builds to run inside a Docker image. I have installed Docker on the server and it is running on 172.0.0.1:PORT. I have installed the Docker plugin and assigned this TCP URL as the Docker URL. I have also created an image with the name jenkins-1.
In the project configuration I use the build environment "Build with Docker Container" and provide the image name, then in Build I add an Execute Shell step and run the build.
But it gives the Error:
Pull Docker image jenkins-1 from repository ...
$ docker pull jenkins-1
Failed to pull Docker image jenkins-1
FATAL: Failed to pull Docker image jenkins-1
java.io.IOException: Failed to pull Docker image jenkins-1
at com.cloudbees.jenkins.plugins.docker_build_env.PullDockerImageSelector.prepareDockerImage(PullDockerImageSelector.java:34)
at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.setUp(DockerBuildWrapper.java:169)
at hudson.model.Build$BuildExecution.doRun(Build.java:156)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
at hudson.model.Run.execute(Run.java:1720)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)
Finished: FAILURE
I just ran into the same issue. There is a 'Verbose' check-box in the build environment configuration, behind the 'Advanced...' link, that expands on the error details:
CloudBees plug-in Verbose option
In my case I had run out of space while downloading the build Docker images; expanding the EC2 volume resolved the issue.
But space remains an ongoing problem, since Docker does not automatically clean up images, so I ended up adding a manual cleanup step to the build:
docker volume ls -qf dangling=true | xargs -r docker volume rm
Complete build script:
https://bitbucket.org/vk-smith/dotnetcore-api/src/master/ci-build.sh?fileviewer=file-view-default
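For reference, a slightly fuller cleanup step might look like this (a sketch only; the container and image pruning lines go beyond the volume cleanup quoted above):

# remove exited containers, dangling volumes and dangling images left behind by previous builds
docker ps -aq -f status=exited | xargs -r docker rm
docker volume ls -qf dangling=true | xargs -r docker volume rm
docker images -qf dangling=true | xargs -r docker rmi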

Bamboo "cannot connect to Docker daemon"

My Bamboo build plan (running on a linux64 agent) has a stage that does a source code checkout from my GitHub repo, and then a stage that builds an image from that repo's Dockerfile, which looks like this:
set -o xtrace
set -o errexit
${bamboo_DOCKER_SIGNATURE} build ${bamboo_DOCKER_BUILD_EXTRAS} -t myname:${bamboo_buildNumber} -f Dockerfile .
The next stage I want is a script that pushes this image to my Docker registry (on Quay.io). The script I have so far is below, but the build fails with the error "Cannot connect to the Docker daemon. Is the docker daemon running on this host?".
set -o xtrace
set -o errexit
# service docker start # commented out b/c this did not solve the docker daemon issue
# This is where the build fails:
docker login -e="." -u=${bamboo.QUAY_ROBOT_name} -p=${bamboo.QUAY_ROBOT_token} quay.io
# Push the image to 'my_repo' in the Quay.io organization 'my_team', with tag 'bamboo_build'
docker push quay.io/my_team/my_repo:bamboo_build${bamboo_buildNumber}
FWIW, the same login command works as expected from my local command line. How can I remedy this? Also, using Bamboo's built-in Docker task does not work: it is unable to log in to the registry, but for some reason it does not have the "docker daemon" issue. Thank you in advance for any help!
The trick was to use the Bamboo variable ${bamboo_DOCKER_SIGNATURE} instead of plain docker. This variable points the client at a specific host, i.e. it expands to docker -H <host address>.
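A sketch of the push stage rewritten that way (the robot credential variables are written in Bamboo's underscore form for shell scripts, which is an assumption; everything else is copied from the question):

set -o xtrace
set -o errexit
${bamboo_DOCKER_SIGNATURE} login -e="." -u=${bamboo_QUAY_ROBOT_name} -p=${bamboo_QUAY_ROBOT_token} quay.io
${bamboo_DOCKER_SIGNATURE} push quay.io/my_team/my_repo:bamboo_build${bamboo_buildNumber}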

CI & Docker-in-a-Docker

I am trying to integrate docker into my CI platform. After getting this working properly with a Docker-in-a-docker solution, I came across a blog post by one of the Docker maintainers, where he says that instead of using a Docker-in-a-docker solution for my CI, I should instead simply mount the /var/run/docker.sock to my CI container.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
So I tried this. I ran the following command:
docker run -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock jenkins
Using jenkins as my CI container.
When running the above command, jenkins starts up properly, and I can jump into the container to see that the docker.sock file is located in the /var/run/ path.
However, when I run the docker command inside the container, it returns the following message:
bash: docker: command not found
Does anyone know what I am missing in order to make this work per the author's instructions?
I am using Docker v. 1.11.1, on a fresh CentOS 7 box.
Thanks in advance
Figured this out today. The above command will work as long as the Docker client and its dependencies are present inside the container. In my case, I ended up writing a simple Dockerfile for my CI image which also included the line:
RUN curl -sSL https://get.docker.com/ | sh
This installed Docker in the container, and when I ran docker images from within the container I could see all of the images from my host machine. I am now able to use all of the docker commands from within the container.
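A minimal sketch of such an image (everything except the curl line is an assumption about how the Dockerfile could be laid out):

FROM jenkins
USER root
# install the Docker client so the container can talk to the host daemon through the mounted socket
RUN curl -sSL https://get.docker.com/ | sh
# note: the jenkins user may also need access to /var/run/docker.sock (e.g. via the docker group)
USER jenkins

It would then be started with the same socket mount as before, e.g. docker run -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock jenkins-docker (jenkins-docker being a hypothetical tag for the custom image).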
