I want to run a shell script (bash) when my EC2 instance pulls a Docker image successfully.
Is there any way to do this?
I just found the "docker events" command, but I don't know how to trigger a script with it.
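A minimal sketch of that approach: "docker events" can stream image pull events, and each line can trigger a script (the script path here is a placeholder for your own script):
docker events \
  --filter 'type=image' \
  --filter 'event=pull' \
  --format '{{.ID}}' |
while read -r image; do
  /path/to/on-pull.sh "$image"   # placeholder: your own script, given the image name
done
This is a long-running listener on the host, so you would typically keep it alive under something like systemd.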
I need to add a CircleCI job: after pulling a Docker image (abc), I need to execute a "docker run" command on the container created from image abc to finish the job.
circleci_job:
  docker:
    - image: xyz.ecr.us-west-2.amazonaws.com/abc
  steps:
    - checkout
    - run:
        name: execute docker run command
        command: |
          export env1=https://example.com
          docker run abc --some command
I am getting the error below:
/bin/bash: line 1: docker: command not found
I wanted to know: am I using the wrong executor type, or am I missing something here?
I see two issues here.
You need to use an image that already has the Docker client installed, or install it on the fly in your job. Right now it appears that the image xyz.ecr.us-west-2.amazonaws.com/abc doesn't have the Docker client installed.
With the Docker executor, for Docker commands such as docker run or docker pull to work, the special CircleCI step setup_remote_docker must run BEFORE you try using Docker. A corrected config is sketched below.
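For illustration, a sketch of a corrected job, assuming CircleCI's cimg/base convenience image (which ships the Docker client); authenticating to your ECR registry (for example with the aws-ecr orb) is omitted here:
circleci_job:
  docker:
    - image: cimg/base:stable        # has the Docker client preinstalled
  steps:
    - checkout
    - setup_remote_docker            # must run before any docker command
    - run:
        name: execute docker run command
        command: |
          export env1=https://example.com
          docker pull xyz.ecr.us-west-2.amazonaws.com/abc
          docker run xyz.ecr.us-west-2.amazonaws.com/abc --some command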
I am using GitLab CI/CD to build and push a docker image to my private GitLab registry.
I am able to successfully SSH into my server from the pipeline runner, but any commands passed into the SSH session don't run.
I am trying to pull the latest image from my GitLab container registry, run it, and exit the session gracefully so the job passes in my pipeline.
The command I am running is:
ssh -t user@123.456.789 "docker pull registry.gitlab.com/user/project:latest & docker run project:latest"
The above command connects me to my server, and I see the typical welcome message, but the session hangs and no commands are run.
I have tried using the heredoc format to pass in multiple commands at once, but I can't get even a single command to work.
Any advice is appreciated.
For testing, you can try
ssh user@123.456.789 ls
To chain commands, avoid using '&', which runs the first command in the background while acting as a command separator.
Try:
ssh user#123.456.789 "ls; pwd"
If this works, then try the two docker commands separated by ';'.
Try with docker run -td (which I mentioned here) in order to detach the docker process without requiring a tty. A sketch of the combined command follows below.
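Putting it together, a sketch of the full command; note that your original docker run referenced project:latest, while the pulled image is tagged registry.gitlab.com/user/project:latest:
ssh user@123.456.789 "docker pull registry.gitlab.com/user/project:latest; docker run -td registry.gitlab.com/user/project:latest"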
I am running Jenkins on EKS with Kubernetes plugin.
I have one cloud set up, and a pod template running my own Alpine-with-Docker container image (to execute docker commands).
I currently have only one job, which just runs "docker service ls" as a bash step.
I get the error:
"/tmp/jenkins8475081645730667159.sh: line 2: docker: command not found"
When I go inside the container using exec and switch to the "jenkins" user, I am able to run "docker".
It looks like my pod contains both the jnlp container and my alpine-docker container: when I write to a file, it is written in the alpine container, but when I run "docker", it tries to run in the jnlp container. Does this make any sense? Thanks
You have to run docker from your container.
In your pipeline
container('mycontainer') {
sh 'docker service ls'
}
You can't use a container other than the jnlp one with freestyle jobs; this only works with pipeline jobs. A fuller sketch follows below.
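For context, a sketch of a scripted pipeline using the Kubernetes plugin, where the container name and image are assumptions to adapt to your pod template:
podTemplate(containers: [
    // placeholder image: your own Alpine-with-Docker image
    containerTemplate(name: 'mycontainer', image: 'my-alpine-docker:latest', command: 'sleep', args: '99d')
]) {
    node(POD_LABEL) {
        container('mycontainer') {
            sh 'docker service ls'
        }
    }
}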
Thanks to the bitcoin.stack community, I have successfully launched a bitcoind Docker container with an external volume that holds the block data.
It is currently 100% synced, but I am facing an issue getting information using bitcoin-cli: I need to run bitcoind -reindex and then add txindex=1 to bitcoin.conf.
As I pulled the image from Docker Hub, I have no control over its Dockerfile, and I have 140GB+ of blockchain data that I do not want to discard and start over.
How do I run --reindex in a Docker container?
While your container is running, you can run docker exec -it <mybitcoindcontainer> /bin/sh. This should give you a shell inside your running container. You can then run your choice of commands at the shell prompt. A sketch follows below.
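For example, assuming the container is named mybitcoind and the block data volume is mounted at /root/.bitcoin (names and paths are placeholders):
docker exec -it mybitcoind /bin/sh                # shell inside the running container
echo "txindex=1" >> /root/.bitcoin/bitcoin.conf   # add the option, then exit the shell
Since -reindex is a startup flag for the daemon, another option is to recreate the container with the flag; the external volume (your 140GB+ of data) is preserved:
docker stop mybitcoind
docker rm mybitcoind
docker run -d --name mybitcoind -v /path/to/blockdata:/root/.bitcoin <image> bitcoind -reindex -txindex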
I've got a jenkins declarative pipeline build that runs gradle and uses a gradle plugin to create a docker image. I'm also using a dockerfile agent directive, so the entire thing runs inside a docker container. This was working great with jenkins itself installed in docker (I know, that's a lot of docker). I had jenkins installed in a docker container on docker for mac, with -v /var/run/docker.sock:/var/run/docker.sock (DooD) per https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/. With this setup, the pipeline docker agent ran fine, and the docker build command within the pipeline docker agent ran fine as well. I assumed jenkins also mounted the docker socket on its inner docker container.
Now I'm trying to run this on jenkins installed on an ec2 instance with docker installed properly. The jenkins user has the docker group as its primary group. The jenkins user is able to run "docker run hello-world" successfully. My pipeline build starts the docker agent container (based on the gradle image with various things added) but when gradle attempts to run the docker build command, I get the following:
* What went wrong:
Execution failed for task ':docker'.
> Docker execution failed
Command line [docker build -t config-server:latest /var/lib/****/workspace/nfig-server_feature_****-HRUNPR3ZFDVG23XNVY6SFE4P36MRY2PZAHVTIOZE2CO5EVMTGCGA/build/docker] returned:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Is it possible to build docker images inside a docker agent using declarative pipeline?
Yes, it is.
The problem is not with Jenkins' declarative pipeline, but with how you're setting up and running things.
From the error above, it looks like a missing permission needs to be granted, or the Docker daemon isn't reachable from inside the agent container.
Maybe if you share what your configuration looks like and how you're running things, more people can help. One common fix is sketched below.
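For instance, one common fix for this kind of setup (an assumption, since your configuration isn't shown) is to mount the host's Docker socket into the dockerfile agent, mirroring the Docker-outside-of-Docker approach that worked on Docker for Mac:
pipeline {
    agent {
        dockerfile {
            // give the agent container access to the host's Docker daemon
            args '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('build') {
            steps {
                sh './gradlew docker'   // placeholder for your gradle docker task
            }
        }
    }
}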