how to use multiple command in docker-compose cli - docker

developers
I need your help.
I want to run a docker-compose command in the CLI like the one below.
docker-compose run --entrypoint="" my_app apt-get update && pip install -U pip
The reason is that I want to run different commands to test my container, because the build commands for my Django app live in my entrypoint.sh.
Can I use multiple commands in a CLI environment?

To execute multiple commands, use bash.
Example:
docker exec <container_name> bash -c "cd / && ls"
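The quoting is what makes this work: the whole chain is passed to bash as one argument, so the && is evaluated by the shell inside the container instead of by the calling shell. A minimal local demonstration of the same mechanism:

```shell
# Both commands run in the same child shell, so the cd affects the pwd:
result=$(bash -c "cd /tmp && pwd")
echo "$result"   # prints /tmp
```

Applied to the question, the same pattern would be (an untested sketch, assuming bash exists in the my_app image): docker-compose run --entrypoint="" my_app bash -c "apt-get update && pip install -U pip"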

Related

Can't add jenkins-job-builder in jenkins docker image

I'm new to Docker. I want to create a Docker container with Newman, Jenkins, and Jenkins Job Builder. Please help me.
I built a Docker image based on the official Jenkins image https://hub.docker.com/r/jenkins/jenkins
using a Dockerfile. The build was successful, and the Jenkins app also runs successfully.
After running Jenkins I opened the container as root with
docker exec -u 0 -it jenkins bash and tried to add a new job with jenkins-job-builder:
jenkins-jobs --conf ./jenkins_jobs.ini update ./jobs.yaml
but I got bash: jenkins-jobs: command not found
Here is my Dockerfile:
FROM jenkins/jenkins
USER root
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash
RUN apt-get -y install nodejs
RUN npm install -g newman
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python get-pip.py
RUN pip install --user jenkins-job-builder
USER jenkins
When building your image, you get some warnings. This one in particular is interesting:
WARNING: The script jenkins-jobs is installed in '/root/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Simply remove the --user flag from RUN pip install --user jenkins-job-builder and you're fine.
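Alternatively, if you want to keep the --user install, you can extend PATH instead; a sketch of the relevant Dockerfile lines, with the directory taken from pip's warning above:

```dockerfile
# Make pip's per-user scripts directory (from the warning) visible on PATH:
ENV PATH="/root/.local/bin:${PATH}"
RUN pip install --user jenkins-job-builder
```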

Using docker with running process

I've created this Dockerfile, which works:
FROM debian:9
ENV CF_CLI_VERSION "6.21.1"
# Install prerequisites
RUN ln -s /lib/ /lib64
RUN apt-get update && apt-get install curl -y
RUN curl -L "https://cli.run.pivotal.io/stable?release=linux64-binary&version=${CF_CLI_VERSION}" | tar -zx -C /usr/local/bin
And it works as expected; now I run it like the following:
docker run -i -t cf-cli cf -v
and I see the version.
Now every command I want to run looks like
docker run -i -t cf-cli cf -something
My question is: how can I get into the container and run ls etc. without doing
docker run -i -t cf-cli ...
every time? I want to enter the container the way you log into a machine.
Step 1:
Run the container in the background:
docker run -d --name myapp dockerimage
Step 2:
Exec into the container myapp:
docker exec -it myapp bash
and run any commands inside as you wish.
Have a look at docker exec. You'll probably want something like docker exec -it containername bash depending on the shell installed in the container.
If I understand correctly, you just need
docker exec -it <runningcontainername> bash
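If you just want a shell in a fresh container of that image, rather than exec'ing into a running one, you can also start bash directly; a sketch, assuming bash is present in the cf-cli image (it is based on debian:9, which ships bash):

```shell
# Run bash as the command instead of cf:
docker run -it cf-cli bash

# If the image declared cf as its ENTRYPOINT, override it instead:
docker run -it --entrypoint bash cf-cli
```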

Travis ci in docker failing

Travis CI .yml file:
sudo: true
language: cpp
compiler:
- g++
services:
- docker
before_install:
- docker run -it ubuntu bash
- apt-get install graphicsmagick
install:
- apt-get install qt5-default
- exit
script: "bash -c ./build.sh"
build.sh is just a simple make file.
Can someone explain the difference between running
docker run -it ubuntu bash
docker run -it ubuntu /bin/bash
To answer your question:
docker run -it ubuntu bash
executes the first binary called bash found in the container's $PATH, while
docker run -it ubuntu /bin/bash
executes the bash binary at /bin/bash specifically.
For the ubuntu container, both forms are very likely functionally equivalent.
To answer what I think could be your actual problem:
You're not using Docker as intended. Your script section does not execute in the container, for example: each line of the Travis config runs on the host, so the apt-get and exit lines have nothing to do with the container started in before_install. You need to run all the commands, probably as a script, with a single docker run without the interactive flag.
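A hedged sketch of that approach, reusing the package names and build.sh from the question (the /src mount path is an assumption): everything the container needs is chained into a single docker run, because each line of the .travis.yml executes on the host:

```yaml
sudo: true
language: cpp
services:
  - docker
script:
  - docker run -v "$TRAVIS_BUILD_DIR:/src" -w /src ubuntu bash -c "apt-get update && apt-get install -y graphicsmagick qt5-default && ./build.sh"
```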

How to write docker file to run a docker run command inside an image

I have a shell script that creates and runs Docker containers using the docker run command. I want to keep this script in a Docker image and run it from there. I know that we normally cannot run Docker inside a container. Is it possible to write a Dockerfile that achieves this?
Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y vim-gnome curl
RUN curl -L https://raw.githubusercontent.com/xyz/abx/test/testing/testing_docker.sh -o testing_docker.sh
RUN chmod +x testing_docker.sh
CMD ["./testing_docker.sh"]
testing_docker.sh:
docker run -it docker info (sample command)
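One common workaround (a sketch, not from the original post): mount the host's Docker socket into the container, so that docker commands in the script talk to the host daemon. This assumes the docker CLI is installed in the image; the Dockerfile above installs only vim-gnome and curl, so it would need that added:

```shell
# my-script-image is a hypothetical name for the image built from the
# Dockerfile above, extended with a docker CLI install; the mounted socket
# lets testing_docker.sh drive the host's Docker daemon.
docker run -v /var/run/docker.sock:/var/run/docker.sock my-script-image
```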

How can I run two commands in CMD or ENTRYPOINT in Dockerfile

In a Dockerfile, ENTRYPOINT and CMD each run a single command (the shell form is wrapped with /bin/sh -c behind the scenes).
Is there any simple solution to run two commands, without an extra script?
In my case, I want to set up Docker-in-Docker on a Jenkins slave node, so I pass docker.sock into the container, and I need to change its permissions so a normal user can use it; this has to happen before the sshd command.
The normal user is jenkins, which will log into the container via ssh.
$ docker run -d -v /var/run/docker.sock:/docker.sock larrycai/jenkins-slave
In larrycai/jenkins-slave Dockerfile, I hope to run
CMD chmod o+rw /docker.sock && /usr/sbin/sshd -D
Currently jenkins is given sudo permission; see larrycai/jenkins-slave.
I run Docker-in-Docker on a Jenkins slave like this:
First: my slave node knows how to run Docker.
Second: I prepare a Docker image that can run Docker inside Docker. Here is a fragment of the Dockerfile:
RUN echo 'deb [trusted=yes] http://myrepo:3142/get.docker.io/ubuntu docker main' > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy iptables ca-certificates lxc apt-transport-https lxc-docker
ADD src/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
VOLUME /var/lib/docker
Third: the Jenkins job running on this slave contains a .sh file with a set of commands to run over the app code, like:
export RAILS_ENV=test
# Bundle install
bundle install
# spec_no_rails
bundle exec rspec spec_no_rails -I spec_no_rails
bundle exec rake db:migrate:reset
bundle exec rake db:test:prepare
etc...
Fourth: a "run shell" build step with something like this:
docker run --privileged -v /etc/localtime:/etc/localtime:ro -v `pwd`:/code myimagewhorundockerindocker /bin/bash -xec 'cd /code && ./myfile.sh'
--privileged is necessary to run Docker-in-Docker.
-v /etc/localtime:/etc/localtime:ro synchronizes the host clock with the container clock.
-v `pwd`:/code shares the Jenkins workspace (the app code previously cloned from the VCS) as /code inside the container.
Note: if you have service dependencies you can use fig with a similar strategy.
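Back to the original question about CMD: the shell form shown there (CMD chmod o+rw /docker.sock && /usr/sbin/sshd -D) already chains the two commands, since Docker wraps shell-form CMD in /bin/sh -c. The explicit exec-form equivalent, as a sketch with the same commands, would be:

```dockerfile
# Exec form with an explicit shell, so the && chaining still works:
CMD ["sh", "-c", "chmod o+rw /docker.sock && /usr/sbin/sshd -D"]
```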
