How to install Composer with docker exec

I'm trying to install composer in a docker container. I have a container laravel55 and I want to install composer inside it.
docker exec laravel55 curl --silent --show-error https://getcomposer.org/installer | php
#result
Composer (version 1.6.5) successfully installed to: /root/docker-images/docker-php7-apache2/composer.phar
Use it: php composer.phar
After installation, I try to use composer, but it doesn't work:
docker exec -w /var/www/html laravel55 php composer.phar install
#result
Could not open input file: composer.phar
It seems that Composer was not installed!
How can I install composer on a docker container?

With your command you're actually installing composer.phar locally on your host; only the curl command is executed inside the container. The part behind the pipe symbol | is not executed in your docker container but on your host. In your second command you switch the working directory to /var/www/html, where you apparently expect composer.phar to be, but you didn't do so in the first command.
So to make the whole command run in the container, you can try the following:
docker exec -w /var/www/html laravel55 \
sh -c "curl --silent --show-error https://getcomposer.org/installer | php"

You could use the official composer image from Docker Hub and mount a volume from your app container, i.e.:
docker run -td --name my_app --volume /var/www myprivateregistry/myapp
docker run --rm --interactive --volumes-from my_app --volume /tmp:/tmp --workdir /var/www composer install
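If your application code lives on the host instead of in a container volume, you can bind-mount it straight into the composer image; a minimal sketch, where /path/to/your/app is a placeholder for your project directory:
docker run --rm --interactive --tty --volume /path/to/your/app:/app composer install
The official image sets its working directory to /app, so composer install runs against the mounted project.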

Related

docker volume masks parent folder in container?

I'm trying to use a Docker container to build a project that uses rust, building as my own user. I have a Dockerfile that installs rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
    export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    # install library dependencies
    apt-get install [... a bunch of stuff ...] && \
    curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
    echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
    echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged -v /mnt:/mnt -v /dev:/dev --volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw --workdir /home/stefan/<path/to/project> --name <container-name> -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro -u 1000 <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround that lets me map the source code from a folder under $HOME on the host, but keep the container's $HOME?
I'm on Ubuntu 18.04, docker 19.03.12, on x86-64.
When the Dockerfile is processed, $HOME is expanded inside the build container, where the user is root; your host user doesn't exist there. Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work: there is no /home/$USER in the container; mapping /etc/passwd and /etc/group into the container teaches it about the user, but does not create any home directory. I could have mapped $HOME from the host, but then the container would control the rust versions on the host and would not be as self-contained. I also ended up needing to install rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that.
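For reference, a rough sketch of that last workaround, run as the build user inside the container; /code is a hypothetical writable mount, not a path from my actual setup:
# install rust into a non-standard, writable location
export RUSTUP_HOME=/code/.rustup
export CARGO_HOME=/code/.cargo
curl https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain "$RUST_VERSION"
export PATH="$CARGO_HOME/bin:$PATH"
cargo --version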

Is it possible to install mysqli extensions via docker run command

I created a docker volume with an index.php file. Now, every time I run a new container I want to mount this file (I know how to do that), but what if I also want to add the mysqli extension to any new container? Is it possible?
docker run -d -it -p 80:80 --name=www1 --mount source=myvol1,destination=/var/www/html php:7.2.2-apache \
docker-php-ext-install mysqli
See this image's Dockerfile and its entrypoint.
If you add a command to install the extension at the end of docker run, it acts as the CMD for the entrypoint, so apache2-foreground never gets a chance to start.
So the only way at runtime is:
Step 1: start the container
docker run -d -it -p 80:80 --name=www1 --mount source=myvol1,destination=/var/www/html php:7.2.2-apache
Step 2: install the extension with exec
docker exec -it www1 docker-php-ext-install mysqli
Step 3: restart the container
docker stop www1 && docker start www1
In fact, the typical way to do this is to customize the image in your own Dockerfile, though that may not be what you want:
Dockerfile:
FROM php:7.2.2-apache
# install things as you like here, for example:
RUN docker-php-ext-install mysqli
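Then build and run your customized image in place of the stock one; a sketch, where my-php-mysqli is just an example tag:
docker build -t my-php-mysqli .
docker run -d -it -p 80:80 --name=www1 --mount source=myvol1,destination=/var/www/html my-php-mysqli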
I know it's a bit late, but if someone is facing the same issue and doesn't want to use a Dockerfile, here's how you can do it.
If you look at the php:7.2-apache Dockerfile where CMD is set, you can find the apache2-foreground command.
You can just invoke the mysqli install command on docker run and append the CMD content, like:
docker run -d -it -p 80:80 --name=www1 --mount source=myvol1,destination=/var/www/html php:7.2.2-apache sh -c 'docker-php-ext-install mysqli && docker-php-ext-enable mysqli && apache2-foreground'
That way the mysqli extension will be installed and enabled, and the container entrypoint properly loaded.

Using docker with running process

I've created this Dockerfile, which works:
FROM debian:9
ENV CF_CLI_VERSION "6.21.1"
# Install prerequisites
RUN ln -s /lib/ /lib64
RUN apt-get update && apt-get install curl -y
RUN curl -L "https://cli.run.pivotal.io/stable?release=linux64-binary&version=${CF_CLI_VERSION}" | tar -zx -C /usr/local/bin
And it works as expected; now I run it like the following:
docker run -i -t cf-cli cf -v
and I see the version
Now every command I want to run is something like
docker run -i -t cf-cli cf -something
My question is: how can I enter the container and run ls etc. without doing this every time:
docker run -i -t cf-cli ...
I want to enter the container the way you log in to a machine.
Step 1:
Run the container in the background:
docker run -d --name myapp dockerimage
Step 2:
Exec into the container myapp:
docker exec -it myapp bash
Then run any commands inside as you wish.
Have a look at docker exec. You'll probably want something like docker exec -it containername bash depending on the shell installed in the container.
If I understand correctly, you just need
docker exec -it <runningcontainername> bash
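If no container is running yet, you can also start a throwaway interactive shell directly from the image; a sketch, assuming a shell is present in the image (debian:9 ships with bash):
docker run -it --rm cf-cli bash
# inside you can now run ls, cf -v, etc., then exit to discard the container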

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER j#eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:"3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
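Stopping an environment by name is then straightforward; a sketch using the names above:
docker stop $(docker ps -q --filter name=nginx-dev)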
According to the docs, --filter ancestor can find the wrong containers if their images are in any way children of the specified image.
So to be sure my images are separate right from the start I added this line to the start of my dockerfile, after the FROM and MAINTAINER commands:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
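An alternative sketch, assuming you tag each environment's image explicitly: the ancestor filter also accepts image:tag, so distinct tags per environment avoid matching a shared base image.
docker build -t "$serverImageName:$env" .
docker stop $(docker ps -q --filter "ancestor=$serverImageName:$env")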

How can I run two commands in CMD or ENTRYPOINT in Dockerfile

In a Dockerfile, ENTRYPOINT and CMD are each run only once, via /bin/sh -c behind the scenes.
Is there a simple solution to run two commands inside, without an extra script?
In my case, I want to set up docker in docker on a jenkins slave node, so I pass docker.sock into the container, and I want to change its permissions so a normal user can use it; this has to be done before the sshd command.
The normal user is jenkins, which will log in to the container via ssh.
$ docker run -d -v /var/run/docker.sock:/docker.sock larrycai/jenkins-slave
In the larrycai/jenkins-slave Dockerfile, I hope to run
CMD chmod o+rw /docker.sock && /usr/sbin/sshd -D
Currently jenkins is given sudo permission, see larrycai/jenkins-slave
I run docker in docker on a jenkins slave:
First: my slave knows how to run docker.
Second: I prepare a docker image that knows how to run docker in docker. Here is a fragment of the Dockerfile:
RUN echo 'deb [trusted=yes] http://myrepo:3142/get.docker.io/ubuntu docker main' > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy iptables ca-certificates lxc apt-transport-https lxc-docker
ADD src/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
VOLUME /var/lib/docker
Third: the jenkins job running on this slave contains one .sh file with a set of commands to run over the app code, like:
export RAILS_ENV=test
# Bundle install
bundle install
# spec_no_rails
bundle exec rspec spec_no_rails -I spec_no_rails
bundle exec rake db:migrate:reset
bundle exec rake db:test:prepare
etc...
Fourth: a run-shell step job with something like this:
docker run --privileged -v /etc/localtime:/etc/localtime:ro -v `pwd`:/code myimagewhorundockerindocker /bin/bash -xec 'cd /code && ./myfile.sh'
--privileged is necessary to run docker in docker
-v /etc/localtime:/etc/localtime:ro to synchronize the host clock with the container clock
-v `pwd`:/code to share the jenkins workspace (app code, previously cloned from VCS) as /code inside the container
Note: if you have service dependencies, you can use fig with a similar strategy.
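As a direct answer to the title question, the usual trick is to wrap both commands in a single sh -c invocation; a minimal sketch for the Dockerfile above:
CMD ["sh", "-c", "chmod o+rw /docker.sock && /usr/sbin/sshd -D"]
sh runs the chmod first and then starts sshd in the foreground, so no extra script file is needed.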
