Is it possible to install mysqli extensions via docker run command - docker

I created a Docker volume with an index.php file.
Every time I run a new container I want to mount this file (I know how to do that), but what if I also want to add the mysqli extension to any new container?
Is that possible?
docker run -d -it -p 80:80 --name=www1 --mount source=myvol1,destination=/var/www/html php:7.2.2-apache \
  docker-php-ext-install mysqli

See this image's Dockerfile and its entrypoint:
If you add the install command at the end of docker run, it acts as the CMD passed to the entrypoint, so apache2-foreground never gets a chance to start.
So the only way to do it at runtime is:
Step1: start the container
docker run -d -it -p 80:80 --name=www1 --mount source=myvol1,destination=/var/www/html php:7.2.2-apache
Step2: install extension with exec
docker exec -it www1 docker-php-ext-install mysqli
Step3: restart the container:
docker stop www1 && docker start www1
In fact, the typical way to do this is to customize the image in your own Dockerfile, though that may not be what you want:
Dockerfile:
FROM php:7.2.2-apache
RUN docker-php-ext-install mysqli   # install whatever else you need here
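A possible way to build and run that customized image (the tag my-php-apache is just an example name):
docker build -t my-php-apache .
docker run -d -it -p 80:80 --name=www1 --mount source=myvol1,destination=/var/www/html my-php-apache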

I know this is a bit late, but if someone is facing the same issue and doesn't want to use a Dockerfile, here's how you can do it.
If you look at php-apache's 7.2 Dockerfile where CMD is set you can find the apache2-foreground command.
You can just invoke the mysqli install command on docker run and append the CMD content like:
docker run -d -it -p 80:80 --name=www1 --mount source=myvol1,destination=/var/www/html php:7.2.2-apache sh -c 'docker-php-ext-install mysqli && docker-php-ext-enable mysqli && apache2-foreground'
Done this way, the mysqli extension is installed and enabled, and the container's entrypoint starts properly.

Related

How to retrieve file from docker container?

I have a simple Dockerfile which creates a zip file and I'm trying to retrieve the zip file once it is ready. My Dockerfile looks like this:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip
ENTRYPOINT ["zip","-r","-9"]
CMD ["/lib64.zip", "/lib64"]
After reading through the docs I feel like something like this should do it, but I can't quite get it to work.
docker build -t ubuntu-libs .
docker run -d --name ubuntu-libs --mount source=$(pwd)/,target=/lib64.zip ubuntu-libs
One other side question: is it possible to rename the zip file from the command line?
Edit:
This is different than the duplicate question mentioned in the comments because while they're using cp to copy file from a running Docker container I'm trying to mount a directory upon instantiation.
There are multiple ways to do this.
Using docker cp:
docker cp <container_hash>:/path/to/zip/file.zip /path/on/host/new_name.zip
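If you don't know the container hash, docker ps -a lists it; docker ps -lq prints just the ID of the most recently created container:
docker ps -lq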
Using docker volumes:
As you were leading to in your question, you can also mount a path from the container to your host. You can either do this by specifying where on the host you want the mount point to be or don't specify where the mount point is and let docker choose. Both these paths require different approaches.
Let docker choose host mount location
docker volume create random_volume_name
docker run -d --name ubuntu-libs -v random_volume_name:<path/to/mount/in/container> ubuntu-libs
The content will be located on your host, here:
ls -l /var/lib/docker/volumes/random_volume_name/_data/
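Rather than hard-coding that path, you can also ask Docker where the volume lives:
docker volume inspect --format '{{ .Mountpoint }}' random_volume_name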
Let me choose host mount location
docker run -d --name ubuntu-libs -v <existing/mount/point/on/host>:<path/to/mount/in/container> ubuntu-libs
This creates a clean/empty location that is shared as per the locations defined in the command. Now you need to modify your Dockerfile to copy the artifacts to this path, something like:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip
# create the archive, then copy it to the mounted path so it appears on the host
CMD ["sh", "-c", "zip -r -9 /lib64.zip /lib64 && cp /lib64.zip <path/to/mount/in/container>"]
The content will now be located on your host, here:
ls -l <existing/mount/point/on/host>
I've got to give a shout-out to @joaofnfernandes from here, who does a great job explaining.
As @flagg19 commented, you should be binding a directory onto a directory. You can make up directories inside the container, and you can override the CMD arguments at docker run time. Doing both plus adding type=bind leads to great success:
docker run -d --rm --mount type=bind,source="$(pwd)",target=/out ubuntu-libs /out/lib64.zip /lib64
Or of course you could change the Dockerfile's CMD to write to /out/lib64.zip instead of /lib64.zip:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip && mkdir /out
ENTRYPOINT ["zip","-r","-9"]
CMD ["/out/lib64.zip", "/lib64"]
docker run -d --rm --mount type=bind,source="$(pwd)",target=/out ubuntu-libs
Either way, I recommend adding --rm and getting rid of --name. No need to keep around the container after it's done.
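Putting it together, an end-to-end sketch (dropping -d so the shell waits for zip to finish; paths are illustrative):
docker build -t ubuntu-libs .
docker run --rm --mount type=bind,source="$(pwd)",target=/out ubuntu-libs
ls -l ./lib64.zip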

How to install Composer with docker exec

I'm trying to install Composer in a Docker container. I have a container named laravel55 and I want to install Composer inside it.
docker exec laravel55 curl --silent --show-error https://getcomposer.org/installer | php
#result
Composer (version 1.6.5) successfully installed to: /root/docker-images/docker-php7-apache2/composer.phar
Use it: php composer.phar
After installation, I try to use Composer, but it doesn't work:
docker exec -w /var/www/html laravel55 php composer.phar install
#result
Could not open input file: composer.phar
It seems that Composer was not installed!
How can I install Composer in a Docker container?
Well, with your command you're actually installing composer.phar locally on your host; you only execute the curl command inside the container. The part behind the pipe symbol | is not executed in your Docker container but on your host. In your second command you switch the working directory to /var/www/html, where you apparently expect composer.phar, but you don't do that in the first command.
So to make the whole command run in the container, you can try the following:
docker exec -w /var/www/html laravel55 \
sh -c "curl --silent --show-error https://getcomposer.org/installer | php"
Alternatively, you could use the official Composer image from Docker Hub and mount a volume from your app container, e.g.:
docker run -td --name my_app --volume /var/www myprivateregistry/myapp
docker run --rm --interactive --volumes-from my_app --volume /tmp:/tmp --workdir /var/www composer install

Run docker with dynamic parameter

I'm trying to run a Java application in a Docker container where the JVM arguments of the Java program are dynamic.
Dockerfile:
FROM amazonlinux
ADD http://company.com/artifactory/bins-release-local/com/marc/1.3.1/marc-1.3.1.tar.gz /root/
ADD log4j2.xml /root/
RUN tar xzf /root/marc-1.3.1.tar.gz -C /root && rm -f /root/marc-1.3.1.tar.gz
RUN yum install -y java
ENTRYPOINT ["/bin/bash", "-c", "/usr/bin/java", "${JVM_ARGS}", "-jar", "/root/marc.jar"]
I try to run the container like so:
docker run --rm -it --env-file jvm_args.env -e CLIENT=google moshe/java:latest
And the jvm_args.env is:
JVM_ARGS=-d64
-Dicmq=${CLIENT}
-Dlog4j.configurationFile=/root/log4j2.xml
-server
I can't seem to get it to work. I need the client to be dynamic, and JVM_ARGS should contain the client.
Ideas?
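For what it's worth, one sketch of a possible approach (an assumption, not a confirmed fix): the exec (JSON) form of ENTRYPOINT never expands ${JVM_ARGS}, and Docker env files don't support multi-line values, so you could keep JVM_ARGS on one line and switch to the shell form of ENTRYPOINT so expansion happens at container start:
# jvm_args.env -- single line, with the client-specific flag moved out of the file
JVM_ARGS=-d64 -Dlog4j.configurationFile=/root/log4j2.xml -server
# Dockerfile -- shell form runs through /bin/sh -c, so the variables are expanded at start time
ENTRYPOINT exec /usr/bin/java ${JVM_ARGS} -Dicmq=${CLIENT} -jar /root/marc.jar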

Running multiple commands after docker create

I want to make a script run a series of commands in a Docker container and then copy a file out. If I use docker run to do this, I don't get back the container ID, which I would need for the docker cp. (I could try and hack it out of docker ps, but that seems risky.)
It seems that I should be able to
Create the container with docker create (which returns the container ID).
Run the commands.
Copy the file out.
But I don't know how to get step 2. to work. docker exec only works on running containers...
If I understood your question correctly, all you need is docker run, exec & cp -
For example -
Create a container with a name (--name) using docker run -
$ docker run --name bang -dit alpine
Run a few commands using exec -
$ docker exec -it bang sh -c "ls -l"
Copy a file using docker cp -
$ docker cp bang:/etc/hosts ./
Stop the container using docker stop -
$ docker stop bang
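In a script you would capture the name (or the ID that docker run prints) and chain the steps, roughly like this sketch:
#!/bin/sh
# Start a throwaway container and capture its ID
cid=$(docker run -d alpine sleep 300)
# Run the series of commands inside it
docker exec "$cid" sh -c "echo hello > /tmp/result.txt"
# Copy the produced file out to the host
docker cp "$cid":/tmp/result.txt ./result.txt
# Clean up
docker rm -f "$cid"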
All you really need is a Dockerfile; build the image from it and run the container using the newly built image. For more information you can refer to this.
A "standard" content of a dockerfile might be something like below:
#Download base image ubuntu 16.04
FROM ubuntu:16.04
# Update Ubuntu Software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor && \
rm -rf /var/lib/apt/lists/*
#Define the ENV variable
ENV nginx_vhost /etc/nginx/sites-available/default
ENV php_conf /etc/php/7.0/fpm/php.ini
ENV nginx_conf /etc/nginx/nginx.conf
ENV supervisor_conf /etc/supervisor/supervisord.conf
#Copy supervisor configuration
COPY supervisord.conf ${supervisor_conf}
# Configure Services and Port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
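A possible way to build and run it (image and container names are just examples):
docker build -t nginx-php-fpm .
docker run -d --name web -p 80:80 -p 443:443 nginx-php-fpm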

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
See here to watch docker stopping the wrong container:
Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER j#eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:"3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS             NAMES
3bd84b6057b8   nginx   "nginx -g 'daemon ..."   30 seconds ago   Up 28 seconds   80/tcp, 443/tcp   nginx
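A deployment script can then stop exactly that container by name, for example:
docker stop $(docker ps -q -f name=nginx$)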
According to the docs, --filter ancestor matches any container whose image has the specified image as an ancestor, so it can find the "wrong" containers if the images are related to each other in any way.
So to be sure my images are separate right from the start I added this line to the start of my dockerfile, after the FROM and MAINTAINER commands:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
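With the images now distinct per environment, stopping by image name behaves as intended, e.g.:
docker stop $(docker ps -q --filter ancestor="$serverImageName")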
