I have a Dockerfile to build MongoDB on CentOS 6:
# Build docker image for mongodb
FROM test
MAINTAINER ...
# Set up mongodb yum repo entry
# https://www.liquidweb.com/kb/how-to-install-mongodb-on-centos-6/
RUN echo -e "\
[mongodb]\n\
name=MongoDB Repository\n\
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/\n\
gpgcheck=0\n\
enabled=1\n" >> /etc/yum.repos.d/mongodb.repo
# Install mongodb
RUN yum install -y mongodb-org
# Set up directory requirements
RUN mkdir -p /data/db /var/log/mongodb /var/run/mongodb
VOLUME ["/data/db", "/var/log/mongodb"]
# Expose port 27017 from the container to the host
EXPOSE 27017
# Start mongodb
ENTRYPOINT ["/usr/bin/mongod"]
CMD ["--port", "27017", "--dbpath", "/data/mongodb", "--pidfilepath", "/var/run/mongodb/mongod.pid"]
I build it:
% docker build -t mongodb .
...
Step 8 : CMD ["--port", "27017", "--dbpath", "/data/mongodb", "--pidfilepath", "/var/run/mongodb/mongod.pid"]
---> Running in 4025f6c86f81
---> 46d71068d2e0
Removing intermediate container 4025f6c86f81
Successfully built 46d71068d2e0
I run it:
% docker run -d -P 46d71068d2e0
0bdbfd6fe86ca31c678ba45ac1328958e8126b05c717877287e725924451f7f8
I inspect it:
% docker inspect 0bdbfd6fe86ca31c678ba45ac1328958e8126b05c717877287e725924451f7f8
[{
...
"NetworkSettings": {
"Bridge": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"PortMapping": null,
"Ports": null
},
I have no IPAddress.
My /etc/default/docker has only DNS options in it:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
Why? I am finding it challenging to use this without an address.
If I replace the ENTRYPOINT and CMD with CMD [], I can run the container (docker run -t -i mongodb /bin/sh); it then has an address, I can start mongod from within the container, and it is externally accessible as a MongoDB server that I can insert rows into and query.
So the answer is that mongod was failing to start because I had specified a non-existent path for the data:
"--dbpath", "/data/mongodb"
when I had created an entirely different directory earlier:
RUN mkdir -p /data/db /var/log/mongodb /var/run/mongodb
VOLUME ["/data/db", "/var/log/mongodb"]
When started with docker run -P -d, there was no output. I tried docker run -t -i on this container to see the results in a terminal, but I got this error:
CMD = /bin/sh when using ENTRYPOINT but empty CMD
"When using entrypoint without a CMD the CMD is set to /bin/sh instead of the expected nothing."
Related
I have a custom image with docker installed (docker-in-docker). When running the image, the user needs to be $USERNAME (and not root). However, the docker service requires root to start.
Getting docker to run as non-root seems to be overly complicated, so I have attempted to use su in the entrypoint instead, which works, but it is not interactive.
FROM ubuntu:18.04
# ... A lot of steps here to install stuff that are not really relevant to the problem.
COPY container-helpers/entrypoint.sh .
USER root
ENV ENTRY_USER $USERNAME
ENTRYPOINT [ "./entrypoint.sh" ]
CMD "pulumi up"
And entrypoint.sh is:
#!/bin/bash
set -e
service docker start
export ENV_PATH=$PATH
su $ENTRY_USER -lp <<EOSU
set -e
export PATH=$ENV_PATH
. $NVM_DIR/nvm.sh
pulumi stack select -c dev
npx meteor-deploy stack configure default
$@ # Run the given arguments as a command
EOSU
I run it as:
$ docker run --env-file local.env --privileged -it meteor-deploy-leaderboard
* Starting Docker: docker [ OK ]
Logging in using access token from PULUMI_ACCESS_TOKEN
error: --yes must be passed in to proceed when running in non-interactive mode
Or, if you don't want to take pulumi's word for it:
$ docker run --env-file local.env --privileged -it meteor-deploy-leaderboard bash; echo "exited"
* Starting Docker: docker [ OK ]
Logging in using access token from PULUMI_ACCESS_TOKEN
exited
Any idea how I can pass on the tty to the su command properly?
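A sketch of one possible direction (my assumption, not a verified fix): the heredoc is what detaches stdin from the terminal, so handing su a single -c command string instead should keep the tty attached. Multi-word arguments will not survive $* intact, and this assumes $NVM_DIR is set in the target user's environment:
#!/bin/bash
set -e
service docker start
export ENV_PATH=$PATH
# Hand su one command string instead of a heredoc, so stdin/stdout stay
# attached to the terminal and interactive commands keep working.
exec su "$ENTRY_USER" -lp -c "export PATH=$ENV_PATH; . \$NVM_DIR/nvm.sh; pulumi stack select -c dev; npx meteor-deploy stack configure default; $*"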
Please help. When I try to go into a container, it says
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
My Dockerfile:
FROM ubuntu:16.04
MAINTAINER Anton Lapitski <a.lapitski@godeltech.com>
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD ./ /usr/src/app
EXPOSE 80
ENTRYPOINT ["/bin/sh", "-c", "/usr/src/app/entry.sh"]
Starting script - start.sh:
sudo docker build -t starter .
sudo docker run -t -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter
entry.sh script:
echo "Hello World"
ls -l
pwd
if mountpoint -q /mounted-directory
then
echo "mounted"
else
echo "not mounted"
fi
sudo docker ps -a gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90599013c666 starter "/bin/sh -c /usr/src…" 18 minutes ago Exited (0) 18 minutes ago thirsty_wiles
And most importantly:
sudo docker exec -it 90599013c666 bash
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
Could you please tell me what I am doing wrong?
P.S. Adding the -d flag when running did not help.
Once the ENTRYPOINT completes (in any form), the container exits.
Once the container exits, you can't docker exec into it.
If you want to get a shell on the image you just built to poke around in it, you can
sudo docker run --rm -it --entrypoint /bin/sh starter
To make this slightly easier to run, you might change ENTRYPOINT to CMD in your Dockerfile. (Docker will run the ENTRYPOINT passing the CMD as command-line arguments; or if there is no entrypoint just run the CMD.)
...
RUN chmod +x ./entry.sh
CMD ["./entry.sh"]
Having done that, you can more easily override the command
sudo docker run --rm -it starter /bin/sh
For a stopped container, you can try:
docker start <container_id>
docker exec -ti <container_id> bash
You cannot exec into the container, because your ENTRYPOINT script has finished and the container has stopped. Try this:
Remove the ENTRYPOINT from your Dockerfile
Rebuild the image
run it with sudo docker run -it -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter sh
The key is the -i flag and the sh at the end of the command.
I tried these two commands and it worked:
sudo docker start <container_id>
docker exec -it <containerName> /bin/bash
I was on Docker 1.7.1, following James Turnbull's The Docker Book.
I created two images (apache, jekyll) and a jekyll container, then
created the apache container with the -P flag and EXPOSE 80 in the Dockerfile, but when I ran docker port <container> 80 I got a "no public port" error.
Then I upgraded to Docker 1.8.1, but nothing changed. I have tried -P, -p, EXPOSE, and every argument, but I can't get it fixed.
The PORTS column when running docker ps -a is always empty.
Dockerfile:
FROM ubuntu:14.04
MAINTAINER AbdelRhman Khaled
ENV REFRESHED_AT 2015-09-05
RUN apt-get -yqq update
RUN apt-get -yqq install apache2
VOLUME [ "/var/www/html" ]
WORKDIR /var/www/html
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
EXPOSE 80
ENTRYPOINT [ "/usr/sbin/apache2" ]
CMD [ "-D", "FOREGROUND" ]
commands:
$ sudo docker build -t jamtur01/apache .
$ sudo docker run -d -P --volumes-from james_blog jamtur01/apache
$ sudo docker port container_ID 80
Error: No public port '80/tcp' published for aa99fef6544a
I just changed the image tag name, and all is working fine now.
I am trying to connect to a web app running on tomcat8 in a docker container.
I am able to access it from within the container by running lynx http://localhost:8080/myapp, but when I try to access it from the host I only get "HTTP request sent; waiting for response".
I am exposing port 8080 in the Dockerfile, I am using sudo docker inspect mycontainer | grep IPAddress to get the ip address of the container.
The command I am using to run the docker container is this:
sudo docker run -ti --name myapp --link mysql1:mysql1 --link rabbitmq1:rabbitmq1 -e "MYSQL_HOST=mysql1" -e "MYSQL_USER=myuser" -e "MYSQL_PASSWORD=mysqlpassword" -e "MYSQL_USERNAME=mysqlusername" -e "MYSQL_ROOT_PASSWORD=rootpassword" -e "RABBITMQ_SERVER_ADDRESS=rabbitmq1" -e "MY_WEB_ENVIRONMENT_ID=qa" -e "MY_WEB_TENANT_ID=tenant1" -p "8080:8080" -d localhost:5000/myapp:latest
My Dockerfile:
FROM localhost:5000/web_base:latest
MAINTAINER "Me" <me#my_company.com>
#Install mysql client
RUN yum -y install mysql
#Add Run shell script
ADD run.sh /home/ec2-user/run.sh
RUN chmod +x /home/ec2-user/run.sh
EXPOSE 8080
ENTRYPOINT ["/bin/bash"]
CMD ["/home/ec2-user/run.sh"]
My run.sh:
sudo tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
Any ideas why I can access it from within the container but not from the host?
Thanks
What does your docker run command look like? You still need to pass -p 8080:8080. EXPOSE in the Dockerfile only exposes the port to linked containers, not to the host VM.
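For example, publishing the container's port 8080 on the host (image name taken from the question, everything else trimmed):
sudo docker run -d -p 8080:8080 localhost:5000/myapp:latest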
I am able to access the tomcat8 server from the host now. The problem was here:
sudo tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
Tomcat8 must be started as a service instead:
sudo service tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
Run the following command to find the IP address of the docker-machine VM:
$ docker-machine ls
The output will look like:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.10.3
Now access your application from the host machine at: http://192.168.99.100:8080/myapp
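Alternatively, docker-machine can print just the IP, assuming the machine is named default as above:
$ docker-machine ip default
192.168.99.100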
Docker container: accessing a --volumes-from directory in a CMD instruction
$ sudo docker run -d --name ext -v /external busybox /bin/sh
and
run.sh
#!/bin/bash
if [[ -f "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi
and
Dockerfile
FROM ubuntu:14.04
MAINTAINER newbie
ADD run.sh /run.sh
RUN chmod +x /run.sh
CMD ["bash", "/run.sh"]
and
$ sudo docker build -t app .
and
$ sudo docker run -d --volumes-from ext app
ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
So
$ sudo docker logs ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
Sorry, I can't find /external...
My question is: how can I access the /external directory from run.sh in the CMD instruction? Or is it impossible?
Thank you~
Modify your run.sh:
-f checks whether a file exists. In this case, use -d, which checks whether a directory exists.
Check if a directory exists in a shell script
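Applied to the run.sh above, the corrected check would be:
if [[ -d "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi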
Furthermore, if you only want to make a volume container, you do not need -d or /bin/sh;
the run command for the volume container changes to this:
$ sudo docker run --name ext -v /external busybox