I can't figure out how to use my containers after they are running in Docker (through docker-compose).
I've been reading up on Docker for the last few weeks, and I'm interested in building an environment that I can develop on and replicate efficiently to collaborate with my colleagues.
I've read through the following articles:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-centos-7
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-and-phpmyadmin-with-docker-compose-on-ubuntu-14-04
https://www.digitalocean.com/community/tutorials/how-to-work-with-docker-data-volumes-on-ubuntu-14-04
I've installed Docker and Docker Compose through a Bash script made up of the commands found in the articles above (run with sudo):
## Install Docker package
function docker_install {
  echo "__ installing docker"
  # Run installer script
  wget -qO- https://get.docker.com/ | sh
  # Add the user passed as the first argument to the docker group
  usermod -aG docker "$1"
  # Enable docker on boot
  sudo systemctl enable docker.service
  # Start docker now
  sudo systemctl start docker.service
}
## Install docker-compose package
function docker-compose_install {
  echo "__ installing docker-compose"
  # Add the EPEL repo (with "yes" flag so the script doesn't block)
  sudo yum install -y epel-release
  # Install pip with "yes" flag
  sudo yum install -y python-pip
  # Install SSL extension
  sudo pip install backports.ssl_match_hostname --upgrade
  # Install docker-compose through pip
  sudo pip install docker-compose
  # Upgrade python
  sudo yum upgrade -y "python*"
}
# Call functions, passing the target user name through to docker_install
docker_install "$1"
docker-compose_install
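(For reference, a hypothetical invocation — the file name install-docker.sh is only an example — passing the user that should be added to the docker group:)
sudo bash install-docker.sh your_username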
And I have a docker-compose.yml file for my Docker images. (For now I'm only pulling PHP and Apache as a proof of concept; I will include MySQL once I can get PHP and Apache working together.)
php:
  image: php
  links:
    - httpd
httpd:
  image: httpd
  ports:
    - "80:80"
I run sudo docker-compose up -d and I don't receive any errors:
Creating containerwrap_httpd_1
Creating containerwrap_php_1
And my question is:
When I run php -v and service httpd status after the containers are up, I receive:
-bash: php: command not found
and
● httd.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
If the containers are running, why am I not able to use PHP or see the status of Apache? How am I supposed to use my PHP and Apache containers?
Update
I've asked a more informed version of this question again.
What you are missing is that containers are like separate machines. Running the commands on your Droplet itself, you will not see anything that is running inside those machines; you need to connect to them. One easy way is:
docker exec -it CONTAINER_ID /bin/bash
This will give you access to each container's bash.
For your Apache server that would be containerwrap_httpd_1, and the name will change unless you set up your docker-compose.yml to use a constant name each time the service is created and started. Of course, you can also curl localhost, or open a browser and browse to your Droplet's IP address, provided it serves HTTP on the default port (or use the forwarded port). You will see Apache's default web page, because you have set up the 80:80 port mapping for the service.
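For example (a sketch; the container name matches the containerwrap_httpd_1 created above, and the curl check assumes you are on the Droplet itself):
# open a shell inside the Apache container
docker exec -it containerwrap_httpd_1 /bin/bash
# or, from the Droplet, check the published port directly
curl http://localhost:80/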
It seems you have some misunderstanding of Docker's concepts. You can think of each Docker container as a lightweight virtual machine (of course a Docker container is different from a real VM). So basically, after you create the php and httpd containers, the php and httpd commands are only available inside those containers. You cannot run these commands on your host, because your host is a different machine from your containers. If you want to attach to a container's bash, check out the docker exec command. You should then be able to run php or httpd commands in the container's bash.
If you want to connect to your php container from your host, you can use the -p parameter of docker run, which publishes a container's port(s) to the host.
Or, if you want to connect your php and httpd containers together, you should read Docker's networking documentation to figure out how to use links or docker network; a sketch follows below.
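A minimal sketch of the docker network approach (assuming Docker 1.9+ and the stock httpd and php images; the network and container names here are only examples):
# create a user-defined network and start Apache on it
docker network create app-net
docker run -d --net app-net --name httpd httpd
# a one-off php container on the same network can reach Apache by name
docker run --rm --net app-net php php -r "var_dump(gethostbyname('httpd'));"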
Related
So I have created a puppet container for a certificate authority. It works, but does not start correctly. Here is my Dockerfile:
FROM centos:6
RUN yum update -y
RUN rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
RUN yum install -y ruby puppet-server
ADD puppet.conf /etc/puppet/puppet.conf
VOLUME ["/var/lib/puppet/ssl/"]
EXPOSE 9140
#NOTHING BELOW THIS COMMENT RUNS
RUN puppet master --verbose
CMD service puppetmaster start
CMD chkconfig puppetmaster on
CMD []
I can then start the container with the following run command (note that I named the image ca-puppet):
docker run -d -p 9140:9140 -it --name ca-puppet \
-v /puppet-host/ssl:/var/lib/puppet/ssl \
ca-puppet bash
The issue is that I need to docker exec into the container and run the following commands to get it started and create the ca certificates in its ssl directory:
puppet master --verbose
service puppetmaster start
chkconfig puppetmaster on
I have a feeling I should be using some other Dockerfile instructions to run those last 3 commands. What should that be?
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
also
If the user specifies arguments to docker run then they will override the default specified in CMD.
However, using the default process manager (e.g., SysV init, systemd, etc.) inside Docker for most mainstream distros can cause problems (without making a lot of modifications). You generally don't need it, particularly if you're only running one application (as is often considered best practice). In a Docker container, you generally want your primary application to be the first process (PID 1).
You can do this by not daemonizing puppet and starting it as the default container command via something like:
CMD puppet master --verbose --no-daemonize
And use the Docker host to manage it (via restart policy, etc.).
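Note also that the docker run command in the question ends with bash, which overrides the image's CMD (per the documentation quoted above). A sketch of starting the container so the CMD actually runs, using the same image, port, and volume as in the question:
docker run -d -p 9140:9140 --name ca-puppet \
  -v /puppet-host/ssl:/var/lib/puppet/ssl \
  ca-puppet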
I am starting with Docker on Windows and I am trying to use volumes to manage data in containers.
My host environment is:
Windows 8.1
Docker Toolbox 1.8
VirtualBox 5.0.6
I've created an nginx image using the following Dockerfile:
Dockerfile
FROM centos:6.6
MAINTAINER afym
ENV WEBPORT 80
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install nginx; yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
VOLUME /usr/share/nginx/html
EXPOSE $WEBPORT
CMD [ "/usr/sbin/nginx" ]
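(The image used below, ng1, was presumably built beforehand with something along the lines of:)
docker build -t ng1 .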
I've created an nginx container using the following command:
docker run -d --name nge -v //c/Users/src:/usr/share/nginx/html -p 8082:80 ng1
b738fef9cc4d135416a8cca4caf869acf944319b7c3c61129b11f37f9d891598
Then I go to my browser and I can see the web page. However, when I edit my index.html file and refresh the browser (Ctrl + F5), the change does not show up.
I went to the VirtualBox machine to check that my shared folder settings are OK.
Then I inspect my nge container with the following command:
docker inspect ng1
What is happening with the volumes? Why can I not see my changes?
After a couple of days I found the solution.
First of all, Docker on Windows (and even on Mac) uses a boot2docker instance running in VirtualBox.
(Diagrams of the boot2docker setup on Mac and on Windows omitted.)
Next, the official Docker documentation on volumes says:
Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory.
However, while finding a solution I decided to change the default /c/Users to another path, just to keep things in order. With this in mind I did the following steps:
Define your own workspace directory. In my case it is /e/arquitectura (optional; if you want you can use the default path, which is /c/Users).
Verify the shared folder configuration on the virtual machine (in VirtualBox, select the default machine and go to Configuration > Shared Folders); a command-line alternative is sketched after the mount commands below.
Connect to the default machine and mount the directory using the shared folder's alias name:
sudo mount -t vboxsf alias-name-virtualbox some-path-in-boot2docker
# In my case (inside the boot2docker instance)
$ sudo mkdir /arquitectura
$ sudo mount -t vboxsf arquitectura /arquitectura
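If the shared folder does not exist yet, it can also be created from the Windows host with VBoxManage (a sketch; the machine name "default" and the host path are assumptions based on the setup above, and the VM may need to be powered off first):
VBoxManage sharedfolder add "default" --name "arquitectura" --hostpath "E:\arquitectura" --automount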
Finally, create a new container (or just restart an existing one if you haven't changed the default /c/Users path):
# In my case (docker client)
$ docker run -d --name nge -v //arquitectura/src:/usr/share/nginx/html -p 8081:80 ng1
Now it works.
I have a Jenkins container running inside Docker and I want to use this Jenkins container to spin up other Docker containers when running integration tests etc.
So my plan was to install Docker in the container but this doesn't seem to work so well for me. My Dockerfile looks something like this:
FROM jenkins
MAINTAINER xxxx
# Switch user to root so that we can install apps
USER root
RUN apt-get update
# Install latest version of Docker
RUN apt-get install -y apt-transport-https
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
RUN sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
RUN apt-get update
RUN apt-get install -y lxc-docker
# Switch user back to Jenkins
USER jenkins
The jenkins image is based on Debian Jessie. When I start a bash terminal inside a container based on the generated image and run, for example:
docker images
I get the following error message:
FATA[0000] Get http:///var/run/docker.sock/v1.16/images/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
I suspect that this could be because the Docker service is not started. But my next problem arises when I try to start the service:
service docker start
This gives me the following error:
mount: permission denied
I've tracked the error in /etc/init.d/docker to this line:
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
So my questions are:
How do I actually start a Docker host inside a container? Or is this something that should be avoided?
Is there something special I need to do if I'm running Mac and boot2docker?
Perhaps I should instead link to the Docker on the host machine as described here?
Update: I've tried running the container both as the root user and as the jenkins user. sudo is not installed.
A simpler alternative is to mount the docker socket and create sibling containers. To do this, install docker on your image and run something like:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock myimage
In the container you should now be able to run docker commands as if you were on the host. The advantage of this method is that you don't need --privileged and you get to use the image cache from the host. The disadvantage is that you can see all running containers, not just the ones created from within the container.
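For example, from a shell inside the Jenkins container (assuming the Docker CLI was installed in the image and the socket mounted as above):
# these commands talk to the host's daemon through the mounted socket
docker ps
docker run --rm alpine echo "hello from a sibling container"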
1. The first container you start (the one you launch the other one inside of) must be run with the --privileged=true flag.
2. I don't think so.
3. When using the privileged flag you don't need to mount the Docker socket as a volume.
Check this project to see an example of all this.
I just started playing with Docker. The first thing I did was to install it and then install RStudio Server. (I'm running Ubuntu 14.04.)
sudo apt-get install docker.io
sudo docker run -d -p 8787:8787 -e USER='some_user_name' -e PASSWORD='super_secret_password' rocker/hadleyverse
Is it possible to run the Dockerized RStudio Server without sudo? If so, how?
Thanks!
From this answer:
The docker manual has this to say about it:
Giving non-root access
The docker daemon always runs as the root user, and since Docker version 0.5.2, the docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root, and so, by default, you can access it with sudo.
Starting in version 0.5.3, if you (or your Docker installer) create a Unix group called docker and add users to it, then the docker daemon will make the ownership of the Unix socket read/writable by the docker group when the daemon starts. The docker daemon must always run as the root user, but if you run the docker client as a user in the docker group then you don't need to add sudo to all the client commands. As of 0.9.0, you can specify that a group other than docker should own the Unix socket with the -G option.
Warning: The docker group (or the group specified with -G) is root-equivalent; see Docker Daemon Attack Surface details.
Example:
Add the docker group if it doesn't already exist.
sudo groupadd docker
Add the connected user "${USER}" to the docker group. Change the user name to match your preferred user.
sudo gpasswd -a ${USER} docker
Restart the Docker daemon:
sudo service docker restart
If you are on Ubuntu 14.04 and up, use docker.io instead:
sudo service docker.io restart
You need to log out and log back in again if you added the currently logged-in user.
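After logging back in, the command from the question should work without sudo (same image and flags as above):
docker run -d -p 8787:8787 -e USER='some_user_name' -e PASSWORD='super_secret_password' rocker/hadleyverse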
I have a very basic question regarding Docker.
I have a Docker host installed on machine UbuntuA.
So, to test this from a client machine (UbuntuB), does Docker need to be installed on the UbuntuB machine as well?
The more correct answer is that only the Docker client needs to be installed on UbuntuB.
On UbuntuB, install the Docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
In order to run docker commands, you need to talk to the daemon on UbuntuA (port 2375 is used since Docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
See more details at http://docs.docker.com/articles/basics/
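Note that this only works if the daemon on UbuntuA is listening on TCP as well as on the local Unix socket. A sketch for an Ubuntu 14.04 host (the config file location is an assumption; adjust for your init system, and keep in mind that an unencrypted TCP socket should only be exposed on a trusted network):
# On UbuntuA, in /etc/default/docker:
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
# then restart the daemon
sudo service docker restart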
Yes, you have to install Docker on both the client and the server.